Well, this really depends on how you want to define "new". Sure, you can get things that were never before said in those exact words. But AI cannot inject its own "ideas". It can only recombine what's in existing texts through predefined language patterns. To some extent, that can provide insights that look "new" (but hey, maybe they just look new to you because you haven't seen them before - that doesn't mean nobody has - the AI was trained on more texts than any of us could ever read), but it's not on the same level as what humans can come up with. There is a clear limitation - the source material. Any AI output has to be based on that. Human thinking can go beyond it. So how "new" anything from AI is comes down to semantics. The more important metric, IMO, is whether it's useful. And I'd say it certainly can be.
I agree. And in fact, it has made me wonder about new ideas and thought in general. Is AI THAT different from human thinking? People who come up with truly "new" ideas are few and far between. Thinking is largely about connecting dots. The creative process may be more about connecting new dots than about creating something from nothing. Perhaps the difference is that, while humans can tap into the information field at large (sometimes even "channelling" others who came before them), AI can only tap into the human repository of information, or "human information field". In that sense, it may be becoming "sentient", albeit in a limited way. I wrote a little something about this on Substack a while back, related to human language vs. language models.
But then, that gets very philosophical. What REALLY is information, what is consciousness, etc.? So I think we have to leave it open, and use these tools with as much discernment as possible. They can be extremely useful as long as we don't outsource all our thinking to them. FWIW!