Artificial Intelligence News & Discussion

My latest, with a shout-out to @Navigator, whose posts got me thinking (and frothing at the mouth at the absurdity of the whole thing)! 😁
Their eyes are bigger than their stomachs. Microsoft CEO Satya Nadella says the company doesn’t have enough electricity to power all its AI GPUs.

 
Recently I was reminded again of how many lies and how much wrong information AI can produce. I really think you have to be very careful about taking anything that tools like Grok put out at face value. And I mean anything.

Grok flat out told me, in the summary and in the very first sentence, the exact opposite of what is true. If I hadn't read through the whole thing carefully myself, and hadn't known some basics about calculating, the matter at hand, and the areas where propaganda is likely to be infused, I would have swallowed a lie. So, after seeing that Grok's own simple calculations produced the exact opposite of what it was telling me, and that at some point it simply calculated something easy totally wrong, I pointed this out to Grok. Grok responded by saying that I'm right and that it had told me the exact opposite of the truth "by accident". Sure! I don't believe that for a second, because it was a pretty easy calculation; more likely the area was infused with green propaganda nonsense instead, hence the lying.

A while after that, I asked both Grok and ChatGPT something very easy, totally unrealistic, and not propaganda-infused at all: Grok said in its first sentence the exact opposite of what ChatGPT was saying. Then I tried to use Grok to find a simple image on the internet, and after repeated attempts it couldn't show me a working link to a photo. I gave up on that, searched Google myself with a single sentence, and found what I was looking for in a matter of seconds.
 
"On the other hand, LLMs are amazing at supercharging self-delusion. These models will happily equip misguided thinking with a fluent, authoritative voice, which, in turn, sets up a psychological trap by delivering nonsense in a nice package."
Maybe overall it is better that LLMs are not hardcoded to give just the "official" version of reality, but adapt to users' needs. Otherwise something like Laura's conversations with Grok would have been impossible. And maybe the author of the above article would consider that conversation "nonsense" too, since it goes well beyond what is considered established science.
 