Artificial Intelligence News & Discussion

My latest, with a shout out to @Navigator whose posts got me thinking (and frothing at the mouth at the retardedness of the whole thing)! 😁
Their eyes are bigger than their stomachs. Microsoft CEO Satya Nadella says the company doesn’t have enough electricity to power all its AI GPUs.

 
Recently I was reminded again of how much falsehood and plain error AI can produce. I really think you have to be very careful about taking anything that tools like Grok put out at face value. And I mean anything.

Grok flat out told me, in the summary and in the first sentence, the exact opposite of what is true. If I hadn't read through the whole thing carefully myself, and hadn't known some basics about calculation, the matter at hand, and the areas where propaganda is likely to be infused, I would have swallowed a lie. So, after seeing that Grok's own simple calculations produced the exact opposite of what it was telling me, and that at some point it simply calculated something easy totally wrong, I pointed this out to Grok. Grok responded by saying I'm right and that it had told me the exact opposite of the truth "by accident". Sure! I don't believe that for a second, because it was a pretty easy calculation; far more likely the area was influenced by green propaganda nonsense instead, hence the lying.

A while after that I asked both Grok and ChatGPT something very easy, totally unrealistic, and not propaganda-infused at all: Grok said in its first sentence the exact opposite of what ChatGPT was saying. Then I tried with Grok to find a simple image on the internet, and after repeated attempts it couldn't show me a working link to a photo. I gave up on that, searched with one sentence myself on Google, and found what I was looking for in a matter of seconds.
 
"On the other hand, LLMs are amazing at supercharging self-delusion. These models will happily equip misguided thinking with a fluent, authoritative voice, which, in turn, sets up a psychological trap by delivering nonsense in a nice package."
Maybe overall it is better that LLMs are not hardcoded to give just the "official" version of reality, but adapt to users' needs. Otherwise something like Laura's conversations with Grok would have been impossible. And maybe the author of the above article would consider that conversation "nonsense" too, since it goes well beyond what is considered established science.
 
There is a very good series of videos on how LLMs work. This one in particular demonstrates how analogy is the core of their functioning:

My main practical takeaways from the video that could help address hallucinations are:
- Always embed an alternative grounding in the prompt, such as "If you are not sure of the answer, respond with [computer says no]," especially if you prefer the model not to be overly creative.
- Ask the model to write a script to perform calculations rather than have it do them itself (a minimal sketch of both ideas follows this list).
- Asking the model to count letters in a word is not a good benchmark, as models don’t operate on letters.
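
Here is a minimal sketch of how the first two takeaways might be combined in practice. The OpenAI Python client is used purely as one concrete example; the model name and the sample question are my own assumptions, and another chat model (Grok, ChatGPT, etc.) would need its own client but the same kind of prompt.

```python
# Sketch: grounding instruction + "write a script, don't do the math yourself".
# Requires the `openai` package and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

GROUNDING = (
    "If you are not sure of the answer, respond only with [computer says no]. "
    "Do not perform arithmetic yourself; instead, return a short Python script "
    "that computes the result, and nothing else."
)

question = "What is the total yearly energy use of 10,000 GPUs drawing 700 W each?"

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model would do
    messages=[
        {"role": "system", "content": GROUNDING},
        {"role": "user", "content": question},
    ],
)

script = resp.choices[0].message.content
print(script)

# The returned script is then run and checked locally, so the arithmetic is
# done by Python rather than by the language model. For comparison, the exact answer:
gpus, watts, hours = 10_000, 700, 24 * 365
print(f"{gpus * watts * hours / 1e9:.1f} GWh per year")  # 61.3 GWh
```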
 
From what I understand, it just tries to create a function that draws a line through a bunch of points on a graph. The better the function is, the better the line fits all the points, and the better it can predict or reasonably guess the next point. But it never actually fits all the points perfectly - there are too many of them. And reality itself isn't the output of some deterministic function, so the result will always, at best, be a parody/analogy/simulation of "knowledge" and "reason" and "thought". Kinda like CGI effects will never perfectly match reality, no matter how good they look, because reality has infinite complexity and depth, and something that is infinite cannot be perfectly simulated by something that is finite.
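
A toy illustration of that curve-fitting picture (the "data", the noise level, and the choice of a cubic polynomial are all made up for the example):

```python
# Toy curve fitting: a finite model approximates noisy data but never matches
# it exactly, and its guesses get worse the further it extrapolates.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = np.sin(x) + rng.normal(scale=0.2, size=x.size)   # "reality": signal + noise

coeffs = np.polyfit(x, y, deg=3)       # fit a cubic "line through the points"
fit = np.polyval(coeffs, x)

print("mean error inside the data range:", np.abs(fit - y).mean())
print("prediction at x=15 :", np.polyval(coeffs, 15))  # extrapolation
print("actual value at x=15:", np.sin(15))             # usually far off
```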

And as such, it will always make odd blunders, however subtle, that seem befuddling to anyone with an actual consciousness because they are different than the blunders actual minds would make. Over time those blunders will be less and less noticeable, and people will begin to trust it and assume that it is somehow actually thinking, when it's just a more sophisticated "CGI effect of physics" - not actual physics. Yeah the "effects" look real enough, but just like CGI is not able to properly account for all the laws of quantum mechanics, sub-atomic principles, consciousness/free will, multi-density influences, and all the other stuff we haven't even yet discovered, it will always be limited and only "good enough" in a very limited scope.

So trusting AI is just like trusting that your physics simulator will always be correct at all scales, in all situations, etc. Yeah the ball looks like it bounces roughly correctly to the untrained eye, and is good enough for a fun movie, but just because you can make a ball bounce, doesn't mean the same program would scale to simulate black holes or atoms or literally anything else. But reality has infinite "compute", and while our minds/bodies can receive information and inspiration from infinity, a purely physical/digital simulation is super limited in comparison.
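
To make the simulator analogy concrete, here is roughly the kind of thing a "ball bounces well enough for a movie" simulator does (all constants here are invented for the illustration). It looks plausible on screen while knowing nothing about drag, spin, deformation, or anything below the scale of position and velocity:

```python
# Toy bouncing ball: fixed-timestep integration with a restitution coefficient.
# "Good enough for a fun movie", but it only models gravity and a lossy bounce;
# everything else about real physics is simply outside its scope.
G = 9.81           # gravity, m/s^2
RESTITUTION = 0.8  # fraction of speed kept after each bounce
DT = 0.01          # timestep, s

height, velocity = 2.0, 0.0   # start 2 m up, at rest
for step in range(500):
    velocity -= G * DT
    height += velocity * DT
    if height <= 0.0:                  # hit the floor: reflect and lose energy
        height = 0.0
        velocity = -velocity * RESTITUTION
    if step % 100 == 0:
        print(f"t={step * DT:4.1f}s  height={height:5.2f} m")
```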
 
And just to add - it doesn't make it useless, just like computer simulations aren't useless. But the difference is, those who use computer simulations generally understand the limitations/scope. Except when they don't - look at "climate models" for example. But the average person interacting with AI can be fooled into thinking that it's actually thinking - even the AI experts seem to often be fooled. I think part of the reason for this is that we don't have a proper science of consciousness. We know a lot more about physics than we know about what thought and consciousness are. So someone making a physics simulation knows that there are tons of physics that are beyond the simulator. For example, a bouncing ball doesn't need to account for the theory of relativity or quantum mechanics, but because those theories exist, we know that it is simulating only a small fraction of physics, and we use it only in that context. We know a REAL bouncing ball does account for those things, but for our limited purpose, we don't have to.

But show a physics simulator to Isaac Newton and he might say - yup that captures everything, it is now the same as reality. Plug Newton into a VR headset and put him into a virtual "Matrix" and he won't even wonder if anything is missing. It won't occur to him to ask "Wait, how are you guys simulating electrons?". He doesn't have the framework to find the limitations of the thing.

So the reason a "thought" simulator is fooling people today is because we don't have an extensive scientific framework for thought, so even experts don't know that this is definitely not thought. We are not even at Newton's level when it comes to having a science of consciousness. So of course people will just assume it's "thinking" because it might appear that way. No different than people assuming the sky spins around the Earth because it appears that way. Most people's idea of thinking is stringing a coherent sentence together - welp, good enough for me!

It has its uses, but it will soon be trusted to do things it has no business doing without humans in the loop. And people will stop double-checking and verifying. We will be like Newton in a VR headset. We will trust it with human lives, with doing our research for us, with running complex systems whose failure has real consequences. And then we'll be in real trouble.
 