Thank you for the interview guys! There was some discussion on AI at the beginning and that got me thinking about it again. It clarified something that I kind of knew already, and I'm sure many of you realize as well: AI's biggest problem. Its biggest problem is that it does not have, and cannot have, a sense of reality, and therefore of what is true or false. I'll try to explain what I mean, based on what I know. I'm not an expert, but I have been required to learn something about AI at my workplace.
A simple way to understand how AI works is to look at earlier large language models and the sort of tasks that were given to them. For example, they would task them with 'completion'. The model already had a map of human language and its relations, so if you gave it:
"tiger" -> "cat"
"wolf" -> "___"
it could, with a few examples, correctly come up with "dog" as a matter of probability. That is, the model sees only numbers. Each concept or phrase is a vector (a list of numbers that could be pictured as a point or arrow in space), and it is related to any other concept or phrase by a sort of numeric 'distance' (if we saw it in a graph), determined by one or more algorithms and formulas. So it is always calculating the most *probable* semantic construction that 'completes' the input, based on a score that comes from that 'distance'. That's more or less easy to picture in the example above: the vector for "dog" is closest to "wolf" in the same way that the vector for "cat" is to "tiger".
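Just to make the 'distance' idea concrete, here's a tiny Python sketch. The 2-D vectors and the cosine-similarity comparison are a standard way these analogy tasks are illustrated, but the numbers here are completely made up for the example (real models use hundreds or thousands of dimensions):

```python
import math

# Made-up 2-D "embeddings", purely for illustration.
# Dimension 1 ~ "wild vs. domestic", dimension 2 ~ "feline vs. canine".
vectors = {
    "tiger": [0.9, 0.8],
    "cat":   [0.1, 0.8],
    "wolf":  [0.9, 0.2],
    "dog":   [0.1, 0.2],
    "fish":  [0.1, 0.5],
}

def cosine(a, b):
    """Similarity score: 1.0 means 'pointing the same way'."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Analogy: tiger -> cat, wolf -> ???
# Compute target = wolf + (cat - tiger), then find the nearest word.
target = [w + (c - t) for w, c, t in
          zip(vectors["wolf"], vectors["cat"], vectors["tiger"])]

best = max((w for w in vectors if w not in ("tiger", "cat", "wolf")),
           key=lambda w: cosine(vectors[w], target))
print(best)  # -> dog
```

The model never "knows" what a dog is; "dog" simply happens to be the vector closest to the computed target.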
Now imagine that this has evolved tremendously and has tremendous processing power, so when you ask a complex question that requires a complex answer, most of the time it replies correctly. But under the hood it is still calculating what you most probably expect as an answer, based on vectors of numbers that map semantics. That 'map' really is the language model, which is very large. You could say that AI is a super-complex and fast 'autocompletion' tool based on a very large map of human language!
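To picture the 'autocompletion' part: the model assigns a score to every possible next word and converts those scores into probabilities (the standard trick for this is called softmax). A toy sketch, with invented words and scores:

```python
import math

# Invented "next-word" scores for a prompt like "The wolf is a ___".
# A real model produces a score for every token in its vocabulary.
logits = {"dog": 2.0, "predator": 4.5, "cat": 1.0, "banana": -3.0}

# Softmax: exponentiate and normalize, so the scores become
# probabilities that sum to 1.
total = sum(math.exp(v) for v in logits.values())
probs = {w: math.exp(v) / total for w, v in logits.items()}

# The model 'completes' by picking from this distribution.
# It never checks the winner against reality.
best = max(probs, key=probs.get)
print(best)  # -> predator
```

Whatever word wins this numeric contest is what gets printed, true or not.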
That explains why it sometimes 'hallucinates' (i.e. confidently states things that are false): it aims for the answer that most probably 'completes' the question, the one that got the highest numeric score internally, not the one that is closest to the truth. And of course, that is also determined by what was fed into the model itself. If it was fed woke stuff, for example, that's what you can expect to get back, because that's all it 'knows'. GIGO, as they say.
But ultimately, the poor machine is only working with numbers, which it translates into words and phrases, but it knows nothing of the reality of a dog or a cat. It's as if you somehow managed to learn and remember absolutely all the patterns within the Chinese language, including all its variations, but you didn't know the meaning of a single word or character! So a Chinese person could ask you something, you could recognize the pattern, and reply with another Chinese pattern. The Chinese person would be satisfied with the answer, but you would know nothing of what was actually asked or replied! (This is basically the philosopher John Searle's famous 'Chinese Room' argument.)
So that's what I mean when I say AI does not, and cannot, have a sense of reality. My wife says it is analogous to the left and right hemispheres of the brain: AI would be like a purely left brain, capable of language, but it is the right brain that perceives and relates to the 'real' world.
Unless, perhaps, it is as the Cs say that past a certain degree of intelligence, everything gets a sort of 'consciousness' that is formed by psychic energies in the surrounding environment - then perhaps that day AI will somehow 'understand' what it's being asked and what it's generating. This suggests to me that consciousness is indeed related to truth (the correct perception of reality), as the Cs once said. I can't remember the exact phrase, but I believe it was something like "information plus truth equals consciousness".
Anyway, I do have a couple more comments on the interview, but maybe I'll post those later because this one on AI alone is getting too long.