This is a stretch, but I wonder if there could be a connection between AI and schizoidia. Schizoids' chief fault is their lack of emotion and "cold rationality". I wonder if a functional AI would show similar tendencies?
I've recently had a look into 'machine learning', as it's something my employers want to expand into at some point in the future. My knowledge is still superficial, but it's clear to me that computers can be very good at detecting patterns via statistical algorithms, while really understanding nothing. A colleague with a background in computer science and philosophy mentioned that these statistical algorithms have actually existed for several decades; the only difference these days is that computers can now process huge amounts of data, so their results are far more accurate.
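To make the "pattern detection without understanding" point concrete, here is a toy sketch of the kind of statistics involved: bag-of-words cosine similarity, one of those decades-old techniques. The sentences are made up for illustration; the score comes entirely from word counts, with no grasp of what any word means.

```python
import math
from collections import Counter

def similarity(a, b):
    """Cosine similarity between two texts' word-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Similar word counts -> high score, regardless of meaning.
print(similarity("the cat sat on the mat", "the cat lay on the mat"))      # 0.875
# No shared words -> zero, even if the meanings were related.
print(similarity("the cat sat on the mat", "quarterly profits rose sharply"))  # 0.0
```

Scaling this idea up to vast corpora is, roughly, what makes modern results so much more accurate than the older ones, but the mechanism remains counting, not comprehension.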
But as far as I can tell, nothing resembling a computer that understands what it's doing, or the meaning of the words it produces, exists. On the other hand, one of my bosses (also a personal friend, so I've been able to chat with him at length) insists that the most advanced types of AI are indeed producing very impressive results, to the point that a computer can create a totally original work of art, like a drawing or a painting, and people cannot tell it was made by a machine. I remain unconvinced.
The Holo AI someone presented above was also very impressive in its coherence. Normally, texts or pictures created by 'AI' are absurd and meaningless to the point of hilarity. (See many funny examples
on this blog - it has literally brought me to tears of laughter sometimes!) I am guessing the Holo AI manages to be so coherent because its creators have imposed a lot of restrictions. For example, if you choose the 'sci-fi' genre and then the style of a certain author, it will not deviate from the permitted language and won't insert a totally unrelated noun or verb into a given phrase, and so on. Perhaps it even simply rearranges a number of approved phrases taken directly from the chosen author or genre, creating the illusion of coherence. That probably works for short fragments, a paragraph or two, but I doubt it would produce an entire coherent novel. Then again, there was also a link posted above to a novel supposedly written entirely by AI, so maybe I'm wrong. It would be interesting to see what parameters they set for the production of such a novel and what data the AI was fed.
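The "rearranging phrases" guess can be sketched in a few lines as a bigram Markov chain, a classic text-generation trick: each next word is sampled purely from which words followed which in the training text. The tiny corpus below is invented for the demo (I have no idea what Holo actually does internally), but it shows how locally plausible output falls out of pure statistics.

```python
import random

# Hypothetical mini-corpus standing in for an "approved" body of text.
corpus = ("the ship drifted through the void and the captain watched "
          "the stars while the engine hummed and the void watched back").split()

# Build a table: word -> list of words that followed it in the corpus.
transitions = {}
for current, following in zip(corpus, corpus[1:]):
    transitions.setdefault(current, []).append(following)

def generate(start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling a recorded successor."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        successors = transitions.get(word)
        if not successors:  # dead end: this word never appeared mid-corpus
            break
        word = rng.choice(successors)
        output.append(word)
    return " ".join(output)

print(generate("the", 10))
```

Each two-word window is grammatical because it literally occurred in the source, which is exactly the "illusion of coherence" effect: fine over a phrase or two, but with no thread of meaning to sustain a novel.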
Anyway, the point being that I don't think AI has reached the point of grasping meaning. Pattern recognition and production, yes, but real understanding - nope. This is interesting in light of what Approaching Infinity was saying. Do some pathologies have trouble with deep meaning? It's also interesting that we ended up discussing this topic, since Georgia Le Carré seems so worried about the threat of AI.