I work in a tech company and the penetration of AI is quite apparent. The company is one of many that are starting to sell 'solutions' to corporate or government clients based on already existing AI models - none of which I've seen to be too scary... yet. They are rather basic as AI goes. For example, a 'chat' can be set up that is fed with all the thousands of documents of a client, and then its employees can simply 'chat with the documents' instead of having to weed through the documentation to find some piece of data. The models I see at my job are, in my opinion, nowhere close to being self-aware, although still quite impressive compared to what publicly available chatbots could do just a year or two ago. But I also know that at the higher echelons of society they have far more advanced forms of AI, and to think about what point of development they are at and what they might be using it for is unsettling, to say the least.
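Just to give an idea of what one of those 'chat with the documents' setups looks like in spirit, here is a rough sketch in Python. Everything in it is simplified and hypothetical - the chunking, the crude keyword scoring, and the ask_model stub standing in for whatever language model a real product would actually call - but the overall shape (find the relevant passage, hand it to the model, ask the question) is the idea.

```python
# A highly simplified, hypothetical sketch of a "chat with your documents" setup.
# Real products retrieve passages with learned embeddings and call an actual chat
# model; here retrieval is plain word overlap and ask_model is just a stub.

def chunk(document: str, size: int = 200) -> list[str]:
    """Split a long document into smaller passages."""
    words = document.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(passage: str, question: str) -> int:
    """Crude relevance score: how many of the question's words appear in the passage."""
    passage_words = set(passage.lower().split())
    return sum(1 for w in question.lower().split() if w in passage_words)

def ask_model(prompt: str) -> str:
    """Stand-in for a call to a real chat model API (hypothetical)."""
    return f"[the model would answer based on this prompt]\n{prompt}"

def chat_with_documents(documents: list[str], question: str) -> str:
    passages = [p for doc in documents for p in chunk(doc)]
    best = max(passages, key=lambda p: score(p, question))
    prompt = (
        "Answer the question using only the passage below.\n\n"
        f"Passage: {best}\n\nQuestion: {question}"
    )
    return ask_model(prompt)

print(chat_with_documents(
    ["The maintenance policy requires quarterly inspections of all pumps."],
    "How often are pump inspections required?",
))
```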
I admit I sometimes struggle with the idea of how much I am personally contributing to this 'monster'. The best answer I can give myself at the moment is that as long as AI is just a tool to do certain tasks better, just like you would use a calculator (except that this is a 'big bad*ss' calculator), then it's neither good nor bad; it's just a tool, and it depends on what you use it for. But if you use it to take over areas of human life and experience not proper for a tech tool - things like making life decisions, religion, morality, emotions, politics, personal relations, etc. - then it's a big and dangerous irresponsibility. Also, playing God by trying to recreate a consciousness - do we really need that? I don't think so. At the same time, I find it fascinating to learn and understand how it works 'under the hood'. I'm at quite an early stage of that, but I already know it is mostly statistical models crunching massive amounts of data at incredible speeds.
Anyway, that's one reason I've been thinking about this AI business a lot lately.
Recently, my wife started reading McGilchrist's 'The Master and His Emissary', and she has been telling me a lot about the functions of the left and right hemispheres of the brain. And it occurred to me that when people compare the human brain with AI, they are making the mistake of ignoring that AI really works in ways similar to the left hemisphere of the brain, not the right hemisphere. Things like logic, math, language, paying attention to specific tasks, abstraction - those are associated with the left hemisphere. But things like understanding context, creativity, reading facial expressions, getting an overall sense of the human experience, being grounded in the senses, seeking answers beyond what's immediately obvious, making connections out of implied information - those are associated with the right hemisphere. And AI, as far as my understanding goes, has nothing similar to that.
For example, when a Large Language Model (AI) maps a concept like 'dog' or 'cat', it really knows nothing of the experiential knowledge we as human beings have of doggies or kitties. Instead, for 'dog' it stores a long list of numbers (the length varies, but some models use around 1,500 numbers per word or even per phrase), each one a decimal number, positive or negative, typically written with six or so digits. These numbers represent a 'vector': with 2 numbers you can plot it as an arrow on a flat 2D graph, with 3 numbers as an arrow in 3D space, and beyond that we can no longer picture the graph, but the math still applies. The model then determines the relations between concepts by calculating the cosine of the angle between their vectors, 'mapping' where each one sits relative to the others. So if you ask the model whether 'wolf' resembles a 'cat' or a 'dog' more, it applies the math and knows which one is more similar. But we, human beings, only need to remember our life experience of dogs, cats and wolves, and can easily see which one resembles which.
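To make that concrete, here is a toy version of the same calculation. The three-number 'vectors' below are made up for illustration (real models learn hundreds or thousands of dimensions from mountains of text), but the cosine math is the actual comparison being described.

```python
import math

# Made-up 3-dimensional "embeddings" for illustration only; real models use
# hundreds or thousands of learned dimensions, but the comparison is the same.
vectors = {
    "dog":  [0.9, 0.7, 0.2],
    "cat":  [0.3, 0.9, 0.8],
    "wolf": [0.8, 0.6, 0.3],
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means pointing the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

for animal in ("dog", "cat"):
    print(f"wolf vs {animal}: {cosine_similarity(vectors['wolf'], vectors[animal]):.3f}")
```

With these invented numbers, 'wolf' comes out closer to 'dog' than to 'cat' - which is exactly the kind of comparison the model makes, except on vectors it learned from data rather than ones a person typed in, and with no memory of ever having met a dog.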
Now, Elon Musk, as much as I think he is a well-meaning guy, has some misguided ideas regarding AI. Recently he announced xAI, a project for which he gathered a bunch of AI geniuses. Musk says his goal is to make a 'good' AI, and the way to make it good is to make it curious about reality, so that it will naturally find human beings interesting to learn about and will therefore not consider destroying them. Also, he hopes that his AI will find some fundamental answers about the meaning of the Universe or something like that. And here is where I think there is more than one problem. The first is, like I said above, that giving such tasks to AI is irresponsible, because I believe that we, as sentient beings, have a God-given responsibility to make sense of reality and of our own lives ourselves. That's probably the reason God created us in the first place - to 'know God'. If a machine is going to do that for us, we might as well be dead. Second, there's the problem that AI can only think like the left hemisphere of the brain - no creativity, no life experience, no lateral thinking, etc. So it is doomed to fail.
And here I can only think of that famous, funny and insightful passage from 'The Hitchhiker's Guide to the Galaxy' in which an alien race asks a supercomputer for the answer to the meaning of life; the supercomputer takes millions of years to calculate it, and when the aliens come back the computer says the answer is... "42".