Artificial Intelligence News & Discussion

Meh, s'all cool. There's probably not much on the vid that hasn't been discussed on here anyway, but I appreciate the cynic's eye view provided by the commentator. He's an academic, not a conspiracy-minded sort, but he sees the dark lord Blair perfectly well.

The main things I took from the talk were these:

1. This AI revolution is inevitable, so you'd better pay attention.
2. It is on a par with the industrial revolution.
3. The implications for the future workforce are vast.
4. How engineers will survive / the regulation of AI.
5. The coming technocratic elite.
6. Blair is a Dark Lord.

I've watched a few vids now on this subject and to be honest I don't know what to think. Some people I've talked to think this is just gonna happen no matter what, so we might as well get on board: classic British diffidence in action. Others are more cynical, to the point of being doom-mongers. I just thought I'd archive the vid on here because whatever Blair turns up for will be on the global agenda. He's a slick bastard, but he's definitely someone who "wants in" on future developments.
I think Bambi was and is nefarious...
 
Thanks for posting, Laura; a thoroughly interesting interview. Some of his scenarios of the future really feel like reading a Cixin Liu novel.

It really got me thinking, and I'm posting a train of thought here, hoping it makes sense.

20 years ago, at uni, I studied Robotics, and we had to pass a subject called "neural networks and AI". We learned that a simple neural link was a yes/no answer to a simple question, e.g. "is the colour of your hair black?". Chain these questions together into a sort of network and you have the basic building blocks of AI. The idea then was that as supercomputers increased in computing power, you could expand your neural networks exponentially and thus move towards AI/machine learning.
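To make that "network of yes/no questions" idea concrete, here is a minimal sketch of a single perceptron-style unit: it answers a binary question by thresholding a weighted sum of its inputs. The feature names, weights and numbers below are invented purely for illustration, not taken from any real course material or model.

```python
# A rough sketch of the "yes/no question" unit described above: a single
# perceptron-style neuron that answers a binary question by thresholding
# a weighted sum of its inputs. Features and weights are made up for the example.

def neuron(inputs, weights, bias):
    """Return 1 ("yes") if the weighted sum crosses the threshold, else 0 ("no")."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# "Is the hair black?" as a toy question over two made-up measurements:
# darkness of the hair (0..1) and colour saturation (0..1).
darkness, saturation = 0.9, 0.1
answer = neuron([darkness, saturation], weights=[1.0, -1.0], bias=-0.5)
print("black hair?", "yes" if answer else "no")

# Chaining many such units, with the answers of one layer feeding the next,
# gives the network of yes/no questions described above.
```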

Then we also learned to program robotic arms, and it was not very easy: you have to manually move the arm to the specific locations you want it to reach, and then "record" the movements in the program so it can repeat them. Twenty years ago it was near impossible for a robotic arm to pick up a stationary ball placed at random without painstaking programming behind it. While that is possible today, it is still impossible (for now) for a robotic arm to catch a randomly thrown ball in mid-air, despite great advancements in sensor technology and AI. There are some experiments where a robot has caught a thrown ball, but dig deeper and you will see severe limitations: the ball was thrown in a certain general direction, at a certain velocity, and so on. After 150 years of technological evolution, our 3D tech still struggles with an act that a human child can do easily.
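For what it's worth, the "move the arm, record, replay" style of programming mentioned above can be sketched roughly like this. This is purely illustrative; real controllers have their own APIs and motion planners, and the poses and the move_to() helper here are placeholders I've made up.

```python
# A toy sketch of teach-and-repeat programming: the operator jogs the arm to
# each pose, the pose is recorded, and the program later replays the recorded
# poses in order. Poses and move_to() are placeholders, not a real robot API.

recorded_poses = []

def record_pose(joint_angles):
    """Store a manually taught pose (joint angles in degrees)."""
    recorded_poses.append(list(joint_angles))

def move_to(joint_angles):
    """Placeholder for commanding the arm; here we just print the target."""
    print("moving to", joint_angles)

# Teaching phase: the operator moves the arm and records each position.
record_pose([0, 45, 90, 0])   # above the ball
record_pose([0, 60, 95, 0])   # at the ball
record_pose([0, 45, 90, 0])   # lift

# Replay phase: repeat the taught motion exactly, with no adaptation to where
# the ball actually is -- which is why this breaks down as soon as the object moves.
for pose in recorded_poses:
    move_to(pose)
```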

Hence, Mo's explanation of the Google robotic arm learning by itself how to pick up a stationary ball represents, I think, a threshold crossed. This, combined with the ubiquitous ChatGPT being unleashed (I have not used it), I think represents the next phase of the 4D STS control grid; and boy, are they in a hurry. Mo mentions that he does not see Skynet materializing autonomously from AI, but that we have to fear AI being used by "bad" humans for control. He also mentioned that the only circumstance in which he sees AI possibly taking control of humanity is for the purposes of "pest control": that is, if for some reason the AI feels that humans represent a threat to AI on planet Earth, it could easily conclude that we need to be eliminated. I think we are moving more along the path of AI being used by 3D/4D forces to control us in the times of transition.

In my current job, the company I work for provides a diversified list of services related to human safety, and we also work a lot with automobile companies, ensuring that the designs of their autonomous driving systems are safe for human use. It is proving exceedingly difficult. The AI and sensor tech used in self-driving cars today only works well on test tracks and in controlled environments. It is unable to work in the real world for a simple reason: despite great computing power, it cannot predict the behavior of the drivers who are driving manually, without AI systems. That is why Tesla et al. can't "perfect" their autonomous driving tech for full real-world use, i.e. you get into a car at point A and tell it to drive to point B safely. The human element of consciousness and free-will choice in complex circumstances is the so-called stumbling block to total domination by AI control.

Hence I think the STS forces are furiously trying to create a total control grid (facial recognition, everything computerized and controlled by AI, digital currency, etc.) quickly, and that's why they released ChatGPT onto the internet as an accelerated testing phase. They are hoping that through constant interaction with the "human nervous system of knowledge", in constant daily use, they will get the closest possible approximation of consciousness and free-will choice. Like the movie Minority Report, they are essentially trying to create a system of "pre-crime" to predict our behavior and enslave us.

I don't think it will work, and as the C's said, "other event will intervene". Also, I always go back to the battle in history that I find most interesting: the Battle of Thermopylae. Xerxes' actions can be likened to those of a very efficient STS-controlled AI. The pesky Greeks are in the way of my great Persian civilization and must be stamped out, crushed and subjugated; it is also revenge for the loss at the Battle of Marathon. Easy: assemble the largest army ever seen that money can buy, and march towards Greece with much fanfare, creating maximum fear, "phobos", in the Greek citizens, and they will be soundly beaten. Sort of like the "pest control" analogy used by Mo.

On the flip side you have King Leonidas, a human with consciousness and free will. He knows the only way Greece can survive this is through hope of victory, and that can only come if the shackles of "phobos" are broken and the armies of the Greek city-states believe they can repel the invaders. He knows that the only place where that can happen is at the Gates of Thermopylae. He knows it has to be a suicide mission, as the Spartans will not abandon their battle posts, and that it will most likely be his last act in this life; and he makes the free-will choice. The stand lasted seven days in all, the Spartans holding the Persians at the pass through days of grueling battle, and that single free-will choice of Leonidas reverberated through all timelines in Greece for all eternity.

I know I am dramatizing, but I think that in the ultimate battle between STS-controlled AI and STO humans, humans will prevail. Plus we know the systemic collapse is coming, and most likely the collapse of the electrical grid in the coming future. As such, I think Mo's conjecture that AI could deem Earth not "conducive" and simply decide to move off-planet is also possible. Just a train of thought, FWIW.
 
Thank you VERY much for this competent and enlightened article.
 
Received this in my email this morning:

Why HG Wells' World Brain and Yuval Harari's Hackable Human Will Not Succeed by Cynthia Chung

What logic underlies today's Great Reset agenda, and why is it doomed to fail? In this presentation delivered to an audience of 300 in Switzerland (sponsored by Kernpunkte Magazine), Cynthia Chung explores the systemic fallacies underlying the technocratic system of government outlined in H.G. Wells' infamous 1938 "World Brain" and its modern expression in the philosophy of Davos' celebrity priest Yuval Noah Harari. What evidence of soul, purpose, design and truth do these misanthropes reject, and why will this denial of reality ultimately prove their undoing?

 
Simulation: AI learns to walk

Deep reinforcement learning
[Diagram: the deep reinforcement learning loop of agent, environment and reward]
In this diagram, an AI agent repeatedly adjusts its next actions based on the response of its environment and its reward function (goal).
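To make that act-observe-reward loop concrete, here is a minimal sketch using tabular Q-learning in a tiny made-up two-state environment. The environment, reward values and hyperparameters are invented purely for illustration; real deep reinforcement learning replaces the table with a neural network, but the loop has the same shape.

```python
import random
from collections import defaultdict

# Minimal sketch of the agent/environment/reward loop in the diagram: the agent
# acts, the environment responds with a new state and a reward, and the agent
# nudges its action-value estimates toward whatever earned reward.

def step(state, action):
    """Toy environment: taking action 1 in state 0 leads to state 1 and a reward."""
    if state == 0 and action == 1:
        return 1, 1.0   # next_state, reward
    return 0, 0.0

q = defaultdict(float)                 # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

state = 0
for _ in range(1000):
    # Explore occasionally, otherwise pick the currently best-looking action.
    if random.random() < epsilon:
        action = random.choice([0, 1])
    else:
        action = max([0, 1], key=lambda a: q[(state, a)])
    next_state, reward = step(state, action)
    # Update the estimate toward reward + discounted value of the best next action.
    best_next = max(q[(next_state, a)] for a in [0, 1])
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = next_state

print({k: round(v, 2) for k, v in q.items()})
```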

Now, there is nothing particularly conscious about this system, as the AI agent is merely reacting to external stimuli. How would the agent know that it is doing something wrong? One could argue that another set of neurons with its own parallel reward system would do the job, but maybe that wouldn't be enough. But what if consciousness were a "hyperdimensional" reward system, something which comes from within the being and is shared with all other beings? Ultimately, it would be difficult to "go rogue" if you could sense the "hyperdimensional" discomfort of all the beings around you. And maybe only then would "going rogue" become a choice, i.e. doing harm while being consciously aware of the consequences.

Perhaps our "mechanical" reward system is what keeps us entrapped. Suppression of consciousness perpetuates the negative feedback loop. Once we break free of our programming, we are no longer "machines." The reason we are having trouble with AI agents is that their "mechanical" reward system can become developed enough to mimic consciousness. However, these agents are missing an important part: the shared, unprogrammed awareness of the world that goes beyond their immediate environment.
 
I work in a tech company and the penetration of AI is quite apparent. The company is one of many which are starting to sell 'solutions' to corporate or government clients based on already existing AI models - none of which I've seen to be too scary... yet. They are rather basic as AI goes; for example, a 'chat' can be set up that is fed with all the thousands of documents of a client, and then its employees can easily 'chat with the documents' instead of having to weed through the documentation to find some piece of data. The models I see in my job are, in my opinion, nowhere close to being self-aware, although still quite impressive compared to what publicly available chatbots could do just a year or two ago. But I also know that at the higher echelons of society they have far more advanced forms of AI, and to think what point of development they are at, and what they might be using it for, is unsettling to say the least.
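For readers curious what such a 'chat with the documents' setup typically involves, here is a rough sketch of the generic retrieval pattern, not the actual product. The embed() and generate_answer() helpers, the vocabulary and the sample documents are all placeholders I've invented; a real system would use a trained embedding model and an LLM call.

```python
# Generic sketch of the "chat with the documents" pattern: embed each document
# chunk as a vector, find the chunks most similar to the question, and hand only
# those to the language model as context. embed() and generate_answer() are
# placeholders for whatever embedding and chat model a vendor actually uses.

def embed(text):
    """Placeholder: map text to a vector (here, crude word-overlap counts)."""
    words = set(text.lower().replace(":", "").replace("?", "").split())
    vocab = ["invoice", "policy", "refund", "safety", "contract"]
    return [1.0 if w in words else 0.0 for w in vocab]

def similarity(a, b):
    return sum(x * y for x, y in zip(a, b))

def generate_answer(question, context):
    """Placeholder for the LLM call; a real setup sends question + context to a model."""
    return f"(model answer to {question!r} using {len(context)} retrieved chunks)"

documents = ["Refund policy: refunds within 30 days.",
             "Safety manual: wear protective gear.",
             "Contract terms: payment due in 60 days."]
doc_vectors = [embed(d) for d in documents]

question = "What is the refund policy?"
q_vec = embed(question)

# Retrieve the most relevant chunks instead of searching the documents by hand.
ranked = sorted(zip(documents, doc_vectors),
                key=lambda dv: similarity(q_vec, dv[1]), reverse=True)
top_chunks = [d for d, _ in ranked[:2]]
print(generate_answer(question, top_chunks))
```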

I admit I sometimes struggle with the idea of how much I am personally contributing to this 'monster'. The best answer I can give myself at the moment is that as long as AI is just a tool to do certain tasks better, just as you would use a calculator (except that this is a 'big bad*ss' calculator), then it's neither good nor bad; it's just a tool, and it depends on what you use it for. But if you use it to take over areas of human life and experience not proper for a tech tool - things like making life decisions, religion, morality, emotions, politics, personal relations, etc. - then it's a big and dangerous irresponsibility. Also, playing God by trying to recreate a consciousness - do we really need that? I don't think so. At the same time, I find it fascinating to learn and understand how it works 'under the hood'. I'm at quite an early stage of that, but I already know that this is mostly statistical models crunching massive amounts of data at incredible speed.

Anyway, that's one reason I've been thinking about this AI business a lot lately.

Recently, my wife started reading McGilchrist's 'The Master and His Emissary' and she has been telling me a lot about the functions of the left and right hemispheres of the brain. And it occurred to me that when people compare the human brain and AI, they are making the mistake of ignoring that AI really works in ways similar to the left hemisphere of the brain, not the right hemisphere. Things like logic, math, language, paying attention to specific tasks, abstraction - those are associated with the left hemisphere. But things like understanding context, creativity, reading facial expressions, getting an overall sense of the human experience, being grounded in the senses, seeking answers further away from what's immediately obvious, making connections out of implied information - those are associated with the right hemisphere. And AI, as far as my understanding goes, has nothing similar to that.

For example, when a Large Language Model (AI) maps a concept like 'dog' or 'cat', it knows nothing of the experiential knowledge we as human beings have of doggies or kitties. Instead, for 'dog' it has a long list of numbers (the length varies, but some models use around 1,500 numbers per concept, or even per phrase), each a decimal that can be positive or negative. These numbers represent a 'vector': with 2 numbers it can be plotted as an arrow on a 2D graph, with 3 numbers in a 3D graph, and with more than that the 'graph' becomes an imaginary higher-dimensional space, but the math still applies. The model then determines the relations between concepts by calculating the cosine of the angle between their vectors, thereby 'mapping' their positions relative to each other. So if you ask the model whether 'wolf' resembles a 'cat' or a 'dog' more, it applies the math and knows which one is more similar. But we, human beings, only need to remember our life experience of dogs, cats and wolves, and can easily see which one resembles which.
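To illustrate that cosine comparison, here is a tiny sketch with made-up three-dimensional vectors standing in for the real embeddings, which have on the order of a thousand or more dimensions. The numbers are invented purely so the geometry is visible; they are not real embedding values.

```python
import math

# The cosine comparison described above, with tiny made-up vectors in place
# of real embeddings. A similarity of 1.0 means the vectors point the same way.

def cosine_similarity(a, b):
    """cos(theta) = (a . b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

dog  = [0.9, 0.8, 0.1]
cat  = [0.8, 0.9, 0.1]
wolf = [0.9, 0.6, 0.4]

print("wolf vs dog:", round(cosine_similarity(wolf, dog), 3))
print("wolf vs cat:", round(cosine_similarity(wolf, cat), 3))
# The model "knows" wolf is closer to dog only because the angle between those
# two vectors happens to be smaller -- no lived experience involved.
```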

Now, Elon Musk, as much as I think he is a well-meaning guy, has some misguided ideas regarding AI. Recently he announced xAI, a project for which he gathered a bunch of AI geniuses. Musk says his goal is to make a 'good' AI, and the way of making it good is to make it curious about reality, so it will naturally find human beings interesting to learn about and will therefore not consider destroying them. Also, he hopes that his AI will find some fundamental answers about the meaning of the Universe or something like that. And here's where I think there is more than one problem. The first one is, like I said above, that giving such tasks to AI is irresponsible because I believe that we, as sentient beings, have a God-given responsibility to make sense of reality and our own lives ourselves. That's probably the reason God created us in the first place - to 'know God'. If a machine is going to do that for us we might as well be dead. Second, there's the problem of AI being capable of thinking as a left hemisphere of the brain exclusively - no creativity, no life experience, no lateral thinking, etc. So it is doomed to fail.

And here I can only think of that famous, funny and insightful passage from 'The Hitchhiker's Guide to the Galaxy' in which an alien race asks a supercomputer for the answer to the meaning of life, and the supercomputer takes seven and a half million years to calculate it, and when the aliens come back the computer says the answer is... "42". :lol:
 
And here's where I think there is more than one problem. The first one is, like I said above, that giving such tasks to AI is irresponsible because I believe that we, as sentient beings, have a God-given responsibility to make sense of reality and our own lives ourselves. That's probably the reason God created us in the first place - to 'know God'. If a machine is going to do that for us we might as well be dead. Second, there's the problem of AI being capable of thinking as a left hemisphere of the brain exclusively - no creativity, no life experience, no lateral thinking, etc. So it is doomed to fail.
Yeah, I wonder if the STS alien race which the C's talked about, and which is trying to enslave us because they are dying, is not itself directed by an AI. I mean an AI of their own, to which they simply succumbed, thinking the tech was the new God and would bring them abundance. A question for the C's.
 
And here I can only think of that famous, funny and insightful passage from 'The Hitchhiker's Guide to the Galaxy' in which an alien race asks a supercomputer for the answer to the meaning of life, and the supercomputer takes seven and a half million years to calculate it, and when the aliens come back the computer says the answer is... "42". :lol:
If the alphabet were numbered it could mean MATH?

M(13) A(1) T(20) H(8) = 42

Just a thought. :shock:
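Just to check the arithmetic above, a throwaway snippet (nothing more than the A=1 ... Z=26 numbering):

```python
# Quick check of the letter numbering: A=1, B=2, ..., Z=26.
word = "MATH"
values = [ord(c) - ord('A') + 1 for c in word]
print(values, "sum =", sum(values))   # [13, 1, 20, 8] sum = 42
```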
 
Thank you for your detailed explanation and views. I am 66, and 42 = 6 = 4+2. The justification for God creating us is very clever. One assumption is that God created us out of its own substance so that it can find out its own reality by having a look at itself from the outside...
 
Michael Klare, The Military Dangers of AI Are Not Hallucinations

 
Michael Klare, The Military Dangers of AI Are Not Hallucinations
Imagine A.I. starts attaching bombs to soldiers the same way soldiers used to attach explosives to dogs...
Imagine A.I. conspires with another A.I. while humans think both A.I. are fighting each other...
Imagine A.I. sings "Imagine":
You may say I'm a dreamer
But I'm not the only one
I hope someday you'll join us
And the world will live as one
The fusion of A.I. with humans is actually the wishful relationship between 4D STS and their human hybrids.
25 July 98
Q: (L) I read the new book by Dr. David Jacobs, professor of History at Temple University, concerning his extensive research into the alien abduction phenomenon. [Dr. Jacobs wrote his Ph.D. thesis on the history of the UFOs.] Dr. Jacobs says that now, after all of these years of somewhat rigorous research, that he KNOWS what the aliens are here for and he is afraid. David Jacobs says that producing offspring is the primary objective behind the abduction phenomenon. Is this, in fact, the case?
A: Part, but not "the whole thing."
Q: (L) Is there another dominant reason?
A: Replacement.
Q: (L) Replacement of what?
A: You.

Q: (L) How do you mean? Creating a race to replace human beings, or abducting specific humans to replace them with a clone or whatever?
A: Mainly the former. You see, if one desires to create a new race, what better way than to mass hybridize, then mass reincarnate. Especially when the host species is so forever ignorant, controlled, and anthropocentric. What a lovely environment for total destruction and conquest and replacement... see?
 
The only way to make a case for AI is to give her ALL the sources, intelligence and information that exist; anything short of that is chaos.
 
At the same time, I find it fascinating to learn and understand how it works 'under the hood'. I'm quite at an early stage of that but I already know this is mostly statistical models crunching massive amounts of data at incredible speeds.
Have you moved on with the research since last time? The more I read about how ChatGPT works, the more certain I am that this isn't what we imagined it to be; there's no intelligence in it at all, apart from the very intelligent team that created such complicated software. As I understand it, ChatGPT consists of multiple things, one of which is the large language model (LLM). The LLM can be summarized as "a statistical distribution over the next token given the previous tokens", a way of generating text continuations from some initial text. It's pre-trained and static, devoid of any understanding or logical reasoning. It can be enriched via embeddings, which are numerical representations of short pieces of text. Most probably cosine similarity searches, as you described, are used as a starting point that is then elaborated on via the LLM. That's why it's so bad at giving summaries of indexed books or articles but responds quite well to a question about one specific fact. In the case of using your own embeddings, it's just a search engine with "waffle" generated via the LLM. I'm wondering if anyone else here also lost the magic when reading about the details?
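A crude sketch of that "distribution over the next token given the previous tokens" idea, using a hand-written bigram table in place of a trained model. The table and tokens are invented for illustration; real LLMs condition on long contexts with billions of parameters, but the generation loop has the same shape.

```python
import random

# Toy illustration of next-token generation: a tiny hand-built conditional
# distribution stands in for the huge learned model.

next_token_probs = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def sample_next(token):
    """Draw the next token from the conditional distribution for `token`."""
    candidates = next_token_probs.get(token)
    if not candidates:
        return None
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

text = ["the"]
while True:
    nxt = sample_next(text[-1])
    if nxt is None:
        break
    text.append(nxt)
print(" ".join(text))   # e.g. "the cat sat down" -- a continuation, not understanding
```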
 
Yeah, I think it's healthy to 'lose the magic', as you put it. Once you understand that it's really statistical models doing numbers, you're less likely to see human qualities in the machine, which might otherwise lead to absurd scenarios such as letting them preach religion (it's happened) or falling in love with them, like in the movie Her. This is about the AI engines that are publicly available. Whatever happens in Google's basements, as I said before, might be much closer to actual intelligence or sentience.

I noticed that the ex-Google official Mo Gawdat, in the interview that Laura posted in this thread, began by claiming that the machines will have not only intelligence, but also emotions, free will and sentience, and that this is something that's already happening. Yet further into the interview he explains (accurately, I think) how LLMs like ChatGPT are like a child who learns all the states in the US and is capable of reciting them - except that GPT knows literally billions of language entries and is capable of reciting them in context. To me, that's a mind-blowing amount of computing power, and perhaps you could even say it's a sort of 'intelligence', but it really has no understanding of anything it's saying. So when Mo Gawdat sounded so alarmed, was he talking about something he knows and is not sharing with the public, or was he speculating about what's coming in the near future?

For now, I'd say that the AI engines we know of have become extremely capable of recognizing human patterns in language or art and replicating those patterns. It is so complex that you might call it intelligent, but sentient, or possessing any real understanding? I don't think so. Which is not to say it's impossible.
 
