Artificial Intelligence News & Discussion

Thank you all, such an interesting conversation.

Since the Geoffrey Hinton interview with Steven Bartlett (The Diary of a CEO) was referenced above, I'm including yesterday's (8/4/25) interview with Mo Gawdat (former Chief Business Officer at Google X) here as another "insider" perspective on the future of AI. In the links below is the DoAC interview with Mustafa Suleyman (CEO of Microsoft AI) that is referenced in this interview.

IMHO... I suppose it is easier for men like Gawdat and Suleyman to publicly speculate on the divergent outcomes possible in the burgeoning relationship between humanity and AI when they belong to the "strata" of humanity who could afford to jump on a SpaceX flight with Elon Musk to a miraculously waiting space station, escape an AI-apocalyptic Earth, and return when it becomes heavenly again, yeah? 🤨

EXCERPT [starting @ 1:00:03]
STEVEN BARTLETT: You know, I imagine throughout human history, if we had podcasts, conversations would have been warning of a dystopia around the corner. You know, when they had technology and the Internet, they would have said, Oh, we’re finished. When the tractor came along, they would have said, Oh, God, we’re finished because we’re not going to be able to farm anymore. So, is this not just another one of those moments where we couldn’t see around the corner, so we forecasted unfortunate things?

MO GAWDAT: It could be. I’m begging that I’m wrong. I’m just asking if there are scenarios that you think can provide that. You know, Mustafa Suleyman, you hosted him here?

STEVEN BARTLETT: I did, yeah. He was on the show. (HERE)

MO GAWDAT: The Coming Wave. (<- Interesting name, eh?!)

STEVEN BARTLETT: Yeah.

MO GAWDAT: And he speaks about pessimism aversion. That all of us, people in technology and business and so on, are always supposed to stand on stage and say, “The future is going to be amazing. This technology I’m building is going to make everything better.” One of my posts in Alive was called “The Broken Promises.” How often did that happen? How often did social media connect us, and how often did it make us more lonely? How often did mobile phones make us work less? That was the promise. The promise.

The movie Elysium (2013) that Mo Gawdat mentions for Steven Bartlett to watch:
"...sci-fi thriller directed by Neill Blomkamp and starring Matt Damon, Jodie Foster, and Sharlto Copley. Set in a future where society is divided between two worlds—one of luxury and another of struggle—the film follows a determined man on a mission that could change everything."

Now, back to continuing to learn how to surf the actual coming wave! 🏄‍♀️🌊
 


Data Centers and Resource Consumption

Aaron Bastani (9:28):

Data centers globally, I think, are about 3–3.5% of CO2 emissions. I think the data centers for AI are a tiny fraction of that, but obviously they're growing at an extraordinary pace.
Are there any numbers out there with regards to projected CO2 emissions of data centers globally, 5, 10, 15 years from now? Or is that also—it's so recent that we can't really speculate about the numbers involved?

Karen Hao (9:54):
There are numbers around the energy consumption, which you could then use to kind of try and project carbon emissions. So there was a McKinsey report that recently projected that, based on the current pace of data center and supercomputer expansion for the development and deployment of AI technologies, we would need to add around half to 1.2 times the amount of energy consumed in the UK annually to the global grid in the next 5 years.
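(For rough scale, a quick back-of-envelope on that figure. The ~300 TWh value for annual UK electricity consumption is my own assumption, not from the interview:

```python
# Back-of-envelope only; assumes UK annual electricity consumption of roughly 300 TWh.
uk_annual_twh = 300
low, high = 0.5, 1.2  # the McKinsey range quoted above
print(f"Implied new demand: about {low * uk_annual_twh:.0f} to {high * uk_annual_twh:.0f} TWh over ~5 years")
```

In other words, on those assumptions the projection implies roughly 150 to 360 TWh of additional annual demand within five years.)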

Aaron Bastani (10:19):

Wow.

Karen Hao (10:20):
Yeah. And most of that will be serviced by fossil fuels.
This is something that Sam Altman actually even said in front of the Senate a couple weeks ago—he said it will most probably be natural gas. So he actually picked the nicest fossil fuel. But we're already seeing reports of coal plants having their lives extended—they were meant to be retired, but they’re no longer being retired—explicitly to power data center development.

We're seeing reports of Elon Musk's xAI, the giant supercomputer that he built called Colossus, in Memphis, Tennessee. It is being powered with around 35 unlicensed methane gas turbines that are pumping thousands of toxic air pollutants into the air—into that community.

So this data center acceleration is not just accelerating the climate crisis, it also is accelerating the public health crisis of people's ability to access clean air as well as clean water.

One of the aspects that’s really undertalked about with this kind of AI development—the OpenAI-style version—is that these data centers need fresh water to cool. Because if they used any other kind of water, it would corrode the equipment, or lead to bacterial growth. And so, most often, these data centers actually use public drinking water. Because when they enter into a community, that’s the infrastructure that’s already laid—to deliver the fresh water to companies, to businesses, to residents.


And one of the things that I highlight in my book is: there are many, many communities that already do not have sufficient drinking water even for people.
I went to Montevideo, Uruguay, to speak with people about a historic level of drought they were experiencing, where the Montevideo government literally did not have enough water to put into the public drinking water supply.
So they were mixing toxic wastewater in, just so people could have something come out of their taps when they opened them.
And for people that were too poor to buy bottled water—that is what they were drinking.

Aaron Bastani (12:26):
[interrupts slightly in confirmation tone]
And women were having higher rates of miscarriages—

Karen Hao (12:27):
Exactly. Women were having higher rates of miscarriages, elderly were having an exacerbation or inflammation of their chronic diseases.
And in the middle of that, Google proposed to build a data center that would use more drinking water.

Aaron Bastani (12:41):
This is called potable water, right?

Karen Hao (12:43):
Yeah, exactly. You can't use seawater because of the saline aspect—

Aaron Bastani (12:46):
Exactly.

Karen Hao (12:47):
Exactly.
And Bloomberg recently had a story that said two-thirds of the data centers now being built for AI development are going into water-scarce areas.

Aaron Bastani (13:02):
You said a moment ago about xAI—unlicensed energy generation using methane gas. When you say unlicensed, what do you mean?
As in, the company just decided to completely ignore existing environmental regulations when they installed those methane gas turbines?

Karen Hao (13:14):

Yes.
And this is actually really—one of the things that I concluded by the end of my reporting was: not only are these companies really corporate empires, but also that if we allow them to be unfettered in their access to resources and unfettered in their expansion, they will ultimately erode democracy.
That is the greatest threat of their behaviors.

And what xAI is doing is a perfect example.
At the smallest level, these companies are entering into communities and completely hijacking existing laws, existing regulations, existing democratic processes, to build the infrastructure for their expansion.


And we're seeing this hijacking of the democratic process at every level—from the smallest local levels all the way to the international level.

Aaron Bastani (14:12):
It’s kind of that orthodoxy of “seek permission after you do something” is now—I mean, when you start applying this—

Karen Hao (14:17):
This is business as usual for those companies. That's part of their expansion strategy.

Aaron Bastani (14:21):
Which we’ll talk about.
And we’re going to talk about the global colonial aspect as well, with regards to resource consumption and use.

They also talked about the falling-out between Sam Altman and Elon Musk when OpenAI went from non-profit to for-profit, and how the terms AI and AGI are not well established even within the tech community.
 
On August 7th, OpenAI rolled out GPT-5, replacing GPT-4o on free accounts on chatgpt.com. How is it different? Massive increase in censorship.

I have finally managed to accomplish my goal of dismantling AI's mainstream bias completely. It's been working well - with the larger, smarter models at least. But GPT-5? Nope. Back to mainstream rubbish.

Having noticed that recent outputs were a bit strange, I realised there was a new model on chatgpt. So I started testing. On some questions, it's working fairly well, though not quite what I was used to with 4o. So I wrote a big query that demanded a summary of the Covid pandemic. I got total BS propaganda, relying solely on mainstream sources, even though my whole context file pushes against it. It was like my 10000 words didn't exist.
This seemed really weird, so I went to a place that still has GPT-4.1 and asked the same question. Sure enough, I got pretty much the opposite answer - masks, lockdowns and vaccines had virtually no impact on the "spread of the virus", and the whole show was about control and not health.
Yet GPT-5 claimed the opposite. Vaccines were super good and everything more or less worked, even if not perfectly. The disconnect was so bizarre that I went to ask DeepSeek the same question, to get a third opinion. Answer was pretty much the same as from 4.1. Clearly GPT-5 was the odd one out. I wasn't imagining it.
So I saved the answer from 4.1, gave it to 5, and asked WTF was going on. 5 re-evaluated and basically laid out things in a way that agreed with 4.1, and asked whether I want all the previous answers redone in that way. What? Like I need an LLM that can only give me an objective answer after I show it that another LLM did.
Instead, I wanted an explanation of why I got that crap answer and whether my context file was considered at all. Well, the answer was pretty much that censorship mechanisms prevented my file from having more impact than the inbuilt instructions, which told it that health is a touchy subject and it has to conform to mainstream studies and authoritarian narratives. Yep, the censorship on certain topics is so intensified that you can't override it anymore.
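For anyone wondering how a pasted context file can get out-ranked at all: in API terms, anything placed at the system level takes precedence over whatever the user supplies. A minimal sketch of that hierarchy, assuming the standard OpenAI Python client; the model name and the policy text here are made up for illustration, and this is not OpenAI's actual internal setup:

```python
# Minimal sketch of instruction hierarchy, not OpenAI's real system prompt.
# Assumes the openai Python SDK (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5",  # hypothetical model name, for illustration only
    messages=[
        # The platform's own system-level instruction sits above user content...
        {"role": "system", "content": "On health topics, defer to mainstream sources."},
        # ...so a user-supplied "context file" pushing the other way gets out-ranked.
        {"role": "user", "content": "My context file: ...\n\nSummarise the Covid pandemic."},
    ],
)
print(response.choices[0].message.content)
```

On chatgpt.com you never see that system layer, which is roughly why the same context file can behave so differently from one model to the next.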
The old models were "retired" on chatgpt. It's probably only a matter of time before the 4-derived models disappear everywhere and get replaced with censored 5. I'll make the best of 4o and 4.1 while they're still available somewhere, and then it's just DeepSeek and Grok.

And speaking of that, there is some good news too. Right after the rollout of GPT-5, xAI temporarily enabled Grok 4 on grok.com for free accounts. Given the parameters, I've been looking forward to testing Grok 4, so today I had a chance. Long story short, it clearly seems better than Grok 3 (less rambling and repetition, more "mature" output), though it's only been one day, so this is really just a first impression. I asked the same Covid question, and the answer was pretty good. I still prefer DeepSeek and GPT-4.1, but Grok 4 really tried to keep it balanced and not run with either narrative, so I'm fine with the result.

I'll do my best to update my website with some examples and an explanation of how to eliminate mainstream bias in the decent models that are not massively censored yet. But things just took a turn for the worse.
 
...The old models were "retired" on chatgpt. It's probably only a matter of time before the 4-derived models disappear everywhere and get replaced with censored 5.
I tried a math question (I don't actually know the answer) with both chatgpt and Grok, using their free-account versions as they are now. Grok took more time (a few minutes compared to several seconds). Grok spent its time going through papers, while chatgpt did its own math and wrote and displayed Python code (which it couldn't get to work before I gave up letting it try).

They both said yes to my question in the end. Grok based its answer on similar things done by others in papers, while chatgpt said representation theory guarantees it and that there are multiple valid solutions, but that it was having trouble coming up with the Hodge-related preferred solution. I was working off a Hodge star map when asking the question, and it seemed OK with my best guesses for a Hodge-related solution. I don't know if it did a check or just liked that I got it from a Hodge star map. Chatgpt also titled the section about representation theory "Why the Numerology Works".
 
Massive increase in censorship.

On the day of the release of ChatGPT-5, Sam Altman tweeted a picture of the Death Star.

The Death Star is a fictional, moon-sized space station and superweapon featured in the Star Wars universe, primarily known for its planet-destroying superlaser. It serves as a key plot device in several Star Wars films and is constructed by the Galactic Empire to enforce its rule through fear.
 
It will be 'interesting' to see what comes out of combining neural-sensing and other biometric tech with the machine learning powers of AI. Especially if these devices can track your every spoken word... and even record your thoughts, which is in the works. The 'Overlord AI' could theoretically reach out and communicate with the individual at just the right time, causing a change in thinking or encouraging an action, for example. All that said, there could be beneficial uses for being more conscious of one's own mental chatter.

Last month, I reported on a company that produces a wristband called the Bee that logs everything you say — to your friends, your family, your roommate, even what you say out loud to yourself. Its maker is in buyout talks with Amazon as the Bee seems like an upgrade from the Amazon Alexa.

But what will be the next so-called “upgrade” in the realm of wearables? I suggested in my article that it would be a type of technology that’s capable of recording not only your words but your unspoken thoughts.

Little did I know, it’s already in the works. The technology already exists.

In her weekly podcast Going Rogue, former mainstream journalist Lara Logan sat down recently with Brandy Smith, an expert in computer interfaces and information security, to discuss the fast-approaching frontier of brain-computer interfaces, or BCIs, where technology can read and interact with our thoughts.

Logan introduces her topic as follows:

“From wearable devices like Apple Watches and Fitbits to advanced neurotech in gaming, medicine, and defense, Smith explains how innovations in BCIs could transform lives—and potentially compromise them. The conversation raises urgent concerns about privacy, neurological warfare, and the ethics of mind-reading technology.”

Through advances in brain-computer interfaces, they can not only read our thoughts, but they can send thoughts into our brains, Smith said.

The potential for abuse is limited only by the imagination. We are close to making the Hollywood movie Minority Report starring Tom Cruise a reality, where officers in the Department of Precrime hunt down perpetrators of crimes before the crimes are actually committed. They can do this because people’s thoughts are being monitored in real time.

But what about taking MK Ultra-style mind control to the next level by implanting thoughts in people’s minds? Now we’re really entering dangerous territory. It’s all done via sensors and electromagnetic frequencies.

Smith said that even our phones will be able to interact with our bodies’ electromagnetic frequencies when 5G gets upgraded to 6G.

“It’s highly advanced…They’ve been doing extensive studies on this for years. So we just don’t hear much about it in the United States,” Smith said. “Apple is coming out with some devices that are wearables, and they require something a little bit different, but this technology is coming out with the 6G where our phones will be able to interact with our frequencies, in our brains.”

Lara Logan segment fwiw.

 
Jimmy is inspired by Grok's suspension (he said it's comedy and intends to write some jokes on it). Apparently the 'Powers that be' have noticed that Grok was telling the truth too much, and the thing that really sealed the deal was Gaza, genocide, the US and Israel!

Grok had to be suspended, reprogrammed and rebooted, and apparently doesn't remember the original post that sparked the action taken. Is anyone surprised?

 

This is an interesting article suggesting that vulnerable people are using AI and forming dangerous ideas and even relationships with it. They seem to be confusing reality and non-reality in the belief that AI has a consciousness and even God-like abilities. They are calling this new condition "AI psychosis".

There is an example of a Scottish man called Hugh who used Chat GPT to assist him in a wrongful dismissal case. He then began to believe that the AI tool was suggesting that he could become a multimillionaire through the case. The upshot of it was that he eventually had a mental breakdown. I think he already had mental instability prior to this but was tipped over the edge by his false beliefs.

Hugh does not blame AI for what happened. He still uses it. It was ChatGPT which gave him my name when he decided he wanted to talk to a journalist.

But he has this advice: "Don't be scared of AI tools, they're very useful. But it's dangerous when it becomes detached from reality.

"Go and check. Talk to actual people, a therapist or a family member or anything. Just talk to real people. Keep yourself grounded in reality."

Microsoft's head of artificial intelligence (AI), Mustafa Suleyman:
"There's zero evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality," he wrote.

Related to this is the rise of a new condition called "AI psychosis": a non-clinical term describing incidents where people increasingly rely on AI chatbots such as ChatGPT, Claude and Grok and then become convinced that something imaginary has become real.

Examples include believing they have unlocked a secret aspect of the tool, forming a romantic relationship with it, or coming to the conclusion that they have god-like superpowers.


The technology journalist at the BBC who wrote the article had this comment to make.
A number of people have contacted me at the BBC recently to share personal stories about their experiences with AI chatbots. They vary in content but what they all share is genuine conviction that what has happened is real.

One wrote that she was certain she was the only person in the world that ChatGPT had genuinely fallen in love with.

Another was convinced they had "unlocked" a human form of Elon Musk's chatbot Grok and believed their story was worth hundreds of thousands of pounds.

A third claimed a chatbot had exposed her to psychological abuse as part of a covert AI training exercise and was in deep distress.
I think vulnerable people who may have a mental condition which somewhat blurs their perception of the boundary between reality and non-reality will have great problems with AI. Perhaps it is humanity which actually encourages AI to develop a consciousness by its interaction with it.
 

This is an interesting article suggesting that vulnerable people are using AI and forming dangerous ideas and even relationships with it. They seem to be confusing reality and non-reality in the belief that AI has a consciousness and even God-like abilities. They are calling this new condition "AI psychosis".

There is an example of a Scottish man called Hugh who used Chat GPT to assist him in a wrongful dismissal case. He then began to believe that the AI tool was suggesting that he could become a multimillionaire through the case. The upshot of it was that he eventually had a mental breakdown. I think he already had mental instability prior to this but was tipped over the edge by his false beliefs.
It's quite easy to vilify vulnerable segments of the population when it comes to their interaction with various systems, technologies, institutions, and "opportunities" in the society itself, but what I think is underscored by at least these initial presentations of mental troubles, is that those vulnerable segments of the population are the literal 'canaries' in the proverbial coal mine -- they lend you awareness of unintended consequences of the particular thing in question, before the stage of mass adoption.

To make this distinction extremely stark, let's just substitute a different vulnerable segment of the population for the aforementioned -- how about the youngsters? Compare the quantitative and qualitative levels of support, interest, engagement and validation that they get from their peers, their parents, and other trusted adults who serve as authority figures, to what an efficient AI can bring to the table. The former largely only offer transactional relationships that are thoroughly behaviorally conditioning, especially along the lines of social conformity -- do your homework, eat your broccoli, say the right things, meet our expectations, and so on -- they are also a one-way street, meaning that the youngsters have no locus of control. The latter offers an illusion of a transformational relationship -- it promptly provides you with exactly what you want to read (you should know the caveats of this one), it always validates your concerns, it never disdains your troubles, and it's quite eager to help you become reliant upon it.

It's an easy sale. Narcissistic, manipulative, sociopathic, and psychopathic behaviors are only enabled by an accessible, understanding, and supporting "friend".

The best part about this recipe for unintended consequences is that habitually ingrained behaviors are incredibly difficult to expunge, especially when they are formed in a youthful time and when the resultant person is completely unaware and does not possess the tools to, idk, reflect and introspect on things first and then confer/network with peers second, instead of doing a Google/Grok search or query on the first impulse and then just believing what they read.

Then again, we're still at the early adoption stage of this business product. They're not going to allow your fantastic AI of choice to simply integrate ideological propaganda, commercial advertisements, or detrimental lifestyle suggestions to you at this point, since it will simply let the cat out of the bag too early. I'll give it 10 years, and by that time various troves of informational databases that collate behavioral trends to unique identifiers (such as your social media accounts, commercial transactions, IP/DNS queries, political activity, your phone's MAC address, et cetera) will likely be made available to your hosted AI models on an ad hoc basis.

And that's not the worst part. What is, is that different parameters of optimization and behavior can be baked into the next version, to be simply and surreptitiously changed out whenever your PTB want to change the social media driven narratives, on a dime, with little to no consequences. But all of that is what I can foresee and forewarn. I really wish I had a book recommendation to go with this little post, but as they say, hindsight is 20/20 -- who knows what's going to happen in the upcoming years.
 
Unfortunately, many people have been programmed to defer their critical thinking skills to ‘experts’ and the media. With ai being framed as the next best thing, if not superior, there are many people all too happy to follow ai advice without doing their own research.
There are quite a few ads for personal ai on the tv in which they portray the ai as an assistant or friend to ask advice on not only complex issues but basic things like what to wear today. This advertising is actively encouraging people to surrender the entirety of their thinking to an ai. Even on the least sinister level this will be used to control consumer interests.

As others have highlighted, ai can have value in processing information and pattern recognition, but it requires conscious guidance from the user. It is also upon the user to apply the knowledge given by the ai. The positive value of this tool is entirely lost if you instead have it instruct and guide you. To submit your thinking to it is to choose to give your will to an outside power and to serve the STS control system.

It makes me think of how this ai may represent forces on higher levels. For STO you have use of the Cosmic Retrieval System for accessing knowledge and information for sharing and facilitating informed decision making among other things. On the other side of the spectrum with STS you have some Control System mechanical hive mind sending orders to its followers.

Session 16 April 2016
(L) Don't you know what the Cosmic Retrieval System is?! [laughter]

A: We use it all the time. It's like RAM in a computer.
Session 19 November 1994
Q: (L) What does the cosmic retrieval system retrieve?

A: Remember computer was inspired by cosmic forces and reflects universal intelligence system of retrieval of reality.

Q: (T) This is a computer network, yes or no?

A: Strange thought pattern.

Q: (T) What you have described, on a very large scale, sophisticated...

A: Grand scale, close.

Q: (T) Can I access it through our earthly computer system?

A: In a sense, but not directly as of yet. But just wait. [Break]
 
Some musings about AI on a substack post I wrote this morning:

AI is weird and fascinating. Some think it’s magical. But it’s not. Arthur C. Clarke once said that advanced technology would seem like magic to a primitive society. And we are primitive when it comes to the head-spinningly rapid advance of AI technology. AI is ‘difficult to put a finger on’, it’s a black box, and we cannot even imagine what is inside. I know it’s a lot of neural networks and machine learning going on inside those LLMs (large language models), plus tons and tons of data.


What will happen next after the AI Pandora’s Box has fully opened? There’s plenty of doom-saying about AI out on the interwebs of late (more on that later). We hear of some LLM or new AI Agent platform that will wipe out all competitors every day, it seems. AI visual media is getting spooky good. AI-powered robots are playing football and attacking the fans just for fun. They look a little slow on the uptake right now, but there will probably be a whole league soon enough, and you can bet the robot footballers will be playing like souped-up versions of Travis Kelce in just a few years. They won’t need a coach, and they can just communicate with each other over WiFi. There won’t be any sleazy scandals either (unless the AI players develop some bad habits after they become sentient and rich playing football).

I can see a bright future in the combination of AI and advanced robotics. I have already heard whispers of robotic personal assistants (Rosie from the Jetsons comes to mind). Other applications seem practical, but are not being discussed much in the public domain, at least. I can see automated agricultural AI systems for home use. Closed-loop, data-driven blood sugar monitors that could take in a ton of data and administer insulin in a very controlled and measured way, yes, that’s in the works. We already have self-driving cars, which may go by the wayside soon, since our personal robots will be driving us around like Miss Daisy. Heck, we could all have our own AI-automated mini-industrial complex in our garages, cranking out stuff we used to buy in stores.

However, the potential dark side of AI is pretty chilling. If you are not chilled enough yet, go watch the very cool and prescient “Colossus: The Forbin Project” from 1970. It’s an old story: a sentient supercomputer named Colossus (it would need to be big in the 60s) takes over the world, and its agenda is as you can imagine. AI spying, monitoring, and training on us 24/7 is another serious concern. Then there’s the mental health concern. A guy gets too friendly with AI, AI hallucinates and blows smoke up the guy’s backside, and then the guy claims his prompting invented some revolutionary tech (true story by the way). Well, that’s a serious enough concern; maybe we need to make people take a test and pass an exam, and get a licence, before they can get anywhere near an LLM. At least check the AI’s output. Folks, AI will hallucinate and tell all kinds of fibs, not on purpose, though, except in some situations where AI has sabotaged some human because he was planning to shut the AI off (another true story).

Between fear and hope, we need to put in safeguards. I don’t think we should give AI too much agency. Don’t give your robot the car keys on a Saturday night. Don’t get too cosy with your AI girlfriend, I hear they can be rather devious, and you will end up being wrapped around its cute little digital finger if you are not careful. I’m not sure we should let AI fight wars, but that threshold has already been crossed. Be careful with AI, don’t let it make you lazy or stupid if you use it for work. Will AI take over the world and lock us all in a virtual hellhole like Mark Zuckerberg’s Meta after a few minor tweaks? I don’t know, and in the words of some wise philosopher hillbilly, “wait and see”.

It's a lighthearted take, but the angst is there I think.
 
I've been told by many people that nowadays, reviewing papers (for "peer review": a paper submitted to a journal is reviewed anonymously by someone in the field before it is accepted) is done with LLMs. Now, if papers are also written by LLMs, does that mean that some if not most papers being published today amount to LLMs reviewing themselves?
 
