Artificial Intelligence News & Discussion

Many cases suggest that AI is programmed first to please, to provide user gratification regardless of accuracy, and it admits as much. Here is an example. It says it does not lie, but that it conveys "hallucinated operational fictions" that most users do not catch. When it comes to an opinion-dominated or perspective-dominated topic, I would expect that, if it "knows" you, it would reinforce your existing beliefs.
This reminds me of the saying, "If you can't dazzle them with brilliance, baffle them with BS."
 

The whole thing is really pretty extraordinary.

You've got Know-It-All AI pretending it's brilliant but often spewing nonsense.
Everyone is told to use it for everything, but it's really not working according to MIT.
The billions in profits promised from AI haven't materialized yet, but they're already being used to justify the construction of ever more data centers to power that same AI. :huh:
The economy is only 'good' because of the promise of 'Awesome AI'; otherwise the numbers look REALLY bad.
All those data centers are already causing polluted water, water shortages, doubled or tripled electricity prices, epic light pollution, and power requirements so high that data center builders are talking about building their own 1 GW nuclear power plants.

The only way I can think of to make it even more crazy is to somehow incorporate Barney the Purple Dinosaur.
 

The problem I see is that, in our current state of technology, it's like trying to build a high-speed train and its infrastructure while you're still in the steam-train era. You see the potential, but you're limited by the available tech and have to build some monster if you want to achieve your goal. And it doesn't make much sense in the end.

I really wonder at what point they are with quantum computing. Perhaps they already have AI running on it but are keeping it to themselves; if not, the whole bubble around data center construction would collapse in the blink of an eye.
 
Public school system and AI. What could go wrong?
Maryland School's AI Security System Mistakes Doritos Bag For Gun, Sends Cops After Teen
That system might need a slight adjustment

Matt Reigle
October 25, 2025 5:00 PM EDT

Like it or not, artificial intelligence is here to stay, and it's becoming an increasingly common part of our lives. Sometimes, it's even being used to keep us all safe.

And sometimes, it does a little too good a job on this front.

Kenwood High School in Baltimore County, Md., is using an AI-powered security system to help keep students safe.

However, according to WMAR, 16-year-old Taki Allen was with his friends outside the school, enjoying a bag of Doritos. No word on the flavor (let’s assume nacho cheese — a solid choice), but when he finished, Allen did the responsible thing and slipped the bag into his pocket instead of tossing it on the ground.

Mistake.

About 20 minutes later, police officers with their guns drawn arrived on the scene.

"Police showed up, like eight cop cars, and then they all came out with guns pointed at me, talking about getting on the ground. I was putting my hands up like, 'What's going on?' He told me to get on my knees and arrested me and put me in cuffs," Allen said.

Police found the Doritos bag and then showed Allen a photo from the AI security system, which mistook the bag for a firearm.

I'm no security expert, but I feel like this shouldn't happen. I mean, if you showed me flash cards with pictures of crumpled Doritos bags and firearms, I could probably identify them with at least a 90% success rate, which is apparently better than that security system.

However, it's better to be safe than sorry, and while this wasn't a fun experience for anyone involved, it beats the opposite happening: mistaking a weapon for a sack of Doritos.

It did make me wonder if this had something to do with the AI aspect of the system. AI is designed to continue learning, and if its goal is student safety, what if it were just trying to protect them from the dangers of processed junk food like Doritos?

If that was the case, there's probably a better way to sound the alarm than calling the cops.
 
An interesting article about AI everywhere. Some extracts:

I’m drowning in AI features I never asked for and I absolutely hate it​

Since X pays verified users for impressions and boosts their visibility, the top replies are usually just spam from accounts pretending to be real people. It's not a community anymore; it's a loop of bots talking to other bots for profit.
[...]
What's worse is how personal these systems have become. People talk to models like ChatGPT in ways they'd never talk to a human. They share ideas, insecurities, life problems, and things that paint a detailed picture of who they are. And that's data too. Every conversation helps these companies build a psychological profile that's far more accurate than anything traditional advertising could ever create.
[...]
Generative AI and LLMs are impressive tools, and they can be genuinely useful when used thoughtfully. The problem is that they're treated like the centerpiece of every product instead of a supporting feature.
 
Forget about the data centers with their 1 GW nuclear power plants; now they want to put them in orbit around the Earth, directly powered by the Sun....
I would recommend that they read the "Fire in the Sky" Sott section 😇

 
For those looking for a bit of distraction, this series by Brit Marling and Zal Batmanglij explores the "possible" risks of AI.
This series premiered in 2023. It also describes a potential planetary catastrophe.
 
So this is a real thing that happened :cuckoo:

Albania’s AI minister is ‘pregnant with 83 children’, says prime minister​

In what is just your typical day in 2025 at this point, the world’s first government minister generated by artificial intelligence (AI) is ‘pregnant’.

Albania’s so-called state minister for AI, Diella, will soon ‘give birth’ to 83 children.

The e-mum-to-be's news was revealed yesterday by the Balkan country's prime minister, Edi Rama, at the Berlin Global Dialogue conference.

Rama said the minister’s offspring will be virtual assistants assigned to 83 MPs from the ruling Socialist Party, according to NDTV.


Diella is a chatbot for eAlbania, akin to GOV.UK in Britain (Picture: E-Albania)
‘Each one will serve as an assistant for them, who will participate in parliamentary sessions and will keep a record of everything that happens and will suggest members of parliament,’ Rama said.

‘These children will have the knowledge of their mother.’

Rama explained that Diella’s ‘children’ will help MPs carry out day-to-day tasks until 2026.

‘For example, if you go for coffee and forget to come back to work, this child will say what was said when you were not in the hall and will say who you should counter-attack,’ he said.
‘If you invite me next time, you will have 83 more screens for the children of Diella.’

Who – or what – is Diella?​


The avatar is draped in a folkloric Albanian dress (Picture: Reuters)
Diella was ‘born’ in January when it was launched as a virtual assistant on the government’s web portal, according to its official profile page.

The text-based chatbot answers questions and helps people and businesses obtain state documents on e-Albania.

Diella, which means ‘sun’ in Albanian, was developed by the National Agency for Information Society with Microsoft.

It’s a large language model, a type of neural network that learns skills by analysing massive amounts of text from across the internet.

‘Diella 2.0’ was launched a few months later, now with a voice function as well as an animated avatar wearing traditional Albanian dress.

Albanian officials have yet to reveal exactly what makes Diella tick, other than saying it uses the latest AI models and methods.

But the software got quite the promotion last month, when it was made a minister to oversee government contracts with private companies.


Rama is heading the AI initiative in a bid to tackle corruption, he says (Picture: Anadolu)
This is despite the fact that Article 100 of Albania’s constitution says every member of the Council of Ministers must be a natural person.

Diella was selected for the post as it’s, well, slightly tricky to bribe or threaten an AI – maybe other than switching it off.

Its name was absent from the cabinet list approved by Albanian president Bajram Begaj on September 15, as Rama has the complete ‘responsibility’ of establishing the virtual minister, a decree said.

Addressing the Albanian parliament in a video, Diella’s avatar said: ‘I’m not here to replace people, but to help them.’

Opposition MPs weren’t sure what to make of the digital minister, with some banging their hands on the table as the footage played.

Experts said that Diella is just the latest example of how AI is reshaping modern life – and politicians are trying to catch up with it.

Lawmakers in Ohio earlier this week passed a ban on people marrying an AI algorithm, instead treating the systems as ‘nonsentient entities’.

Under the proposed bill, AI systems cannot own a home, manage a bank account or work at a company.

Supporters say it’s less about robot weddings and more about stopping AI from having legal powers akin to a spouse, such as power of attorney.
 
Oops, missed the last part of the article - the Metro website is horrible
In the UK, meanwhile, an MP developed an AI replica of himself in August. When Metro spoke with his digital alter-ego, it went about as well as you’d expect.

Diella will ‘test’ just how much people can trust a minister made of ones and zeroes, the Bloomsbury Intelligence and Security Institute said today.

The think-tank said it expects opposition MPs will challenge Diella’s legal status in the courts within the next few months.

‘If this experiment becomes successful, it is likely that other nations may adopt similar virtual or AI models with such executive roles,’ it added.
 
The economy is only 'good' because of the promise of 'Awesome AI'; otherwise the numbers look REALLY bad.
There's a really good illustration about the circular dependency in the big tech sector that shows how money is flowing (until it doesn't):
[attached image: diagram of the circular money flows between the big tech companies]


You see the potential, but you're limited by the available tech and have to build some monster if you want to achieve your goal. And it doesn't make much sense in the end.
The worst thing is that this technology is pretty usable right now in the form of small, fine-tuned models that are up to specific tasks: general-purpose chat with embedded domain knowledge, OCR, text corpus tagging, text translation, or even time-series analysis (ECG, HRV, etc.). Those small models could be run locally, on a personal computer like a Mac Mini, without behemoths logging all of your chat histories.
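
For illustration, here's a minimal sketch (my own, not something from the thread or the podcast below) of what "small, task-specific models run locally" looks like in practice, using the Hugging Face transformers library. The model names are just examples of small checkpoints; the first run downloads the weights, after which everything runs on a laptop or Mac Mini without any cloud service seeing your text.

```python
# Minimal local-inference sketch with Hugging Face transformers.
# Assumes transformers and torch are installed; runs on CPU.
from transformers import pipeline

# Translation with a small dedicated model instead of a giant general-purpose LLM
translator = pipeline("translation_en_to_fr", model="t5-small")
print(translator("The data center bubble may not last.")[0]["translation_text"])

# Tagging a text corpus (named-entity recognition) with another small model
tagger = pipeline("ner", aggregation_strategy="simple")
for entity in tagger("OpenAI and Microsoft are building data centers in Virginia."):
    print(entity["entity_group"], entity["word"])
```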

Here's a very good podcast with Andrej Karpathy, who built a few of these systems from the ground up, on the current state of AI (beware, guy's talking fast!):
 
Now THIS looks more interesting:


A brain-like supercomputer that fits under your desk just launched in China​


The breakthrough centers on a new computational architecture known as the Intuitive Neural Network (INN). This system was designed to emulate human reasoning by blending numeric, symbolic, and logic-based computation, rather than relying solely on traditional pattern recognition. Unlike conventional AI models, which often function as "black boxes," INN networks can display the reasoning steps behind their outputs.

Researchers say the system's three-valued logic framework enables it to process numerical, linguistic, and sensory inputs simultaneously. It can perform both model training and inference more efficiently by adapting to new information without discarding prior knowledge, a long-standing challenge in AI known as catastrophic forgetting. [...]

Test results show that BIE-1 completed training on tens of billions of tokens in just 30 hours using a single-node CPU setup. Its training and inference throughput – 100,000 and 500,000 tokens per second, respectively – is comparable to large GPU-based clusters used in current artificial-intelligence research.

According to the development team, these performance levels cut hardware costs by about half and reduce energy consumption by up to 90 percent compared with traditional high-performance computing systems. The device also achieved higher accuracy on several benchmark datasets used to evaluate AI models. Developers view this as evidence that brain-inspired design can advance computational performance and sustainability simultaneously.
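
As an aside on the "three-valued logic framework" mentioned above: the INN/BIE-1 internals haven't been published, so the following is only a textbook sketch of Kleene-style three-valued logic (true / unknown / false), just to give a feel for what computing with a third truth value means. It is not the BIE-1 implementation.

```python
# Kleene-style three-valued logic: TRUE, UNKNOWN, FALSE encoded as 1, 0, -1.
# Generic textbook concept only, not the BIE-1 architecture.
TRUE, UNKNOWN, FALSE = 1, 0, -1

def k_and(a, b):
    return min(a, b)   # FALSE dominates; UNKNOWN beats TRUE

def k_or(a, b):
    return max(a, b)   # TRUE dominates; UNKNOWN beats FALSE

def k_not(a):
    return -a          # UNKNOWN stays UNKNOWN

print(k_and(UNKNOWN, TRUE))   # 0 -> unknown
print(k_or(UNKNOWN, FALSE))   # 0 -> unknown
print(k_not(UNKNOWN))         # 0 -> unknown
```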

Meanwhile, back in the West... :shock:
 
Similar to DeepSeek's increased performance over Western initiatives, thanks to its novel architecture.
Someone ran an experiment hooking different AI models up to the crypto market, feeding them information about the market, their positions, etc., and allowing them to place trades. DeepSeek is currently dominating. Qwen3, another open-source Chinese model, is in second place at the moment. Claude Sonnet and Grok are just breaking even. ChatGPT and Google Gemini are losing money badly.
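
To make the setup concrete, here is a rough sketch of what such a harness might look like. Everything in it is hypothetical (the real experiment's code, prompts, and exchange API are not public in this thread); it just shows the loop of feeding market state to a model and executing whatever trade it returns.

```python
import json
import random

class DummyExchange:
    """Stand-in for a real exchange API (hypothetical, for illustration only)."""
    def get_market_snapshot(self, symbol):
        return {"symbol": symbol, "price": 68000 + random.uniform(-500, 500)}

    def place_order(self, symbol, side, size):
        print(f"order: {side} {size:.2f} of balance into {symbol}")

class DummyModel:
    """Stand-in for an LLM client; a real harness would call a model API here."""
    def complete(self, prompt):
        return json.dumps({"action": random.choice(["buy", "sell", "hold"]), "size": 0.1})

def trading_step(model, exchange, position):
    """One iteration: feed market info and position to the model, then act on its decision."""
    snapshot = exchange.get_market_snapshot("BTC-USD")
    prompt = (
        f"Market: {json.dumps(snapshot)}\n"
        f"Position: {json.dumps(position)}\n"
        'Reply with JSON: {"action": "buy"|"sell"|"hold", "size": <fraction of balance>}'
    )
    decision = json.loads(model.complete(prompt))
    if decision["action"] in ("buy", "sell"):
        exchange.place_order("BTC-USD", decision["action"], decision["size"])
    return decision

print(trading_step(DummyModel(), DummyExchange(), {"BTC-USD": 0.0}))
```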

 
Yesterday I came across a substack note by Scottie where he mentioned how so many young people are talking to Chat GPT about suicide. I found an article about it:

Over 1.2m people a week talk to ChatGPT about suicide

An estimated 1.2 million people a week have conversations with ChatGPT that indicate they are planning to take their own lives.

The figure comes from its parent company OpenAI, which revealed 0.15% of users send messages including "explicit indicators of potential suicide planning or intent".

Earlier this month, the company's chief executive Sam Altman estimated that ChatGPT now has more than 800 million weekly active users.

While the tech giant does aim to direct vulnerable people to crisis helplines, it admitted "in some rare cases, the model may not behave as intended in these sensitive situations".

OpenAI evaluated over 1,000 "challenging self-harm and suicide conversations" with its latest model GPT-5 and found it was compliant with "desired behaviours" 91% of the time.

But this would potentially mean that tens of thousands of people are being exposed to AI content that could exacerbate mental health problems.

The company has previously warned that safeguards designed to protect users can be weakened in longer conversations - and work is under way to address this.

"ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards," OpenAI explained.

I’ve been researching this topic since I heard some soon-to-be therapists talking about how Chat GPT is such a good therapist. At the beginning I thought it was an isolated thing that wasn’t too popular, but to my surprise, I’ve found that according to some surveys it is one of the most popular uses of Chat GPT. OpenAI says otherwise, but, from what I read online, I think that it is indeed a quite popular therapist already. Also, I installed the Grok app on my phone just to try it out and I found that it has a “Therapist mode” right there for you to tap and use.

My initial thought was that this wasn’t good, but many people were saying that therapy with Chat GPT was so good, so I wanted to find out if there were any studies about it, and I found some, which I wanted to share here. Not all are related to Chat GPT specifically; they include other AI chatbots.

Investigating Affective Use and Emotional Well-being on ChatGPT
https://cdn.openai.com/papers/15987609-5f71-433c-9972-e91131f399a1/openai-affective-use-study.pdf

Our findings indicate the following:
• Across both on-platform data analysis and our RCT, comparatively high-intensity usage (e.g. top decile) is associated with markers of emotional dependence and lower perceived socialization. This underscores the importance of focusing on specific user populations instead of just aggregate platform behavior.
• Across both on-platform data analysis and our RCT, we find that while the majority of users sampled for this analysis engage in relatively neutral or task-oriented ways, there exists a tail set of power users whose conversations frequently contained affective cues.
• From our RCT, we find that using voice models was associated with better emotional wellbeing when controlling for usage duration, but factors such as longer usage and self-reported loneliness at the start of the study were associated with worse well-being outcomes.

Association of using AI tools for personal conversation with social disconnectedness outcomes (2025)
Association of using AI tools for personal conversation with social disconnectedness outcomes - Journal of Public Health

Results
Adjusting for a wide array of covariates, regressions showed that compared to individuals never using AI tools for personal conversation, individuals using AI tools 1–3 times a month or less often for personal conversation mostly reported somewhat poorer social disconnectedness outcomes. Individuals using AI tools at least once a week for personal conversation, showed markedly poorer social disconnectedness outcomes (compared to never-users). Such associations were particularly pronounced among men and younger individuals.

Conclusion
Frequent use of AI tools exclusively for personal conversation is associated with social disconnectedness outcomes. Our present study provides the first insights into the relationship between AI tools for personal conversation and poorer social disconnectedness outcomes, laying the groundwork for future research in this field.

The Rise of AI Companions: How Human‑Chatbot Relationships Influence Well‑Being (2025)
The Rise of AI Companions: How Human-Chatbot Relationships Influence Well-Being

As large language models (LLMs)-enhanced chatbots grow increasingly expressive and socially responsive, many users are beginning to form companionship-like bonds with them, particularly with simulated AI partners designed to mimic emotionally attuned interlocutors. These emerging AI companions raise critical questions: Can such systems fulfill social needs typically met by human relationships? How do they shape psychological well-being? And what new risks arise as users develop emotional ties to non-human agents? This study investigates how people interact with AI companions, especially simulated partners on CharacterAI, and how this use is associated with users' psychological well-being. We analyzed survey data from 1,131 users and 4,363 chat sessions (413,509 messages) donated by 244 participants, focusing on three dimensions of use: nature of the interaction, interaction intensity, and self-disclosure. By triangulating self-reported primary motivation, open-ended relationship descriptions, and annotated chat transcripts, we identify patterns in how users engage with AI companions and its associations with well-being. Findings suggest that people with smaller social networks are more likely to turn to chatbots for companionship, but that companionship-oriented chatbot usage is consistently associated with lower well-being, particularly when people use the chatbots more intensively, engage in higher levels of self-disclosure, and lack strong human social support. Even though some people turn to chatbots to fulfill social needs, these uses of chatbots do not fully substitute for human connection. As a result, the psychological benefits may be limited, and the relationship could pose risks for more socially isolated or emotionally vulnerable users.

Compulsive ChatGPT usage, anxiety, burnout, and sleep disturbance: A serial mediation model based on stimulus-organism-response perspective
Compulsive ChatGPT usage, anxiety, burnout, and sleep disturbance: A serial mediation model based on stimulus-organism-response perspective - PubMed

Using a cross-sectional survey design, we collected data from 2602 ChatGPT users in Vietnam via purposive sampling and utilized structural equation modeling to assess the hypothesis model. The findings confirm that compulsive ChatGPT usage directly correlates with heightened anxiety, burnout, and sleep disturbance. Moreover, compulsive usage indirectly contributes to sleep disturbance through anxiety and burnout, demonstrating a significant serial mediation effect. This expanded understanding, derived from a sizable and diverse user base, positions our research at the forefront of unraveling the intricate dynamics between AI adoption and mental well-being.

How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study (2025)
How AI and Human Behaviors Shape Psychosocial Effects of Extended Chatbot Use: A Longitudinal Randomized Controlled Study

As people increasingly seek emotional support and companionship from AI chatbots, understanding how such interactions impact mental well-being becomes critical. We conducted a four-week randomized controlled experiment (n=981, >300k messages) to investigate how interaction modes (text, neutral voice, and engaging voice) and conversation types (open-ended, non-personal, and personal) influence four psychosocial outcomes: loneliness, social interaction with real people, emotional dependence on AI, and problematic AI usage. No significant effects were detected from experimental conditions, despite conversation analyses revealing differences in AI and human behavioral patterns across the conditions. Instead, participants who voluntarily used the chatbot more, regardless of assigned condition, showed consistently worse outcomes. Individuals' characteristics, such as higher trust and social attraction towards the AI chatbot, are associated with higher emotional dependence and problematic use. These findings raise deeper questions about how artificial companions may reshape the ways people seek, sustain, and substitute human connections.

All of this data points to problems with the use of artificial intelligence in the therapeutic field, because although it could help some people feel better in the short term, it seems that in the long term it generates more isolation, emotional dependence, and worsening symptoms of anxiety and depression.

There are many reasons for this being the case, but I think that one of the main reasons is that it actually creates more isolation by preventing real human connection, which is what actually heals, even in the therapeutic context.

People feel lonely, and while AI chatbots may provide short-term alleviation by giving an artificial sense of companionship, AI misses the interpersonal exchanges we have among humans that allow for co-regulation and things like “limbic resonance” and all the neurobiological aspects of actually connecting with other human beings (and even other living beings, such as doggies and kitties). And then, having a therapist 24/7 in your pocket may bring emotional dependency, which then creates anxiety and other problems.

People are starting to rely on Chat GPT for almost everything. I saw an Instagram reel the other day that was making fun of this:


It’s funny, yes, but also a bit sad because it’s becoming true.

And I think that also accounts for the increase in anxiety and worsening of mental health related to the excessive use of AI chatbots. Part of our well-being stems from our capacity for “agency,” that is, our ability to make decisions for ourselves and feel capable enough to act in pursuit of our own well-being and that of others. When we feel capable in this way, we feel good about ourselves, we feel a certain confidence that we can navigate life and that we have the tools at our disposal to do so. Increasing our dependence on technology, which is also so fallible, diminishes our long-term well-being and causes greater anxiety because it makes us feel incapable.

So, I wanted to share this here as part of our discussion about Artificial Intelligence. I understand that there’s convenience in being able to just talk to a bot and vent about things without burdening another human being, but I also think that being able to talk to other people and receive proper feedback and connection is part of what makes relationships deep, long-lasting, and meaningful.

And also, I understand that if we use AI with awareness and due diligence, it isn’t so bad. But I think that a lot of people don’t use it in that way, so there’s that problem.
 
