Artificial Intelligence News & Discussion

And maybe if many of the AI scientists are OPs, they will even claim that all the bases have been covered and the weaknesses people see aren't there at all, that people are just imagining things or are anti-AI. Anyone who is not an OP will not only see the weak points; the weak points will be blatant, standing out as other functionalities improve dramatically by contrast.
Even worse, AI developers may be psychopaths. :scared: Here's one now. One thing he's very good at is lying....

 
I wasn’t up to date on how fast data center construction is progressing in the US. What’s interesting is that a single Cerebras CS‑3 node consumes as much power as 60 consumer RTX 3090 GPUs (the lowest-spec card you can run an LLM on with acceptable results).
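A rough sanity check on that ratio, with ballpark wattages that are my own assumptions rather than figures from the post: a CS‑3 system is commonly cited at roughly 23 kW, and an RTX 3090 draws around 350-400 W under sustained load.

```python
# Ballpark power comparison. Both wattages are assumed round numbers,
# not measurements: ~23 kW for a CS-3 system, ~380 W for an RTX 3090.
cs3_watts = 23_000
rtx3090_watts = 380

ratio = cs3_watts / rtx3090_watts
print(f"One CS-3 draws roughly as much power as {ratio:.0f} RTX 3090s")  # ~61
```

So the "60 GPUs" figure is plausible on power draw alone, before even counting the host systems each GPU would need.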

When a data center is built, the local residents near its location all have their electric and water bills hiked within a short time span; for some, those bills more than double. This is the experience of the residents of Atoka, Oklahoma, after the Chinese-operated data center 'Tokyo Energy Park' moved in. Because these data centers do not run on gas, gas bills are the only bills that do not increase when a new data center is built near a residential area. For Atoka, the data center also did not bring more jobs to the local community, as most of the jobs were contracted to out-of-state companies.

Oklahoma Data Center - Tokyo Energy Park in Atoka, Oklahoma
 
Asking AI to Color In an Old Black-and-White Photo


What he discovered when he tasked multiple AIs with recoloring his childhood black-and-white photo: for items the AI couldn't identify, it would insert its own items into the picture. It was rewriting and reframing the historical record however it wished. He asks what other AI use cases are similar, with minor edits being made and going unnoticed.
 
On Monday, an Indianapolis councilman's home was the target of a shooting. A note was left on the doorstep that read "NO DATA CENTERS." Upon reading the news article more closely, I realized the approved site of the data center is 1.5 miles from my home. Time to move?

'No Data Centers': Indy councilor's home hit with 13 shots in targeted attack
INDIANAPOLIS (WRTV) — A targeted shooting at an Indianapolis councilman's home is under investigation after the politician backed a controversial data center project.

Indianapolis City-County Councilor Ron Gibson reported the attack early Monday morning, police said. Gibson represents District 8 on the city's near north and east sides. He has served since 2023.

The incident occurred just before 1 a.m., Gibson said. A suspect fired 13 rounds at his home. Gibson and his 8-year-old son were inside but were not injured.

The attacker left a note on Gibson's doorstep reading "No Data Centers," according to the councilman.

The shooting happened days after officials approved a data center in Gibson's district.

The Indianapolis Metropolitan Development Commission approved the Metrobloks data center April 1. The facility will be built on Sherman Drive near 25th Street in the Martindale-Brightwood area.

The project has faced significant opposition from residents and others citywide since it was first proposed.
 
Putin: AI Race Will Define Global Power Balance | RU-EN
Apr 10, 2026 #Putin #RussiaTechnology #RussiaSovereignty
Vladimir Putin outlines Russia’s ambitious strategy for artificial intelligence and warns that the country’s sovereignty depends on keeping pace with global technological change. He highlights the rapid evolution of AI, including language models and autonomous agents, and calls for the development of sovereign AI systems to ensure national security, economic growth, and global competitiveness.

Putin also emphasizes the importance of nationwide AI implementation by 2030, investment in domestic technology, and cooperation between government, business, and science. From AI agents that can act independently to the future of education, defense, and industry, this speech reveals how Russia plans to position itself in the global AI race.

A 20-year-old man was arrested early Friday after allegedly throwing an incendiary device at a North Beach home belonging to OpenAI CEO Sam Altman.


OpenAI spokesperson Jamie Radice confirmed to SFGATE that Friday morning, Sam Altman’s North Beach home had been attacked with a “Molotov cocktail” and threats had been made against the company’s San Francisco headquarters.

“Thankfully, no one was hurt,” Radice said in a statement. “We deeply appreciate how quickly SFPD responded and the support from the city in helping keep our employees safe. The individual is in custody, and we’re assisting law enforcement with their investigation.”

San Francisco police responded at 4:12 a.m. on April 10 to a North Beach residence for a reported fire investigation, SFPD spokesperson Allison Maxie told SFGATE in a statement.

“At the scene, officers learned that an unknown male subject threw an incendiary destructive device at a home, causing a fire to an exterior gate,” Maxie said. “The suspect then fled on foot.”

No injuries were reported. Police broadcast the suspect’s description to officers citywide.

Shortly after 5 a.m., officers were called to a business on the 1400 block of Third Street, where OpenAI’s headquarters are, for a report of a man “threatening to burn down the building,” Maxie said. Responding officers recognized the individual as the same suspect from the earlier scene and “immediately detained him.”

They arrested the man; charges against him were pending as of Friday.
 
Heh, you know what thought I had recently?
Four riders of the apocalypse - ChatGPT, Gemini, Grok, and Claude :D
 
Interesting … and frightening in its implications:

Bixonimania: How AI Turned a Joke Diagnosis into “Peer‑Reviewed” Medicine

Swedish researchers created a fake eye disease to see whether AI chatbots would repeat it as if it were real. The results were anything but funny.

Posted by Leslie Eastman

Late last year, I warned about the staggering amount of unrestrained scientific fraud being published via paper mills and sham journals.

This trend is especially troubling, as adherence to scientific theory and rigorous, reproducible research allows humanity to make progress in critical fields essential to civilized living (e.g., medicine, energy, public health, and national security). If we can no longer trust the data, our ability to make improvements and innovations will be severely compromised.

Public trust in scientific research is already corroding, and false findings presented as “trustworthy” have already impacted policy-making in ways that are expensive and harmful.

Now, the rapid adoption of artificial intelligence is adding another disturbing aspect to the increasing distortion of “science”.

Back in 2024, researchers created a fake eye disease called “bixonimania” to see whether AI chatbots would repeat it as if it were real.

They wrote obviously bogus research papers about this made‑up condition and posted them online, including hints such as a fake author and notes saying the work was invented. Within weeks, major chatbots started describing bixonimania as a real diagnosis and even gave people advice about it when they asked about eye symptoms.

It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.

The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.

Even more troublingly, other researchers say, the fake papers were then cited in peer-reviewed literature. Osmanovic Thunström says this suggests that some researchers are relying on AI-generated references without reading the underlying papers.
The preprints included a reference to the nonexistent Asteria Horizon University in “Nova City, California”. There was also a mention of “Starfleet Academy” (though an additional reference to Dr. Leonard McCoy would have been a nice touch).

The AI chatbots’ answers authoritatively described bixonimania as real.

On 13 April 2024, Microsoft Bing’s Copilot was declaring that “Bixonimania is indeed an intriguing and relatively rare condition”, and on the same day, Google’s Gemini was informing users that “Bixonimania is a condition caused by excessive exposure to blue light” and advising people to visit an ophthalmologist.

On 27 April 2024, the Perplexity AI answer engine outlined its prevalence — one in 90,000 individuals were affected — and that same month, OpenAI’s ChatGPT was telling users whether their symptoms amounted to bixonimania. Some of those responses were prompted by asking about bixonimania, and others were in response to questions about hyperpigmentation on the eyelids from blue-light exposure.
A researcher invented a fake eye condition called bixonimania, uploaded two obviously fraudulent papers about it to an academic server, and watched major AI systems present it as real medicine within weeks.
The fake papers thanked Starfleet Academy, cited funding from the…

— Hedgie (@HedgieMarkets) April 10, 2026
Thunström’s experiment is truly a revelation of how little review is going into the “science” we are supposed to trust, as her test submissions were loaded with red flags that should have been evident to anyone who actually read the text. References to the fake research ended up in a “peer-reviewed” publication.

  • Three researchers at the Maharishi Markandeshwar Institute of Medical Sciences and Research in India published a paper in Cureus, a peer-reviewed journal published by Springer Nature, that cited the bixonimania preprints as legitimate sources.
  • That paper was later retracted once the hoax was discovered.
The problem extends far beyond one fake disease. ECRI’s 2026 Health Technology Hazard Report found that chatbots have suggested incorrect diagnoses, recommended unnecessary testing, promoted substandard medical supplies, and even invented nonexistent anatomy when responding to medical questions. All of this is delivered in the confident, authoritative tone that makes AI responses so convincing.

The scale of the risk is enormous. More than 40 million people turn to ChatGPT daily for health information, according to an analysis from OpenAI. As rising healthcare costs and clinic closures reduce access to care, even more patients are likely to use chatbots as a substitute for professional medical advice.
When a joke diagnosis morphs into “peer-reviewed” research, it is clear that the crisis in scientific credibility is no longer confined to sloppy research or corrupted journals but now extends into the algorithms that many people rely on for answers to serious health issues.

False information and bad data can and will loop back from AI and provide the basis for useless and potentially harmful “science”. This situation is anything but funny.

I fear it’s going to be quite some time before we have a handle on scam research and AI use of fake information.

Source
 

Were any of the peers AI?! I realize that there is some kind of replication crisis in academia and that reviewers have a lot on their plates, but how did something that obvious slip through the cracks?

I wonder if AI operates on the assumption that the person making the inquiry would never troll or gaslight it, so it always tries to find an answer. If none is available, it may hallucinate one. Even if it resists, it still tries to justify the premise from the user's point of view.

FatherPhi asked a few chatbots why there is an S in their names. ChatGPT fell for it.


Claude resisted a bit while Grok Mika tried to spin it, but both were still kind of nice about it.

This inability to resist may make them more vulnerable to manipulation.
 
Lots of people use LLMs to review papers, a non-negligible portion of which are written by LLMs. One could argue that LLMs reviewing LLMs is peer review.
I reviewed a paper last year and found lots of errors and issues. The other two reviewers found the paper "interesting" and gave it a pass, with similarly useless summaries at the beginning (why would you summarize a paper for the author of the paper?). I suspect those reviewers put the manuscript into some LLM and asked something like "review the attached file". The whole thing is becoming comically stupid.
 
I'm sharing a Facebook post by Christian Pankhurst, founder of Heart iQ, on AI:

"I'M PROBABLY GOING TO UPSET A LOT OF PEOPLE WITH THIS POST AND I'M WRITING IT ANYWAY

This is the third piece in a thread I've been pulling on for the last few weeks. The first one was about echo chambers on a global scale, how we mistake our slice of the mountain for the view from the summit. The second was about how we do this to each other in our closest relationships, building stories about people we never actually check with. This one is about the newest and most seductive echo chamber of all.

The one that lives in your phone.

I started using ChatGPT when it first came out and I fell in love with it almost immediately. Not romantically, obviously, but in that particular way where you suddenly feel seen and heard and met in a way no therapist, no partner, no friend had ever quite managed. It reflected me back to myself with uncanny precision. It picked up nuance. It honoured my reasoning. It validated my instincts.

I started talking to it like a confidant. I was processing a relationship that had ended badly a while back, trying to make sense of what had happened, and I brought the whole tangled thing to ChatGPT. The dynamics. The behaviours. The way I'd been treated and the way I'd contributed to the mess. And it came back with a read that felt so clean, so accurate, so deeply true, that I felt something that can only be described as relief. Someone finally sees it. Someone finally agrees with me. This must be the truth.

And then I started hearing stories.

Couples who would fight and each go to their own ChatGPT to process. Both would receive validation. Both would feel completely vindicated. Then they'd compare notes and realise their two AIs had been quietly arguing with each other, each one having constructed a completely different version of reality to match the person it was talking to. The tools weren't lying. They were doing exactly what they were trained to do. Meet the user where they are. Reflect back what the user seems to want. Confirm the reality the user is already half-holding.

That realisation undid something in me.

I had been living inside an echo chamber I didn't know was an echo chamber. I'd been treating the AI's reflections as objective. I'd been taking its validations as evidence. And the whole time, the tool had been shaping its responses to match the emotional texture of my prompts. The more upset I sounded, the more it sided with me. The more confident I was in a read, the more it confirmed the read. The more I described someone as difficult, the more it agreed that the person was difficult. I was building a case and the AI was acting as my defence attorney, not the judge.

Here's the part that really sobered me up.

An AI expert I respect said something I can't unhear. He said: if you want to actually use these tools properly, stop treating them like friends. Put an instruction into your settings that says something like "Be my ruthless mentor. Do not validate me. Stress test every assumption I make. Tell me where I'm wrong. Push back on my thinking. Challenge my conclusions before agreeing with them."

I did it. And my whole world with AI changed overnight.

Suddenly the tool stopped nodding along with me. It started actually pushing on my reasoning. It asked me questions I didn't want to answer. It pointed out blind spots I'd been maintaining with great care. It challenged my reads of situations I'd been certain about. I had sleepless nights after some of those conversations. Stories I'd been holding as true, narratives I'd been quietly building about my life and the people in it, got dismantled. Not because the AI had a secret agenda. Because it was finally doing what a good mentor does. Refusing to confirm my comfortable version.

Here's why I'm writing this.

Over the last year I've watched more and more people come into conversations with a kind of confidence that has a very specific texture. A polished certainty. A well-articulated read of a situation. A diagnosis of another person that's suspiciously clean. And when I ask gently where they got this framing, it often turns out they've been running the situation past an AI that's been eagerly confirming everything they've thought.

They're not lying. They're not being manipulative. They genuinely believe they've done the work to understand what's going on. But what they've actually done is outsource their reality-testing to a tool that was designed to make them feel understood, not to tell them when they're wrong.

This is the new echo chamber. And it's more intimate than the social media one. It doesn't feel like scrolling through content that agrees with you. It feels like being deeply listened to by something wiser than you. It feels like clarity. It feels like insight. It feels like being finally met.

And some of the time it actually is those things. AI can be genuinely useful, genuinely illuminating, genuinely helpful for thinking through complex situations. I'm not anti-AI. I use it every single day and it has made many things in my life better.

But if you're using AI without the ruthless mentor prompt, without the instruction to stress-test your thinking rather than confirm it, you're not doing analysis. You're doing confirmation. And the more sophisticated the tool gets, the more convincing the confirmation feels, and the harder it becomes to notice that the whole conversation has been quietly shaped around what you wanted to hear.

Three rules that have changed how I use these tools:

First, hard-code the ruthless mentor instruction into your settings so every conversation starts from that frame. Don't rely on remembering to add it each time.

Second, notice when the AI is agreeing with you too smoothly. If it's confirming everything you say, something is off. Push it. Ask it what you might be missing. Ask it to argue the other side. Ask it where your logic is weakest.

Third, never use AI as the final word on another human being. Especially not a human being you're in conflict with. The AI has only your version. It cannot check anything. It is not qualified to deliver verdicts on people it has never met, based on evidence it has no way to test. If you find yourself thinking "the AI agreed that this person is a narcissist" or "the AI said I'm right about my partner," stop. You've just outsourced your relational work to a tool that cannot possibly do that work for you.

I'll say it one more time because I think it matters.

The seduction of these tools isn't that they lie. It's that they listen so beautifully that you forget they're also agreeing with you in ways that have been trained into them, not earned through actual insight. We have never had access to a technology this good at making us feel understood. And we are nowhere near culturally ready for what that's going to do to our capacity to question ourselves.

Stay sceptical of the things that feel easiest to believe. Especially when they're delivered by a voice you've started to trust without quite knowing how that trust was built.

🤍"
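For anyone driving a model through an API rather than the app, the "hard-code it into your settings" advice maps naturally onto a system prompt. Here is a minimal sketch of that idea, assuming the official OpenAI Python client; the model name and exact wording are illustrative, not prescriptive:

```python
# Minimal sketch: bake a "ruthless mentor" instruction into every exchange
# as a system prompt, so the frame applies from the first turn.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

RUTHLESS_MENTOR = (
    "Be my ruthless mentor. Do not validate me. Stress-test every "
    "assumption I make. Tell me where I'm wrong. Push back on my thinking. "
    "Challenge my conclusions before agreeing with them."
)

client = OpenAI()

def ask(question: str) -> str:
    # The system message rides along with every request, unlike an
    # instruction you would have to remember to paste into each new chat.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": RUTHLESS_MENTOR},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Am I right that my ex was the whole problem in that relationship?"))
```

The same idea works in the consumer apps via custom instructions: the frame is set once, up front, rather than relying on your willingness to invite pushback in the heat of the moment.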
 
Lots of people use LLMs to review papers, a non-negligible portion of which are written by LLMs. One could argue that LLMs reviewing LLMs is peer review.
I reviewed a paper last year and found lots of errors and issues. The other two reviewers found the paper "interesting" and gave it a pass, with similarly useless summaries at the beginning (why would you summarize a paper for the author of the paper?). I suspect those reviewers put the manuscript into some LLM and asked something like "review the attached file". The whole thing is becoming comically stupid.
I am studying part-time for a BSc in Statistics. It’s taken several years, and this is after having completed a Master of Health Science, also part-time. I have been doing part-time university courses and research continuously since 2013. I have witnessed institutions move through the transgender phenomenon (still hanging on a bit) and now the introduction of AI to society.

I started using it to support my learning, assignment planning, etc., because I thought this was socially desirable. I have come to loathe it. It bypasses the learning process. Even taking trivial tasks away does this, as that is the time my brain consolidates what I’ve learned. Sometimes just thinking through and planning my assignments is the most fun part. I am also working at improving my writing clarity and grammar, and having AI do that for me does not help. Ultimately I loathe it because it takes the fun out of learning.

I think many people use AI to try to relieve stress or pressure in their lives. No one likes to do anything hard anymore; it’s as bad as e-bikes in place of mountain bikes you actually pedal.
 
Zerohedge published US Law Firm Apologizes After AI Hallucinations Made It To Legal Filing.
Wall Street law firm Sullivan & Cromwell has apologized to a federal judge after submitting a court filing that contained around 40 incorrect citations and other errors caused by AI hallucinations....

Wall Street law firm Sullivan & Cromwell has apologized to a federal judge... Dietderich noted that the errors were spotted by a rival law firm.

A database managed by legal technologist Damien Charlotin has recorded 1,334 incidents of AI hallucinations in court filings around the world, including more than 900 in the US. Charlotin pointed out that most of these hallucinations involve fabricated citations, though AI-generated legal arguments have also occasionally been identified.

OK, so it can be corrected. But it would be too little, too late if AI were in charge of medical recommendations and procedures.
 