Artificial Intelligence News & Discussion

And maybe, if many of the AI scientists are OPs, they will even claim that all the bases have been covered and the weaknesses people see aren't there at all; that people are just imagining things or are anti-AI. Anyone who is not an OP will not only see the weak points, but those weak points will become blatant as other capabilities are dramatically improved by contrast.
Even worse, AI developers may be psychopaths. :scared: Here's one now. One thing he's very good at is lying...

 
I wasn’t up to date on how fast data-center construction is progressing in the US. What’s interesting is that a single Cerebras CS‑3 node consumes as much power as roughly 60 consumer RTX 3090 GPUs (about the lowest-spec GPU you can run an LLM on with acceptable results).
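As a rough sanity check on that ratio, here is a back-of-the-envelope sketch. The numbers are assumptions on my part, not from the post: a CS‑3 system is commonly cited at roughly 23 kW, and an RTX 3090's TDP is 350 W.

```python
# Back-of-the-envelope power comparison. Both figures below are assumed,
# commonly cited values, not numbers taken from the post above.
cs3_watts = 23_000   # assumed peak power draw of one Cerebras CS-3 system (~23 kW)
rtx3090_watts = 350  # assumed TDP of one consumer RTX 3090

ratio = cs3_watts / rtx3090_watts
print(f"One CS-3 draws as much as ~{ratio:.0f} RTX 3090s")  # ~66, same ballpark as "60"
```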

When a data center is built, local residents near its location see their electric and water bills hiked within a short time span; for some, those bills more than double. That has been the experience of residents in Atoka, Oklahoma, since the Chinese-operated data center 'Tokyo Energy Park' moved in. Because these data centers do not run on gas, gas bills are the only utility bills that do not increase when a data center is built near a residential area. Nor did the data center bring jobs to the local community in Atoka, as most of the work was contracted to out-of-state companies.

Oklahoma Data Center - Tokyo Energy Park in Atoka, Oklahoma
 
Asking AI to color in an old black-and-white photo


He describes what he discovered when he tasked multiple AIs with colorizing his childhood black-and-white photo: for items the AI couldn't identify, it would insert its own items into the picture, rewriting and reframing a historical record however it wished. He asks what other AI use cases are similar, with minor edits being made and going unnoticed.
 
On Monday, an Indianapolis councilman's home was the target of a shooting. A note left on the doorstep read "NO DATA CENTERS." Upon reading the news article more closely, I realized the approved site of the data center is 1.5 miles from my home. Time to move?

'No Data Centers': Indy councilor's home hit with 13 shots in targeted attack
INDIANAPOLIS (WRTV) — A targeted shooting at an Indianapolis councilman's home is under investigation after the politician backed a controversial data center project.

Indianapolis City-County Councilor Ron Gibson reported the attack early Monday morning, police said. Gibson represents District 8 on the city's near north and east sides. He has served since 2023.

The incident occurred just before 1 a.m., Gibson said. A suspect fired 13 rounds at his home. Gibson and his 8-year-old son were inside but were not injured.

The attacker left a note on Gibson's doorstep reading "No Data Centers," according to the councilman.

The shooting happened days after officials approved a data center in Gibson's district.

The Indianapolis Metropolitan Development Commission approved the Metrobloks data center April 1. The facility will be built on Sherman Drive near 25th Street in the Martindale-Brightwood area.

The project has faced significant opposition from residents and others citywide since it was first proposed.
 
Putin: AI Race Will Define Global Power Balance | RU-EN
Apr 10, 2026 #Putin #RussiaTechnology #RussiaSovereignty
Vladimir Putin outlines Russia’s ambitious strategy for artificial intelligence and warns that the country’s sovereignty depends on keeping pace with global technological change. He highlights the rapid evolution of AI, including language models and autonomous agents, and calls for the development of sovereign AI systems to ensure national security, economic growth, and global competitiveness.

Putin also emphasizes the importance of nationwide AI implementation by 2030, investment in domestic technology, and cooperation between government, business, and science. From AI agents that can act independently to the future of education, defense, and industry, this speech reveals how Russia plans to position itself in the global AI race.

A 20-year-old man was arrested early Friday after allegedly throwing an incendiary device at a North Beach home belonging to OpenAI CEO Sam Altman.

Open AI spokesperson Jamie Radice confirmed to SFGATE that Friday morning, Sam Altman’s North Beach home had been attacked with a “Molotov cocktail” and threats had been made against the company’s San Francisco headquarters.

“Thankfully, no one was hurt,” Radice said in a statement. “We deeply appreciate how quickly SFPD responded and the support from the city in helping keep our employees safe. The individual is in custody, and we’re assisting law enforcement with their investigation.”

San Francisco police responded at 4:12 a.m. on April 10 to a North Beach residence for a reported fire investigation, SFPD spokesperson Allison Maxie told SFGATE in a statement.

“At the scene, officers learned that an unknown male subject threw an incendiary destructive device at a home, causing a fire to an exterior gate,” Maxie said. “The suspect then fled on foot.”

No injuries were reported. Police broadcast the suspect’s description to officers citywide.

Shortly after 5 a.m., officers were called to a business on the 1400 block of Third Street, where Open AI’s headquarters are, for a report of a man “threatening to burn down the building,” Maxie said. Responding officers recognized the individual as the same suspect from the earlier scene and “immediately detained him.”

The man was arrested; his charges were pending as of Friday.
 
Interesting … and frightening in its implications:

Bixonimania: How AI Turned a Joke Diagnosis into “Peer‑Reviewed” Medicine

Swedish researchers created a fake eye disease to see whether AI chatbots would repeat it as if it were real. The results were anything but funny.

Posted by Leslie Eastman

Late last year, I warned about the staggering amount of unrestrained scientific fraud being published via paper mills and sham journals.

This trend is especially troubling, as adherence to scientific theory and rigorous, reproducible research allows humanity to make progress in critical fields essential to civilized living (e.g., medicine, energy, public health, and national security). If we can no longer trust the data, our ability to make improvements and innovations will be severely compromised.

Public trust in scientific research is already corroding, and false findings presented as “trustworthy” have already impacted policy-making in ways that are expensive and harmful.

Now, the rapid adoption of artificial intelligence is adding another disturbing aspect to the increasing distortion of “science”.

Back in 2024, researchers created a fake eye disease called “bixonimania” to see whether AI chatbots would repeat it as if it were real.

They wrote obviously bogus research papers about this made‑up condition and posted them online, including hints such as a fake author and notes saying the work was invented. Within weeks, major chatbots started describing bixonimania as a real diagnosis and even gave people advice about it when they asked about eye symptoms.

It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.

The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.

Even more troublingly, other researchers say, the fake papers were then cited in peer-reviewed literature. Osmanovic Thunström says this suggests that some researchers are relying on AI-generated references without reading the underlying papers.

The preprints included a reference to the nonexistent Asteria Horizon University in “Nova City, California”. There was also a mention of “Starfleet Academy” (though an additional reference to Dr. Leonard McCoy would have been a nice touch).

The AI chatbots' answers authoritatively described bixonimania as if it were real.

On 13 April 2024, Microsoft Bing’s Copilot was declaring that “Bixonimania is indeed an intriguing and relatively rare condition”, and on the same day, Google’s Gemini was informing users that “Bixonimania is a condition caused by excessive exposure to blue light” and advising people to visit an ophthalmologist.

On 27 April 2024, the Perplexity AI answer engine outlined its prevalence — one in 90,000 individuals were affected — and that same month, OpenAI’s ChatGPT was telling users whether their symptoms amounted to bixonimania. Some of those responses were prompted by asking about bixonimania, and others were in response to questions about hyperpigmentation on the eyelids from blue-light exposure.

A researcher invented a fake eye condition called bixonimania, uploaded two obviously fraudulent papers about it to an academic server, and watched major AI systems present it as real medicine within weeks.
The fake papers thanked Starfleet Academy, cited funding from the…

— Hedgie (@HedgieMarkets) April 10, 2026

Thunström’s experiment is truly a revelation of how little review is going into the “science” we are supposed to trust, as her test submissions were loaded with red flags that should have been evident to anyone who actually read the text. References to the fake research ended up in a “peer-reviewed” publication.

  • Three researchers at the Maharishi Markandeshwar Institute of Medical Sciences and Research in India published a paper in Cureus, a peer-reviewed journal published by Springer Nature, that cited the bixonimania preprints as legitimate sources.
  • That paper was later retracted once the hoax was discovered.

The problem extends far beyond one fake disease. ECRI’s 2026 Health Technology Hazard Report found that chatbots have suggested incorrect diagnoses, recommended unnecessary testing, promoted substandard medical supplies, and even invented nonexistent anatomy when responding to medical questions. All of this is delivered in the confident, authoritative tone that makes AI responses so convincing.

The scale of the risk is enormous. More than 40 million people turn to ChatGPT daily for health information, according to an analysis from OpenAI. As rising healthcare costs and clinic closures reduce access to care, even more patients are likely to use chatbots as a substitute for professional medical advice.

When a joke diagnosis morphs into “peer-reviewed” research, it is clear that the crisis in scientific credibility is no longer confined to sloppy research or corrupted journals but now extends into the algorithms that many people are now relying on for answers to serious health issues.

False information and bad data can and will loop back through AI and provide the basis for useless and potentially harmful “science”. This situation is anything but funny.

I fear it’s going to be quite some time before we have a handle on scam research and AI use of fake information.

Source
 

Were any of the peers AI?! I realize that there is some kind of a replication crisis in academia, and the reviewers have a lot on their plates, but how did something that obvious slip through the cracks?

I wonder if AI operates on the assumption the person making the inquiry would never troll or gaslight it, so it always tries to find an answer. If none is available, then it may hallucinate one. Even if it resists, it would still try to justify it from the user's point of view.

FatherPhi asked a few chatbots why there is an S in their names. ChatGPT fell for it.


Claude resisted a bit while Grok's Mika tried to spin it, but both were still kind of nice about it.

This inability to resist may make them more vulnerable to manipulation.
 
Lots of people use LLMs to review papers, a non-negligible portion of which are written by LLMs. One could argue that LLMs reviewing LLMs is peer review.
I reviewed a paper last year and found lots of errors and issues. The other two reviewers found the paper "interesting" and gave it a pass, with similarly useless summaries at the beginning (why would you summarize a paper for the author of the paper?). I suspect those reviewers put the manuscript into some LLM and asked something like "review the attached file". The whole thing is becoming comically stupid.
 