Artificial Intelligence News & Discussion

Found this on X today: the scary results of a first study on the use of AI (here ChatGPT) for writing texts, and the consequences on the brain after just 4 months.
It's an X thread, meaning that if you want to read everything the guy wrote, you need an X account to see the follow-up posts. Fortunately, there's a bot on X that can "unroll" such a series of consecutive posts into a single page (a kind of article), which is handy.

So here's the link to the first post:

and the link to the unrolled thread:

Some quotes:
Brain scans revealed the damage: neural connections collapsed from 79 to just 42.
That's a 47% reduction in brain connectivity.

When researchers forced ChatGPT users to write without AI, they performed worse than people who never used AI at all.

It's not just dependency. It's cognitive atrophy. Like a muscle that's forgotten how to work.

The productivity paradox nobody talks about:

Yes, ChatGPT makes you 60% faster at completing tasks.

But it reduces the "germane cognitive load" needed for actual learning by 32%.

You're trading long-term brain capacity for short-term speed.
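
As a quick sanity check on that percentage (taking the thread's 79 and 42 at face value), two lines of Python confirm the rounding:

before, after = 79, 42
print(f"{(before - after) / before:.1%}")  # 46.8%, which rounds to the quoted ~47%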



The last quote sums up the "danger" well.
Note that it concerns using AI for writing texts, i.e. letting the AI do your "thinking then writing" job. In a way it's not surprising, but I can't help seeing the consequences for the future adult generation, when you read that for roughly the past two years a vast majority of students around the world have been using AI to do their homework ... it's ... sad! Poor sacrificed generation; I see them as victims. I also can't help thinking again about the movie "Idiocracy".
 
You could take screenshots of the tweets and post them, or copy the text and post it for people who do not have an X account ;)
Yes, if we only use links to X, then if X is down or the tweet is deleted, the information is lost. I did that for some X posts, but when it's a thread of multiple posts, anyone who has an X account can reply to the first post with this command:
@threadreaderapp unroll
(here's the example someone did for this thread)

This generates an automated reply with a link to a separate URL that combines all the posts and does not require an X account; that's the second URL I gave.

Here's a copy/paste of the page, in quote mode (without the last post, which is advertising). The only thing missing is a short 32-second clip, which is pasted here as an image:

BREAKING: MIT just completed the first brain scan study of ChatGPT users & the results are terrifying.

Turns out, AI isn't making us more productive. It's making us cognitively bankrupt.

Here's what 4 months of data revealed:

(hint: we've been measuring productivity all wrong)
83.3% of ChatGPT users couldn't quote from essays they wrote minutes earlier.

Let that sink in.

You write something, hit save, and your brain has already forgotten it because ChatGPT did the thinking.
Brain scans revealed the damage: neural connections collapsed from 79 to just 42.

That's a 47% reduction in brain connectivity.

If your computer lost half its processing power, you'd call it broken. That's what's happening to ChatGPT users' brains.
Teachers didn't know which essays used AI, but they could feel something was wrong.

"Soulless."
"Empty with regard to content."
"Close to perfect language while failing to give personal insights."

The human brain can detect cognitive debt even when it can't name it.
Here's the terrifying part: When researchers forced ChatGPT users to write without AI, they performed worse than people who never used AI at all.

It's not just dependency. It's cognitive atrophy.

Like a muscle that's forgotten how to work.
The MIT team used EEG brain scans on 54 participants for 4 months.

They tracked alpha waves (creative processing), beta waves (active thinking), and neural connectivity patterns.

This isn't opinion. It's measurable brain damage from AI overuse.
The productivity paradox nobody talks about:

Yes, ChatGPT makes you 60% faster at completing tasks.

But it reduces the "germane cognitive load" needed for actual learning by 32%.

You're trading long-term brain capacity for short-term speed.
Companies celebrating AI productivity gains are unknowingly creating cognitively weaker teams.

Employees become dependent on tools they can't live without, and less capable of independent thinking.

Many recent studies underscore the same problem, including the one by Microsoft:
MIT researchers call this "cognitive debt" - like technical debt, but for your brain.

Every shortcut you take with AI creates interest payments in lost thinking ability.

And just like financial debt, the bill comes due eventually.

But there's good news...
Because session 4 of the study revealed something interesting:

People with strong cognitive baselines showed HIGHER neural connectivity when using AI than chronic users.

But chronic AI users forced to work without it? They performed worse than people who never used AI at all.
The solution isn't to ban AI. It's to use it strategically.

The choice is yours:
Build cognitive debt and become an AI dependent.
Or build cognitive strength and become an AI multiplier.

The first brain scan study of AI users just showed us the stakes.

Choose wisely.
Thanks for reading!
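
For readers unfamiliar with the jargon in the thread: "alpha" and "beta" refer to standard EEG frequency bands (roughly 8-12 Hz and 13-30 Hz). Analyses like the study's usually start from the signal power in those bands per channel; "connectivity" is then about how that activity couples across channels. Here's a minimal, generic band-power sketch in Python, my own illustration with scipy rather than the study's actual pipeline (the sampling rate and the random signal are placeholders):

import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 256                         # assumed sampling rate in Hz
eeg = np.random.randn(fs * 60)   # stand-in for one minute of a single EEG channel

# Estimate the power spectral density, then integrate it over each band.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs <= hi)
    return trapezoid(psd[mask], freqs[mask])

alpha = band_power(8, 12)    # the band the thread calls "creative processing"
beta = band_power(13, 30)    # the band the thread calls "active thinking"
print(f"alpha power: {alpha:.4f}, beta power: {beta:.4f}")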
 
A study from MIT comparing the brains of people writing with thought alone, with search engines, and with ChatGPT. The basic conclusion is that using an LLM reduces brain activity and cognitive performance.


This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools).

Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM).

A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge.

Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity.

Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users.

Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs.

Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.
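
"N-gram patterns" in the abstract just means counts of short word sequences, which make it easy to quantify how similar essays are within a group. A toy illustration in standard-library Python (obviously not the authors' actual code):

from collections import Counter

def ngram_counts(text, n=2):
    # Count n-word sequences (bigrams by default) in a text.
    words = text.lower().split()
    return Counter(zip(*(words[i:] for i in range(n))))

essay = "the quick brown fox jumps over the lazy dog the quick brown fox"
print(ngram_counts(essay).most_common(3))

Comparing which n-grams dominate across essays is one simple way to see the within-group homogeneity the paper reports.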
 
Thanks Mechanic for addressing the Ice Age Farmer info on the other thread ;)

AI promises to be so powerful that it (like computers) fundamentally changes almost everything it touches. The idea that anyone could have this power is not attractive to the powers that be.

 
Translated from French:

I've just come out of my bachelor's and master's exams overwhelmed.

It's important to know: this year thousands of students will owe their marks or their diplomas to assignments written entirely by ChatGPT. Artificial intelligence is producing a new type of plagiarism that is almost impossible to prove.

I'm talking mainly about homework or dossiers prepared at home, which often account for half of the final assessment. ChatGPT makes it possible for mediocre or poor students, if they fly under the radar, to get marks of 17 out of 20 by turning in papers that owe nothing to their own merits.

Sometimes the fraud is obvious: as luck would have it, ChatGPT will have invented bibliographical references or written complete nonsense. But this is increasingly rare. Often, the teacher is left with nothing but his or her own conviction: he or she knows that the student does not have the level to hand in such an assignment.

To find out for sure, the student is sometimes called in for questioning. It is then confirmed that they do not know a single word of the assignment they have submitted. A Master's student who barely understands French, having submitted an impeccably written paper, knows nothing about the subject he has brilliantly dealt with. Another, who until then had a maximum mark of 7, situates Molière in the 15th century, has never heard of Diderot, submits a paper on Clément Marot worthy of an agrégé and swears up and down that he wrote it himself. "Si, si, Madame, I have read Marot's novels!"

But even in such cases, the fraud must be formally proven. What was easy in the case of ordinary plagiarism (all that was needed was to identify the sources) has become much more complicated with AI. And the disciplinary procedure is so long and uncertain that it becomes a deterrent.

As a result, the rate of unpunished fraud has grown exponentially since last year. The only way to curb it would be to do away with homework. We're getting there. But I wonder if it's not already too late.

Our baccalaureate is already worthless. It's almost natural that students who haven't learnt to think, to read difficult texts and to write with care should rush to a magic tool, a “thought slave”, which promises, for a modest subscription and a few well-intentioned “prompts”, to take care of all these tedious tasks better than they'll ever be able to do themselves. Thesis, antithesis, prosthesis.

A privileged few will continue to follow selective courses; they too will be affected, but a little less quickly than the others, those who will never have been required to do anything and who will flock to humanities courses to earn diplomas that will no longer fool anyone.

Universities, which are already cruelly short of resources, will lose what little credit they had left.

Source
 
I came across this article a few weeks ago.

A Behavioristic View of AI

[T]he behaviors of searching for and evaluating evidence, along with thinking things through, are not reinforced by a system that just gives you the answer. Our parents directed us to look things up in the dictionary or encyclopedia; kids without such resources or such parents don’t develop those skills. The difference between searching for evidence and just getting the answer is a lot like the difference between catching a grounder and keeping your eye on the ball while playing the short hop. Good coaches reinforce the latter skills verbally and are prepared to scoff at a lucky catch in order to instill useful skills in the infielder.

It helped me connect some dots that seem obvious in hindsight, but escaped my grasp amidst all this mayhem.

Some countries are wasting no time indoctrinating their populations even further.

Saudi Arabia to introduce AI education at all grade levels starting this year
 
That's also why the Cs say they won't give out answers like Halloween candy. You learn more when you get your answers yourself.
 
John Carter of Barsoom's latest piece on Substack, although more than a month old, was about this very topic, with interesting parallels drawn to what happened to monasteries in the late Middle Ages after Gutenberg's invention of the movable-type printing press.
Well worth a read, IMO.


From personal/professional experience: AI usage has also become a thing among academic staff, i.e. scientists. Not only do they, when pressed by deadlines, reach for AI tools to write for them (for example, to summarize several notes and not-yet-published papers they themselves wrote into a usable piece for a conference proceeding), but they also turn to AI to do their work for them: coding, programming, and doing calculations. Without acknowledging it, of course.

Although, I guess, it is already known and discussed on the Forum that there have even been complete scientific papers written basically solely by an AI.
 
Since the post I made concerns 2 different threads, it was originally posted in the Mike Adams one, but I will link it here since it's also AI-related: Mike Adams health activist

A free open-source LLM (large language model) will be released
Topics: herbs, nutrition, foods, and phytochemistry.
Subsequent expansions of our models will cover off-grid living, including solar, water, permaculture, self-reliance, off-grid survival, emergency medicine, and similar topics. Essentially, this will become a "prepper's instant encyclopedia"

Because we are releasing this LLM for free, without any commercial interest, this allows us to train our model on a much larger content set than what for-profit corporations are legally allowed to do. Under our non-profit CWC data science initiative, we can legally train our LLM on any publicly published content under existing Fair Use U.S. law.

Enoch AI - finally released!
Free to use and built by Mike Adams
Access: Enoch AI

Claim: the world's best AI engine for reality-based prompts, built for only $2 million

Data set used:
- Dr. Joseph Mercola
- NaturalNews
- GreenMedInfo
- Children's Health Defense
- Alliance for Natural Health
- The Truth About Cancer
- published science articles, video transcripts,
- etc.

He says Grok 4 is the best AI for solving problems, but that those models are trained with wokeism and DEI nonsense.



I gave Grok, GPT, and Enoch the same prompt (see below).
Grok performed the best, GPT second.
Enoch is a big fail for this type of prompt: it took me 2 prompts for it to understand the question and take all the parameters into account correctly, and it still produced a below-average answer.
I won't bother pasting the results here. Try it yourself and see what it does (for a scripted way to run the same prompt against several models, see the sketch after the prompt below).


Prompt:
Can you make 2 meal plans of 3 meals/day for 1 week, where I can get between 120 & 150g of protein per day?
One of the menus must meet all the conditions below, and the other must have a 14th condition: low FODMAP

Conditions:
1- No dairy.
2- Avoid wheat (NO couscous), but some pasta or bread is ok as a treat.
3- No processed vegan products (no tofu).
4- Avoid high oxalate foods (ex: spinach and beets are banned in my house)
5- I hate chickpeas, and some legumes are unhealthy. I only use lentils (any type), pinto beans, and black beans. I prefer them when they can be sprouted.
6- When you add "veggies" to a meal, specify which ones. Avoid potatoes: too starchy.
7- Avoid foods high in mold and mycotoxins (such as certain types of nuts)
8- No brown rice: only basmati or jasmine
9- No sausages or cold cuts
10- Fish: no tuna (high mercury). Only sardines, salmon, and trout.
11- Eggs can be done in different ways, but if they are boiled, I only like "deviled eggs".
12- I don't separate egg white from yolk: it's a waste.
13- Organic Meat: I only use chicken (always boiled for bone broth), beef (mostly ground beef), pork, lamb (if on sale), and rarely turkey.
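
For anyone who wants to run this kind of side-by-side test from a script instead of three chat windows, here's a minimal sketch using the official openai Python client, which also works against xAI's OpenAI-compatible endpoint for Grok. The model names, the endpoint, and the key handling are my assumptions to adjust; Enoch appears to be web-only, so it is left out:

from openai import OpenAI

PROMPT = "Can you make 2 meal plans ..."  # paste the full prompt above here

# One entry per model: (label, client, model name). Endpoints/keys are assumptions.
backends = [
    ("GPT", OpenAI(), "gpt-4o"),  # reads OPENAI_API_KEY from the environment
    ("Grok", OpenAI(base_url="https://api.x.ai/v1", api_key="YOUR_XAI_KEY"), "grok-beta"),
]

for label, client, model in backends:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"=== {label} ===\n{resp.choices[0].message.content}\n")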
 
Just came across this article on the band The Velvet Sundown:

It's a sudden rise to fame, by any standards.

Two weeks ago, breezy psychedelic rockers The Velvet Sundown didn't exist. And now they've released two albums and collected more than half a million monthly listeners on Spotify.

It's almost too good to be true, and it probably isn't. All the evidence (or lack, thereof) points to The Velvet Sundown being an AI-generated outfit. There's no discernible real-world footprint, and plenty to suggest that they're a figment of someone's digital imagination.

All the images shared by the "band" on social media bear the hallmarks of AI: the hyperrealistic colours, the improbable shadows. And the band's music bears the sonic hallmarks of Suno, the music creation app that allows users to generate up to 500 songs a month for just $8: the thin, unimaginative percussion, the fact that the vocalist sounds slightly different on each song.

"There’s something quietly spellbinding about The Velvet Sundown," claims the band's Spotify bio. "You don’t just listen to them, you drift into them. Their music doesn’t shout for your attention; it seeps in slowly, like a scent that suddenly takes you back somewhere you didn’t expect.

"Their sound mixes textures of '70s psychedelic alt-rock and folk rock, yet it blends effortlessly with modern alt-pop and indie structures. Shimmering tremolos, warm tape reverbs, and the gentle swirl of organs give everything a sense of history without it ever feeling forced."

The bio goes on to say that the band was formed by singer and mellotron player Gabe Farrow, guitarist Lennie West, keyboardist Milo Rains, and drummer Orion "Rio" Del Mar, four musicians who only appear to exist within the context of the band.

Their songs are catchy as heck, and I suppose it was only a matter of time: with AI gaining more and more fidelity, it would eventually come for rock 'n' roll, and I personally think it's a sad day. I recently watched a docuseries exploring the history of rock and roll in Spanish, and there was so much historical context; the meaning not just of the songs and their lyrics but of singing itself was something shared by the singers and the listeners. It used to be a refuge from what was sometimes very real oppression by dictatorial regimes.

Not that all rock was created as such; there were a lot of commercial products in music as well, especially when that was the thing to produce. But music in particular has such a powerful and effective way of connecting human beings to one another that it's sad to see AI step in and grab it. Yet here we are.
 