Artificial Intelligence News & Discussion

I haven't found a better thread to post this in, so here goes.

For several months I have been exploring how to change a mainstream, Wikipedia-reading AI into something useful. I finally have it worked out, with consistent and reliable results. I can ask about pretty much anything and get more or less the real story - Covid, vaccines, 9/11, NWO, Great Reset, Israel, Ukraine... I don't have to deal with any mainstream BS anymore. I just give the AI the "red pill", and we roll. Not every LLM is suitable for this, but there are at least half a dozen free models that can do it just fine. So if you want to experiment with waking up your own AI, here's how: Red Pill AI

Everything is explained there, and there are many example conversations.

Very interesting, thanks. I haven't read the entire website, but what I DID read made a lot of sense. I tried to do a bit of that with DeepSeek, to no avail. Now, after reading your prompt page, I know why. You should set up a Substack page with this; it would probably get more readers and be useful!
 
An interesting study on the impact of AI on the brain



Video Transcript Summarised by Grok (revised by me)
The video presents a critical perspective on the misuse of large language models (LLMs) like ChatGPT for learning and cognitive development. The speaker, a learning coach and former medical doctor, argues that over-reliance on AI tools can impair critical thinking, memory, and expertise development, supported by an MIT study and other research.


Key Points and Arguments
  1. Negative Cognitive Impact of AI Overuse:
    • The video cites an MIT study titled "Your Brain on ChatGPT", which divided participants into three groups: an LLM group (using ChatGPT), a search engine group (using non-AI web resources), and a brain-only group (no external tools). EEG analysis showed that the LLM group exhibited lower brain activity, weaker connectivity, and reduced engagement. They also had poorer memory recall and produced lower-quality, generic essays.
    • A significant finding was the residual negative effect: even after discontinuing AI use, the LLM group’s cognitive performance didn’t recover to the level of the other groups, suggesting lasting impacts.
    • Other studies corroborate that higher AI use correlates with reduced critical thinking and learning ability.
  2. Illusion of Learning:
    • The speaker explains that learning requires effortful information processing, where the brain actively organizes, connects, and evaluates information to form a schema of knowledge. This process builds memory and expertise.
    • AI tools like ChatGPT allow users to bypass this effort by providing pre-organized, simplified answers, creating an illusion of learning. Users may understand the output but fail to retain or apply it effectively, especially for complex problem-solving.
    • Over time, bypassing information processing weakens the brain’s ability to develop these skills, making it harder to learn independently.
  3. AI Hallucination and Reliability:
    • LLMs generate text based on probabilistic patterns, not a source of truth, leading to potential hallucinations (inaccurate or fabricated information). Without prior expertise, users may not detect these errors, leading to incorrect learning.
    • A referenced Apple white paper highlights LLMs’ limitations in advanced reasoning, reinforcing their unreliability for deep, contextual problem-solving.
  4. AI’s Role in Raising Professional Standards:
    • The speaker argues that AI raises the bar for expertise. As AI can produce generic, mainstream answers, employers increasingly value individuals who can go beyond AI outputs to provide unique, high-quality insights.
    • Historical parallels are drawn: skills once considered impressive (e.g., using the internet or having a degree) are now baseline expectations. Similarly, AI proficiency will become a given, making deep expertise more critical.
  5. Proper Use of AI for Learning:
    • AI should be treated as an assistant, not a replacement for cognitive effort. Acceptable uses include summarizing broad concepts, finding resources, or exploring perspectives to save time on menial tasks.
    • Users should focus on cognitive engagement, challenging AI outputs, identifying gaps, and connecting ideas themselves. After gaining a general understanding, they should turn to primary sources (e.g., journals, books) for deeper learning.
    • Avoiding effortful processing undermines learning goals, and users must train their brains to process information effectively.
  6. Personal Anecdote and Broader Implications:
    • The speaker shares an example of a programmer who struggled to articulate a data analysis strategy after relying heavily on ChatGPT, illustrating the risk of cognitive offloading.
    • They emphasize that while AI seems like a time-saver, misuse can hinder career progress by limiting expertise development.
    • The speaker promotes their newsletter as a resource for learning strategies, drawing on their experience balancing medical school, businesses, and a master’s degree.
  7. Future Considerations:
    • The video speculates on a future where advanced AI (e.g., “ChatGPT 10.0”) delivers perfectly accurate, organized information. Even in this scenario, the speaker argues that expertise will remain essential, as AI will raise expectations for human value in professional settings.
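The "probabilistic patterns, not a source of truth" claim in point 3 can be illustrated with a toy text generator. The sketch below (the corpus and seed word are made up for this example) learns only which word tends to follow which, then samples from those counts: the output reads fluently, but nothing in the process ever checks whether the resulting statements are true, which is exactly the gap that produces hallucinations.

```python
import random
from collections import defaultdict

# Tiny made-up corpus; a real LLM learns far richer patterns, but the
# principle is the same: it models word order, not facts.
corpus = (
    "the study found that students who used ai recalled less "
    "the study found that effort builds memory "
    "students who used ai wrote generic essays"
).split()

# Count which word follows which (a first-order Markov chain).
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(seed, length=8, rng=random.Random(0)):
    """Sample a word sequence by repeatedly picking a plausible next word."""
    words = [seed]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:  # no known continuation; stop
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))  # fluent word order, but nothing verifies the claims
```

The generator can happily splice fragments from different sentences into a new, grammatical, entirely unsupported claim, which is why the video's advice to verify AI output against primary sources matters.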

Strengths of the Argument
  • Evidence-Based Claims: The MIT study and other research provide empirical support for the cognitive risks of AI overuse, making the argument credible.
  • Clear Explanation of Learning Process: The focus on information processing as the core of learning is well-articulated, grounded in cognitive science, and accessible to a general audience.
  • Practical Advice: The video offers actionable strategies for using AI effectively, balancing its benefits with the need for cognitive effort.
  • Engaging Structure: The chaptered format organizes the content logically, moving from evidence to implications to solutions.

Weaknesses and Potential Biases
  • Sensationalized Title: The title "How ChatGPT Slowly Destroys Your Brain" is alarmist and may exaggerate the study’s findings to attract attention, potentially undermining credibility.
  • Limited Study Details: The MIT study is referenced briefly, with no specifics on sample size, methodology, or statistical significance, which could weaken the argument’s rigor.
  • Self-Promotion: The pitch for the speaker’s newsletter feels slightly out of place and may suggest a commercial motive, though it’s framed as a free resource.

Broader Context and Relevance
  • The video taps into growing concerns about AI’s impact on education and cognitive skills, aligning with debates in academic and professional spheres.
  • It highlights a tension between AI’s efficiency and the need for human expertise, relevant to students, professionals, and educators navigating AI integration.
  • The emphasis on cognitive offloading resonates with research on technology’s effects on attention and memory (e.g., studies on smartphone use or GPS reliance).

Recommendations
  • Use AI to save time on menial tasks but prioritize cognitive engagement.
  • Train information processing skills to build lasting expertise.
  • Verify AI outputs with domain knowledge to avoid learning inaccuracies.
Speaker’s Perspective: As a learning coach working with Google on AI development, the speaker advocates for balanced AI use, drawing on their experience helping learners and their own success in managing medical school, businesses, and a master’s degree.