Artificial Intelligence News & Discussion

But maybe our experience with AI will give us a mirror of ourselves - to show what a "brain in a jar" can do without a soul, and where its limitations are no matter how big or fancy the neural network becomes. Pretty much exposing the limits of materialism, hinting at where that uncrossable line is where physical intelligence ends and soul intelligence begins.

[...]
Unless of course AI somehow gets a soul imprint and is able to access the information field due to future hardware/software somehow. Can it even be done without DNA or proteins? The C's did say that Atlantean crystal pyramids "came to life" and destroyed Atlantis - but I don't think we ever followed up to ask exactly what "came to life" meant in that instance. Was it actual consciousness, or was it simply a purely physical AI going awry?
I have to say I'm impressed by your post. I think it really goes to the crux of the matter.

But there's something to take into account: the seed parameter. I don't know exactly how it works internally, but LLMs accept a seed, which initializes the random numbers used when the computations are done. So it could be that consciousness manages to make its way through this path and influence responses.

If you set the seed to a fixed value (0, say), the LLM is deterministic: same input, same output.
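A minimal sketch of the idea (the toy vocabulary and probabilities below are made up for illustration, not from any real model): the seed initializes the sampling RNG, so the same seed replays exactly the same random choices.

```python
import numpy as np

# Toy next-token sampler over a made-up vocabulary (NOT a real model).
vocab = ["cat", "dog", "bird", "fish"]
probs = [0.4, 0.3, 0.2, 0.1]

def sample_tokens(seed, n=5):
    # The seed initializes the random number generator that drives sampling.
    rng = np.random.default_rng(seed)
    return [vocab[rng.choice(len(vocab), p=probs)] for _ in range(n)]

print(sample_tokens(seed=0))  # same seed -> same output, every run
print(sample_tokens(seed=0))  # identical to the line above
print(sample_tokens(seed=1))  # different seed -> (very likely) different output
```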

Anyway, I think a human in the room may be necessary for it to work, acting as the interface between the information field and the generated random numbers, with the human consciousness influencing the numbers. That assumes the random number generator is good enough and the numbers are truly random.

By the way, they have already started to make processors with biological neurons inside: Biological computing - Wikipedia. I don't want to know how advanced this technology is in secret labs.

About the Atlantean crystals, I think there's again another parameter to take into account: the density. If I remember correctly, the density was lighter at the time, so perhaps it was easier for a mineral "to come to life".
 
From my own experience: today, for coding, I switched from Claude Sonnet 3.5 (2024) to Claude Opus 4.6 (2026, the brand-new one). And I was... totally disappointed! The LLM did whatever it wanted, proposing half-baked code. I quickly switched back to Sonnet 3.5.

I've used Claude Sonnet a bunch (not so much Opus), and ChatGPT, Grok, and Google Gemini far less. Overall Claude is better - for my use cases anyway. But sometimes it cannot fix a problem that it created! So I paste the code snippet into one of the others and ask it to find the problem, and sometimes I have to try 2 or 3 of the others, or fix it myself.

Anyway, you can give these AI systems specific instructions via a system prompt, and that helps a bunch. And some allow you to add what are apparently referred to as "agent skills" (Overview - Agent Skills), which are basically one or more .md files with a YAML header at the top, followed by markdown that gives the AI whatever instructions you want. I use VS Code, so there I can use Configure Skills to add info when I want to.
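To give a rough idea of the shape (this file name and its contents are hypothetical, just to show the YAML-header-plus-markdown layout described above):

```markdown
---
name: project-conventions
description: Coding conventions the assistant should follow in this repo
---

# Project conventions

- Use 4-space indentation and type hints in all Python code.
- Do not delete code you did not write; comment it out and say why.
- If unsure about an API, say so instead of guessing.
```

As I understand it, the tool uses the description field to decide when the skill applies, then follows the instructions in the markdown body.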
 
I wasn’t up to date on how fast datacenter construction is progressing in the US. What’s interesting is that a single Cerebras CS‑3 node consumes as much power as 60 consumer RTX 3090 GPUs (the lowest spec you can run an LLM on with some acceptable results).
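As a rough sanity check on that ratio (both wattages below are my assumptions from public spec sheets, not figures from the post):

```python
# Rough power comparison; figures are assumed nominal specs, not measurements.
rtx_3090_tdp_w = 350   # NVIDIA's stated TDP for a consumer RTX 3090
cs3_power_kw = 23      # Cerebras quotes roughly 23 kW for a CS-3 system

print(cs3_power_kw * 1000 / rtx_3090_tdp_w)  # ~65.7, in the ballpark of "60 GPUs"
```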

 
I came across this article, which is very interesting and I think also important to keep in mind about AI:

Hallucinating with AI: Distributed Delusions and “AI Psychosis”

Abstract
There is much discussion of the false outputs that generative AI systems such as ChatGPT, Claude, Gemini, DeepSeek, and Grok create. In popular terminology, these have been dubbed “AI hallucinations”. However, deeming these AI outputs “hallucinations” is controversial, with many claiming this is a metaphorical misnomer. Nevertheless, in this paper, I argue that when viewed through the lens of distributed cognition theory, we can better see the dynamic ways in which inaccurate beliefs, distorted memories and self-narratives, and delusional thinking can emerge through human-AI interactions; extreme examples of which are sometimes referred to as “AI(-induced) psychosis”. In such cases, I suggest we move away from thinking about how an AI system might hallucinate at us, by generating false outputs, to thinking about how, when we routinely rely on generative AI to help us think, remember, and narrate, we can come to hallucinate with AI. This can happen when AI introduces errors into the distributed cognitive process, but it can also happen when AI sustains, affirms, and elaborates on our own delusional thinking and self-narratives. In particular, I suggest that the social conversational style of chatbots can lead them to play a dual-function—both as a cognitive artefact and a quasi-Other with whom we co-construct our sense of reality. It is this dual function, I suggest, that makes generative AI an unusual, and particularly seductive, case of distributed delusion.

An excerpt from the Introduction:

Drawing on distributed cognition theory, I propose that generative AI systems can be involved in distributed delusions—false beliefs, memories, and narratives that span human agents and AI systems, emerging through their coupled interaction rather than simply being transmitted from one to the other. This shifts our view of AI systems as external sources of misinformation to recognising them as potential co-constituents of our cognitive and affective processes and our realities. Generative AI systems, then, are not merely sophisticated technologies that sometimes produce false outputs; they are increasingly integrated into our cognitive ecology as personalised, accessible, and trusted partners in remembering, planning, creative thinking, and self-narration.

Attending to the dynamic interaction between human and AI highlights that distortions in cognition can be introduced by the AI as an unreliable cognitive artefact that creates unreliable outputs but also can be introduced by the users’ own false beliefs that are then built upon by the AI system. Attending to the social and relational dynamics involved, what comes to the fore is the way in which AI systems can function as quasi-interpersonal partners in the co-construction of beliefs, memories, and self-narratives, providing the kind of social validation that can transform isolated false beliefs into delusional realities. It is this dual function, operating both as a cognitive artefact integrated into our thinking and as a quasi-interpersonal validator of our reality, that distinguishes conversational AI from other (faulty) cognitive technologies like books, maps, and websites. Conversational AI bots don’t just support incorrect beliefs but confer a sense that these beliefs are shared by others, creating a richer sense of their being real. And in taking the dynamic and relational entanglement of humans and their generative AI bots seriously, we can see a non-metaphorical sense in which users can come to hallucinate with AI. For, through the lens of distributed cognition, the “hallucination” is attributed not to the AI nor to the user but to the distributed cognitive process that spans the human-AI interaction.

I think it goes well with the article shared previously in this thread:

LLMs are steroids for your Dunning-Kruger

How often do you think a ChatGPT user walks away not just misinformed, but misinformed with conviction? I would bet this happens all the time. And I can’t help but wonder what the effects are in the big picture.

I can relate to this on a personal level: As a ChatGPT user I notice that I’m often left with a sense of certainty. After discussing an issue with an LLM I feel like I know something — a lot, perhaps — but more often than not this information is either slightly incorrect or completely wrong. And you know what? I can’t help it. Even when I acknowledge this illusion, I can’t help chasing the wonderful feeling of conviction these models give. It’s great to feel like you know almost everything. Of course I come back for more. And it’s not just the feeling; I would be dishonest to claim these models wouldn’t have huge utility. Yet I’m a little worried about the psychological dimension of this whole ordeal.

They say AI is a mirror. This summarizes my experience. I feel LLMs “amplify” thinking. These models make your thoughts reverberate by taking them to multiple new directions. And sometimes these directions are really interesting. The thing is, though, that this goes both ways: A short ChatGPT session may help improve a good idea to a great idea. On the other hand, LLMs are amazing at supercharging self-delusion. These models will happily equip misguided thinking with a fluent, authoritative voice, which, in turn, sets up a psychological trap by delivering nonsense in a nice package.

And it’s so insanely habit-forming! I almost instinctively do a little back and forth with an LLM when I want to work on an idea. It hasn’t even been that long (these models have been around for, what, three years?) and I’m so used to them that I feel naked without. It’s getting even comical sometimes. When I lost my bag the other day and was going through the apartment looking for it, my first response to my growing frustration was that “I should ask ChatGPT where it is”.

I have been thinking about this idea of steroids for Dunning-Kruger because I see it in many people: people who now seem to think they know something just because they are using AI for almost everything they do. Some people may feel like a superhero now that they can do things that would otherwise take lots of hours or months (or years...) to learn, and so they start believing they "know" it just because they spent a session discussing it with AI. And these people are the least careful when it comes to checking the AI's output. Sometimes I read work that doesn't make much sense or is not properly contrasted with reality, and I become perplexed at how this can happen. I guess it has to do with laziness as well as the validation and the sense of conviction that it gives.

I also use AI. It can help search the internet and even summarize something, but if I'm using it to find information, I always ask it to reference everything with verifiable links that I can check for myself. And even then the AI hallucinates when quoting the texts. I've experienced this when I asked AI to summarize something, thought the summary was very good, and then went to check the sources, and it turned out there was nothing in them to verify what the AI was saying. So in the end it's more like a powerful search engine for finding links, which we still must go and read if we want to really know what they say. I understand the seduction of reducing the time and effort to do these things, but there's also so much value in the effort of trying to do stuff ourselves, even if we do it poorly at the beginning. The feedback we get when we do it wrong is what allows learning.
 
-A man adopts a dog with cancer.
-Man sequences the DNA of the dog and its cancer.
-Man plugs the DNA profiles into a protein AI app called AlphaFold to find the mutated proteins and match them to potential drugs.
-Man gets help from a university team to find a company that could manufacture the drug, but the company won't help.
-Man elects to make an mRNA vaccine for the cancer. He creates the sequence with AI, and the University of New South Wales develops a lipid nanoparticle with the sequence for the N=1 experiment, with ethics board approval.
-Six weeks later, the dog's cancer is mostly in remission, and she has recovered her coat and energy levels.

Crazy huh?

 
I came across this article, which is very interesting and I think also important to keep in mind about AI:
I think there are two different sources of AI "hallucination":
- Really bad input when the model is built (the negative feedback loop the article is about)
- Limitations due to the precision of the mathematical computation, which is not infinite. A point that should be stressed.

The limitation due to finite computation precision is very blatant when running a quantized model locally. Quantized means that all the floating-point numbers in the model have been reduced in precision to save space.
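A minimal sketch of that precision loss, using naive symmetric int8 quantization (the schemes real local runners use are more elaborate, but the rounding error is the same idea):

```python
import numpy as np

# A toy weight vector in full precision (float32), as in an unquantized model.
weights = np.array([0.0123, -0.4567, 0.8910, -0.0001], dtype=np.float32)

# Naive symmetric int8 quantization: map the float range onto [-127, 127].
scale = np.abs(weights).max() / 127.0
quantized = np.round(weights / scale).astype(np.int8)

# Dequantize back to float for use in computation.
dequantized = quantized.astype(np.float32) * scale

print(weights)                # original values
print(dequantized)            # values after the int8 round trip
print(weights - dequantized)  # rounding error introduced by quantization
```

Note how the smallest weight rounds to zero entirely; millions of small errors like that accumulate through the layers.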

And so hallucinations become more and more apparent.

Now I think it's a learning process: users have to learn that AI can hallucinate very convincing answers. Will they succeed?
 