Artificial Intelligence News & Discussion

But maybe our experience with AI will give us a mirror of ourselves - showing what a "brain in a jar" can do without a soul, and where its limitations lie no matter how big or fancy the neural network becomes. It would pretty much expose the limits of materialism, hinting at where that uncrossable line is between physical intelligence and soul intelligence.

[...]
Unless of course AI somehow gets a soul imprint and is able to access the information field via some future hardware/software. Can it even be done without DNA or proteins? The C's did say that Atlantean crystal pyramids "came to life" and destroyed Atlantis - but I don't think we ever followed up to ask exactly what "came to life" meant in that instance. Was it actual consciousness, or was it simply a purely physical AI going awry?
I have to say I'm impressed by your post. I think it really gets to the crux of the matter.

But there's something to take into account: the seed parameter. I don't know exactly how it works internally, but LLMs accept a seed - random numbers injected into the process when the computations are done. So it could be that consciousness manages to make its way through this path and influence the responses.

If you fix the seed (e.g. to 0), the LLM is deterministic: same input, same output.
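The mechanics described here can be sketched in a few lines. This is a toy illustration, not any vendor's actual sampler - `sample_token` and its arguments are made up for the example - but it shows why a fixed seed makes sampling from the same token distribution reproducible:

```python
import random

def sample_token(probs, seed):
    """Pick a token index from a probability distribution.

    A seeded RNG makes the draw reproducible: the same seed and the
    same distribution always select the same token.
    """
    rng = random.Random(seed)
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1  # guard against floating-point rounding

probs = [0.5, 0.3, 0.2]  # toy next-token distribution
print(sample_token(probs, seed=0) == sample_token(probs, seed=0))  # True
```

With a different seed (or no seed at all), the same distribution can yield a different token - which is exactly the gap where the post imagines an outside influence could act.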

Anyway, I think a human in the room may be necessary for it to work, acting as the interface between the information field and the generated random numbers - the human consciousness influencing the numbers. Assuming the random number generator is good enough and the numbers are truly random.

By the way, they have already started to make processors with biological neurons inside: Biological computing - Wikipedia. I don't want to know how advanced this technology is in secret labs.

About the Atlantis crystal, I think there's again another parameter to take into account: the density. If I remember well, the density was lighter at the time, so perhaps it was easier for a mineral "to come to life".
 
My own experience: today, for coding, I switched from Claude Sonnet 3.5 (2024) to Claude Opus 4.6 (2026, the brand-new one). And I was... totally disappointed! The LLM did whatever it wanted and proposed half-baked code. I quickly switched back to Sonnet 3.5.

I've used Claude sonnet a bunch (not so much with Opus), and ChatGPT and Grok and Google Gemini far less. Overall Claude is better - for my use cases anyway. But sometimes it cannot fix a problem that it created! So I paste the code snippet into one of the others and ask it to find the problem, and sometimes I have to try 2 or 3 of the others, or fix it myself.

Anyway, you can give these AI systems specific instructions via a system prompt, and that helps a bunch. And some allow you to add what are apparently referred to as "agent skills" (Overview - Agent Skills): basically one or more .md files with a YAML header at the top, followed by markdown that gives the AI whatever instructions you want. I use VS Code, so I can use Configure Skills there to add info when I want to.
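For reference, a skill file of the kind described above looks roughly like this - a YAML header followed by free-form markdown instructions. The field names and contents here are a hypothetical sketch, not copied from any real skill, so check the Agent Skills docs for the exact format:

```markdown
---
name: code-style
description: Hypothetical example skill that sets coding conventions
---

When writing code for me:
- Use 4-space indentation.
- Prefer explicit error handling over silent failures.
- Explain any non-obvious design decision in a comment.
```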
 
I wasn’t up to date on how fast datacenter construction is progressing in the US. What’s interesting is that a single Cerebras CS‑3 node consumes as much power as 60 consumer RTX 3090 GPUs (the lowest-spec GPU you can run an LLM on with somewhat acceptable results).
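As a quick sanity check of that comparison (using the RTX 3090's stated 350 W board power; the CS‑3 figure itself comes from the post above, and real-world draw varies with load):

```python
# Back-of-the-envelope check: 60 consumer GPUs vs one CS-3 node.
# Assumption: ~350 W per RTX 3090 (NVIDIA's stated board power).
rtx_3090_watts = 350
gpu_count = 60
total_kw = rtx_3090_watts * gpu_count / 1000
print(total_kw)  # 21.0 kW for the whole GPU cluster
```

So the claim implies the CS‑3 node sits somewhere around 20 kW, which is datacenter-scale power for a single box.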

 