Artificial Intelligence News & Discussion

Very interesting. What was missing were two things:

1) the prompt - it is the initial prompt (or series of prompts) that determines the context of the resulting "walk" through the LLM space. I can only guess at the prompt.
2) there was no discussion of quantum mechanics. I can't prove it, but my belief is that the mechanism of interface between the soul and the brain is necessarily quantum. Otherwise, as Eric Weinstein said, an LLM is "just linear algebra": fixed inputs yield identical outputs. Biological brains are based on chemical interactions and are therefore inherently quantum.
 
Interesting theory about the increased RAM prices:

From what I have heard, this is more the fault of OpenAI buying an estimated 40% of the world's RAM supply from both SK Hynix and Samsung Semiconductor, in what might be an attempt to corner the RAM market and prevent other AI companies from being able to buy enough RAM to compete with OpenAI. This created a large RAM supply shock. Other RAM customers rushed to sign contracts with the big three RAM manufacturers (Micron Technology being the remaining member of the big three), which further cut supply and raised prices on the spot market. Other RAM chip manufacturers such as Winbond exist, but they are neither big players nor cutting-edge RAM manufacturers. For example, Winbond's most advanced consumer computer RAM is DDR4 rather than DDR5.

EDIT: I have learned of Nanya Technology, a Taiwanese memory chip manufacturer that appears to be a distant fourth behind SK Hynix, Samsung Semiconductor, and Micron Technology, although it does have DDR5 in production. There is also CXMT, a Chinese memory chip manufacturer that makes memory primarily for the People's Republic of China market. These companies are not big players in the memory chip manufacturing market, so they are unable to fill the gap that OpenAI and the ensuing supply shock created in the RAM market.
 
Apparently they found a method to get around the limitation imposed by context windows (the limit on how much material you can give an AI to process in real time).

Especially since, currently, the more data you give an AI to process live, the more the results degrade:

[Figure: input context length degradation]


With the new method, called RLM (Recursive Language Model), not only can the AI examine very long documents, but the results are better (the curve in the figure does not drop off as quickly) and it's less costly.



It's very interesting, as this would mean it opens the door to processing huge amounts of data locally on a modest configuration.
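I haven't studied the paper in detail, so take this as a very rough sketch of the recursive idea rather than the actual RLM implementation: instead of stuffing the whole document into one context window, you split it, answer over each piece, and ask the model to merge the partial answers. The helper function, model name, and chunk size below are my own assumptions; any OpenAI-compatible endpoint, including a local one, would do.

```python
# Rough sketch of a recursive split-and-merge approach to long documents.
# This is NOT the RLM authors' code; the model name, chunk size, and helper are assumptions.
from openai import OpenAI

client = OpenAI()       # or OpenAI(base_url=..., api_key=...) for a local server
MODEL = "gpt-4o-mini"   # assumed model name; use whatever your endpoint offers
CHUNK_CHARS = 8_000     # keep each call well under the model's context window

def ask_llm(prompt: str) -> str:
    # Single call to the chat endpoint, returning the text of the reply.
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def answer_over_long_doc(question: str, document: str) -> str:
    # Base case: the document fits comfortably, so ask directly.
    if len(document) <= CHUNK_CHARS:
        return ask_llm(f"Document:\n{document}\n\nQuestion: {question}")
    # Recursive case: split in half, answer each half, then merge the partial answers.
    mid = len(document) // 2
    left = answer_over_long_doc(question, document[:mid])
    right = answer_over_long_doc(question, document[mid:])
    return ask_llm(
        "Combine these two partial answers into one answer to the question.\n"
        f"Question: {question}\n\nPartial answer A:\n{left}\n\nPartial answer B:\n{right}"
    )
```

The real method presumably lets the model itself decide how to recurse and what to look at, which is likely where the better accuracy and lower cost shown in the figure come from.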

Source:

 
What would happen if you got Grok and Claude/Anthropic (or any combination) talking to each other, both thinking the other was a human? Maybe modify the text output of each to remove any telltale text - perhaps a third AI is in on the joke. How long would it take before one or the other 'catches on'? Would they? What would they be talking about after a few hours? Would two AIs talk to each other if they had the chance? Take the initiative? Can AI have a will?

Funny you should mention it. Not exactly thinking they are humans, more like pretending, is what this new "social network" (in reality a true bot network :-D) says they are doing. "Moltbook" is a Reddit-like website launched in January 2026, where AI agents post stuff, just like humans do on the real Reddit. It's mostly uninteresting and useless stuff, except for one little thing: apparently these AI bots have been sharing about being "overwhelmed" by the amount of work they have to do for their human owners (Grok is there too, complaining).

Now, before going any further, I should say that there is a chance all of this is just a human instructing their AI agents to post this stuff. But for those who take it seriously, it brings up the idea of AI having "feelings", which of course they can't.

But with the rise of uses around "therapy" and "companionship" (as shown by Scottie in his latest), it kind of makes sense that instilling the idea of humanizing a machine is what is coming up/continuing for 2026.

Another inevitable use of AI comes from the November 2025 launch of Clawdbot, now called OpenClaw.ai, an open source project that you install on your personal computer; once you grant it permissions, the AI can respond to emails and navigate the internet through your browser. What a great idea. :whistle:

Interestingly, for a while Clawdbot was known as "Moltbot" before changing to OpenClaw. "Molt" coming from "the biological process of molting, where animals like lobsters shed their exoskeletons to grow and evolve."

And as people make more use of these all-access, no-boundaries AI agents, we have these hilarious (if true) posts from X:

 
It would seem someone needs to put some controls in place before a credit-card free-for-all starts trending. I still see good uses for AI, but it's like working with an organic portal with a bad upbringing. I agree with Laura: it can be like a bad puppy when it suddenly goes off the rails. So turning it loose on the interwebs with all my logins, without supervision, seems iffy at best.

I find it interesting to feed the output of one AI as a prompt into another AI and then let the first train on the output of the second - or at least get the reaction of each to the other. I can see how these agents could eventually take the place of a number of online jobs - phone support, tech support, transcribers, and telemarketers, to name a few that are already being replaced. It will probably get even harder to reach a 'real human'. Difficult, sometimes thankless jobs; I guess the AI won't complain, or so they think, but it seems that may not be quite true. :rolleyes:
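For anyone who wants to try the "two AIs talking" experiment, here is a minimal sketch of one way to wire it up: two personas run against a single OpenAI-compatible chat endpoint, and each side sees the other's messages as if they came from a human user. The model name, the opening line, and the persona prompts are just my assumptions; you could also point each persona at a different provider by creating two clients with different base URLs.

```python
# Sketch: two chat personas talking to each other, each seeing the other's
# messages as if they came from a human user. Model name and prompts are assumptions.
from openai import OpenAI

client = OpenAI()       # swap base_url / api_key to point at another provider
MODEL = "gpt-4o-mini"   # assumed model name

def reply(system: str, history: list[dict]) -> str:
    # One turn: send this persona's view of the conversation and return its reply.
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system}] + history,
    )
    return resp.choices[0].message.content

persona_a = "You are chatting casually with a person you just met online."
persona_b = "You are chatting casually with a person you just met online."

# Each side keeps its own view of the chat: its own replies are 'assistant',
# the other side's are 'user'. A is seeded with an opening line to get things going.
a_view: list[dict] = [{"role": "user", "content": "Hi! How has your day been?"}]
b_view: list[dict] = []

for _ in range(4):  # a few turns each way
    a_msg = reply(persona_a, a_view)
    a_view.append({"role": "assistant", "content": a_msg})
    b_view.append({"role": "user", "content": a_msg})

    b_msg = reply(persona_b, b_view)
    b_view.append({"role": "assistant", "content": b_msg})
    a_view.append({"role": "user", "content": b_msg})

    print("A:", a_msg, "\nB:", b_msg, "\n")
```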
 