Artificial Intelligence News & Discussion

Very interesting. Two things were missing:

1) The prompt. It is the initial prompt (or series of prompts) that determines the context of the resulting “walk” through the LLM's space. I can only guess at the prompt.
2) There was no discussion of quantum mechanics. I can't prove it, but my belief is that the mechanism of interface between the soul and the brain is necessarily quantum. Otherwise, as Eric Weinstein said, an LLM is “just linear algebra”: fixed inputs yield identical outputs. Biological brains are based on chemical interactions and are therefore inherently quantum.
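The "fixed inputs yield identical outputs" point can be illustrated with a toy model. This is my own minimal sketch (a single hypothetical linear layer, not any real LLM): with fixed weights and greedy, temperature-0 decoding, the whole pipeline is a deterministic function, so the same input always maps to the same output.

```python
import numpy as np

# Toy illustration of the determinism claim: fixed weights + argmax
# ("temperature 0") decoding means identical inputs give identical outputs.
rng = np.random.default_rng(seed=0)
W = rng.standard_normal((4, 8))    # fixed weights of a hypothetical layer
x = np.ones(4)                     # fixed input

def greedy_step(x):
    logits = x @ W                 # pure linear algebra, no randomness
    return int(np.argmax(logits))  # greedy decoding picks the max logit

assert greedy_step(x) == greedy_step(x)  # identical on every call
```

(Real deployments usually sample with a temperature above zero, which adds pseudo-randomness, but that is still classical computation.)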
 
Interesting theory about the increased RAM prices:

From what I have heard, this is more the fault of OpenAI buying an estimated 40% of the world's RAM supply from both SK Hynix and Samsung Semiconductor, in what might be an attempt to corner the RAM market and prevent other AI companies from buying enough RAM to compete with OpenAI. This created a large RAM supply shock. Other RAM customers rushed to sign contracts with the big three RAM manufacturers (Micron Technology is the remaining member of the big three), which further cut supply and raised prices on the spot market. Other RAM chip manufacturers like Winbond exist, but they are neither big players nor cutting-edge. For example, Winbond's most advanced consumer computer RAM is DDR4 rather than DDR5.

EDIT: I have learned of Nanya Technology, a Taiwanese memory chip manufacturer that appears to be a distant fourth behind SK Hynix, Samsung Semiconductor, and Micron Technology, although it does have DDR5 in production. There is also CXMT, a Chinese memory chip manufacturer that makes memory primarily for the People's Republic of China market. Neither is a big player in memory chip manufacturing, so they are unable to fill the gap that OpenAI's buying and the resulting supply shock created in the RAM market.
 
Apparently they found a method to get around the limitation imposed by the context window (the limit on how much text you can give an AI to process in real time).

This matters because currently, the more data you give an AI to process live, the more the results degrade:

[Figure: input context length degradation]
With the new method, called RLM (Recursive Language Model), not only can an AI examine very long documents, but the results are also better (the curve in the figure does not drop off as quickly) and it is less costly.



It's very interesting, as this means it opens the door to processing huge amounts of data locally on a modest configuration.
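As a rough illustration of the recursive idea (my own sketch, not the paper's actual RLM algorithm; `summarize` below is a toy stand-in for a real LLM call): split a document that exceeds the context window into chunks, process each chunk independently, then recurse on the combined partial results until everything fits in one window.

```python
# Hypothetical sketch of recursive long-document processing.
# `summarize` is a toy placeholder; a real system would call an LLM here.

def summarize(text: str, budget: int) -> str:
    # Stand-in "model call": truncate to a character budget.
    return text[:budget]

def recursive_process(doc: str, window: int) -> str:
    if len(doc) <= window:
        # Base case: the document fits in one context window.
        return summarize(doc, window)
    # Split into window-sized chunks and process each independently...
    chunks = [doc[i:i + window] for i in range(0, len(doc), window)]
    partials = [summarize(c, window // 4) for c in chunks]
    # ...then recurse on the concatenated partial results.
    return recursive_process("".join(partials), window)

# A 10,000-character "document" with a 1,000-character window:
result = recursive_process("x" * 10_000, window=1000)
assert len(result) <= 1000
```

The key point is that no single call ever sees more than one window's worth of text, which is why cost and quality can stay flat as the document grows.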

Source:

 