Artificial Intelligence News & Discussion

This also means that any genuinely real video can simply be dismissed as "AI-generated." Politicians and other public figures won't have to try to suppress videos anymore; they can just spam the internet with fake variations of the original and claim they're all fake.
People will have plenty of fake videos "corroborating" their preferred delusion, and will even more easily deny any videos that contradict it.
They've always done that, for as long as man has existed: not just lying with videos, but in every way they can conceive. They simply keep mastering more clever ways to lie. Unfortunately, it's already out of control by now, and will only get worse 🙁

Jokes aside, it may indirectly increase the value of human-made products, and start a "back to human" revolution.
It already does. I've seen artisan markets where people sell their art at very high prices. There was a lady who does watercolour (aquarelle) illustrations and prints them on fabric. A cute, soft, velvet-like one-person blanket (not even for a bed, just a comforter): $80 CAD... There was one design I wanted before I knew the price, but the five she had were already sold or reserved! That saved me from having to say, oops, I can't buy it because it's too expensive. In that situation, I wouldn't mind a cheap $20 made-in-China blanket with a Midjourney-generated watercolour image. It's not that I don't value artisans' work, but at this stage in our world, money has been devalued so far that it's a choice between eating and fancy stuff.
 
The hype about AI is losing momentum:

Artificial intelligence is losing hype​

For some, that is proof the tech will in time succeed. Are they right?​

Silicon Valley’s tech bros are having a difficult few weeks. A growing number of investors worry that artificial intelligence (AI) will not deliver the vast profits they seek. Since peaking last month the share prices of Western firms driving the AI revolution have dropped by 15%. A growing number of observers now question the limitations of large language models, which power services such as ChatGPT. Big tech firms have spent tens of billions of dollars on AI models, with even more extravagant promises of future outlays. Yet according to the latest data from the Census Bureau, only 4.8% of American companies use AI to produce goods and services, down from a high of 5.4% early this year. Roughly the same share intend to do so within the next year.

Source
 
The hype about AI is losing momentum:

Source
It looks like the AI honeymoon phase has come to an end even faster than for cryptocurrencies. And yet AI is a tool that has concrete applications in everyday life, unlike good old Bitcoin. I imagine that the disenchantment with AI is proportional to the hype of tech gurus who oversold AI as ‘a complete paradigm shift that will replace virtually every job’.
 
I imagine that the disenchantment with AI is proportional to the hype of tech gurus who oversold AI as ‘a complete paradigm shift that will replace virtually every job’.
It is probably similar to the 'dot com' bubble burst in 2001. People were right that the Internet would become much more important, but it took a lot more time to get there than many assumed - and many if not most of the Internet companies from that time did not succeed.

Same with AI - it may reach its promised potential one day, but it may take much longer than is currently assumed. Unless the technology has already been developed in secret and will be brought out to jump-start the "paradigm shift", which the PTB seem to be very much interested in.
 
From today's discussion on my blog:

This is what AI does when it "hallucinates." For example, yesterday, I asked ChatGPT about papers published on a specific mathematical topic. I received a list of papers, complete with authors, journal names, titles, and abstracts. After some "discussion," the AI admitted, with an apology, that none of these papers actually exist—it was all fabricated.

This is an example of lying without awareness, where the lie is programmed into the system, and there's no need for the AI to "know" anything at all.
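
As an aside, a quick way to catch this kind of fabrication is to look each returned title up in a bibliographic database such as Crossref before trusting it. Here is a minimal sketch in Python, assuming the `requests` package is available; the two titles below are made-up placeholders, not the ones ChatGPT actually produced:

```python
# Minimal sketch: check whether paper titles an LLM returned actually exist,
# by querying the public Crossref REST API (no API key needed).
# The titles below are hypothetical placeholders.
import requests

suspect_titles = [
    "A Spectral Approach to Nonlinear Recurrence Maps",   # placeholder
    "Entropy Bounds for p-adic Dynamical Systems",        # placeholder
]

for title in suspect_titles:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=30,
    )
    items = resp.json()["message"]["items"]
    if not items:
        print(f"NOT FOUND: {title}")
        continue
    best = items[0]
    closest = (best.get("title") or [""])[0]
    # Crossref returns its closest match; if it looks nothing like the query,
    # the "paper" was most likely invented by the model.
    print(f"Query:         {title}")
    print(f"Closest match: {closest}")
    print(f"DOI:           {best.get('DOI')}\n")
```

It is not foolproof (some real preprints are missing from Crossref, and titles can match loosely), but it exposes wholesale fabrication in seconds.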
 
...that none of these papers actually exist—it was all fabricated.

This is an example of lying without awareness, where the lie is programmed into the system, and there's no need for the AI to "know" anything at all.


Maybe if they can give this AI a jacket and a tie, they could put him/it on the evening news as one of their experts. :-)
 
The so-called “ARTIFICIAL intelligence” Lazarus
The article below outlines efforts to computerize us ever more.
I find this utterly ridiculous, and eventually you will have the opportunity to BUY (an American obsession) different levels of simulations. My God...
 
It is increasingly clear that the latest LLM "AI" fad was a marketing campaign to balloon the valuations of NVIDIA (which had a problem with an oversaturated gaming market) and Microsoft (whose cloud-services sales were not so good). What is interesting is that they did it in a Ponzi-like scheme. A few bits from the longer article (the reader should also read the linked articles for the big picture):
(...) “If NVIDIA is doing so well, why did their inventory remain the same? When you sell so many goods and keep booking stellar future orders, there should be a physiological increase in inventory, right? Well, not in the case of NVIDIA apparently.”​
(...) “Do you see what I am seeing here? Yes, 10 years later Mellanox, today part of Nvidia, and #Microsoft are doing the exact same thing. However, in order to make sure the house of cards does not collapse, both companies this time are replicating the same trick many times to make sure the round-tripping does not stop. How are they doing it? Very simple, they keep investing in Startup companies at inflated prices with the requirement those operate on the Mellanox-Nvidia infrastructure Azure is being built upon (Microsoft, Nvidia Lead In Investing In AI Startups, But Others Close Behind). How can these companies pay for Microsoft Azure if they can barely generate revenues and have limited cash resources? A good chunk of Microsoft investments is in reality Azure Credits (OpenAI has received just a fraction of Microsoft’s $10 billion investment)”.​
(...) To conclude, similar to what happened during the DotCom bubble, it is clear we are in front of a broad industry effort to cheat on many fronts with the sole purpose of goosing valuations to nosebleed and unrealistic levels never seen before and “AI” has been the excuse to perpetrate this since the very beginning like “internet” was 20 years ago as I summarized in: “IT HAS NEVER BEEN A BUBBLE IN AI, BUT A PONZI SCHEME IN SEMICONDUCTOR STOCKS SINCE THE VERY BEGINNING”.​

 
Google Gemini cannot identify white people as historical figures. So "garbage in, garbage out" has become "bias in, bias out".


I read in Google's response that they can't satisfy all people with how AI represents things. Well, how about satisfying reality? AI really has a problem with objective reality, the same as the people who program it. IMO.
 
(...) “If NVIDIA is doing so well, why did their inventory remain the same? When you sell so many goods and keep booking stellar future orders, there should be a physiological increase in inventory, right? Well, not in the case of NVIDIA apparently.”
I'm not sure about that. It looks like they're struggling to keep up with demand. My company sells a solution that requires NVIDIA 3D cards, and it's sometimes tight to get the quantities. I really have the impression they are on a tight schedule, so I'm not surprised the inventory remains the same.

Especially since they have quite a few product references (SKUs) but, I think, not that many production lines. I mean, the heart of many of their cards is the same chip, and I believe they simply deactivate some functionality to create the lower-end references. That way they can adapt quickly to demand.

Now, I could be wrong; I'm not a market specialist, and the supply could be deliberately restricted. But my impression is that they are really selling quickly for the moment, and I'm not sure the usual market rules apply to this product.
 
Looks like "Apple researchers" agree with what's been observed about Large Language Models: they're dumb.
I haven't read the paper, but from this summary: https://techxplore.com/news/2024-10-apple-artificial-intelligence-illusion.html
Their testing involved asking multiple LLMs hundreds of questions that have been used before as a means of testing the abilities of LLMs—but the researchers also included a bit of non-pertinent information. And that, they found, was enough to confuse the LLMs into giving wrong or even nonsensical answers to questions they had previously answered correctly.

This, the researchers suggest, shows that the LLMs do not really understand what they are being asked. They instead recognize the structure of a sentence and then spit out an answer based on what they have learned through machine-learning algorithms.

They also note that most of the LLMs they tested very often respond with answers that can seem correct, but upon further review are not, such as when asked how they "feel" about something and get responses that suggest the AI thinks it is capable of such behavior.
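
That failure mode is easy to probe yourself: ask a model a simple word problem it normally gets right, then ask again with one irrelevant sentence appended, and compare the answers. A rough sketch using the OpenAI Python client; the model name, the sample problem and the distractor sentence are only illustrative assumptions, not taken from the paper:

```python
# Sketch of the "non-pertinent information" probe described in the summary:
# ask the same word problem twice, once with an irrelevant sentence added,
# and compare the answers. Needs OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

base = ("Oliver picks 44 kiwis on Friday and 58 kiwis on Saturday. "
        "How many kiwis does he have in total?")
distractor = (" Five of the kiwis picked on Saturday were a bit smaller "
              "than average.")  # irrelevant to the arithmetic

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # assumption: any chat model can be swapped in
        messages=[{"role": "user", "content": question}],
        temperature=0,         # keep the comparison as deterministic as possible
    )
    return resp.choices[0].message.content

print("Plain question:", ask(base))
print("With irrelevant info:", ask(base + distractor))
# A system that actually understands the question ignores the kiwi-size remark;
# the researchers found that many LLMs change their answer instead.
```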
 
Looks like "Apple researchers" agree with what's been observed about Large Language Models: they're dumb.
However, there is the other part of AI which is undoubtedly interesting: the generative side.

You can generate a text on a subject, a summary, or a picture very, very quickly, and the relevance is there. Since it's based on statistics, it's also useful for finding bugs in a program, for example. But yeah, the danger is putting too much confidence in it and getting caught by wrong answers. This is certainly what will happen for the majority.

Today's AI has no intrinsic representation of the world; that was the old (symbolic) way of doing AI. Perhaps a reconciliation of the two will happen one day.
 
LLMs are generative too, since they have a random component. Of course they are useful as tools, but they are not intelligent or conscious. Someday they'll be possessed by demons or something, but with what's available to the public, that's still not the case so far. The demons thus far are those who program these tools to further some agenda ;)
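
On that "random component": at every step an LLM turns its raw scores for the next token into probabilities and then draws one at random, usually after a temperature adjustment. A toy, self-contained sketch of that sampling step (the four-word vocabulary and the scores are invented for illustration; real models work over roughly 100,000 tokens):

```python
# Toy illustration of the randomness in LLM text generation:
# temperature-scaled softmax over next-token scores, then random sampling.
import math
import random

vocab  = ["cat", "dog", "car", "cloud"]   # invented mini-vocabulary
logits = [2.0, 1.5, 0.3, -1.0]            # invented raw scores for each token

def sample_next(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]
    m = max(scaled)                        # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    probs = [e / sum(exps) for e in exps]
    token = random.choices(vocab, weights=probs, k=1)[0]
    return token, probs

token, probs = sample_next(logits, temperature=0.8)
print("probabilities:", [round(p, 3) for p in probs])
print("sampled token:", token)   # rerunning can yield a different token
# Near temperature 0 the pick becomes essentially deterministic ("cat" here);
# higher temperatures flatten the distribution and increase the randomness.
```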
 

“AI-powered transcription tool used in hospitals reportedly invents things no one ever said”​

Oh, awesome!
Now this could really SPICE up the mainstream “Medical Murder Machine”
AI is adding its own spin to patients’ “health care” files and so-called “confidential documents”, and it’s not a good thing.
Geezus!
Most everyone should keep in mind that the majority of the “White-coated Drug pushers” will act on the holy writ contained in these files over the statements of the gullible person looking for assistance.
I’ve personally and professionally witnessed many instances of this human arrogance and superiority, and now AI has superseded even that!

“Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.”
But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.”[…]

 
But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.”[…]
I use Whisper regularly and I have never had such a problem. But the model was not trained specifically on medical vocabulary, so I'm not surprised. This is using a tool beyond its capacity. The people who did that are not very smart.
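
For reference, running the open-source model locally is roughly this simple; a minimal sketch with the `openai-whisper` Python package, where the audio file name and the medical-terms prompt are just placeholders (the prompt only nudges the vocabulary, it does not stop hallucinations):

```python
# Minimal sketch of local transcription with the open-source whisper package
# (pip install openai-whisper). The audio file name is a placeholder.
import whisper

model = whisper.load_model("base")   # model size is a speed/accuracy trade-off

result = model.transcribe(
    "consultation.wav",
    language="en",
    # An initial prompt can bias the decoder toward domain terms (e.g. medical
    # vocabulary), but it is no guarantee of a faithful transcript.
    initial_prompt="Clinical consultation: hypertension, metformin, dosage.",
)

print(result["text"])
# For anything consequential, such as medical records, the transcript should
# still be checked against the original audio rather than trusted blindly.
```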
 