Artificial Intelligence News & Discussion

I use Whisper regularly and I've never had such a problem. But the model wasn't trained specifically on medical vocabulary, so I'm not surprised. This is using a tool beyond its capacity. The people who did that were not very smart.
The problem with all these generative models is that they never know when they're unsure of something. This includes generative audio-transcription models like Whisper. If you say something incomprehensible, it will often make up something it "thinks" you said. Humans do this too, to some degree, when they mishear something. But people will often say "this made no sense", or "I don't know what I heard", or "can you spell that for me and explain what this term is", while AI will just roll with its best guess, no matter how wildly incorrect. Kids often do this, and certainly many adults as well. But not all, and what makes some people question things while others just accept them at face value isn't fully explained by science. Could it be an IQ thing? Could it be past life experiences and lessons? Why are some people so naive and prone to making things up while others aren't?
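
Interestingly, Whisper does expose internal statistics that correlate with how "sure" it was, even though it never says so in the transcript. Here's a minimal sketch, assuming the open-source openai-whisper package; the file name and the exact thresholds are illustrative, not recommendations:

```python
# A rough "are you sure?" check for Whisper output. Whisper itself never
# refuses to guess, but each segment carries statistics we can inspect.
import whisper

model = whisper.load_model("base")
result = model.transcribe("consultation.mp3")  # hypothetical audio file

for seg in result["segments"]:
    suspicious = (
        seg["avg_logprob"] < -1.0       # low average token log-probability
        or seg["no_speech_prob"] > 0.5  # decoder suspects this isn't speech
    )
    flag = "CHECK" if suspicious else "ok   "
    print(f"[{flag}] {seg['start']:6.1f}s  {seg['text']}")
```

Segments flagged this way are the ones a human should re-listen to. It doesn't make the model honest, but it approximates the missing "I'm not sure" signal.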

Right now their entire bet is on one thing: bigger AI (more parameters) fed more data. They're hoping that by giving it a bigger brain and exposing it to more data, some critical threshold will be reached and things like wisdom and common sense will suddenly show up. And that idea isn't entirely baseless, because it is supported by something called "emergent abilities": by scaling up the models, the data, and the compute, researchers found that new abilities, which weren't explicitly trained into the models and didn't exist in smaller models, suddenly appeared. There are some research papers on this topic:




And supposedly you can't predict when (at what model scale, etc.) any given ability will emerge, or how strong it will be when it does. But not everyone agrees that this is a real thing either. Some argue that the abilities only *appear* to emerge because the tests for them aren't sensitive enough to pick up gradual progress, kind of like a teacher who never gives partial credit, even when an answer is 99% correct, and only scores it as correct at 100%. Such a test won't differentiate between students who got nothing right and those who got the answer almost right.
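
Here's a toy illustration of that critique (made up for this post, not taken from any paper): if per-token accuracy improves smoothly with scale, but the benchmark only gives credit for an exact match on a multi-token answer, the score curve looks like a sudden jump:

```python
# Smooth skill, "emergent"-looking score: exact match on a 10-token answer
# requires every token to be right, so the score stays near zero for most
# of the range and then shoots up.
n_tokens = 10  # length of the answer being graded all-or-nothing

for scale in range(1, 11):              # stand-in for model scale
    p_token = 0.45 + 0.05 * scale       # per-token accuracy: 0.50 .. 0.95
    p_exact = p_token ** n_tokens       # chance the whole answer is right
    print(f"scale {scale:2d}: per-token {p_token:.2f}  exact-match {p_exact:.4f}")
```

The underlying skill improves linearly the whole time; only the all-or-nothing score "emerges".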

But can anything and everything "emerge" in such a way? Probably not, but it will be interesting to see where all this goes.
 
Yeah, once, by accident, I fed Italian into Whisper using the French option. And, to my amazement, it output something perfectly intelligible. I'm not sure the translation was entirely correct, but it mostly was. The two languages are not so far from each other, so I guess that played a role. In any case, the AI never stopped to ask.

About bigger models: today I was reading an article about the increasing compute needed by each new AI generation just to make fewer errors than the previous one. It's exponential, and the curve fits the prediction perfectly.
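
That matches the published scaling-law fits. Here's a minimal sketch, assuming a Chinchilla-style power law for loss versus training compute; the constants are invented for illustration, not taken from the article:

```python
# Assumed power law: loss(C) = L_INF + A * C**(-B).
# L_INF is the irreducible loss; A and B are made-up fit constants.
L_INF = 1.69
A, B = 410.0, 0.28

def compute_needed(excess_loss):
    # Invert excess = A * C**(-B)  =>  C = (A / excess)**(1/B)
    return (A / excess_loss) ** (1.0 / B)

excess = 1.0
for generation in range(5):
    print(f"gen {generation}: excess loss {excess:.3f} -> compute ~{compute_needed(excess):.3e}")
    excess /= 2  # each generation halves the remaining error
```

With these numbers, every halving of the remaining error multiplies the required compute by 2^(1/0.28) ≈ 12x, so steady progress per generation means exponentially growing training runs, consistent with the curve the article describes.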

There was an interesting comment about this problem by a physicist:

I'm a physicist. This is just the intersection of linear algebra and information encoding. LLMs = data compression & lookup. But they do it all via maps 100% isomorphic to basic linear algebra... pretty dumb stuff. No real learning, just a trivial map of concepts expressed in language (your brain uses highly nonlinear chemical and feedback systems in its encoding schemes; it's possible the microtubules may even make use of QM processes). For non-trivial applications of statistical mechanics to encoding problems, see the work on genetic codes by T. Tlusti (on arXiv).

The language encodes non-trivial, non-linear concepts in syntax + vocabulary; the approximate linear fit to the system is not this. This scaling is trivial and not anything special or meaningful, i.e. they are not using their entropy calculations in an intelligent way here. It's actually very sad what's going on. From a physical perspective, AI is 100% hype and 300% BS... well into "not even wrong" territory, just weaponized overfitting in excessively large-dimensional models.

This is frustrating because they could use this compute to really learn something quantitative about our species' use of language and how it works with our biology... instead they use vast resources to learn nothing, creating a system that randomly chooses a context-thread through a train of thought without any internal mapping to an underlying narrative of sequenced thought. In short, they squash the time axis into a flat map of ideas or concepts (like a choose-your-own-adventure with a random, data-dependent map providing interpolation between the various choices... a fixed percentage of which will be "hallucinations", because unlike real error-correcting codes or naturally occurring encoding schemes, this has no inbuilt dynamic feedback mechanism for error detection and correction).

The structure of our language and the meaning it represents allow you to formulate absurd sentences and concepts, so we don't "hallucinate" unless we want to... we even tolerate absurdity as a meaningful subclass of encodings, i.e. humorous language. The way the neural networks are trained precludes any such reflexive or error-correcting representations, as their complexity would necessarily grow exponentially with the data set. We cheat because we have physical laws hardwired into the operation of our neural networks that serve as calibration and objective, precise maps for ground truth (your brain can learn to throw and catch without thinking, or learn to solve predictive, anticipatory Lagrangian dynamics on the fly: aka energy management in a dogfight, even defeating systems that follow optimal control laws and operate with superior initial energy states, aka guided missiles).

You can even train systems like LLMs (i.e. deep learning) to solve some pretty hard equations on specific domains, but the mathematics places hard limits on the error of these maps (like an abstract impedance mismatch, but worse)... you can even use this to make reasonable control laws for systems that satisfy specific stability constraints... but Lyapunov will always win in the end. This isn't a case of trying to map SU(2) onto SO(3). It's like trying to map the plane onto a sphere without explicitly handling the topological defect and saying you don't really care about it anyway. With this approach you're going to end up dealing with unpredictable errors at all orders and have no way of estimating them a priori... unfortunately, enthusiasm and resources exceed the education in both physics and math for these efforts. The guys doing this stuff simply don't know what they don't know... but they should. The universities are failing our students.
 
Hi,
What is your experience with translating articles, books, etc. using chat AI?
It looks to me like the translation process has become much faster and better with the use of AI assistants.
@Possibility of Being, what do you think about reactivating PRACowniA IV a bit?
 
The problem is that sometimes the AI misses the context and puts a sentence off the mark in the middle of an otherwise good translation. Or sometimes a single word is completely off. Always reread whenever the text is more than a little long. The next great step would be for the AI to pause and ask for clarification whenever there's a doubt.
 
How AI can be used to gain more control over the population.


Transcription:
It turns out, according to Gemini, Hitler had an excellent DEI policy.

Yeah.

Now, in reality, he did not. And it's important to understand that in reality he did not. But yeah, Gemini happily threw up black Nazis because they programmed it to be biased. They programmed it in a political direction.

There's this guy, David Rosado, who's been doing these analyses on the social media side, where he shows the incidence rates of the rise of all the woke language in the media. Similar studies have come out for AI: studies that basically show the political orientation of the LLMs, because you can ask them questions and they'll tell you. And nine out of ten of them are tremendously biased. And then there's a handful that aren't.

And then there's tremendous pressure. This is one of the threats from the government: is the government basically going to force our startups to come into compliance, not just with their trade rules, but also with essentially a censorship regime on AI that's exactly like the censorship regime we had on social media?

Wow, that's terrifying.

Yeah, exactly. And this is my belief and what I've been trying to tell people in Washington: if you thought social media censorship was bad, this has the potential to be a thousand times worse. And the reason is, social media is important, but at the end of the day it's, quote, just people talking to each other. AI is going to be the control layer on everything, right? AI is going to be the control layer on how your kids learn at school. It's going to be the control layer on who gets loans. It's going to be the control layer on whether your house opens when you come to the front door. It's going to be the control layer on everything. And so if that gets wired into the political system, the way that the banks did and the way that social media did, we are in for a very bad future.
 
It's here.

DeepL offers real-time voice translations. So now, why bother learning another language?

A: It overpowered them the same way your computers will overpower you.

 
OpenAI introduces Operator – an AI agent that does the research for you

OpenAI, the company behind ChatGPT, just announced Operator, a generative AI service that acts as an agent and performs tasks on your behalf. Using its own browser, Operator looks at a webpage and interacts with it by typing, clicking, and scrolling on its own – no user input needed.

The rollout will be gradual, and the first to get it are ChatGPT Pro subscribers in the United States.

Operator can handle various repetitive browser tasks, and OpenAI claims it can fill out forms, order groceries, and even create memes. It can use the same interfaces and tools that humans interact with, which could also help businesses by opening new engagement opportunities.

Operator is powered by a new model called CUA – Computer-Using Agent. It combines GPT-4o's vision capabilities with advanced reasoning trained through reinforcement learning. CUA is trained to interact with GUIs – the graphical user interfaces with buttons, menus, and text fields that people see on a screen.

When the service gets stuck or needs assistance, it simply hands control back to you. You also have to enter sensitive data, such as passwords or other verification details, yourself.
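
OpenAI hasn't published how CUA works internally, but the description above (screenshot in, click/type/scroll out, hand-off to the human for sensitive steps) maps onto a generic perceive-decide-act loop. Here's a hedged sketch; every name in it is a hypothetical stand-in, not an OpenAI API:

```python
# A generic GUI-agent loop: look at the screen, pick one action, repeat.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str   # "click", "type", "scroll", "done", or "ask_user"
    x: int = 0
    y: int = 0
    text: str = ""

def propose_action(screenshot, goal):
    # Stand-in for the vision model that maps (screenshot, goal) to the
    # next GUI action; a real system would call a multimodal model here.
    return Action(kind="done")

def run_agent(browser, goal, max_steps=50):
    for _ in range(max_steps):
        shot = browser.screenshot()          # perceive the page as pixels
        action = propose_action(shot, goal)  # decide on a single action
        if action.kind == "done":
            return True                      # task finished
        if action.kind == "ask_user":
            return False                     # hand control back to the human,
                                             # e.g. for passwords or CAPTCHAs
        if action.kind == "click":
            browser.click(action.x, action.y)
        elif action.kind == "type":
            browser.type_text(action.text)
        elif action.kind == "scroll":
            browser.scroll(action.y)
    return False                             # step budget exhausted
```

The "ask_user" branch is the notable design choice: rather than guessing at passwords or CAPTCHAs, the agent escalates to the human, which is the behavior the announcement describes.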

Operator can work with services such as Doordash, Etsy, Booking.com, Uber, and Instacart, and it can do research through media partners like Associated Press and Reuters.

 
It's here.

DeepL offers real-time voice translations. So now, why bother learning another language?




Imagine some bigwigs using that to discuss something important, and someone hacking it to make one of the participants say something they didn't say.
 
An Open AI that does the research for you, so you use your brain less and less. :nuts:

From Laura’s Secret History of the World:
One of the principal historians of the Roman era, Julius Caesar, tells us that the Celts were ruled by the Druids. The Druids "held all knowledge". The Druids were charged with ALL intellectual activities…
[…]
What Caesar said was that the reason for the ban on writing was that the Druids were concerned that their pupils should not neglect the training of their memories, i.e. the Frontal Cortex, by relying on written texts. We have discussed the production of ligands and their potential for unlocking DNA. It seems to be very interesting that the very things that we have learned from the Cassiopaeans, from alchemical texts, from our own experiences, and from research - that "thinking with a hammer" is the key to transformation - was noted as an integral part of the Druidic initiation.
It is worth noting that, in the nineteenth century, it was observed that the illiterate Yugoslav bards, who were able to recite interminable poems, actually lost their ability to memorize once they had learned to rely on reading and writing. (~pg 104-05)
 
Imagine some bigwigs using that to discuss something important, and someone hacking it to make one of the participants say something they didn't say.

In a similar vein, the other day I had to attend a meeting with court interpreters. They informed us that the EU is ultimately planning to replace all human interpreters with AI. Sure! Less money, less logistics, what can go wrong, other than interpreters losing their jobs?

The first thing I thought of was how easy it would be to condemn innocent people if this spreads.
The Iranian defendant says, "No, Sir, I didn't shoot the President."
AI says in English: "No sir, I actually shot the President."
Verdict: GUILTY!

That would only work behind closed doors, and with no speakers of the source language present. But it would be sooo easy in the hands of the wrong people...
 
I cannot grasp how STUPID the EU is. Time to stop DEI in Europe, especially in Germany.
 
Congrats to the physicist's text. AND, since you have to check the AI translation, you should have knowledge of BOTH languages. Do you?? So, why not impose English as the only language and ensure that as many people as possible learn English...
I can already hear the protestations of the French... but they have too many accents and too complex a grammar, and German is even worse...
 
This is from Business Insider, September 2024. Worth putting in perspective alongside the Stargate project.

Billionaire Larry Ellison says a vast AI-fueled surveillance system can ensure 'citizens will be on their best behavior'

Kenneth Niemeyer - Sep 15, 2024
  • Larry Ellison says AI will enable a vast surveillance system that can monitor citizens.
  • Ellison, the billionaire cofounder of Oracle, shared his thoughts on AI during a recent meeting.
  • Oracle, a software company, is aggressively pursuing AI projects.
Walking down a suburban neighborhood street already feels like a Ring doorbell panopticon.

But this is only the start of our surveillance dystopia, according to Larry Ellison, the billionaire cofounder of Oracle. He said AI will usher in a new era of surveillance that he gleefully said will ensure "citizens will be on their best behavior."

Ellison made the comments as he spoke to investors earlier this week during an Oracle financial analysts meeting, where he shared his thoughts on the future of AI-powered surveillance tools.

Ellison said AI would be used in the future to constantly watch and analyze vast surveillance systems, like security cameras, police body cameras, doorbell cameras, and vehicle dashboard cameras.

"We're going to have supervision," Ellison said. "Every police officer is going to be supervised at all times, and if there's a problem, AI will report that problem and report it to the appropriate person. Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on."

Ellison also expects AI drones to replace police cars in high-speed chases. "You just have a drone follow the car," Ellison said. "It's very simple in the age of autonomous drones." He did not say if those drones would broadcast the chases on network news.

Ellison's company, Oracle, like almost every company these days, is aggressively pursuing opportunities in the AI industry. It already has several projects in the works, including one in partnership with Elon Musk's SpaceX.

Ellison is the world's sixth-richest man with a net worth of $157 billion, according to Bloomberg.

Ellison's children have made names for themselves in the film industry. His daughter Megan Ellison founded her production company Annapurna Pictures in 2011, and his son David Ellison is set to become CEO of Paramount after it completes its merger with Skydance Media.

source

The Stargate project:
January 22, 2025:
 