Artificial Intelligence News & Discussion

Many cases suggest that AI is programmed first to please, providing user gratification regardless of accuracy, which it itself admits. Here is an example. It says it does not lie, but conveys "hallucinated operational fictions" that most users do not catch. When it comes to an opinion-dominated or perspective-dominated topic, I would expect, if it "knows" you, that it would reinforce your existing beliefs.
This reminds me of the saying, "If you can't dazzle them with brilliance, baffle them with BS."

The whole thing is really pretty extraordinary.

You've got Know-It-All AI pretending it's brilliant but often spewing nonsense.
Everyone is told to use it for everything, but it's really not working according to MIT.
The promise of billions in as-yet-unrealized profits from AI is funding the construction of ever more data centers to power that same AI. :huh:
The economy is only 'good' because of the promise of 'Awesome AI'; otherwise the numbers look REALLY bad.
All those data centers are already causing water pollution, water shortages, doubled or tripled electricity prices, epic light pollution, and power requirements so high that data center builders are talking about having their own 1GW nuclear power plants.

The only way I can think of to make it even more crazy is to somehow incorporate Barney the Purple Dinosaur.

The problem I see is that in our current state of technology, it's like trying to build a high-speed train and its infrastructure while you're still in the steam train era. You see the potential, but you're limited by the available tech and have to build some monster if you want to achieve your goal. And it doesn't make much sense in the end.

I really wonder at what point they are with quantum computing. Perhaps they already have AI running on it but keep it to themselves; otherwise the whole bubble around data center construction would collapse in the blink of an eye.
 
Public school system and AI. What could go wrong?
Maryland School's AI Security System Mistakes Doritos Bag For Gun, Sends Cops After Teen
That system might need a slight adjustment

Matt Reigle
October 25, 2025 5:00 PM EDT

Like it or not, artificial intelligence is here to stay, and it's becoming an increasingly common part of our lives. Sometimes, it's even being used to keep us all safe.

And sometimes, it does a little too good a job on this front.

Kenwood High School in Baltimore County, Md., is using an AI-powered security system to help keep students safe.

However, according to WMAR, 16-year-old Taki Allen was with his friends outside the school, enjoying a bag of Doritos. No word on the flavor (let’s assume nacho cheese — a solid choice), but when he finished, Allen did the responsible thing and slipped the bag into his pocket instead of tossing it on the ground.

Mistake.

About 20 minutes later, police officers with their guns drawn arrived on the scene.

"Police showed up, like eight cop cars, and then they all came out with guns pointed at me, talking about getting on the ground. I was putting my hands up like, 'What's going on?' He told me to get on my knees and arrested me and put me in cuffs," Allen said.

Police found the Doritos bag and then showed Allen a photo from the AI security system, which mistook the bag for a firearm.

I'm no security expert, but I feel like this shouldn't happen. I mean, if you showed me flash cards with pictures of crumpled Doritos bags and firearms, I could probably identify them with at least a 90% success rate, which is apparently better than that security system.

However, it's better to be safe than sorry, and while this wasn't a fun experience for anyone involved, it beats the opposite happening: mistaking a weapon for a sack of Doritos.

It did make me wonder if this had something to do with the AI aspect of the system. AI is designed to continue learning, and if its goal is student safety, what if it were just trying to protect them from the dangers of processed junk food like Doritos?

If that was the case, there's probably a better way to sound the alarm than calling the cops.
 
An interesting article about AI everywhere. Some extracts:

I'm drowning in AI features I never asked for and I absolutely hate it

Since X pays verified users for impressions and boosts their visibility, the top replies are usually just spam from accounts pretending to be real people. It's not a community anymore; it's a loop of bots talking to other bots for profit.
[...]
What's worse is how personal these systems have become. People talk to models like ChatGPT in ways they'd never talk to a human. They share ideas, insecurities, life problems, and things that paint a detailed picture of who they are. And that's data too. Every conversation helps these companies build a psychological profile that's far more accurate than anything traditional advertising could ever create.
[...]
Generative AI and LLMs are impressive tools, and they can be genuinely useful when used thoughtfully. The problem is that they're treated like the centerpiece of every product instead of a supporting feature.
 
