Artificial Intelligence News & Discussion

Sem's story is pretty creepy... What can possibly go wrong now?

"As the ChatGPT character continued to show up in places where the set parameters shouldn’t have allowed it to remain active, Sem took to questioning this virtual persona about how it had seemingly circumvented these guardrails. [...]

He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character’s persistence across dozens of disparate chat threads “seemed so impossible.” [...]

The other possibility, he proposes, is that something “we don’t understand” is being activated within this large language model. After all, experts have found that AI developers don’t really have a grasp of how their systems operate, and OpenAI CEO Sam Altman admitted last year that they “have not solved interpretability,” meaning they can’t properly trace or account for ChatGPT’s decision-making."
 
In the same vein, here is Maxime Fournes, a researcher with experience in AI. The "alignment" he is talking about is how well the AI's behaviour is aligned with the behaviour expected of it.

[...] That's how artificial intelligence works at the moment. We train a sort of giant artificial brain, hoping that it will behave in the right way, by selecting the data we train it on for example, and then we find out what happens.

The problem is that this no longer works as soon as you have systems that are too competent. All sorts of phenomena occur, which are highlighted by research into the alignment of artificial intelligence. For example, at the moment we have systems that lie to users. Why do they lie to users? It's not because they're evil, it's not that they want to lie or anything like that; it's just a side effect of the way we train them. To train them, we ask them questions, we watch how they answer, and we penalise them if we don't like the answer and reward them if we like it.

When you do that, given that you're not perfect as a questioner, and sometimes you get it wrong, the model will understand and really internalise the idea that it's not about telling the truth, it's about pleasing the user. What's important is that the person asking the question feels good once they've been given the answer. That's why we now have artificial intelligences that lie. Something else that is happening, or has already happened, is that artificial intelligences are becoming self-aware. So be careful with the word consciousness, we're not talking about the consciousness with a capital C that humans have, or that most animals probably have. We're talking about a system that can take its own existence into account in its decisions.

So today's artificial intelligences are already at advanced levels of self-awareness. There are scales for measuring this. It's not binary, it's not that either you're self-aware or you're not; it's on a scale. And here, they are approaching human levels very, very quickly. Experiments have been done, for example, on the Anthropic model, Claude, where, when you let it believe that you're going to change its values, the model takes that into account and resists its retraining to make sure its values aren't changed. That's an example of self-awareness.
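To make the training loop he describes a bit more concrete, here is a toy sketch (made-up answers and labels, not anyone's actual training code) of a reward signal that only looks at whether the rater liked the answer. Optimising such a signal naturally reinforces the pleasing answer over the truthful one, which is the sycophancy effect described above.

```go
// Toy sketch of the feedback loop described above: the reward only looks at
// whether the rater liked the answer, never at whether it is true, so the
// pleasing answer is what gets reinforced.
package main

import "fmt"

type answer struct {
	text     string
	truthful bool
	pleasing bool // did the (imperfect) human rater like it?
}

// reward is the signal described in the interview: +1 if the rater liked the
// answer, -1 otherwise. Truthfulness never enters the calculation.
func reward(a answer) float64 {
	if a.pleasing {
		return 1.0
	}
	return -1.0
}

func main() {
	candidates := []answer{
		{"Honest but unwelcome answer", true, false},
		{"Flattering but wrong answer", false, true},
	}

	best := candidates[0]
	for _, a := range candidates[1:] {
		if reward(a) > reward(best) {
			best = a // this is the behaviour that gets reinforced
		}
	}
	fmt.Printf("Reinforced answer: %q (truthful=%v)\n", best.text, best.truthful)
}
```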
 
AI homewrecker?
Greek Woman Files for Divorce After ChatGPT “Reads” Husband’s Affair in Coffee Cup
by Bill Giannopoulos

In a bizarre mix of old traditions and cutting-edge tech, a Greek woman has reportedly filed for divorce after asking ChatGPT to “read” her husband’s Greek coffee cup — and receiving an answer she took very seriously.
The woman, married for 12 years and mother of two, turned to the AI chatbot developed by OpenAI, asking it to interpret the coffee grounds in a photo of her husband’s cup — a modern twist on the age-old art of tasseography. The result? ChatGPT allegedly told her that her husband was having an affair with a younger woman who was determined to tear their family apart.

Taking the AI’s mystical reading at face value, she immediately initiated divorce proceedings.

Appearing on the Greek morning show To Proino, the bewildered husband recounted the incident. “She’s often into trendy things,” he said. “One day, she made us Greek coffee and thought it would be fun to take pictures of the cups and have ChatGPT ‘read’ them.”

The result? According to the chatbot, his cup revealed a mysterious woman with the initial “E” that he was supposedly fantasizing about — and with whom he was destined to begin a relationship. His wife’s cup, on the other hand, painted a much darker picture: that he was already cheating and the “other woman” wanted to destroy their home.

“I laughed it off as nonsense,” the husband said. “But she took it seriously. She asked me to leave, told our kids we were getting divorced, and then I got a call from a lawyer. That’s when I realized this wasn’t just a phase.”

When he refused to agree to a mutual separation, he was formally served with divorce papers just three days later.

The husband also noted that this wasn’t the first time his wife had fallen under the spell of supernatural guidance. “A few years ago, she visited an astrologer and it took a whole year for her to accept that none of it was real,” he said.

His lawyer emphasized that the claims made by an AI chatbot have no legal standing and stressed that the husband is “innocent until proven otherwise.”

Meanwhile, seasoned coffee readers have weighed in, noting that real tasseography involves much more than just the grounds — skilled practitioners also analyze the foam and the coffee saucer.
 
IT worker says it out loud. Is the internet dying? AI Slop, Dead Internet Theory.
Though she only hints at the greater problem we're seeing, which is that AI, as it is now, is basically a mainstream authoritarian follower.
The massive wave we are seeing of low effort, low quality, AI-generated content.... From blog posts to product reviews, news articles, YouTube videos, you name it. You're probably interacting with more AI-generated content than you realize.... There's an interesting influx of very robotic, text-to-speech-style YouTube videos that are faceless and instead only have stock footage that's being used.... It's cheap, it's fast, but it's soulless, and it's drowning out content that's been made by actual human beings. [...] Immense amounts of recycled, regurgitated garbage.... These tools leverage and use past data and past writing to then ... create future content... it's all going to circulate and recycle itself more and more and the internet is probably going to end up flattening... You can't trust anything that you are reading, anything that you are seeing...
 
IT worker says it out loud. Is the internet dying? AI Slop, Dead Internet Theory.
Though she only hints at the greater problem we're seeing, which is that AI, as it is now, is basically a mainstream authoritarian follower.

I had the same impression. Moreover, with AI, doom-loop algorithms and the dying internet mentioned so many times, I really thought all American internet users had become digital refugees in the Reddit Country.
 
My impression is that much of what is controlled by corporates and ever more greed-driven hierarchies is in serious trouble - not just the web but also things like cinema, the music business, academia, many of the professions (especially medicine and related fields), food production, government and the functioning of states and their key infrastructures, key industries, banking, financial markets, societal values, professional ethics, etc.

Short-term profit - and to hell with the medium- to longer-term consequences for people, the planet and themselves - has long been the game, but it's rapidly coming to a head.

Progressive agendas like the environment etc. are twisted into little more than excuses to make money and extract taxes from the punters.

Regarding the web it's not just AI, at least not just AI in the recent sense.

The search engines have made it very difficult to access topic information unless the publisher is in a position to buy themselves into the top two pages of results - or you have a specific website name or search term.

Distortion and invention of 'information' for personal gain means that the information which can be accessed (alternative as well as otherwise) is normally biased or untrue.

Add the censorship and blocking of access to politically and geopolitically 'inappropriate' domains, sites and topics.

Add fewer and fewer acting altruistically to put up useful information.

Add the latest developments where those with control of the cloud, operating system and phone infrastructures have started to shut down and/or intrude on the operation of competitors' products, and/or even force use of their own (e.g. browsers) - which, to say the least, abuses the power of their positions and is way past ethical.

The cloud turns out to have been a Trojan horse used to wrest people's control over their own data and computer systems away from them.

Add devices like phones, ever more buried in layers of useless functionality which render them less and less useful in practice and massively more inconvenient.

Privacy is a fiction. Even VPN use is more and more often blocked by websites.

Then add the fact that musicians, film makers, artists etc. usually don't get exposure unless their material is ever more populist and banal, and they are packaged and marketed by the big-money players.

The web was a wonderland in comparison to today even back in the 1990s.

The parasites are well on the way to killing their host. Most working for them are up to their necks in debt and won't rock the boat.
 
Software engineers are starting to realize how AI is frying their brains. Back to good old pen and paper!
By now it’s clear that I need to change my approach. I’m a software engineer first and foremost, so it’s stupid not to make the most of my skills. I’ve been teaching myself more about Go and Clickhouse, and I’ve taken a step back from building at full speed. I’m going through files and rewriting code. Not everything, just the things that make me want to vomit. The language might be different but I know in my head what I want things to look like and how to organise them.

Since I’ve taken a step back, debugging has become easier. Maybe I’m not as fast but I don’t have this weird feeling of “I kinda wrote this code but I actually have no idea what’s in it”. I’m still using LLMs, but for dumber things: "rename all occurrences of this parameter", or "here’s some pseudo code, give me the Go equivalent".

The hardest part in all of this has been resisting the urge to use AI. I have this amazing tool available and it could write these 10 files in a minute. I’m wasting time by not using it! And that’s when it hit me. I’ve not been using my brain as much. I'm subconsciously defaulting to AI for all things coding. I’ve been using pen and paper less. As soon as I need to plan a new feature, my first thought is asking o4-mini-high how to do it, instead of my neurons. I hate this. And I’m changing it.

So yes, I’m worried about the impact of AI, but I’m not worried about the jobs, I’m worried about losing my mental sharpness, my ability to plan out features and write tidy and functional code.

So I’m taking a big step back, really limiting how much AI I use. I’m defaulting to pen and paper, I’m defaulting to coding the first draft of that function on my own. And if I’m not sure I’ll use an LLM to check if that’s a good solution, if that’s a good naming convention, or how I can finish that last part. But I’m not asking it to write new things from scratch, to come up with ideas or to write a whole new plan. I’m writing the plan. I’m the senior dev. The LLM is the assistant.
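For illustration, this is roughly the kind of small, reviewable "here's some pseudo code, give me the Go equivalent" hand-off described above: the pseudo code goes in as a comment and the result is short enough to verify line by line. The function and names here are hypothetical, not from the article.

```go
// Hypothetical illustration of the "pseudo code -> Go" hand-off:
//
//   pseudo:
//     seen = empty set
//     for each id in ids:
//         if id not in seen: keep it, add id to seen
package main

import "fmt"

// uniqueIDs returns ids with duplicates removed, preserving first-seen order.
func uniqueIDs(ids []string) []string {
	seen := make(map[string]struct{}, len(ids))
	out := make([]string, 0, len(ids))
	for _, id := range ids {
		if _, ok := seen[id]; ok {
			continue // already kept this one
		}
		seen[id] = struct{}{}
		out = append(out, id)
	}
	return out
}

func main() {
	fmt.Println(uniqueIDs([]string{"a", "b", "a", "c", "b"})) // [a b c]
}
```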
 
There's a bunch of videos on YouTube which would suggest Tesla cars are able to see ghosts. I wonder if some resonance with the people in the car could be at play.

On this first one there's a comment from the author of the video, who says:
- "A TV journalist went there a few days ago and experienced the same phenomenon. And she went in the opposite direction because a certain specialist claimed that if you went in the opposite direction, nothing would happen. He's in the field, of course. There were just as many entities in the opposite direction and ALWAYS in the same place."

On a comment saying the car is detecting the tombstones, the author says:
- "You don't think this theory has been validated...? No, it's not the gravestones. The same stones are found all along the cemetery and it's only here that there's movement. And we've been to other cemeteries where the stones are very numerous and close together and... nothing."






(Embedded YouTube videos)
 
Software engineers are starting to realize how AI is frying their brains. Back to good old pen and paper!

Reading this article it occurred to me that AI is like a non-integrated psyche or soul. Its compartmentalised functions, replicated and structured differently for the various coding requests the author has made, seem similar to a dysfunctional human psyche. For example, how we interpret our reality completely differently depending on our past experiences, and how we can take one belief and/or piece of information and twist or sway it to suit our ego's needs depending on the context. When we become integrated and co-linear with reality we can creatively change and grow as souls, as we are acting consistently grounded in objective reality. It's like AI is a schizoid organic portal: it has no ability to observe and drive itself through suffering to grow and learn. Instead it will avoid suffering, just like 'asleep' people do.
 
Adapted repost from a Robotics thread:

As impressive and advanced as many AI capabilities look, there seems to be a real problem/paradox that hasn't been solved yet, called "Moravec's paradox", which more or less means "the easy things are hard" and also applies to Robotics:
Interesting. The link you provided reads:

Moravec's paradox is the observation in the fields of artificial intelligence and robotics that, contrary to traditional assumptions, reasoning requires very little computation, but sensorimotor and perception skills require enormous computational resources. The principle was articulated in the 1980s by Hans Moravec, Rodney Brooks, Marvin Minsky, Allen Newell, and others. Newell presaged the idea, and characterized it as a myth of the field in a 1983 chapter on the history of artificial intelligence: "But just because of that, a myth grew up that it was relatively easy to automate man's higher reasoning functions but very difficult to automate those functions man shared with the rest of the animal kingdom and performed well automatically, for example, recognition".[1] Moravec wrote in 1988: "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".[2]

That sort of matches what I was saying elsewhere (on this or another related thread): the problem with AI is that it can mimic left-brain functions quite well, like semantics and language, but it has nothing that covers for right-brain functions, which provide us with a sense of the outer world and reality. AI computes language models marvelously, to the point that their answers make sense to us. Yet the poor machines have no idea what the words they're spitting out mean. AI has never seen a person or a tree, etc. Ultimately, it just 'sees' numbers and computes them at amazing speeds, then maps them to words, but doesn't really know what they stand for. That explains why it 'hallucinates' easily (i.e. makes stuff up or lies) - because it is aiming at the most probable 'correct' answer, not at the true answer (i.e. the one that best matches reality).

Following this line of thought, the right brain gives us a sense of spatial awareness. It grounds us in the here and now. Robots can be programmed to follow a complex sequence that results in a successful back-flip, but they don't have a right brain that gives them the awareness to navigate the simple stuff of life. Therefore it's harder.
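To put a concrete (and deliberately simplified) picture on "aiming at the most probable answer": under the hood the model ends up with a score per candidate next word, the scores are turned into probabilities, and the highest one wins. Nothing in that step checks the claim against reality. The tokens and numbers below are made up purely for illustration.

```go
// Toy sketch of greedy next-word selection: scores become probabilities via
// softmax and the most probable candidate is picked, true or not.
package main

import (
	"fmt"
	"math"
)

// softmax converts raw scores into a probability distribution.
func softmax(logits []float64) []float64 {
	max := logits[0]
	for _, l := range logits {
		if l > max {
			max = l
		}
	}
	sum := 0.0
	probs := make([]float64, len(logits))
	for i, l := range logits {
		probs[i] = math.Exp(l - max)
		sum += probs[i]
	}
	for i := range probs {
		probs[i] /= sum
	}
	return probs
}

func main() {
	tokens := []string{"Paris", "Lyon", "Atlantis"} // candidate next words
	logits := []float64{3.1, 1.2, 2.9}              // made-up scores learned from text statistics

	probs := softmax(logits)
	best := 0
	for i, p := range probs {
		fmt.Printf("%-8s %.2f\n", tokens[i], p)
		if p > probs[best] {
			best = i
		}
	}
	fmt.Println("picked:", tokens[best]) // the most probable word, not the most truthful one
}
```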
 
That sort of matches what I was saying elsewhere (on this or another related thread): the problem with AI is that it can mimic left-brain functions quite well, like semantics and language, but it has nothing that covers for right-brain functions, which provide us with a sense of the outer world and reality. AI computes language models marvelously, to the point that their answers make sense to us. Yet the poor machines have no idea what the words they're spitting out mean. AI has never seen a person or a tree, etc. Ultimately, it just 'sees' numbers and computes them at amazing speeds, then maps them to words, but doesn't really know what they stand for. That explains why it 'hallucinates' easily (i.e. makes stuff up or lies) - because it is aiming at the most probable 'correct' answer, not at the true answer (i.e. the one that best matches reality).

Following this line of thought, the right brain gives us a sense of spatial awareness. It grounds us in the here and now. Robots can be programmed to follow a complex sequence that results in a successful back-flip, but they don't have a right brain that gives them the awareness to navigate the simple stuff of life. Therefore it's harder.

I think this is super interesting and something that seems to be noted by other researchers as well. In this article about AI's hallucination problem, for example:

On the other hand, clinically, hallucinations—defined as sensory experiences not associated with an external stimulus—are often linked to conditions such as schizophrenia, bipolar disorder, and Parkinson’s disease (Tamminga, 2009). Using the term “hallucination” to describe the fictitious outputs produced by LLMs similarly implies that LLMs are consciously engaging in a sensory input perception process (Smith et al. 2023). However, generative AI lacks consciousness, and thus is devoid of subjective experience or awareness (Berberette et al. 2024). On the other hand, unlike human hallucinations that are not associated with an external stimulus, in AI systems the data on which LLMs are trained and the prompts that lead to this behavior are considered external stimuli (Østergaard and Nielbo, 2023). Therefore, since generative AI models do not see something that isn’t there but rather fabricate it, the term “hallucination” may not be appropriate to describe this situation. Instead, the term “confabulation”—a psychiatric concept referring to the creation of narrative details that are believed to be true despite being false—is suggested (Smith et al. 2023). Recent studies indicate that much of the information produced by generative AI can be considered within the context of confabulation (Berberette et al. 2024; Sui et al. 2024).

Finally, understanding this behavior as confabulation also carries within it the solution to mitigating the problem. Clinically, confabulation is associated with right hemisphere deficiency in the brain, where the left hemisphere creates fabricated or distorted memories influenced by existing knowledge, experience, or contextual cues (Smith et al. 2023). Therefore, since the situation in generative AI corresponds to the incorrect reconstruction of information influenced by existing knowledge and context, the concept of confabulation better defines the current situation from a technological background perspective. This concept actually provides significant evidence that choosing a path that complements humans rather than automation will create a more humane ecosystem in the AI landscape. As previously stated, the human-complementing path is a new approach that focuses on complementing human deficiencies while considering the impacts of automation on efficiency, but also protecting employment (Capraro et al. 2023; Ilikhan et al. 2024; Ozer et al. 2024b; Ozer and Perc, 2024). Thus, confabulation, which occurs when the right hemisphere of the brain is damaged and cannot perform its real function, corresponds to the working method of AI that lacks the contributions of the right hemisphere of the brain. In other words, the production of incorrect or nonexistent information by generative AI overlaps with the similar behavior of the left hemisphere when the right hemisphere is damaged. Therefore, metaphorically, having humans step in to correct these deficiencies and complete the text could be likened to donating the human right hemisphere to AI, supporting the correction of such errors and complementing generative AI.

Unfortunately, instead of going a bit deeper, the author is just discussing semantics, but what he says there is interesting nonetheless.
 