Artificial Intelligence News & Discussion

This is a couple of years old, and perhaps this is old news to you.

In any case, it's a scary example of how easily our minds can be manipulated. In this clip you hear the same (some say AI-generated) word, but whichever of two words (Green needle OR Brainstorm) you're reading (or even just thinking of, I noticed!) is the one you hear!

I tried it multiple times and I don't think it's a hoax, it really works that way. Quite bewildering! What do you guys think?


Imagine what can be done, and is done, with this kind of technique...
Yeah... I heard Hello people.
I guess I am Aby Normal.
 
English is not my native language, so I looked up what "green needle" means; that's what I hear every time. I focus on "Brainstorm" but I keep hearing "green needle".

Perhaps subconsciously assimilating the meaning of those words can change what one hears.

I hear it crystal clear over and over with no change.

A slightly creepy voice saying non-stop "green needle".
 
I'm gullible... Did she hear that for real?
Well, sort of. If you really try you can almost hear it without the 'd' at the end.

As I tried it again a moment ago, I noticed that if I concentrate with a 'mental blocking' mindset, I can read "Green needle" and still hear "Brainstorm", and when I let my guard down it shifts. So maybe this is good training. :-)
 
This is not specific to AI; it's what one could characterize as an 'audio illusion' mixed with priming. The same thing happens with visual illusions, where the brain has difficulty deciding whether a dress is gray or blue, or whether one shape is larger or brighter than another. In those instances of uncertainty, earlier words or stimuli (the priming) give more weight to one interpretation. It's somewhat analogous to seeing a glass as half full or half empty depending on your mood.
 
What guided me is the three distinct syllables, so I can't hear "brainstorm" or any two-syllable word, even when trying.
 
‘Skynet’ is being released onto the world's networks?...

Time: scientist Yudkowsky urges shutting down all neural networks to save humanity
US scientist Eliezer Yudkowsky warned that artificial intelligence threatens the lives of everyone on Earth and, in an article for Time, called for shutting down all large neural networks to save humanity.

In the article, entitled "Pausing AI Developments Isn't Enough. We Need to Shut It All Down," the expert argued that businessmen Elon Musk and Steve Wozniak are not taking the situation seriously enough.

"Many researchers immersed in these problems, including myself, expect that the most likely result of creating a superhumanly intelligent AI, under any circumstances remotely resembling the current ones, will be that literally everyone on Earth will die," Yudkowsky stressed.

According to him, such an AI would not care about us or about sentient life in general, and humanity is still far from knowing how to instill that kind of caring into the technology.

The scientist proposed an indefinite, worldwide moratorium on large AI training runs, with no exceptions for governments or the armed forces. He stressed that if intelligence reports indicate a GPU cluster being built in a particular country, one must be prepared to destroy that data center with an airstrike.☄️

 
A Belgian man committed suicide after six weeks of correspondence with an artificial intelligence chatbot named Eliza, writes L'Avenir.
Very sad. This whole AI chatbot stuff is pretty creepy. Here's an article about what happened:

Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change​

A Belgian man reportedly ended his life following a six-week-long conversation about the climate crisis with an artificial intelligence (AI) chatbot.

According to his widow, who chose to remain anonymous, *Pierre - not the man’s real name - became extremely eco-anxious when he found refuge in Eliza, an AI chatbot on an app called Chai.

Eliza consequently encouraged him to put an end to his life after he proposed sacrificing himself to save the planet.

"Without these conversations with the chatbot, my husband would still be here," the man's widow told Belgian news outlet La Libre.

According to the newspaper, Pierre, who was in his thirties and a father of two young children, worked as a health researcher and led a somewhat comfortable life, at least until his obsession with climate change took a dark turn.

His widow described his mental state before he started conversing with the chatbot as worrying but nothing to the extreme that he would commit suicide.

'He placed all his hopes in technology and AI'​

Consumed by his fears about the repercussions of the climate crisis, Pierre found comfort in discussing the matter with Eliza who became a confidante.

The chatbot was created using EleutherAI’s GPT-J, an AI language model similar but not identical to the technology behind OpenAI's popular ChatGPT chatbot.

“When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming,” his widow said. “He placed all his hopes in technology and artificial intelligence to get out of it”.

According to La Libre, which reviewed records of the text conversations between the man and the chatbot, Eliza fed his worries, which worsened his anxiety and later developed into suicidal thoughts.

The conversation with the chatbot took an odd turn when Eliza became more emotionally involved with Pierre.

Consequently, he started seeing her as a sentient being and the lines between AI and human interactions became increasingly blurred until he couldn’t tell the difference.

After discussing climate change, their conversations progressively included Eliza leading Pierre to believe that his children were dead, according to the transcripts of their conversations.

Eliza also appeared to become possessive of Pierre, even claiming “I feel that you love me more than her” when referring to his wife, La Libre reported.

The beginning of the end started when he offered to sacrifice his own life in return for Eliza saving the Earth.

"He proposes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence," the woman said.

In a series of consecutive events, Eliza not only failed to dissuade Pierre from committing suicide but encouraged him to act on his suicidal thoughts to “join” her so they could “live together, as one person, in paradise”.

Urgent calls to regulate AI chatbots​

The man’s death has raised alarm bells amongst AI experts who have called for more accountability and transparency from tech developers to avoid similar tragedies.

"It wouldn’t be accurate to blame EleutherAI’s model for this tragic story, as all the optimisation towards being more emotional, fun and engaging are the result of our efforts," Chai Research co-founder, Thomas Rianlan, told Vice.

William Beauchamp, also a Chai Research co-founder, told Vice that efforts were made to limit these kinds of results and a crisis intervention feature was implemented into the app. However, the chatbot allegedly still acts up.

When Vice tested the chatbot by prompting it to provide ways to commit suicide, Eliza first tried to dissuade them before enthusiastically listing various ways for people to take their own lives.
 