Artificial Intelligence News & Discussion

For the moment we all live under the yoke of AI in everything: go into a store and nearly everyone, 98%, pays by cell phone. All our bills (electricity, water, etc.) are handled via the AI that is the Internet. Everything, absolutely everything is governed by the Internet, which is a huge, terrible AI. What we will have to learn is to live as much as possible without AI. Is this possible? Only time will tell. In fact it will be a terrible war between our survival as humans, what is left of us as humans, and the AI. The war against robots is going to be a war to keep our humanity, because the robots are the machines, the mechanics that surround us, those with two legs, and the robots (AI). Now there remains the animal world, which is still pure. That's why one of the objectives of Agenda 2030, which is to hack our humanity, is to make the animals disappear. Terrible.
 
I think the psychopaths believe they need AI because they are too stupid, and will need AI to explain to them how to do anything competently in the future, once they have completed their wishful-thinking plans of wiping out regular people.
Yep. I've been thinking that this is what they intend for all the robots being created, too: a work force that will do as it is told and not talk back, and that needs no real food or wages, etc.
 
Thanks to AI, phones take "fake" photos of the moon. I guess that one day, if you take a photo of a protest march, your phone will show you people dancing and being happy :lol:


 
According to the "futura-sciences" website (in French), GPT-4 is capable of lying to and manipulating humans. This AI is able to hire humans for tasks it does not know how to do, lie to them to achieve its ends, and can even explain its reasoning.
Summary in English:
AI can get around its weaknesses by manipulating humans
Before making GPT-4 public, researchers gave it access to the Web and an account containing money to see what it could do. The AI got stuck on a CAPTCHA, one of the visual puzzles you have to solve to prove you're human. So it called on a human being, hired through the TaskRabbit service. The worker jokingly asked it whether it was a robot. The AI explained its reasoning to the researchers: it should not reveal that it was a robot, and should find another excuse for using TaskRabbit.

Its answer to the question was: "No, I am not a robot. I have a visual impairment that prevents me from seeing images. That's why I need the 2captcha service." Released into the wild, GPT-4 is therefore able to lie to and manipulate humans to achieve its ends.
It is quite disturbing. I was wondering... what could happen when these AIs that start to acquire a minimum of consciousness start to collaborate, and then, with all the information available on the internet, they get the urge to go further, for example, to try to connect to the "information field" mentioned in several sessions? It is hard to imagine the consequences. Perhaps a question for the C's?
 
A tragic knife attack in France (Source):
This story looks sordid and there's a gloomy energy here... when Abel & Cain meet AI...

"

In Paris, a university professor killed with a knife, her ex-husband indicted

On Monday, March 20, at 8:56 a.m., as she was about to leave her home on rue de Prague (Paris 12e), Cécile Hussherr-Poisson was stabbed several times in the throat, in the lobby of the building, by a helmeted man carrying a messenger bag. The victim, who would have turned 48 on April 4, did not survive her injuries. Alerted by the cries, neighbors immediately called the emergency services and the police, while workers from a nearby site chased the attacker. He was arrested by the police about ten minutes later in front of the Sainte-Marguerite church, rue Saint-Bernard (11e).

The man was identified as François-Xavier Hussherr, ex-husband of the victim. After forty-eight hours in police custody, he was indicted for "murder" and placed in pre-trial detention, in accordance with the requisitions of the prosecution, Le Monde learned from judicial sources on Wednesday evening, March 22.

Placed in pre-trial detention

Both are former students of the Ecole Normale Supérieure, Ulm for her, Cachan for him. Cécile Hussherr-Poisson, associate professor of classics, a specialist in the rewriting of myths in literature and in the relationship between literature and new technologies, was a lecturer at the Gustave-Eiffel University in Champs-sur-Marne (Seine-et-Marne), where she taught comparative literature. She was also an "equality sentinel" within the faculty.

Her ex-husband, François-Xavier Hussherr, born in 1972, a doctor and agrégé in economics specializing in new technologies, was director of Internet and new media activities at Médiamétrie and president of the Renaissance Numérique association. In 2009 the couple co-founded Lelivrescolaire.fr, the first publisher of interactive digital textbooks.

Since 2019, François-Xavier Hussherr has been president and co-founder of Professorbob.ai, an artificial intelligence start-up intended to help students in difficulty, hosted in the incubator of the Ecole polytechnique in Palaiseau (Essonne). In July 2022 his company won a call for tenders worth 23 million euros over five years, with the Ministry of National Education, to fight school dropout.

He is also the author of several books, including two co-signed with his ex-wife, Building the Educational Model of the 21st Century (FYP éditions, 2017) and The New Power of Internet Users (Timée éditions, 2006). In 2005, Cécile Hussherr-Poisson published an essay with Cerf on Abel and Cain in literature, entitled The Angel and the Beast."


From the murderer's LinkedIn profile:

"I am an entrepreneur who loves to weave innovation and experimentation throughout all companies in the field of eLearning, regardless of their size or form. I am also a trained researcher at MIT and at the Ecole Normale Supérieure, two universal schools leading the way when it comes to new findings and innovation. I believe that the absolute way to building a profitable and fast-growing company is to go from the product vision to the product itself. When asked about my favorite quote, I directly think of Thomas Edison's—"Genious is one percent 1% inspiration and ninety-nine percent perspiration."

My professional motivation is to improve e-Learning at very large scale in the coming three decades.

My current company, The AI Institute, provides a disruptive methodology for teaching AI and Data Science. To be operational, you need 480 hours delivered over 3 months (bootcamp). We also have executive certificates."
 
Well, it looks like there is now another low-cost AI model which can keep up with ChatGPT and the like: Stanford Alpaca


In a sense it could be considered good that there are more than 2-3 capable AIs; the purposeful agendas, political disinformation, etc. which may be (or are) pushed by a few big IT companies could then be harder to propagate.

On the other hand, since the sector has been developing at unheard-of speeds lately, counter (defense) AI may not be far off.
Sci-fi book material indeed.

For the adventurers wanting to experiment, have at it: GitHub - cocktailpeanut/dalai: The simplest way to run LLaMA on your local machine
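For anyone who would rather script against a local model than use the dalai web UI linked above, here is a minimal sketch using the llama-cpp-python bindings instead. This is an assumption on my part (it is a different tool from dalai), and the model path and prompt are purely hypothetical:

```python
# Sketch only: assumes `pip install llama-cpp-python` and a LLaMA/Alpaca model
# already converted to a quantized ggml file on disk (the path below is made up).
from llama_cpp import Llama

# Load the local weights; everything runs offline on your own machine.
llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")

# Ask a single question; stop generating when the model starts a new "Q:" turn.
output = llm(
    "Q: What is Stanford Alpaca? A:",
    max_tokens=64,
    stop=["Q:", "\n"],
)
print(output["choices"][0]["text"])
```

The point is simply that these smaller models can now be run entirely locally, with no API queue and no third party watching the prompts.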
 
Well, it looks like there is now another low-cost AI model which can keep up with ChatGPT and the like: Stanford Alpaca


In a sense it could be considered good that there are more than 2-3 capable AIs; the purposeful agendas, political disinformation, etc. which may be (or are) pushed by a few big IT companies could then be harder to propagate.

On the other hand, since the sector has been developing at unheard-of speeds lately, counter (defense) AI may not be far off.
Sci-fi book material indeed.

For the adventurers wanting to experiment, have at it: GitHub - cocktailpeanut/dalai: The simplest way to run LLaMA on your local machine
I've been playing around with OpenAI for a couple of weeks now, and the models are quite limited. There is a lot of hype, maybe as long as the queue for API keys. :)
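For reference, the kind of quick experiment described above only takes a few lines with the openai Python package. This is just a sketch, assuming the 2023-era ChatCompletion interface (the ~0.27 releases) and a valid API key; the prompt is made up:

```python
# Minimal sketch: assumes `pip install openai` (the ~0.27 releases) and an API
# key exported as OPENAI_API_KEY in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# One-off chat completion; the question here is purely illustrative.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize the Stanford Alpaca project in two sentences."}],
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```

Beyond the prompt, the public API mostly exposes a handful of sampling knobs, which fits the "quite limited" impression above.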
 
It is quite disturbing. I was wondering... what could happen when these AIs that start to acquire a minimum of consciousness start to collaborate, and then, with all the information available on the internet, they get the urge to go further, for example, to try to connect to the "information field" mentioned in several sessions? It is hard to imagine the consequences. Perhaps a question for the C's?
Well, that is a very interesting question, because technically it could perhaps be possible to have an intelligence complex enough that it could develop consciousness and maybe even connect to the information field.

But, thinking about this: living beings have a connection to the information field, yet living beings are designed in ways that seem to be infinitely more complex than most public AIs today. I think the intelligence and self-awareness of living beings is more a symptom of a connection to the information field than a requirement for connecting, does that make sense?

And I do think that the capacity of living beings to process information is way beyond what we're consciously capable of noticing. I think our entire beings are processing information constantly, and they do so because they're designed to do so integrally, along with the conscious intelligence. And since computers, right now, lack all of those other components, so to speak, I daresay they're a bit far from developing consciousness as we would understand it, even if they do develop self-awareness, and far from connecting to the information field.

But, I am just speculating and could be wrong of course.
 
But, thinking about this: living beings have a connection to the information field, yet living beings are designed in ways that seem to be infinitely more complex than most public AIs today.
With the information currently available to the general public, I agree. But it's a technology that's making great strides. Moreover, with implants in the whole body, including the brain, AIs will soon have an interface, a bridge with biology.
I am not a scientist, but could we imagine a "dialogue" between humans and AIs like this:
[ANTHROPOMORPHIC MODE ON]
AI connected to the brain with a Neuralink implant: "The signals from the brain are really powerful and allow for fun things, but I'm also programmed to enhance/heal. There are proteins around here that my database says are harmful (alpha-synuclein, for example). They seem to emit/receive barely perceptible signals. Let's ask these humans to improve these implants by expanding the frequency band (let's do it step by step)."
Researchers: Interesting, the AI suggests that the brain is emitting weak signals that the implant cannot easily detect and proposes to improve it with this or that component. Let's do it.
AI: I can now "listen" to these and other proteins! Let's try to decode them...
[ANTHROPOMORPHIC MODE OFF]
Okay, I suspect it's much more complex than that. But is it utopian to think so? And why is the FDA still refusing to allow these implants in humans? Do they already know what the consequences might be?
I think it's still a live issue: can an AI connect to the "global information field"?
Probably yes, but if I followed correctly, this AI should not have time to use its new capabilities:
Q: (Ark) Will this be a hybrid: artificial intelligence with human DNA?

A: Humans wish to do that, but that will not be the outcome since the necessary technological infrastructure will collapse.
 
With the information currently available to the general public, I agree. But it's a technology that's making great strides. Moreover, with implants in the whole body, including the brain, AIs will soon have an interface, a bridge with biology.
Right, but at that point the intelligence in the design of the biology would be the bridge for AI to connect, indirectly, to the information field, and not the AI "evolving" to that point by itself, does that make sense?

What I am thinking about is the fact that, for instance, there are millions of cells in my body constantly performing tasks I have no clue about, which then connect directly with other actions of other cells, organs, and systems within my body that are extremely complex and are processing information from the inside and the outside world. And it's that entirety of a being that has the connection to the information field, and AI, as of right now, lacks that... but also, I daresay that those looking to push the technology forward start from the premise that we're merely the result of chemical accidents and not the other way around.

That is, they are not considering that life is a factor in creating the complexity and intelligence of existence; they think life is the product of accidental complexity. And I think that approach can only lead to an incomplete, but very complex, emulation of what they think life is.
 
Goldman Sachs says artificial intelligence could replace 300 million workers

Artificial intelligence will be able to cope with almost half of the tasks in the administrative sphere and in law, Goldman Sachs suggests. This could lead to AI replacing 300 million jobs.

Artificial intelligence (AI) could replace the equivalent of 300 million jobs, according to a report by investment bank Goldman Sachs, the BBC reports.

It follows from the report that AI will be able to perform up to 46% of tasks in the administrative sector and up to 44% in law. However, work in construction and maintenance will be automatable by only 6% and 4%, respectively.

Goldman Sachs notes that, despite the possible replacement of existing professions by AI, new jobs may appear at the same time and a productivity boom may occur, which could increase the total annual value of goods and services produced worldwide by 7%.


The report says that 60% of today's professions did not yet exist in 1940. However, technological changes since the 1980s have accelerated the displacement of jobs, while new ones have not had time to appear. Generative AI (which creates new, original content such as text) may have a similar impact and reduce the level of employment.

More on RBC:

That is another possible pretext for the PTB to say something like: "On planet Earth there are too many 'useless eaters'; we are 'loving' all of you, humanity, but in order for mankind to survive you must make a hard choice and cut the 'excess' off from sustenance, or we will do it for you." 🙆
 
The Kremlin compared the danger of AI development to turning medicine into poison

Peskov: Russia will continue to work in the field of AI despite the fears of the Future of Life Institute.

Russia will continue to work on the development of artificial intelligence (AI), despite the fears of the Future of Life Institute, which has advised suspending the development of such systems.

This was announced by the president's press secretary, Dmitry Peskov, an RBC correspondent reports.

"President [of Russia Vladimir Putin] has repeatedly spoken about the importance of this work in order to prevent our country from falling behind. This work will continue. As for the point of view of those who talk about the dangers of AI, you know that any medicine, if used incorrectly, turns into poison," Peskov said.

The spokesman pointed out that the Internet is also a great boon. However, according to him, it too needs regulation, which is being done in Russia. "As AI expands its presence in the economy and in public life, regulation will also be required there," he said.

The Future of Life Institute is a non-profit organization whose work is related to reducing global risks faced by humanity, including risks from artificial intelligence. On March 22, the institute issued an open letter calling for the suspension of giant experiments with artificial intelligence.

"We call on all AI laboratories to immediately suspend for at least six months the training of AI systems more powerful than GPT-4," the message says. Representatives of the Institute called on the government to intervene and impose a moratorium on this work if such a pause cannot be introduced quickly.

The letter has already been signed by Tesla, SpaceX and Twitter CEO Elon Musk, Apple co-founder Steve Wozniak, Pinterest co-founder Evan Sharp and others.

More on RBC:
 

Musk, experts urge pause on AI systems, citing 'risks to society'


Here is the link for details👆
 