Experts warn AI will allow for face-recognizing attacks that are untraceable

JGeropoulas

The Living Force
To paraphrase Huxley, we're going to have to be brave in the new world ahead.

Twenty-six experts from 14 different institutions and organizations, including Oxford University’s Future of Humanity Institute, Cambridge University’s Centre for the Study of Existential Risk, Elon Musk’s OpenAI, and the Electronic Frontier Foundation have issued a report based on a two-day workshop held at Oxford University back in February 2017.

They describe some ominous leaps toward creation of a real "Skynet" despite the cautionary tale of "The Terminator". [In that movie, Skynet is a highly advanced artificial intelligence. Once it became self-aware, it saw humanity as a threat to its existence (due to the attempts of the Cyberdyne scientists to "kill" it once it had gained self-awareness) and decided to trigger the nuclear holocaust Judgment Day. A human resistance is formed, and John Connor becomes a successful leader in it. In a Hail Mary move, Skynet sends the Terminator back in time to prevent John Connor from being born.]

Here are some highlights (underlining is mine):

New Report on Emerging AI Risks Paints a Grim Future
by George Dvorsky

The 100-page report, titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, boasts 26 authors from 14 institutions. In the report, the authors detail some of the ways AI could make things generally unpleasant in the next few years, focusing on three security domains of note—the digital, physical, and political arenas—and how the malicious use of AI could upset each of these.

“It is often the case that AI systems don’t merely reach human levels of performance but significantly surpass it,” said Miles Brundage, a Research Fellow at Oxford University’s Future of Humanity Institute and a co-author of the report, in a statement. “It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour.”

Indeed, the big takeaway of the report is that AI is now on the cusp of being a tremendously negative disruptive force as rival states, criminals, and terrorists take advantage of its scale and efficiency.
...

They warn that the cost of attacks will be lowered owing to the scalable use of AI and the offloading of tasks typically performed by humans. Similarly, new threats may emerge through the use of systems that will complete tasks normally too impractical or onerous for humans.

“We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems,” they write.

In terms of specifics, the authors warn of cyber attacks involving automated hacking, spear phishing, speech synthesis to impersonate targets, and “data poisoning.” The advent of drones and semi- and fully-autonomous systems introduces an entirely new class of risks; the nightmarish scenarios include the deliberate crashing of multiple self-driving vehicles, coordinated attacks using thousands of micro-drones, converting commercial drones into face-recognizing assassins, and holding critical infrastructures for ransom. Politically, AI could be used to sway popular opinion, create highly targeted propaganda, and spread fake—but perhaps highly believable—posts and videos. AI will enable better surveillance technologies, both in public and private spaces.

“We also expect novel attacks that take advantage of an improved capacity to analyze human behaviors, moods, and beliefs on the basis of available data,” add the authors. “These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates.”

Sadly, the era of “fake news” is already upon us. It’s becoming increasingly difficult to tell fact from fiction. [unless you read SOTT news--a new byline? :) ]
https://gizmodo.com/new-report-on-ai-risks-paints-a-grim-future-1823191087
 
What you describe reminded me of a previous topic in the same vein: Autonomous Weapons about the then latest innovation: so called slaughter-bots which operate in swarms.

Maybe these two topics should be merged to facilitate the discussion?
 
This reminds me of the part where the Cassiopaeans said that our computers may become our own undoing, much like the Crystals were for Atlantis...
 