AI Slop-'n-Mush Ramps Up (Dark Futura substack)

Nathan

Over on Simp's transhumanist-focused substack, there's a cynical and fascinating look at the low-end mush of AI-generated content and what it means for the average person's information seeking:
Intrepid researcher Whitney Webb seems to be onto something though—save the bolded thought in particular for later:

The Kissinger/Eric Schmidt book on AI basically states that the real promise of AI, from their perspective, is as a tool of perception manipulation - that eventually people will not be able to interpret or perceive reality without the help of an AI via cognitive diminishment and learned helplessness. For that to happen, online reality must become so insane that real people can no longer distinguish real from fake in the virtual realm so that they can then become dependent on certain algorithms to tell them what is "real". Please, please realize that we are in a war against the elites over human perception and that social media is a major battleground in that war. Hold onto your critical thinking and skepticism and never surrender it.

Brandon Smith covered this in an article last year, writing:

To summarize, globalists want the proliferation of AI because they know that people are lazy and will use the system as a stand-in for individual research. If this happens on a large scale then AI could be used to rewrite every aspect of history, corrupt the very roots of science and mathematics and turn the population into a drooling hive mind; a buzzing froth of braindead drones consuming every proclamation of the algorithm as if it is sacrosanct.

In this way, Yuval Harari is right. AI does not need to become sentient or wield an army of killer robots to do great harm to humanity. All it has to do is be convenient enough that we no longer care to think for ourselves. Like the “Great and Powerful” OZ hiding behind a digital curtain, you assume you’re gaining knowledge from a wizard when you’re really being manipulated by globalist snake oil salesmen.
 
“If you know that in Vegas if you break into a 7-11 at 2am you’re going to get caught by a drone, you’re not going to do it, right?”

Anyone with intelligence would weigh the dangerous potential for abuse against the claimed benefits of such a use case. A thought experiment: in the post-9/11 Patriot Act and Homeland Security era, which saw the creation of the TSA, compare the number of major crimes the TSA solved, or terrorists it "caught in the act," to the number of wide-scale abuses the repressive agency committed against hundreds of thousands of fed-up citizens. Are one or two caught criminals or solved crimes worth the total reenvisioning of society into a fear-based panopticon?

It leads to the natural slippery-slope question: at what point do you stop? As AI advances, do you keep enacting greater and greater surveillance, restrictions, controls, et cetera, until all crime and human suffering are entirely eradicated? At what point would that come? And the natural follow-up: why not just eliminate humanity itself, or plug everyone into a perpetual "Matrix" to keep anyone from 'being hurt' ever again—the ultimate fragile radical leftist snowflake telos, it would seem. There has to come a point where a line is drawn, and people of wisdom acknowledge and accept that some crime and pain is a necessary price for living in a free society. Why has this simple calculus always so utterly eluded the comprehension of insane radical leftist utopians?
 
As a kid, I read a 1958 science fiction short story, "The Feeling of Power," by Isaac Asimov, about an "advanced" civilization that had become so dependent on computers that people had forgotten how to do math -- they just let the computers do it. Consequently, they were dependent on the machines for their existence.

As Nathan quoted above, "All it [AI] has to do is be convenient enough that we no longer care to think for ourselves" and "eventually people will not be able to interpret or perceive reality without the help of an AI via cognitive diminishment and learned helplessness."

In Asimov's story, an enterprising technician "reverse-engineers" the calculations the computer performed and figures out how to add, subtract, multiply, divide, take square roots, etc. on paper -- a discovery that astounds his employers. According to Wikipedia, "The story is representative of the genre of sci-fi that started in the 1950s as a reaction to computers, around the theme of caution against human mental atrophy in the computer era."

Mental atrophy is scary and complex; it is both self-inflicted and malignant, an act of commission and omission. Computers are only one part of it. Star Trek calls space "the final frontier" -- actually, it seems that "Mind" is. I am sure there are more than 70x7 ways of disengaging or diverting "Mind."

For example, decades ago, I read in a lit-crit article that George Orwell actually titled his book "1948," but his editors changed it to "1984" because the ideas he presented would upset the public if they understood the psycho-social controls were occurring in real time instead of in a hypothetical future. ("As long as it's not happening to me personally, it's not so terrible.")

These early sci-fi types were either prognosticators -- clearly seeing the writing on the wall -- or perhaps they were the early progenitors of psychological warfare -- putting ideas into the noosphere so we can all say "I know that" when confronted by what should be shocking. The "I know that" reaction seems to be a foundational, and convenient, tool for turning off the mind when confronted with discontinuous change. Sometimes I wonder if that is why they teach "1984," "Brave New World," and "Animal Farm" in public schools. When events occur that correspond to the machinations in these books, kids can say, "No big deal ... I know that." Perennial reiteration of the ideas of "15-minute cities" and "You'll own nothing and be happy" seems to achieve this reaction too -- "Yeah, I know that (so I don't have to address it)."

Of course, confirmation bias is just one of many methods -- there's powerlessness (I can't do anything about it anyway), the nanny state (a free lunch -- at least for the moment), new age psychology (I am [fill in the blank] and I honor the 'true' me), intimidation (covid), provocation (Jan 6), seduction ("newspeak" and the politically correct). It's endless. If we are not active and aware stewards of our own consciousness, someone else -- or something else -- will be.
 