AI Content on the forum

Alejo

Ambassador
FOTCM Member
Hey everyone,

I recently came across this article, which I think raises a very interesting set of ideas to ponder. The original in Russian can be found here; I have pasted a translated version of it below.

Now, there are a few quotes that highlight what I think matters in our case, mostly due to a trend I have noticed on the forum lately, one that is increasing in incidence: the posting of AI summaries of just about anything.

Strategically, most people — including media managers themselves — would like to live in an information environment that promotes development rather than degradation. We all want our social media to be filled with creative and meaningful content. However, without external constraints, the market inevitably begins to exploit human weaknesses rather than our best qualities. People are weak, and if they are not encouraged, they will be drawn to all sorts of junk. Instead of providing them with something positive and useful, media managers and media owners prefer to exploit their weaknesses. Junk? Generating traffic? Great, unload it.

You might say, "But what about individual responsibility for media consumption?" Doesn't the individual have the power to control their own media consumption, and isn't it their personal choice to consume the AI-slop that their media sibling has provided? Undoubtedly, this is a personal choice for each individual. However, in reality, individual choices are confronted with an asymmetry of power: one individual against complex sociotechnical systems optimized for capturing attention. In such circumstances, appealing to individual responsibility becomes a convenient way to shift the blame from institutions to individuals.

I will add here a quote from Putin, from December of last year, on this very topic:

The reckless use of artificial intelligence (AI) technologies threatens the loss of identity and of the younger generation, Russian President Vladimir Putin said.

"But at the same time, if we use it without thinking, it can also lead us to lose everything we hold dear: our identity, the dispersion of big data information, the transfer of this information into the hands of those who will use it (at the very least, dishonestly)," said the president.

"Under no circumstances can we lose our youngest generation of citizens, who, instead of thinking, will simply press a button and that's it, and will not be able to solve basic problems in mathematics, physics, chemistry, and even history."

I think that kind of nails it. Even though the author of the first article goes on to expand on institutional approaches to the problem, the bottom line is that we're individuals and remain responsible for our actions. We have discussed and expanded on the positive uses of AI, and indeed there are positive uses, and it does make life easier, but it can't be allowed to feed our tendency to put in less effort of our own.

I think it's one of those things that has been cemented in anyone who's read The Wave or the transcripts over the years: the C's also emphasized how essential it is to use one's own efforts to learn and to arrive at the truth, for said effort to have a positive effect on one's existence. Lately I have noticed that, much like we observed with some threads that have become an endless collection of tweets with no context, there are posts that are simply what one or another AI said about a given topic, with no context.

No human being making an effort to learn, which in the big scheme of things I suppose is OK, since everyone has the choice to learn or not, but it also prevents networking, because we read what the AI said about something, not what the information meant to, or how it was understood by, an individual. I think I've suggested this in the past, but I invite everyone to use their own words and compose their own posts, to make mistakes in thinking if you're sincere, and to invite networking, correction, and learning. To use your own words to speak about a video, or a tweet, or an article you've read, or even about a summary you've looked up using AI.

I realize that AI is a reality and people will use it, more and more, because it just makes life so much easier; I get it. And so, perhaps the middle ground here would be: if you're going to post an AI summary, can you put it in a quote or a spoiler? And perhaps use your own words to let us know what you concluded from what you read from the AI?

For one, it would give people the option to still read the AI summary, but most importantly, it would maintain the visibility of the human being behind the post, still participating in the network, still learning, still sharing. It would keep the purpose of the forum alive.

Just a few thoughts on it, open for discussion of course.

The Future is Not Predetermined: The Attention Economy vs. Public Interest

AI content is filling the media and social networks faster than the media can process the consequences. Political strategist Roman Romanov has entered into a correspondence discussion about the idea of digital "doom" and argues that the degradation of the media environment is not fate, but the result of management decisions.

January 30, 2026

This is a polemical response to a post written on Telegram. The post addresses a highly public issue, hence the polemics.

Ivan Makarov, head of the media team at Zen, contemplates what to do with AI content that has flooded social media. He considers different scenarios and ultimately dismisses them all: there is no way out, like in the subway.

I wouldn't have paid as much attention to the Telegram post if this view wasn't widely shared among people who manage media and social networks (and not just in Russia).

To make it clear, I will give you two quotes. Here is how Ivan poses the problem:

Any automation without curatorial control turns the attention economy into a dump. How can we stop this degradation? Producing human, original, and deep content is expensive and slow. In the short term, platforms benefit from filling their feeds with this "cheap dopamine sugar drug."

Here's what he concludes after rejecting all possible solutions:

What's next? Collapse, reboot, introduction of digital eco-standards, emergence of AI antibodies, where your AI assistant takes on the role of curator in this garbage heap of attention, transfer of power from the platform to the individual? New business models? I DON'T KNOW. We are doomed to live through years of experiments, half-measures, rollbacks and roll-forwards, in times of increasing noise and amidst digital ruins and new constructions. We are doomed to move into a new habitat, where we have individual mechanics — cognitive work to survive in the garbage heap of information and meaning.

In other words, there is a problem, but it is impossible to solve it, because we are doomed. This is a trend, a fate, a combination of impersonal but powerful forces. We are doomed to live in an information dump, and there is nothing we can do about it. The only thing we can do is learn to avoid eating yellow snow.

What is there to argue with? This hopelessness and the emphasis on impersonal trends. Because, as Sarah Connor taught us in Terminator 2, the future is not predetermined. And talk of trends, impersonal patterns, and so on is often just manipulation to convince you to give up, fold your arms, and follow the white sheet into the brave new world.

Where does such scenario-based hopelessness come from? It comes from the business logic that not only Ivan but also 99.99% of top media professionals think in. After all, as we know, "with a 300% profit, there is no crime that capital wouldn't risk, even at the risk of being hanged" (a quote mistakenly attributed to Marx). Simply put, Ivan doesn't see a way out because he wants both to maintain the situation in which AI allows him to drive traffic to media and social networks and generate profits, and to avoid the negative effects of AI-slop. Personally, I see it as a desire to deal heroin without creating addicts or incarcerating them. It's like trying to climb a Christmas tree without getting scratched.

What do I mean? This is a case where the business interests of private players have come into conflict with public interests. The interests of private media players are to maximize traffic and profits. The public interest is in media and social networks as a tool for communication that helps people share important and useful information and, in turn, become better, smarter, and more knowledgeable.

Strategically, most people — including media managers themselves — would like to live in an information environment that promotes development rather than degradation. We all want our social media to be filled with creative and meaningful content. However, without external constraints, the market inevitably begins to exploit human weaknesses rather than our best qualities. People are weak, and if they are not encouraged, they will be drawn to all sorts of junk. Instead of providing them with something positive and useful, media managers and media owners prefer to exploit their weaknesses. Junk? Generating traffic? Great, unload it.

You might say, "But what about individual responsibility for media consumption?" Doesn't the individual have the power to control their own media consumption, and isn't it their personal choice to consume the AI-slop that their media sibling has provided? Undoubtedly, this is a personal choice for each individual. However, in reality, individual choices are confronted with an asymmetry of power: one individual against complex sociotechnical systems optimized for capturing attention. In such circumstances, appealing to individual responsibility becomes a convenient way to shift the blame from institutions to individuals.

The task of state and public institutions is to correct this inequality and play on the side of the individual.

Let's conduct a thought experiment: if the people responsible for media and social networks in Russia and around the world were truly concerned about public interest rather than traffic and profit (i.e., if they stopped being media reptilians for a moment and remembered that they are human beings), what would they do?

For example, two things:

1) By a strong-willed decision, the short-video format, which has been proven to rot people's brains, would be abolished on all networks. The minced meat can be turned back. The hypothetical Zuckerbergs would simply come out and say, "We've realized that this is a media addiction, and we won't allow this garbage on our social media platforms. We'll fight for traffic in a different way."

2) Strict moderation, labeling, and censorship of AI content would be introduced. At the very least, it should be placed separately and not mixed with other traffic. It's better to be on the safe side and offend a few "talented creators." Let the talented creators go learn to play Schnittke on the cello instead of generating a turd singing a song about the Snow Maiden.

These two steps alone would greatly transform our social media platforms. However, if we were to propose such a solution at a serious level, imagine the uproar it would cause in the mediaverse. People would immediately accuse us of censorship, followed by claims of lost traffic and revenue. Nevertheless, such a decision would be in the strategic interests of society and every individual. It would effectively cut off the flow of media drugs that poison our brains.

The problem of AI-pollution of the media environment is neither technological nor cultural. It is institutional.

This is a conflict between the private interests of platforms and the public function of media as an infrastructure for communication and the formation of meanings. From this conflict, a simple conclusion follows: without external regulation, the system will continue to degrade. Private initiatives do not work because they put responsible players in a non-competitive position. Self-regulation is impossible because it contradicts the basic logic of profit-making.

And no one but states and politicians can solve this problem. State regulation of the media is an imperfect tool that requires constant adjustment. However, in the absence of such a framework, decisions will be made not in the interests of society, but in the interests of those who control the infrastructure of attention. Media and porn would be publicly available if politicians did not prohibit it. Private decisions made at the level of individual editorial offices will not change anything, but they will put honest and principled editorial offices in a weak position. We need a common framework.

Media companies that are allowed to operate independently will poison the audience's brains 24/7 for the sake of traffic and profit. At most, they will limit themselves to personal preferences (similar to how Mikhail Kozyrev, designated a foreign agent, once refused to allow "Civil Defense" on "Nashe Radio" because he considered them fascists).

To sum up, there is a solution to this problem. We just need to start thinking in terms of public interests rather than business interests. We need to become people rather than media managers.
 
I think I've suggested this in the past, but I invite everyone to use their own words and compose their own posts, to make mistakes in thinking if you're sincere, and to invite networking, correction, and learning. To use your own words to speak about a video, or a tweet, or an article you've read, or even about a summary you've looked up using AI.

I realize that AI is a reality and people will use it, more and more, because it just makes life so much easier; I get it. And so, perhaps the middle ground here would be: if you're going to post an AI summary, can you put it in a quote or a spoiler? And perhaps use your own words to let us know what you concluded from what you read from the AI?

For one, it would give people the option to still read the AI summary, but most importantly, it would maintain the visibility of the human being behind the post, still participating in the network, still learning, still sharing. It would keep the purpose of the forum alive.
I kind of always cite AI when I use it, just because I like to cite. I used to say "from Wikipedia" a lot before AI. I kind of used both to say what I wanted to say, only to have it not seem like a me-only thing. I did the same thing with an update of a no-AI paper I put in an archive for AI-assisted papers, by adding an additional "from AI" section (and updating my own stuff too). I have more AI material I could have put in, but I don't yet understand that AI stuff well enough, so it would be saying more than what I want to say. So I mostly just try to get AI to mimic me, though it is for sure wordier.
 
Although the problem of disinformation and fake news has always existed, it has now reached a point where propaganda is brutal on both sides, using AI technology (it's becoming very realistic, although it's still generally possible to spot it by overly cinematic or shocking camera angles), bots, and algorithms for this purpose.

People are also very polarized, and there isn't much of a penalty for spreading lies on social media as long as it serves the purpose of gaining views or simply favoring one's side (it seems to me that community ratings on X have dropped drastically). I think the best thing is to rely on established sources that have proven relatively trustworthy, or on accounts that regularly compile news from these outlets, and not to focus so much on accounts that are biased in favor of something.

They often don't cite sources, or if you search for the news, it doesn't yield any results; and of course, they display a lot of emotion in their posts. Taking a break and refraining from posting if the source isn't clear also helps. There's a lot to it, and the best thing to do is wait for the information to be corroborated or take shape as the hours go by, which is especially important if we really like the information.

I saw this in a Sott article the other day; it's worth remembering—it seems like a good summary of actions to take:

There is a concept worth mentioning here: confirmation bias. It isn’t new, and it isn’t unique to any particular tribe. The human brain is wired to seek out information that reinforces what it already suspects to be true, and to unconsciously discount what challenges it. We have always done this. But there was a time when the sheer scarcity of fabricated information acted as a natural brake on the process. You couldn’t just manufacture a convincing photograph, a credible document, or a realistic film clip. The effort required was enormous, and the sources producing information were finite enough to be monitored and challenged.

That brake no longer exists. The floodgates are open, and what pours through is an undifferentiated torrent of the real, the doctored, the partially true, the completely invented, and the strategically misleading, all formatted to look identical. Your Facebook feed does not distinguish between a Reuters dispatch and a basement fabrication. Your eyes cannot tell a genuine photograph from a generated one. And your gut, that old faithful compass, has been so thoroughly manipulated by years of targeted content that it may now be pointing in whatever direction an algorithm has decided is best for your engagement metrics.

So where does that leave us? It leaves us, I think, with only a handful of tools worth trusting, and none of them are passive. The first is ruthless sourcing, not just checking where a story comes from, but asking who benefits from you believing it, and why it is appearing in front of you right now. The second is a tolerance for uncertainty, which is the willingness to say “I don’t know yet” rather than filling the gap with whatever feels satisfying. The third, and perhaps the hardest, is self-suspicion. The moment a story feels deeply gratifying, the moment it perfectly confirms your worst fears about your political enemies or your most hopeful fantasies about your allies, that is precisely the moment to slow down.

I started this with a simple question: what compels us to believe something is true? I don’t have a clean answer. Nobody does anymore. But I do know this: the people most confidently certain of what is real right now are almost certainly the most lost. The rest of us, stumbling through the fog with our skepticism intact and our certainty appropriately rattled, may be the closest thing left to clear-eyed.

Believe carefully. That’s the best any of us can do.
 
Thanks for bringing this up. I have found that I am starting to avoid reading any posts where people use AI to generate a summary or post any kind of AI video content. I just skip straight over them. I did not set out to do this; I find I am doing it because I am completely put off by it. I would rather spend my time with a real person's efforts and thoughts.
 
I saw this in a Sott article the other day; it's worth remembering—it seems like a good summary of actions to take:

Good article, and I can see Todd is often on SOTT.net.

Here is the same article on SOTT:


Just a few thoughts on it, open for discussion of course.

In this section:

The problem of AI-pollution of the media environment is neither technological nor cultural. It is institutional.

Yes, and institutional is not only media, which was Todd's focus.

When my employer and I had a difference of opinion over getting jabbed, the decades of marriage were over for failure to attest. It was a time when they were also on the cusp of going full-on AI. So, from the point of "you shall wear a mask and keep your distance of 6 feet for all and sundry, and take one in the arm for the team," AI was quickly taken off the shelf, rolled out, and plugged in. It was slow at first, and I basically missed the rest (at work), although ex-colleagues are quick to say how it has simply ruined that business institution and further isolated people while the systems produce AI slop.

Comments have also mentioned how enamoring it is for managers, who must push it down or else. It has been said that management are now AI-jubilant: life is so great. The systems pushed down for staff to embrace have had different results. Those who have been around have their reports basically written for them (no override), and if they query the system, the results can't be overridden even when discrepancies are noted (there is a long, drawn-out process if pushed). For others who never spot the discrepancies, it's simply business as usual, errors and all, and they move along.

As shown by some people here who have learned how to ask particular questions of a particular AI system, and not to accept answers without some discernment and comment, it can be useful. For running an institution, not so much, as AI problems are baked in while management and staff forget the nuances of running the business/institution. Sadly, most of the time that impacts the people who rely on it.

I will add here a quote from Putin, from December of last year, on this very topic:

👍
 
I couldn't agree more. Fortunately, I suspect that increased energy costs will soon throw a rather large spanner into the works of AI. It needs A LOT of power.

I'm patiently awaiting "AI 2.0", which I think will be much like Web 2.0 was back in the day.
 
Thanks, Alejo, for bringing it up! IMO the situation on the forum has gotten quite a bit worse ever since people started just posting social media stuff, and now AI comes on top of that trend, making it even worse.

IMO it would be pretty easy (and the trend could be turned around) if everyone would just put any AI or social media stuff into a quote or spoiler:

if you're going to post an AI summary, can you put it in a quote or a spoiler?
 
Here is a post from LinkedIn, in the context of Information Technology (data engineering), that nails the same point. The post stresses fundamentals instead of identity. In IT, there is a saying that "no two persons code the same way."


Right now, junior data engineers are building Spark pipelines they have never debugged, writing Airflow DAGs they do not understand, and shipping dbt models copied and pasted from AI tools.

And calling it productivity.

I get it. AI makes you fast. It removes friction. It fills the blank screen with something that looks right.

But in data engineering, "looks right" is the most dangerous place to be.

A bad API fails loudly. A bad pipeline fails quietly. It delivers wrong numbers to the right people, at scale, for weeks before anyone notices.

By the time a stakeholder catches it, three downstream teams have already made decisions on data that was lying to them.

That gap between "the job ran" and "the data is correct" is exactly where fundamentals live.

Knowing why a skewed partition turns a 2-hour Spark job into a 14-hour one.

Knowing what idempotency actually means when your pipeline retries at 3 AM.

Knowing why your watermark logic produces slightly wrong aggregations every Monday after a weekend traffic drop.

You do not learn these by prompting alone. You learn them by breaking things, debugging them late at night, and slowly building the instinct to see failures before they happen.

AI cannot give you that instinct. It can only accelerate the instinct you already have.

I use AI every single day. For boilerplate, test case generation, and exploring unfamiliar APIs. It makes me faster in good, measurable ways.

But I use it on top of 13 years of understanding what actually breaks in production data systems. When you use AI before building that foundation, you get fast at assembling things you do not understand. That is not a productivity gain. That is technical debt with a confident face.

Master the fundamentals first.

Understand partitioning, joins, idempotency, state management, and data contracts well enough to explain every decision without opening a chat window.

Then use AI to go 10x faster. Skipping that order does not make you a faster data engineer.

It makes you a faster liability.
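
To make the post's idempotency point concrete, here is a minimal sketch (my own illustration, not from the post; the paths, table layout, and column names are hypothetical) of one way to make a daily Spark load safe to retry: overwrite only the partition for the run date, so a 3 AM retry replaces that day's output instead of appending duplicates.

# Minimal sketch of an idempotent daily load (hypothetical paths/columns).
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("idempotent-daily-load")
    # Overwrite only the partitions present in the written DataFrame,
    # not the whole table.
    .config("spark.sql.sources.partitionOverwriteMode", "dynamic")
    .getOrCreate()
)

run_date = "2026-01-30"  # normally injected by the scheduler (e.g. Airflow)

events = (
    spark.read.parquet("s3://raw/events/")       # hypothetical source path
    .where(F.col("event_date") == run_date)      # process exactly one day
    .dropDuplicates(["event_id"])                # same input -> same output
)

(
    events.write
    .mode("overwrite")                           # replaces only event_date=run_date
    .partitionBy("event_date")
    .parquet("s3://warehouse/events_clean/")     # hypothetical sink path
)

Run it twice and the output is the same; that is exactly the property the post says you only learn to care about after a pipeline has retried on you at 3 AM.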
In the corporate world, AI is very good for fakers. Faking in extreme contexts (instead of "fake it until you make it") removes honesty and truth (if they exist). It leads to GIGO (Garbage In, Garbage Out).

Until two days ago, I was forced to deal (very stressfully) with a situation at work. A new consultant faked it, convinced executives (or promised them the "moon") of a certain way of doing things with bookish knowledge (AI outputs), without knowing the basics, and demanded that the "working class" deliver certain products. He didn't want to hear the "details" (or the nuances of the situation). It all sounded hunky-dory on the surface (unless he had experience with the details, it was impossible for him to know), but the "working class" was distressed to the point that everybody was wondering why nobody was conveying the message to management. Luckily, he was unceremoniously let go, to the relief of everybody, as the protests became louder.
 
The good news: when talking about feelings, personal opinions, perspectives, and experiences, AI is completely useless. So it can give us data quickly; thanks, that's handy when I want to know something fast or refine a search so I can read the full text, but it cannot tell me how I perceive or interpret the info that I read.
Personally, if I were writing a thesis and I used AI to assist me, I'd feel like a fraud and I wouldn't be okay with myself, but I'm totally okay with quick overviews with suggestions on where to look for more info, e.g., which is the best sanitiser for my ice bath.

On the forum I have occasionally used these quick overviews, always stating that that's what I'm doing, and giving a commentary on my own idea of it after direct experience or further reading. If I do this in the future, I'll be sure to also add further reading and cited articles.
 
Couldn't agree more, thanks @Alejo for bringing this up.

Thanks for bringing this up. I have found that I am starting to avoid reading any posts where people use AI to generate a summary or post any kind of AI video content. I just skip straight over them. I did not set out to do this; I find I am doing it because I am completely put off by it. I would rather spend my time with a real person's efforts and thoughts.

Same, and while AI has its uses (if only because Google is so broken that your only chance of researching anything these days is using AI), I find there is a very entropic quality to AI texts/prose, almost like a "coarseness" - if it were sound, it would sound like brittle digital noise. There's just something off. An inspired human text that expresses truth, or even summarizes something based on the "soul's vision," on the other hand, even with typos and warts and all, can have a healing, liberating quality, a depth that is hard to define.

I'm probably fooled sometimes too, but overall I think I've gotten pretty good at spotting AI texts, and frankly I hate it and feel cheated whenever I encounter one.

And as Alejo said, we lose a vital skill when we stop expressing ourselves in our own words and stop reading real, human-produced thought. As we all know here, this is one of the primary mechanisms in our world for learning and growth.

I would encourage everyone to use AI wisely, and here on the forum, to express yourself as much as possible in your own words, even for summaries and the like, using "AI summaries" only when truly useful, and labeling them as such. (In fact, summarizing a text or situation in your own words is one of the best exercises you can do for your mind, while also helping others, which only boosts the positive effect.)
 
As some members have written here, I often find that I don't value AI‑written articles. I don't like the flow and structure; it's hard to explain why, but it feels like the most popular models are tuned for marketing content.

The auto-regressive nature also doesn't help; it often amplifies errors, so great care needs to be taken. A good example is software engineering, where you often need a solid codebase to start from, make implementation plans using frontier models, review them, carry out the implementation, and then conduct dozens of rounds of automatic corrections, often with a "human in the loop", using different models. The last part is the one that generates most of the costs, yet it is very often not even considered. From my experience, AI generates code on par with a mid-level software developer, orders of magnitude faster, but only when great care is taken, and the cost isn't that small ($200-500 monthly for me, though the costs are still subsidized). A kind of asymmetry play that can easily go to zero.

I'm patiently awaiting "AI 2.0", which I think will be much like Web 2.0 was back in the day.
I guess it is slowly starting to happen. I'm working right now on an automatic translation system (many-to-many languages) with voice cloning: a VAD, STT, LLM, and TTS stack that can run on my Apple M2 laptop with a 15 W power draw at peak usage. The Chinese are leading by pushing the boundaries of what can be run on consumer hardware. There's also the aspect of hosting AI in the same datacenter because of regulations. If you are a telecom that wants to build automatic spam or fraud detection based on call transcripts, you need to self-host your models, which automatically means you must optimize less beefy ones.
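
For anyone curious how such a stack hangs together, here is a rough structural sketch (a simplification; every interface below is a hypothetical stand-in, not the actual models I use): VAD splits the audio into speech segments, STT transcribes each one, an LLM translates the text, and TTS re-synthesizes it in the speaker's cloned voice.

# Structural sketch only; every class here is a hypothetical stand-in.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Segment:
    audio: bytes   # raw audio for one detected speech span
    start_s: float
    end_s: float

class VAD(Protocol):
    def split(self, audio: bytes) -> list[Segment]: ...

class STT(Protocol):
    def transcribe(self, segment: Segment) -> str: ...

class Translator(Protocol):  # e.g. a small local LLM
    def translate(self, text: str, src_lang: str, dst_lang: str) -> str: ...

class TTS(Protocol):  # a voice-cloning synthesizer
    def synthesize(self, text: str, voice_reference: bytes) -> bytes: ...

def translate_utterance(audio: bytes, src_lang: str, dst_lang: str,
                        vad: VAD, stt: STT, llm: Translator, tts: TTS) -> list[bytes]:
    # Push each detected speech segment through the full stack,
    # cloning the speaker's voice from the segment itself.
    outputs = []
    for segment in vad.split(audio):
        text = stt.transcribe(segment)
        translated = llm.translate(text, src_lang, dst_lang)
        outputs.append(tts.synthesize(translated, voice_reference=segment.audio))
    return outputs

The point of keeping the stages this decoupled is that each model can be swapped for a smaller one until the whole chain fits the power budget.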

In the corporate world, AI is very good for fakers.
AI, in my opinion, might be beneficial for smaller businesses, especially those where you must deliver and are verified by customers: automating some DevOps tasks, extracting structured data from various sources, generating non-critical frontend code such as data visualizations, and even reading legal documents to quickly flag a no-go.
 
Thanks for bringing this up, Alejo. Like others, I have also found that I skip posts when I detect they are just AI, and such posts are increasingly cluttering up a number of threads, creating more noise to sort through. There is value in connecting the dots ourselves.

Like Scottie, I also think that AI will face severe headwinds on many fronts, due to rising energy costs and shortages of critical rare minerals, which need oil-derived chemicals in order to be extracted. The basic fundamentals supporting AI are collapsing, and most likely so is the surveillance society.

So yes, putting AI in quotes at a minimum or in a spoiler sounds like a very good idea.
 
A good example is software engineering, where you often need a solid codebase to start from, make implementation plans using frontier models, review them, carry out the implementation, and then conduct dozens of rounds of automatic corrections, often with a "human in the loop", using different models.
I think that is an overlooked thing, the "human in the loop". That person, who goes and checks the AI output to see if it holds, is a person with multifaceted experience in the field. As time goes by, it gets harder to find a "human in the loop", as the younger ones are not trained in doing the basics themselves (AI did the basics) and thus will not have the required experience. The "human in the loop" the system relied on will eventually retire, and then there is a problem.
 
And as Alejo said, we lose a vital skill when we stop expressing ourselves in our own words and stop reading real, human-produced thought. As we all know here, this is one of the primary mechanisms in our world for learning and growth.
[...]
In fact, summarizing a text or situation in your own words is one of the best exercises you can do for your mind, while also helping others, which only boosts the positive effect.
Exactly.

It's especially scary that AI summaries have become so "popular" here, on THIS forum, which exists primarily to share valuable information so that we can all think about it (and perhaps turn it into knowledge) - with our minds, not AI's 'mind'.
 
Good points!

And here's my mercury retrograde rant...

I'm a bit horrified to see how truly intelligent forum members have quoted the dumbest claims from social media posts with NO discernment whatsoever. It's like they're in forced resonance (to quote the Cs' answer as to why electronic music harms cells) with whatever it is that they see on Twitter, Instagram, or wherever. The brain is being mushed. And then, when they engage in thinking with a hammer, which social media platforms are not good at promoting, they recover their humanity.

Quality over quantity. If more of us can engage those who "gave up" on thinking with a hammer, here on the forum, in precisely doing that, the signal-to-noise ratio will stay good.

Also, if people would find it worth their time to read Laura's books, summaries of her work, and the other recommended reading on which this entire forum is based, instead of thinking they could draw their own conclusions from ONLY reading a few sessions out of context and without ALL the work that went into them... then we would truly hit the 200 conscious people mark!
 