Hey everyone,
I recently came across this article, which I think raises a very interesting set of ideas to ponder. The original in Russian can be found here; I have pasted a translated version of it below.
Now, there are a few quotes that highlight what I think matters in our case, mostly because of a trend I have noticed growing in the forum lately: the posting of AI summaries of just about anything.
Strategically, most people — including media managers themselves — would like to live in an information environment that promotes development rather than degradation. We all want our social media to be filled with creative and meaningful content. However, without external constraints, the market inevitably begins to exploit human weaknesses rather than our best qualities. People are weak, and if they are not encouraged, they will be drawn to all sorts of junk. Instead of providing them with something positive and useful, media managers and media owners prefer to exploit their weaknesses. Junk? Generating traffic? Great, unload it.
You might say, "But what about individual responsibility for media consumption?" Doesn't the individual have the power to control their own media consumption, and isn't it their personal choice to consume the AI-slop that their media sibling has provided? Undoubtedly, this is a personal choice for each individual. However, in reality, individual choices are confronted with an asymmetry of power: one individual against complex sociotechnical systems optimized for capturing attention. In such circumstances, appealing to individual responsibility becomes a convenient way to shift the blame from institutions to individuals.
I will add here a quote from Putin, from December of last year, on this very topic:
The reckless use of artificial intelligence (AI) technologies threatens the loss of identity and of the younger generation, Russian President Vladimir Putin said.
"But at the same time, if we use it without thinking, it can also lead us to lose everything we hold dear: our identity, the dispersion of big data information, the transfer of this information into the hands of those who will use it (at the very least, dishonestly)," said the president.
"Under no circumstances can we lose our youngest generation of citizens, who, instead of thinking, will simply press a button and that's it, and will not be able to solve basic problems in mathematics, physics, chemistry, and even history."
I think that kind of nails it. Even though the author of the first article goes on to expand on institutional approaches to the problem, the bottom line is that we're individuals and remain responsible for our actions. We have discussed and expanded on the positive uses of AI, and indeed there are positive uses, and it does make life easier, but it can't become an excuse for our tendency to put in less effort of our own.
I think it's one of those things that has been cemented in anyone who's read The Wave or the transcripts over the years: the C's also emphasized how essential it is to use one's own efforts to learn and to arrive at the truth, for that effort to have a positive effect on one's existence. Lately I have noticed that, much like some threads that have become an endless collection of tweets with no context, there are posts that are simply whatever one AI or another said about a given topic, with no context.
No human being making an effort to learn. In the big scheme of things I suppose that's OK, since everyone has the choice to learn or not, but it also prevents networking, because we read what an AI said about something, not what the information meant to, or how it was understood by, an individual. I think I've suggested this in the past, but I invite everyone to use their own words and compose their own posts, to make mistakes in thinking if you're sincere, and to invite networking, correction, and learning. To use their own words to speak about a video, or a tweet, or an article they've read, or even about a summary they've looked up using AI.
I realize that AI is a reality and people will use it, more and more, because it just makes life so much easier; I get it. And so, perhaps the middle ground here would be: if you're going to post an AI summary, could you put it in a quote or a spoiler? And perhaps use your own words to let us know what you concluded from what you read?
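As an illustration only, and assuming the forum software supports the usual BBCode quote and spoiler tags (the exact tag names may differ on other platforms), such a post might look something like this:

```
My own takeaway: the article argues that this is an institutional
problem, not a personal one, and the point about attention capture
struck me because...

[SPOILER="AI summary of the article"]
...paste the AI-generated summary here...
[/SPOILER]
```

That way the summary remains available but collapsed, and the human commentary comes first.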
For one, it would give people the option to still read the AI summary; but most importantly, it would keep the human being behind the post visible, still participating in the network, still learning, still sharing. It would keep the purpose of the forum alive.
Just a few thoughts on it, open for discussion of course.
The Future is Not Predetermined: The Attention Economy vs. Public Interest
AI content is filling the media and social networks faster than the media can process the consequences. Political strategist Roman Romanov has joined the debate over the idea of digital "doom" and argues that the degradation of the media environment is not fate but the result of management decisions.
January 30, 2026
This is a polemical response to a post written on Telegram. The post addresses a highly public issue, hence the polemics.
Ivan Makarov, head of the media team at Zen, contemplates what to do with AI content that has flooded social media. He considers different scenarios and ultimately dismisses them all: there is no way out, like in the subway.
I wouldn't have paid as much attention to the Telegram post if this view wasn't widely shared among people who manage media and social networks (and not just in Russia).
To make it clear, I will give you two quotes. Here is how Ivan poses the problem:
Any automation without curatorial control turns the attention economy into a dump. How can we stop this degradation? Producing human, original, and deep content is expensive and slow. In the short term, platforms benefit from filling their feeds with this "cheap dopamine sugar drug."
Here's what he concludes after rejecting all possible solutions:
What's next? Collapse, reboot, introduction of digital eco-standards, emergence of AI-antibodies, where your AI assistant takes on the role of curator in this garbage of attention, transfer of power from the platform to the individual? New business models? I DON'T KNOW. We are doomed to live years of experiments, half-measures, rollbacks and rolls, in times of increasing noise and amidst digital ruins and new constructions. We are doomed to move into a new habitat, where we have individual mechanics — cognitive work to survive in the garbage heap of information and meaning.
In other words, there is a problem, but it is impossible to solve it, because we are doomed. This is a trend, a fate, a combination of impersonal but powerful forces. We are doomed to live in an information dump, and there is nothing we can do about it. The only thing we can do is learn to avoid eating yellow snow.
What is there to argue with? This hopelessness, and the emphasis on impersonal trends. Because, as Sarah Connor taught us in the second Terminator, the future is not predetermined. And talk of trends, impersonal patterns, and so on is often just manipulation meant to convince you to give up, fold your arms, and follow the white sheet into a beautiful new world.
Where does such scenario-based hopelessness come from? It comes from the business logic that not only Ivan thinks in, but also 99.99% of top media professionals. After all, as we know, "with a 300% profit, there is no crime that capital wouldn't risk, even at the risk of being hanged" (a quote mistakenly attributed to Marx). Simply put, Ivan doesn't see a way out because he wants both to keep the situation in which AI lets him drive traffic to media and social networks and generate profit, and to avoid the negative effects of AI slop. Personally, I see it as a desire to deal heroin without creating addicts or going to prison. It's like trying to climb a Christmas tree without getting scratched.
What do I mean? This is a case where the business interests of private players have come into conflict with public interests. The interests of private media players are to maximize traffic and profits. The public interest is in media and social networks as a tool for communication that helps people share important and useful information and, in turn, become better, smarter, and more knowledgeable.
Strategically, most people — including media managers themselves — would like to live in an information environment that promotes development rather than degradation. We all want our social media to be filled with creative and meaningful content. However, without external constraints, the market inevitably begins to exploit human weaknesses rather than our best qualities. People are weak, and if they are not encouraged, they will be drawn to all sorts of junk. Instead of providing them with something positive and useful, media managers and media owners prefer to exploit their weaknesses. Junk? Generating traffic? Great, unload it.
You might say, "But what about individual responsibility for media consumption?" Doesn't the individual have the power to control their own media consumption, and isn't it their personal choice to consume the AI-slop that their media sibling has provided? Undoubtedly, this is a personal choice for each individual. However, in reality, individual choices are confronted with an asymmetry of power: one individual against complex sociotechnical systems optimized for capturing attention. In such circumstances, appealing to individual responsibility becomes a convenient way to shift the blame from institutions to individuals.
The task of state and public institutions is to correct this inequality and play on the side of the individual.
Let's conduct a thought experiment: if the people responsible for media and social networks in Russia and around the world were truly concerned about public interest rather than traffic and profit (i.e., if they stopped being media reptilians for a moment and remembered that they are human beings), what would they do?
For example, two things:
1) By a strong-willed decision, the short-video format, which rots people's brains, as has already been proven, would be abolished on all networks. The minced meat can be ground back after all. The hypothetical Zuckerbergs would simply come out and say, "We've realized that this is a media addiction, and we won't allow this garbage on our social media platforms. We'll fight for traffic in a different way."
2) Strict moderation, labeling, and censorship of AI content would be introduced. At the very least, it should be placed separately and not mixed in with other traffic. Better to be on the safe side and offend a few "talented creators." Let the talented creators go learn to play Schnittke on the cello instead of generating a turd singing a song about the Snow Maiden.
These two steps alone would greatly transform our social media platforms. However, if such a solution were proposed at a serious level, imagine the uproar it would cause in the mediaverse: immediate accusations of censorship, followed by complaints about lost traffic and revenue. Nevertheless, such a decision would be in the strategic interests of society and of every individual. It would effectively cut off the flow of media drugs that poison our brains.
The problem of AI-pollution of the media environment is neither technological nor cultural. It is institutional.
This is a conflict between the private interests of platforms and the public function of media as an infrastructure for communication and the formation of meanings. From this conflict, a simple conclusion follows: without external regulation, the system will continue to degrade. Private initiatives do not work because they put responsible players in a non-competitive position. Self-regulation is impossible because it contradicts the basic logic of profit-making.
And no one but states and politicians can solve this problem. State regulation of the media is an imperfect tool that requires constant adjustment. However, in the absence of such a framework, decisions will be made not in the interests of society but in the interests of those who control the infrastructure of attention. Drugs and porn would be freely available if politicians did not prohibit them. Private decisions made at the level of individual editorial offices will not change anything; they will only put honest and principled outlets in a weak position. We need a common framework.
Media companies that are allowed to operate independently will poison the audience's brains 24/7 for the sake of traffic and profit. At most, they will limit themselves to personal preferences (similar to how Mikhail Kozyrev, a foreign agent in the past, refused to allow "Civil Defense" on "Nasha Radio" because he considered them fascists).
To sum up, there is a solution to this problem. We just need to start thinking in terms of public interests rather than business interests. We need to become people rather than media managers.