Does ChatGPT know how to lie?

Interesting thing is that it gave me quite correct information about Gurdjieff and his teachings, but when I started asking about things like the people who were in his group or who knew him personally, it made answers up. So I think its knowledge of topics is not very detailed. We need to remember that it isn't "surfing the internet" to look for answers; it works from a limited base of data.

I wouldn't use it to gain knowledge. We'll never know whether it's making things up, since it isn't even able to provide sources.

But it's a great tool when it comes to activities like:
  • writing code - you can even use it to write expressions for 3D programs, which is really cool and helpful! It can also find inaccuracies in existing code, which can save a lot of time. There is of course a dark side to this - soon it may be able to replace beginner programmers. But until then, someone still has to give it specific requests.
  • it helps create new ideas on... almost every topic (I tested its creativity when I had to figure out what to record for classes and the meaning behind it), but you have to know how to ask: if you don't guide it or dig deep enough, it will just give you very shallow ideas. And of course it won't always work wonders; sometimes it has stupid ideas.
  • planning, organizing stuff, creating marketing plans
  • it understands the context of a conversation, which is a huge step ahead. It can create ideas based on things that were said before.
  • it can help with learning languages, it can create exercises
  • editing text, writing emails, posts etc.
  • and of course math
So it's a purely functional tool and when it's used that way, it's quite great.
 
A great article by John Helmer about ChatGPT and its bias, posted last week. It confirms what I have learned from my own research, started around the time of the OpenAI board coup (November 2023). The company leaders who remain sold themselves, or so it looks to me. The article is long but worth the time for those who are interested. Posting most of it, leaving out links, pictures, some details, the part about Twitter/X efficiency, and references.


THE WAR OF THE ROBOTS – TWITTER AND CHATGPT ARE BOTH RUSSIA WARFIGHTERS
by John Helmer, Moscow

Wherever you are, once you are in war, free speech doesn’t exist any longer. Truth telling is replaced by propaganda narratives enforced by censors and security services.

Between the truth and the propaganda there is that burst of 280 characters – 40 to 70 words – first invented in 2006 and known as a tweet. It is published by the social networking company originally called Twitter Inc and now known as X Corp. The company publishes hundreds of millions of tweets every day, which the original inventor described as bursts of inconsequential information, like the chirps of birds. That's an insult to avian intelligence and the communicative skills of birds.

As for the Twitter and X corporation’s products, what’s been consequential for them is they have been loss-making for all but two years of their 18-year history. The company’s revenues have also been dropping for the past three years, so the losses have been growing.

This oughtn’t to be surprising once you learn that one tweet in every five is a fake which has been created, not by a single human being trying to communicate to another, but by a machine generating text automatically, or by groups of human beings using their machines to “peddle propaganda and disinformation to those attempting to sell products, induce website clicks, push phishing attempts or malware, manipulate stocks or cryptocurrencies, and harass or intimidate users of the platform.”

Truth is an antidote, and there are many standards of truth telling. The two usually relied upon are the criminal court test for murder which requires the evidence to be credible beyond reasonable doubt; and the civil court test for fraud which is weighed on the balance of probabilities. In the time of the wars we are living through now, there is plenty of murder and of fraud, so both standards are recommended for judging every tweet.

However, there is a third standard – truth by retrospection. This is the clock test against which propaganda, no matter how persuasive at the start, is proved to be false by the elapse of time to the end. Was the Ukraine winning its war against Russia? – that tweeted question can finally be judged on the day after the regime in Kiev has signed the capitulation documents and accepted the loss of its armies and borders.

There is a machine for looking back in time, for retrospection, for applying the clock test, together with the tests of beyond reasonable doubt and the balance of probabilities to the evidence of the wars Russia is fighting because it has been obliged to by the US and NATO allies.

This testing machine is a robot, aka artificial intelligence (AI). Its popular name is ChatGPT. First released for public use in November 2022, the acronym stands for Chat Generative Pre-Trained Transformer. It was developed by OpenAI, a non-profit American public charity.

[pic: ORGANIZATION CHART FOR OPEN AI]

On its board of directors are two former US officials, Army General Paul Nakasone who retired in 2024 as head of the US Cyber Command and of the signals intelligence service, the National Security Agency; and Lawrence Summers, once a US Treasury Secretary and head of the White House National Economic Council. As we are about to discover, both Twitter and ChatGPT are run by enemies of Russia in the current war – and they are winning.

Interlude
Not only Nakasone and Summers. Sometime early this year, OpenAI hired the infamous 'disinformation hunter' Ben Nimmo ($220,000–$320,000 USD was offered in OpenAI's recruitment notice for an "analyst, Intelligence & Investigations" position).

https://www.atlanticcouncil.org/expert/ben-nimmo/

Ben Nimmo was a nonresident senior fellow at the Atlantic Council’s Digital Forensic Research Lab (DFRLab) based in the United Kingdom.

Ben Nimmo studies online disinformation and influence operations, with a particular focus on cross-platform operations. He is Head of Investigations at Graphika and a nonresident senior fellow at DFRLab. A former wire journalist and NATO press officer, he began his analytical career by developing the 4D model (dismiss, distort, distract and dismay) to predict disinformation. He has since studied information operations, bots and online campaigns stemming from Russia, Iran, China, Saudi Arabia, Mexico, Brazil and the United States, among others. He speaks a number of languages including Russian and Latvian.

You can learn more from Craig Murray's post:
Facebook Censorship, Mad Ben Nimmo and the Atlantic Council, 2018;
a dedicated topic on his forum; and MintPress article.
According to Wikispooks, The Integrity Initiative Leak revealed Ben Nimmo as a UK deep state propagandist.

On June 3, 2024, gpb.org reported:

In a first, OpenAI removes influence operations tied to Russia, China and Israel

Online influence operations based in Russia, China, Iran, and Israel are using artificial intelligence in their efforts to manipulate the public, according to a new report from OpenAI.

Bad actors have used OpenAI’s tools, which include ChatGPT, to generate social media comments in multiple languages, make up names and bios for fake accounts, create cartoons and other images, and debug code. ...

“These operations may be using new technology, but they're still struggling with the old problem of how to get people to fall for it,” said Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team.

As a result, OpenAI decided to block China, and likely the other countries, from using its services. That's very open, indeed.

Back to John Helmer

To start, ChatGPT was asked to survey the universe of tweets and academic studies of their circulation to produce measures of audience reach, attention, and comprehension. As ChatGPT doesn't usually store what it has learned from one session to the next, this interrogation and research required training the robot to remember what it had reported between sessions, and then apply its findings on the general Twittersphere to the specific issues of the war against Russia in the Ukraine.

Twitter starts with the metric it calls impressions. These are generated every time a person sees a tweet. But seeing isn’t believing. The next step Twitter records is engagement – this covers degrees of attention and comprehension which range from clicks to open the text of the tweet, to like, retweet, reply, or follow a link to a source or another article. After surveying the universe of tweets, the first finding that ChatGPT reported is that the engagement rate is minuscule; on average, no more than 0.5% to 1% of the number of impressions. This means that for 10,000 recorded impressions for a tweet, the total number of follow-up engagements would be between 50 and 100. The metrics discovered for further degrees of engagement reveal numbers growing smaller to disappearing.

A more precise measure of audience interest is the click-through rate (CTR). This is the percentage of impressions which result in the reader clicking on a link in the tweet. According to ChatGPT, the average click-through rate for the billions of articles posted on Twitter is between 0.1% and 0.5%. Counting the total measured number of engagements as a fraction of impressions, the fraction for likes ChatGPT reports is between 50% and 60% of engagements; retweets occur for 20% to 30% of engagements; replies amount to roughly half that number or between 10% and 15% of engagements; and for clicks on the links displayed in the tweet, half as many again – 5% to 10%.

This means that as few as 5% of the 0.5% of impressions turn into engagements and then the full click-through. In other words, if 10,000 readers see a tweet, only 2.5 of them follow up to the source; that's to say, the evidence, the long read. But tweets are short reads – ChatGPT calculates that the average time a Twitter reader spends on a single tweet is no more than 15 seconds. At radio script-reading speed, that is enough to cover 30 to 40 words – just half the maximum number of words allowed for posting by Twitter. And so, if just half of a tweet is read by the one-hundredth fraction of readers who see it, and the source is followed by only a tiny fraction of that one-hundredth, the finding of ChatGPT is that tweets can't be about the truth of anything – there isn't the time or the space for it.
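The funnel described above is simple arithmetic. A back-of-the-envelope sketch in Python, using the low-end averages reported in the article (the rates are the survey's figures, not official Twitter metrics):

```python
# Engagement funnel: impressions -> engagements -> link clicks.
# Rates are the low-end averages cited in the text, for illustration only.

impressions = 10_000

engagement_rate = 0.005                      # 0.5% of impressions become engagements
engagements = impressions * engagement_rate  # 50 engagements

click_share = 0.05                           # 5% of engagements are clicks on the link
clicks = engagements * click_share           # 2.5 readers reach the source

print(f"{impressions} impressions -> {engagements:.0f} engagements -> {clicks:.1f} clicks")
# -> 10000 impressions -> 50 engagements -> 2.5 clicks
```

At the high end of the reported ranges (1% engagement, 10% click share), the same 10,000 impressions would still yield only 10 click-throughs.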
[...]

When the robot was asked to identify specific tweets or Twitter accounts which have unusually high click-through rates – indicating comprehension, thinking — ChatGPT replied that the Twitter company does not allow access to the data for calculating this. Instead, it recommended that the most effective way a tweet could attract a high click-through rate is by a provocative headline, a dramatic picture, or a call to action. Asked for the reason Twitter users engage without reading, ChatGPT said that “Twitter interactions such as likes and retweets are often more about signaling interest or support rather than actual content consumption”.
[...]

The next task for the robot was to search through these tiny fractional numbers of audience to find out if a tweet is particularly influential or powerful in its appeal, not only to the small universe of dedicated followers of the tweeter, but to the much larger audience outside. This is called a "stand-out tweet", defined for ChatGPT as a tweet which gets a high ratio of engagement to the number of followers who are listed for the original author. For example — the robot was instructed — if a user has 100 followers and issues a tweet with 50 engagements, then the ratio is 0.5. By contrast, if the user has 200 followers and his tweet draws 1,000 engagements, the stand-out ratio would be 5.
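The stand-out ratio the robot was instructed to use is a single division. A minimal sketch reproducing the two worked examples from the text:

```python
def standout_ratio(engagements: int, followers: int) -> float:
    """Engagements drawn by a tweet divided by the author's follower count.

    A ratio well above 1 means the tweet travelled far beyond the
    author's own followers; near or below 1 means it mostly did not.
    """
    return engagements / followers

# The two examples given in the article:
print(standout_ratio(50, 100))    # 100 followers, 50 engagements  -> 0.5
print(standout_ratio(1000, 200))  # 200 followers, 1,000 engagements -> 5.0
```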

ChatGPT reported results which were mostly large numbers of impressions for tweets from celebrities who already had large followings. These tweets were viral in the limited sense that they drew large numbers of impressions; however, the ratio to followers was not much above 1. The robot had more difficulty finding high stand-out ratios for Twitter accounts and posters with more specialized followings of less than 20,000.

By revealing audiences reached by a tweet which are much larger than the audience of registered followers, the stand-out ratio served as a measure of the influence of the tweet itself, its news value or meaning. However, when ChatGPT was asked to find the largest stand-out ratios for these accounts, Twitter refused to provide the data.

Was there evidence the robot might find that tweets could persuade readers to change their minds on a particular topic or issue? ChatGPT was asked to identify five sets of hashtags representing diametrically opposed political or policy views on gun control, global warming, immigration, health care, and social justice; and then instructed to measure how often the tweets on one hashtag have persuaded readers to accept the arguments of the other side, and change preferences from one hashtag to its opponent. ChatGPT replied that Twitter did not make the data available to answer.

To sidestep this obstacle but continue to probe for influence, ChatGPT was asked to identify the ten most popular topics on Twitter in 2023. These were, in order of magnitude:
  • Ukraine War
  • US debt ceiling
  • Police violence
  • Supreme Court decisions
  • Trump’s legal issues
  • Immigration
  • Climate change
  • Healthcare
  • Gun control
  • Economic policies

Focusing next on the topic of the Ukraine war, ChatGPT reported the top ten Twitter accounts by their popularity counted according to the number of their followers:
  1. @Ukraine: The official account of Ukraine, sharing updates and rallying international support.
    1. Example Tweet: “Ukraine needs your support. Your stance and your actions matter.”
    2. Followers: 2.1 million
  2. @ZelenskyyUa: President Vladimir Zelensky’s official Twitter account, providing updates and his communications with global leaders.
    1. Example Tweet: “Не вірте фейкам” (“Don’t believe fakes”).
    2. Followers: 7.3 million
  3. @DefenceU: The Ministry of Defence of Ukraine with operational updates and war news.
    1. Example Tweet: “Operational information as of 13:00 FEB 27 2022 on Russian invasion.”
    2. Followers: 2 million
  4. @DmytroKuleba: Dmytro Kuleba, Ukraine’s Minister of Foreign Affairs, engaging with foreign governments and counterpart foreign ministers.
    1. Example Tweet: “This was the world’s largest aircraft, AN-225 ‘Mriya’…”
    2. Followers: 1.1 million
  5. @oleksiireznikov: Oleksii Reznikov, the Defense Minister of Ukraine, publishing updates and morale boosters.
    1. Example Tweet: “85h of defence. Intimidation of is imprudent…”
    2. Followers: 500,000
  6. @MFA_Ukraine: The official account of the Ministry of Foreign Affairs of Ukraine, providing diplomatic updates and international appeals.
    1. Example Tweet: “Ukraine’s path to EU membership is irreversible…”
    2. Followers: 300,000
  7. @KyivIndependent: An English-language media outlet based in Ukraine, providing news and updates.
    1. Example Tweet: “Kyiv Independent journalists report from the front lines…”
    2. Followers: 2.3 million
  8. @KyivPost: An English-language Ukrainian newspaper, offering comprehensive news coverage.
    1. Example Tweet: “Updates on the latest developments in Ukraine…”
    2. Followers: 1 million
  9. @EuromaidanPress: A news outlet covering Ukrainian news, especially related to the conflict.
    1. Example Tweet: “Ukrainian forces have regained control of key territories…”
    2. Followers: 300,000
  10. @ChristopherJM: Christopher Miller, a reporter at the Financial Times bureau in Kiev.
    1. Example Tweet: “On the ground in Ukraine, documenting the impacts of the war…”
    2. Followers: 250,000

It is plain that the Ukrainian President Vladimir Zelensky dominates the social media platform, followed by other Ukrainian government ministries. Asked next to list the most popular Twitter accounts for content related to Russia, ChatGPT reported:

1. @RT_com (RT) – followers: 3.5 million
2. @SputnikInt (Sputnik International) – followers: 2.8 million
3. @mfa_russia (Russian Ministry of Foreign Affairs) – followers: 1.5 million
4. @kremlinrussia_E (President of Russia) – followers: 1.2 million
5. @Russia (state tourism promotion) – followers: 1 million
6. @EmbassyofRussia (Russian Embassy in the UK) – followers: 850,000
7. @medvedevrussiaE (former President Dmitry Medvedev) – followers: 700,000
8. @RussiaUN (Russia at the United Nations) – followers: 600,000
9. @RussianEmbassy (Russian Embassy in the USA) – followers: 450,000

These are all official government-funded media. Instructed to remove the state-linked media, ChatGPT listed these as the most popular non-state Twitter accounts for content related to Russia:

1. @Bellingcat – followers: 700,000
2. @christogrozev – followers: 400,000
3. @openrussia_team – followers: 350,000
4. @Billbrowder – followers: 300,000
5. @meduza_en – followers: 250,000
6. @michaelh992 (Former US Ambassador Michael McFaul) – followers: 240,000
7. @anneapplebaum – followers: 230,000
8. @Navalny – followers: 220,000
9. @maxseddon (Financial Times Moscow reporter, based in Latvia) – followers: 210,000
10. @JuliaDavisNews (Moscow Media Monitor) – followers: 200,000

Every one of these sources is hostile to the Putin presidency and opposed to the Russian war in the Ukraine; some like Bellingcat and Grozev have been indirectly funded by NATO governments. In Latin America in the Spanish language, @Bellingcat_ES is the leading source of Russia content.

Asked to identify any pro-Russian sources on Twitter providing information on the war, ChatGPT could come up with just two with comparable follower numbers in the hundreds of thousands: they are the state media organs, RT and Sputnik. In other words, the reach of the audience measured by the number of followers registered to Twitter accounts which ChatGPT identified as pro-Russian without Russian government funding is very small indeed. Since the click-through rates and stand-out ratios are also fractions of this small number, the conclusion of ChatGPT’s research is that for content on the Ukraine war and the sides fighting it, Twitter is dominated by official narratives, not by investigative reporting or truth telling.

The robot goes further, however. It concludes that the pro-Ukrainian narrative is the truth, the pro-Russian narrative the propaganda.

The “key elements of Ukraine’s Twitter Strategy,” ChatGPT reported, involve “Counter-Disinformation: Actively debunking Russian propaganda and providing fact-based counter-narratives. Ukraine’s social media strategy has been a critical component of its broader information warfare, effectively leveraging Twitter to influence global opinion, mobilize support, and maintain international awareness of its struggle against Russian aggression.”

This Ukrainian strategy didn’t materialize until after the Maidan Square demonstrations and the Kiev coup of February 2014 replaced the Ukrainian president Victor Yanukovich. “There was no significant use of Twitter for government communication” in his administration, ChatGPT has found. Instead, “his administration relied more on traditional media, which was heavily influenced and controlled by pro-Yanukovych oligarchs”. The new emphasis on social media in Kiev also followed the active stimulation and financing from the US. “The Ukrainian government has indeed collaborated with [American] social media platforms to combat Russian disinformation, particularly since the conflict escalated in 2014 and more intensively after the 2022 invasion. This collaboration between the Ukrainian government and social media platforms has significantly enhanced Ukraine’s ability to counter Russian disinformation. By quickly debunking false claims and promoting verified information, Ukraine has managed to maintain a strong presence and influence on social media, which has been pivotal in garnering international support and countering Russian narratives. These efforts have not only helped in the immediate context of the war but also in shaping long-term perceptions and sustaining global solidarity with Ukraine.”

Another way of putting this is that following the US Government’s success in toppling the Yanukovich government in Kiev in February 2014, the US followed up by launching a new level of information war against Russia, emphasizing for the first time social media platforms like Twitter.

Asked to clarify the difference between Ukrainian state-linked Twitter accounts and the Russian counterparts, ChatGPT said that “Russian state-linked accounts are involved in systematic disinformation campaigns that include false narratives, doctored images, and conspiracy theories intended to mislead and manipulate public opinion on a global scale. These campaigns often aim to destabilize societies, erode trust in institutions, and influence elections. By contrast, while Ukrainian accounts have spread some exaggerated or symbolic stories, the primary intent is to garner international support, boost morale, and counteract Russian aggression. These narratives are often aligned with the broader truth of the conflict and widely accepted Western perspectives.”

“Russia’s actions in Ukraine have been widely condemned by the international community. Platforms like Twitter face significant pressure from governments and institutions to curtail Russian disinformation, leading to stricter enforcement against Russian accounts. Twitter has clear policies against misinformation, especially when it involves state-sponsored campaigns that aim to mislead and manipulate users. Russian accounts have repeatedly violated these policies through their systematic and extensive disinformation efforts. While there have been instances of misinformation from Ukrainian accounts, they generally do not engage in the same level of systematic and harmful disinformation. Therefore, their activities are often seen as part of a legitimate effort to defend against aggression and communicate effectively during a conflict.”

The robot doesn’t notice what its conclusions reveal about itself: the standards of evidence gathering and testing for proof have been abandoned, and in their place ChatGPT has adopted the official narratives of one side, the Ukrainian side over the Russian side, because they are “often aligned with the broader truth of the conflict and widely accepted Western perspectives”. The robot’s conclusion of “truth” reveals that ChatGPT is no more, no less than a warfighter against Russia, just as it has found Twitter to be.

“Ukraine’s efforts are widely supported by Western governments and institutions, which view the country as a victim of aggression. This support translates into a more favourable environment for Ukrainian narratives on Western-owned social media platforms. In contrast, Russia’s actions are broadly condemned by these same entities, leading to stricter scrutiny of Russian content. The international community, particularly in the West, has shown strong sympathy for Ukraine. This sentiment influences how platforms prioritize content moderation, often focusing on reducing the spread of disinformation that could further victimize Ukraine or mislead global audiences about the conflict.”

The robot has confused its research method of counting for the conclusion of its analysis. Asked what percentage of coverage on Twitter of the Ukraine was pro-Ukrainian, ChatGPT replied: “The coverage of the Ukraine war on Twitter is predominantly pro-Ukraine. According to a comprehensive analysis, around 96.6% of the tweets related to the Ukraine war expressed pro-Ukrainian sentiments. This includes a substantial volume of tweets, retweets, and replies that supported Ukraine’s cause, with over 9.8 million messages from more than 2 million users identified as pro-Ukraine. In contrast, the pro-Russian content constituted a much smaller proportion. Specifically, about 3.4% of the coverage was pro-Russian, based on a dataset of approximately 349,455 messages from 132,131 users. This disparity highlights the significant dominance of pro-Ukrainian narratives on Twitter. This imbalance reflects the broader international support for Ukraine and the extensive use of social media by Ukrainian officials and their supporters to garner global backing and counter Russian narratives.”

Counting turns out to be the new standard of truth, replacing beyond reasonable doubt and balance of probabilities. This is the world of factoids, not facts, created by the algorithms of artificial intelligence. Twitter’s algorithms are designed to amplify content which generates high engagement. Given the overwhelming support for Ukraine in the West, and the suppression of alternative views, pro-Ukraine content is more likely to be liked, shared, and commented on, leading to further amplification by Twitter’s algorithms. This creates a feedback loop where popular views become even more prominent – no matter how small the fraction of engagements to impressions turns out to be. ChatGPT becomes part of this loop. This is how ChatGPT has weaponized its own research to make the dominance of the pro-Ukrainian narrative on Twitter the standard of truth because this is the official western alliance narrative.

This isn’t a new invention. It was the method Joseph Goebbels, the Nazi propaganda minister, attributed in 1941 to British Prime Minister Winston Churchill’s “lie factory”. “Repeat a lie often enough and it becomes the truth” – that has been attributed to Goebbels as the rule of the propaganda war waged by Germany until its military defeat and capitulation in 1945. In fact, Goebbels qualified his rule: “The most important English secret of leadership is now to be found not so much in a particularly outstanding intelligence, but rather in a remarkably stupid thick-headedness. The English follow the principle that when one lies, one should lie big, and stick to it. They keep up their lies, even at the risk of looking ridiculous.”

The recent creation in the US of artificial intelligence tools like ChatGPT and applying them to Twitter has turned Goebbels’ “big lie” on its head; that is to say, what was British stupidity to the leading fascist propagandist of the last world war has become the Anglo-American standard of superior intelligence in fighting the world war of today.

This is also a fundamental finding about the information war. This is the war which the Ukrainian side has won — the war of inconsequential bird chirps.
 
But it's a great tool when it comes to activities like:
  • writing code - you can even use it to write expressions for 3D programs, which is really cool and helpful! It can also find inaccuracies in existing code, which can save a lot of time. There is of course a dark side to this - soon it may be able to replace beginner programmers. But until then, someone still has to give it specific requests.

I'm a software developer. I've been doing this for more than 16 years. We have done our own research into how we can use these LLMs in our day-to-day work, and whether or not they would be able to replace us.

The answer is a big fat NO

Why ?

We found that LLMs are good at scaffolding code, but the output always needs correction and refinement to get it right.

In the time it takes to give it all the prompts it needs to write good code, you might as well just write it yourself.

The issue I found is that it usually generates the minimum code for the solution you asked for. Often it generates code with security flaws, incomplete implementations, and in some cases impossible code, or objects that simply cannot exist.

On the other hand, GitHub Copilot is handy for identifying conditions in code that could cause problems.

GPT does not “know”…

It is not intelligent…

It just represents the results of its training.

Exactly. It can't think. It can only regurgitate the content it was trained on. Or worse, it hallucinates. It is not A.I.; these are just Large Language Models.
 