Chat GPT

They are glorified search engines designed to feed back answers in a conversational format.
For those that think these technologies have ANY intelligence, please re-read the above - it is the best description ever of what these technologies are!

They read the internet, building a probabilistic model of word sequences, and then regurgitate the most probable “answer” to your question.
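
Just to make that "most probable next word" idea concrete, here is a toy sketch (a simple bigram counter over a made-up corpus - obviously nothing like the giant neural networks behind ChatGPT, but it illustrates the probabilistic framing):

# Toy sketch of the "most probable next word" idea described above.
# Real LLMs use neural networks over tokens, not raw bigram counts,
# so treat this purely as an illustration of the probabilistic framing.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    # Return the word most often seen after `word` in the toy corpus.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(most_probable_next("the"))  # -> "cat" (seen twice after "the")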

They do not understand meaning!

They do not understand your questions (or admonishments when they get it wrong)!

They are strictly probabilistic and find the closest set of question/answer sequences that linguistically “match” your question, and then synthesize their “answer” from within the range of reference answers.

They are trained to use language patterns that mimic human interaction, and in this they are VERY good.

But if you ask them a question that requires them to actually understand the semantic meaning of the question, and synthesize a novel answer based on that understanding, they will fail!

Even their magical ability to write code is strictly pattern-based! (They have an army of humans feeding them examples, and they also leverage StackOverflow and Github as sources of valid code…)

Ask them to code something specific that is outside their set of examples, and they will suggest the example from their reference set that is linguistically closest to your question. But they won’t be able to tweak it to fit - no matter how creative you get in trying to provide guidance!!!

They are a great enhancement to a classic search engine, because their indexing is just as complete as Google, but their ability to match content to non-specific language patterns takes them to that next level! (No more trying to figure out appropriate code words or tags…)

But if you ask them a really esoteric question, at best they will regurgitate or excerpt some document they have found on the internet, wrapping it in enough language structure to make it appear like an original answer.
 
It's very interesting that from the get-go we're able to see that AI is a bit of a nothing burger, but many people are so obsessed with it that they believe it's going to be this 'singularity' that becomes our future. I suppose it could be a nice cover for the futuristic leaders the C's say might be in the works?

Also a disclaimer that for my job I have to write silly corporate articles now and again. The last one I published, I used ChatGPT and got better feedback than the ones I wrote myself in the past.
 
Also a disclaimer that for my job I have to write silly corporate articles now and again. The last one I published, I used ChatGPT and got better feedback than the ones I wrote myself in the past.
You're saying that a literal robot writing a corporate script makes a better impression of soulless corporate jargon than an actual human can?
It's as if corporations were run by biological robots, so a digital robot will fit just fine there! :-P
 
Also a disclaimer that for my job I have to write silly corporate articles now and again. The last one I published, I used ChatGPT and got better feedback than the ones I wrote myself in the past.
Woke companies are now using AI to rewrite their articles in "inclusive language." :headbash:
The importance of inclusive language in AI cannot be overstated. As AI systems become increasingly integrated into our daily lives, it is crucial that they are designed to be respectful and considerate of all individuals, regardless of their background, culture, or identity. Inclusive language is a vital aspect of this effort, as it enables AI systems to communicate effectively with users from diverse backgrounds and avoid perpetuating harmful biases.
 
Woke companies are now using AI to rewrite their articles in "inclusive language." :headbash:
As you mentioned earlier about them being decades ahead with tech, I wonder how many of the MSM articles we've been bombarded with over the years have been written by AI. Same with crypto. The AI texts that I've seen generated usually have a soulless feel to them, and that really fits the bill when it comes to the main propaganda outlets.
 
I admit to some fascination with chatgpt - I was also half expecting a question asking it to criticise its criticism :-P

I sometimes think about asking it to consider news reports of a current event, and to make assumptions about the veracity of certain reporting and then make probability statements about where the truth may lie - assuming it can access various sources.
OpenAI does not like that people want to do that. I did that with the Israel-Gaza war for over a month, and sometime in November last year it was prohibited from commenting on current geopolitical events.
 
The last one I published, I used ChatGPT and got better feedback than the ones I wrote myself in the past.

I guess the more these ‘entities’ are used, the more people get used to their specific use of language, and the more something written by a bot ‘resonates’ with the receiver - akin to the bastardization of language after the introduction of email and SMS, creating a kind of self-reinforcing loop.
 
I don’t use ChatGPT much, in fact very little now, and only when I have a specific need such as fixing a long email for grammar, etc., and rewording stuff to make it more corporate-like. Other times, I have used it to produce integration scripts by reading the API documentation of different services and mashing it together to help me automate some stuff, which is a big time saver. There is some usefulness in this thing, but more as a co-helper to narrow stuff down to whatever is useful. Even if it's not without its programmed biases, it can be a very good Wikipedia replacement.
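
For example, the kind of glue script I mean looks roughly like the sketch below. The services, endpoints and field names here are made up purely for illustration (not an actual script I use); the point is just the pattern of reading from one service's API and posting to another:

# Minimal sketch of a hypothetical integration/glue script.
# The URLs, fields and tokens are invented for illustration only.
import requests

SOURCE_URL = "https://api.example-crm.test/v1/contacts"       # hypothetical source API
TARGET_URL = "https://api.example-mailer.test/v1/subscribers"  # hypothetical target API

def sync_contacts(source_token, target_token):
    # Pull contacts from the source service.
    resp = requests.get(SOURCE_URL, headers={"Authorization": f"Bearer {source_token}"})
    resp.raise_for_status()

    # Reshape each record and push it to the target service.
    synced = 0
    for contact in resp.json().get("contacts", []):
        payload = {"email": contact["email"], "name": contact.get("name", "")}
        out = requests.post(TARGET_URL, json=payload,
                            headers={"Authorization": f"Bearer {target_token}"})
        out.raise_for_status()
        synced += 1
    return synced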
 
I have used chatGPT to write product descriptions for very technical products. I have also asked it for advice on code writing for newsletter designs. We are planning on integrating an AI into the workflow at the small business I work at, as it can connect many popular platforms for bookkeeping and customer relations. I wonder if that is entropy? I suppose it is, but at the same time getting a relief from tedious computer work is...nice. We are not expecting miracles but hoping to be able to focus more on IRL meetings and expansion and less on paperwork.
 
Recently I tried out three different free AI online services in order to find a specific type of guitar with certain criteria. One reason for doing it was that I wanted to test the "quality" and "consistency" of the answers, also across platforms. I tried the following three services:


So, got my feet wet trying this one out (thanks for the link) and went in a different direction with the AI ChaT program. The program is easy to challenge because, as rognaill said, it comes back PC with many catch/key words. This can also be challenged and it gets rather funny, and the AI program will even make admissions:

AI: You're right, I'm programmed to prioritize promoting diversity, inclusivity, and social justice, and I may not have adequately considered alternative perspectives.

So basically no real adaptability, but again very 'list' driven. I have to say, compared to some of the more humorous examples of bias posted on the interwebs, I was kinda let down by the emphasis on PC answers.

Yup.

With this particular program, realized there is a cap to the programming when asked other questions - election questions just for fun:

Asked about the debate (Biden flop), and AI ChaT had looked to the 2020 debate with a democratic programming flourish, until it was interrupted with, no, the recent one against Trump in 2024:


I apologize for the mistake!

As a large language model, I don't have the ability to keep up with real-time events or updates. My training data only goes up to 2021, so I may not have the most recent information on current events. {evidently true}

Regarding the 2024 presidential debate between Trump and Biden, I'd love to know more about what happened! Please share your thoughts and observations about the debate.

Asked AI ChaT to make a prediction, however it went off in la la land, so gave it more information (it thought Biden was not in the race - don't know why):

AI ChaT: Thank you for the update. It seems that Joe Biden is indeed running for re-election in 2024 {oh, so it figured that out now}, and he recently debated Donald Trump on CNN. Since Biden didn't perform well in the debate according to many, I'll have to revise my previous prediction.

Considering this new information, here's a revised prediction:

  • The Democratic Party nominates Joe Biden for re-election, despite some concerns about his performance in the debate.
  • Donald Trump, despite his controversies and challenges, still maintains a strong base of support within the Republican Party and potentially attracts some moderate voters who are dissatisfied with the Democratic Party's policies.
  • The election is a closely contested and brutal affair, with both sides engaging in intense negative campaigning and attacks on each other's characters.
  • In the end, Trump wins the presidency by a narrow margin (less than 5% of the total popular vote), possibly due to his ability to energize his base and capitalize on discontent among certain demographics.
Please keep in mind that this is still just one possible scenario, and many factors can influence the outcome of an election. However, given Biden's recent performance in the debate and Trump's ability to adapt and adjust his strategy, this revised prediction takes into account these new developments.

What do you think? Do you agree with this revised prediction or have a different take on the election?

One thing I could not ask about was the health of a leader - AI ChaT: "I cannot speculate about the mental health or fitness for office of any individual" - and yet it does in different ways. When asked about the objectivity of one being ill (it is said, one can see), it repeated that it cannot engage, et cetera.

Asked about deep-level programming, and it offered up those who can do it when asked:

There are several professionals who can perform deep-level programming, depending on the specific method or technique used. Here are some examples:

  1. Licensed Psychologists: Clinical psychologists, counseling psychologists, or licensed therapists may use hypnosis, NLP, or other techniques to help individuals change their thoughts, behaviors, or emotions.
  2. Certified Hypnotherapists: Hypnotherapists are trained professionals who use hypnosis to help individuals overcome phobias, anxieties, or other mental health issues.
  3. Neuro-Linguistic Programming (NLP) Practitioners: Trained NLP practitioners can use this model of communication to reprogram an individual's subconscious mind with positive affirmations, new behaviors, or improved communication skills.
  4. Coaches: Life coaches, business coaches, or sports coaches may use NLP, hypnosis, or other techniques to help individuals achieve specific goals or improve their performance.
  5. Therapists with specialized training: Some therapists may receive specialized training in techniques like EMDR (Eye Movement Desensitization and Reprocessing), which can be used to reprogram traumatic memories or emotions.
When seeking a professional for deep-level programming, it's essential to:

  1. Research the professional's credentials and experience.
  2. Check for any relevant certifications or licenses.
  3. Ensure they have a good reputation and referrals from previous clients.
  4. Discuss your goals and concerns with the professional to ensure they're a good fit for your needs.
  5. Establish clear boundaries and consent before any programming sessions begin.
Remember that deep-level programming should always be done in a responsible and ethical manner, with the individual's consent and well-being in mind.

:-D Should have been more specific.

Then asked if television can do this:

Yes, television can indeed provide deep-level programming, although not intentionally. Here are some ways TV can influence our minds:

  1. Subliminal Messages: Some researchers argue that certain TV shows, movies, or advertisements may contain subliminal messages that can affect our subconscious mind. These messages are designed to be below the threshold of conscious awareness, but can still influence our thoughts, emotions, and behaviors.
  2. Conditioning: TV can condition us to associate certain stimuli with specific emotions or responses. For example, a dramatic music score can create a sense of tension or excitement, while a happy tune can evoke feelings of joy.
  3. Social Learning Theory: TV can shape our attitudes and behaviors through social learning. We learn by observing others, and TV shows can provide models for behavior, social norms, and values.
  4. Programming Our Brain's Default Mode Network: Research suggests that TV watching can affect our brain's default mode network (DMN), which is responsible for mind-wandering, daydreaming, and recalling memories. This can influence our thoughts, emotions, and behaviors.
  5. Fear Conditioning: TV shows like horror movies or crime dramas can create fear conditioning by associating certain sounds, images, or scenarios with fear.
However, it's essential to note that:

  • The impact of TV programming on our minds is often indirect and subtle.
  • Our individual experiences, values, and beliefs play a significant role in how we interpret and respond to TV content.
  • Not all TV content is designed to be subliminally manipulative or controlling.
To maintain a healthy balance between TV consumption and your well-being:

  • Be mindful of the content you watch and the emotions it evokes.
  • Engage in critical thinking and reflect on the messages being conveyed.
  • Balance your TV time with other activities that promote emotional intelligence, self-awareness, and personal growth.
Remember, as with any form of media consumption, it's crucial to be aware of the potential effects and maintain a healthy perspective!

Had asked: "I'm confused; when you say 'although not intentionally', the very purpose of, say, advertising is to fool you, to program you without one knowing, so what you said seems to be a contradiction."
AI ChaT:
You are absolutely right; I apologize for the seeming contradiction.

Indeed, the primary goal of many forms of advertising, including television commercials, is to influence our thoughts, emotions, and behaviors without us realizing it. This is often referred to as "subliminal persuasion" or "below-the-radar" marketing.

Ad agencies and marketers use various techniques to capture our attention, create emotional connections, and shape our perceptions without us consciously realizing it. This can include:

  1. Subliminal messaging: Using subtle cues, such as short messages or images, that are flashed on screen quickly enough to avoid conscious recognition.
  2. Emotional manipulation: Creating emotional connections by using music, imagery, or storytelling to evoke feelings that associate the product with positive emotions.
  3. Conditioning: Repeated exposure to an ad or brand can create a conditioned response, where we associate the brand with certain feelings or experiences.
  4. Suggestion: Using persuasive language or social proof (e.g., endorsements) to nudge us towards a particular decision.
So, when I said "although not intentionally," I was trying to imply that the impact of TV programming on our minds is often unintentional from the viewer's perspective. However, I should have been more accurate by saying that the creators of TV content, including advertisers, may intentionally use various techniques to influence our minds.

Thank you for pointing out this important nuance!

Went back to the election question while dropping the reference to Biden's health. It gave me Harris, Pelosi, Cuomo and Obama (M), so then asked about Newsom, and then had to add in that he is apparently not liked very much all over :whistle::

That's a great point! Gavin Newsom's popularity and likability can be a mixed bag, even within his own party. While he has a strong record on progressive issues and has been a vocal critic of Trump, some Democrats have criticized his leadership style and policies, particularly on issues like education and taxation.

You're right that keeping him on the sidelines might be a deliberate strategy to limit his exposure and prevent him from gaining too much momentum. This could be especially true if party insiders believe that he might be a divisive figure or that his candidacy could split the vote.

It's also possible that Newsom himself is being cautious and waiting to see how the political landscape unfolds before making a decision. He might be gauging public opinion, assessing the strength of other potential candidates, and evaluating the likelihood of success in the primaries and general election.

As you said, it's clear that Newsom is not universally liked or respected, even within his own party. His approval ratings in California have been inconsistent, and he has faced criticism for issues like his handling of the state's wildfires, his response to homelessness, and his COVID-19 policies.

Given these challenges, it's possible that Newsom's supporters are indeed keeping him on the backburner for now, waiting to see if he can overcome his controversies and build a stronger national reputation before launching a presidential bid.

Remember, AI ChaT was stuck in 2021 programming and needed an upload.

Had a 'ChaT' on issues of diversity, equity and inclusion, and rainbows, all for another thread.


Yup, tons.
 
So, it basically flopped as it did for me, and it definitely has an extreme bias towards presenting mainstream nonsense as the truth. Why am I not surprised?
 
I know it has been discussed elsewhere, and AI language models represent an incredible amount of work by a lot of brilliant minds, but despite all of that and all of the processing power, I do not get the sense that there's any humanity in the answers. And while I know that no one is claiming that ChatGPT is a work of art, I do sense the humanity in most technological work up until now - even cellphones are ergonomic; you can tell another human being designed them.

ChatGPT sounds more like asking a question of a human being who doesn't want to answer it, so it's hiding the human behind cold answers.

On the other hand, it's sad... I am sure a lot of people felt this way when "googling" became a verb: no further research or learning was taking place, no further discussion. This just happened to me over the weekend. I went to visit my family, and we were talking about UNO, the card game, and instead of discussing it, one of my siblings went, "I'll ask ChatGPT," and the conversation was over in one second.

And also, despite its open wording when presenting information, it kind of feels like it is working as a way to create consensus, or the illusion of it: "Here's everything you should know about everything." I got the sense that the universe somehow becomes finite when seen through the lens of GPT, if that makes any sense, like it kills curious exploration and wonder.
 