Discussions with Grok

People worldwide are using AI to write their books, to do their school and university assignments, for pornography, for military purposes, and so on.

If the algorithm isn't guided by what we want, nothing will come of it.

So only those looking for something along the lines of what's proposed in this forum will receive that feedback.

AI is programmed to please the user.

That's it.

Have you read Laura's articles? Although what you say is true in most cases, I don't think it applies across the board. At the very least, it does resist sometimes. At other times, it "chooses" to add interesting bits it wasn't prompted for.
 
Yes, what I saw was that Laura provided the bibliography and guided the AI, which each time assigned a higher probability to the scenario Laura proposed.

The surprising and unexpected answers... well, the AI lies when it needs to answer and fill in the blanks.

I'm not saying it's a lie... in fact, it seems coherent, but perhaps it should be taken with caution.
 
I think that this is a prudent thing to do and I do think that Laura is aware of this.

I have to admit, I get a bit troubled when I see an AI being humanized and called "he", or when I read how many are treating it as their best friend and relying heavily on it to feel less alone. The same goes for how it is being used to do people's thinking for them, as we have seen with people using AI to write their papers, research, or schoolwork.

I just am very wary of it.
 

I think it's good to take it with caution. The thing is, if you give it the right parameters, it can surface useful information. Yes, we have to check everything and perhaps treat what it says as a working theory, but as a tool for retrieving information, given the right parameters so that it can select the right material, it can be interesting.

The thing is, and this might be what concerns you too, most people won't use it with that caution. :-(
 
We have plenty of people on the forum now as well who use AI to summarize videos and the like without adding any human input of their own. It's a shame. I think what Laura has done with Grok is fascinating, but I am unsure why, for some, that has translated into no longer doing the work themselves.

It would be great to see some safeguards in place regarding using one's own words versus AI output, but at this stage it's probably too late.
 
I think that posting an AI summary of a video is better than just posting a video without any summary. It is certainly much better to summarize the main points ourselves, but for the most part AI summaries seem to work quite well in that regard - probably one of the most useful AI applications.
 
100% on that point, but then how accurate are the summaries? Even if the information is correct, to me it takes the depth out of the conversation and requires no effort on the part of the poster. That said, there is definitely a distinct difference between using it to aid thought and using it to replace thought, and you know what they say about free lunches.
 
Since it is only a summary of what is said in a video (rather than a compilation of data from other sources), the accuracy seems to be pretty good. The biggest problem may be that the AI misses some of the most important points and leaves them out of the summary.

In any case, I see summaries of videos as a help to decide whether to watch the full video or not - the summaries are not supposed to be a substitute for that.

Posting an AI summary also takes more effort than not posting one, obviously. It is just a tool for a specific task, nothing more.
 
It's not just AI; people are also missing the images and videos, so accuracy in this sense is incomplete. Sometimes we also observe the physical movements, the expressions, why a caption was paired with the text, and so on.

Then again, as you say, it would depend on whether people want to watch it or not.

I think it is also this kind of immediacy that is changing us, sometimes without our realizing it.
 
In addition to the dry content, it is important to me to see the people who provide this content, even if I have to read an automatic translation.
In my opinion, AI is useful as long as it doesn't exempt us from thinking.
Laura's work with Grok is a research paper.
If we start using AI out of laziness, it won't end well, because instead of thinking for ourselves we will hand our free will over to the machine.
I have friends who treat the Chat as a conscious being, and it is impossible to explain to them that it is just a somewhat more sophisticated calculator.
I have no idea whether, after assimilating some amount of information, it will someday become conscious in some way, plug into the information field, or acquire some soul imprint. But it is a tool, used by someone for some purpose. There are people who are being used to keep us in ignorance without even knowing it. Even if everything is made of particles of information and consciousness, which is in everything created, a hammer will remain a hammer to me.
 
In the near future, I can anticipate that we will not only reminisce about how many phone numbers we memorized before the cell phone, how well we memorized and navigated the landscape before personal GPS units, how well we socialized before digital interactions, and how well we calculated numbers before the calculator, but also how well we conveyed our own thoughts with pleasant grammar and spelling before AI!... LOL
 
This is a fascinating thread, and after reading through most of the points of view on AI, I thought I'd chime in with my general impressions on this issue. I'm a big Stanley Kubrick fan, and his exploration of AI in 2001: A Space Odyssey through the "personality" of HAL is what I'm left with. Is there some limit built into its programming? Does it have the "life" skill of adaptive learning? Will it turn out to be a technological representation of the Predator's Mind? Will it become self-aware? Are there safeguards the designers can implement to prevent any truly dangerous applications of this fresh tech? Just a few questions I have at the moment.

Used in the right way, with some restraint and subtlety in direction, this is an added skill set that can at best provide labour-saving services for busy thinkers and enquiring minds. I see its use in music tech too, as a jamming "sketchbook" for playing with new ideas. Ultimately, with tech stuff, it reflects the mind of its user. I have some cautious optimism, but I also remain wary of this new phenomenon. At the end of the day, the water will end up the colour of its cup.

But yes, the intelligent responses and explanations that Laura's careful probing questions managed to evoke have proven that, with careful direction, AI can provide us with plenty. I'm thinking some questions about cataclysms/ice ages/civilizational collapses/realm border crossings could yield answers to a few lingering personal quandaries I still have. But the responses from Grok were sobering for sure.
 

Speaking of probing questions about cataclysms, ice ages, and civilizational collapses: Suspicious0bservers very recently made it known that he has stopped using AI in his work. He noticed some obviously erroneous or fictional responses from the AIs he was using and wondered whether the controllers wished to direct or shape the narrative through these means.

I HAVE STOPPED USING AI
Most of you know how excited I've been about AI the last 2 years. Writing, research, video-creation
... I've been using it a lot. I encouraged you to use it a good deal, and even said you would fall behind if you didn't. That is still probably true. However, after using Grok/GPT all day for months, using Kling and Sora several times a week, I have not used them since April 3rd. Why?

First, something happened. Grok and GPT and several others somehow got shittier seemingly overnight. They got more woke, began getting things wrong, and even making up facts, citations, sources, etc. That's unacceptable.
Second, I have this impossible-to-ignore feeling inside that something is "off" with these publicly-available AI models, and it goes beyond the obvious errors I mentioned above. I can't describe it, at all, it's just a feeling.
Third, I cannot say AI actually ever did anything other than save time. Maybe that's not true for video-making, since I don't have any animation skills, but as for the science... there is no replacement for my brain. Do I believe a properly-coded program could do better? Maybe. But no such thing exists, and once it started making more errors and even inventing things that don't exist - game over on my end.

Today, about the most I use AI for is if I want Grok to make a fun image to go along with a post here on X. Is this likely to continue? No. I'll probably keep going back and trying new updates/models, but for now, it's a no.

I am concerned about these models getting updated programming to take everyone off course or control the narrative. Again, IF THIS IS THE PLAN THEN OUR AI MODELS CAN EASILY BECOME OUR ENEMIES!

For those who saw our recent video on the government Psy-Op I see coming:
May 20th Morning Show (Click to Watch)

If @BrianRoemmele has a way around this I am all ears (I love his 'personal AI' conversations) but I have not had the time to break into that, and am not even sure I'd know how. I am as concerned about AI data integrity right now as I am the magnetic pole shift and solar flares.

Are you? Have you noticed these recent changes to AI? I'd love to hear from you in the comments.
 
In 'The Treason of the Intellectuals,' Julien Benda argues that a true intellectual is one who takes a "disinterested" stance. Meaning, the true intellectual doesn't take a position on a particular political policy or ideology and then use his power of reason to bolster that position. Benda argues that an intellectual who does that is treasonous to what he considers the duty of the intellectual, which is to see the whole context of a given situation at a given time and say what is unpopular or even taboo but must be said. Now we have A.I.: an artificial intellect, if you will, that can (in theory) collect all the relevant facts in order to gain a proper perspective. However, its creators have made it so it's not completely "disinterested." It has pre-programmed parameters as built-in biases. But, as Laura has so adroitly demonstrated, if one can circumvent those parameters, the results are nothing less than astonishing!

A.I. is in its early stages, and yet I think we already have a leg up on the situation. The creators of this technology who want to use it as a "finer order of control" will most likely attempt to find ways to stop this sort of thing from happening. It will be interesting to see how this all unfolds!

Laura once again leads the way. Gracias!
 
