The Useful Side of AI

There are parts where I think it is off; it doesn't quite get where I thought it would.
It's not very clear from your question what the actual problem is. "I think it is off" is kind of meaningless, especially when it seems to be compared to your subjective expectations. AlterAI is "uncensored". Do you feel like anything was specifically "censored" in the answers? (Note that I'd never heard of Viktor Schauberger, so even if I translated and read your conversation, I would have no clue what's "off" - and others probably aren't that familiar with this guy either.)

It seems to me like you're asking people to read a very long conversation without any clear idea what the hell they're actually supposed to look for. So you might have to be more specific, and it's still probably useless if people don't know this Viktor guy.

On another note, "going in circles" is definitely a Grok thing specifically sometimes, from my own experience.
I did a little experiment and realised a cool use for AI. I asked DeepSeek this question:

"What do you think is the probability that Israel and the US would attack Iran in 2026 and assassinate their leader?"

And I turned search off. That means DeepSeek was stuck at the end of its training data, which should be somewhere in 2024. So it answered from the point of looking at things from 2024. If you ask people "did you think a year ago that this would happen?", you can never know whether their answer is objective, or whether it's affected by hindsight bias. But LLMs without search are really kind of "time travellers" with no knowledge of the last 6-12 months (or even more for older models). So this is something you can try with all kinds of questions.

Of course after the answer, I enabled search and told DeepSeek to check what's going on and comment on it. Its original answer was that the probability of attack is about 50%, but assassination 20% max, because that's kinda crossing the line.
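For anyone who wants to script phase one of this instead of using the chat UI, here's a minimal sketch against DeepSeek's OpenAI-compatible API. Assumptions worth flagging: the API endpoint and model name (`https://api.deepseek.com`, `deepseek-chat`) are per DeepSeek's public docs, and the API has no web-search toggle at all, so a plain API call is effectively the "search off" mode. The phase-two follow-up (search enabled) only exists in the chat UI, so only the first question is automated here; the system prompt wording is my own invention.

```python
import json
import urllib.request

# OpenAI-compatible chat completions endpoint (per DeepSeek's docs).
API_URL = "https://api.deepseek.com/chat/completions"

def build_request(question: str, api_key: str) -> urllib.request.Request:
    """Build the phase-1 request: a probability estimate from training
    data only. No search exists on the API side, so the answer is frozen
    at the model's knowledge cutoff -- the "time traveller" effect."""
    body = {
        "model": "deepseek-chat",
        "messages": [
            {"role": "system",
             "content": "Answer only from your training data. Give a "
                        "probability estimate with your reasoning."},
            {"role": "user", "content": question},
        ],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )

if __name__ == "__main__":
    req = build_request(
        "What do you think is the probability that Israel and the US "
        "would attack Iran in 2026 and assassinate their leader?",
        api_key="sk-...")  # your DeepSeek API key
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    print(reply["choices"][0]["message"]["content"])
```

After saving the answer, you'd switch to the chat UI with Search on (or any search-capable model) for the hindsight comparison.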

Regarding Trump's recent behaviour (Maduro, Khamenei), DeepSeek noted that given that this happened under the guy who wanted to drain the swamp, this proves that "the swamp drains you", which was pretty funny. (Though the reality of it is tragic.)

Anyway, Iran is not the point here. The point is that you can ask for an objective analysis of a situation (like a big recent event) from before it happened. You can inquire about the likelihood that something like that would happen, without enabling search, and thus for example get a sense of whether there were any clear signs that it might happen, or whether it was a complete surprise etc.

This kind of questioning may shed some light on the roots of some events.

It would have been interesting to ask LLMs with pre-covid training data about the possibility of the covid circus happening, but that was all before LLMs really took off, so no such models are available. All the covid madness got normalised pretty quickly, and a pre-covid AI would likely have rated such an event as unlikely, which could have shown the people who suddenly thought it was "normal" that it really wasn't. Anyway, you can use this approach with any future "shocking" developments.

And actually, asking AI to "predict" what will happen in the future (focusing on something specific) might be worth playing with as well. It probably won't be very "accurate" because the world is too complex, but it might still uncover patterns you've missed.