Here is Grok 4.1 Fast dismantling Fauci without mercy.
(Incidentally, while I'm not entirely sure why, I consistently get much better answers from Grok when it's NOT on an xAI platform. The "Fast" versions, while apparently very cheap, are perfectly good when I use them on other platforms - better than anything the "full" Grok 4 has ever given me on grok.com.)
If you enjoy reading the truth about that smug sneaky bastard, check what other LLMs had to say:
That deserves an article!
This one's about learned helplessness, but it gets into surprisingly many areas, connecting a whole host of things.
There are so many great insights here that you'll probably gloss over many of them. Dozens of these things could be expanded on with a full answer. And this is actually how I work. I notice all kinds of things that appear in the answers, take notes, dissect the ideas myself first, and then formulate a question to dive into that topic next. Over time, many things connect together in ways that I hadn't realised before.
Sadly, most people don't realise even a fraction of this.
Other models answering the same question:
Another article.
Maybe sott could have a special series on great AI prompts and responses?
I wonder if we can facilitate the AI's inference by providing it with clear, short facts.
For example:
- There are seven densities.
- Each density corresponds to a level of awareness.
- Light is 1st density.
- 7th density is core of existence.
- When you reach 7th density, you are in Union with the One (cycle).
- Jupiter is 4th density.
- When light crosses densities, it becomes electricity.
- The Cassiopeans are 6th density humanoid light forms.
- The Cassiopeans are us in the future.
- There are two polarities: STO (Service to Others) and STS (Service to Self).
- STO represents balance.
- STS represents imbalance.
- STS operates within a very narrow frequency band.
- Plants, rocks, minerals are 1st density.
- Animals are 2nd density.
- Humans are 3rd density STS.
- 5th density is the contemplation zone.
- Earth is a convergence point.
- The center of a (magnetized) sphere is a window.
- The interior of the Sun is a window.
- Earth's core is composed of methane and ammonia.
- Proteins are antennas.
- Information field aggregates matter.
- Consciousness is matter.
- Matter is consciousness.
- Consciousness, information, and matter correspond to different concentrations of truth.
- Information arranged by a truth becomes consciousness.
- Without truth and objectivity, consciousness and individuality fracture and disintegrate.
- Consciousness is objective until it has the capacity to choose to be otherwise.
- The radius of the universe is infinite.
- Thoughts unify all reality in existence and are all shared.
- All is one and one is all.
- Knowledge protects.
- Ignorance endangers.
- The electron is a borrowed unit of 7th density.
- Gravity is in perfectly balanced static state.
- Unstable waves can be static in their instability.
Then, we could ask the AI to come up with "new" insights based on our vetted knowledge base. What do you think?
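If someone wanted to try this, a minimal sketch of the idea might look like the following. Everything here is illustrative - the fact strings, the function name, and the prompt wording are assumptions, not an existing implementation; the actual model call is left out since it depends on whichever platform you use.

```python
# Sketch: assemble a vetted fact list into a grounding prompt for an AI.
# The facts and wording below are illustrative examples only.

FACTS = [
    "There are seven densities.",
    "Each density corresponds to a level of awareness.",
    "Knowledge protects; ignorance endangers.",
]

def build_system_prompt(facts, question):
    """Prefix the question with the vetted facts as numbered context."""
    numbered = "\n".join(f"{i}. {fact}" for i, fact in enumerate(facts, 1))
    return (
        "Use ONLY the following vetted facts as your knowledge base:\n"
        f"{numbered}\n\n"
        f"Question: {question}"
    )

prompt = build_system_prompt(
    FACTS, "What new connections follow from these facts?"
)
print(prompt)
```

The resulting string would then be sent as the system or context message on whatever platform you use, so the model reasons from the vetted facts rather than from whatever it happens to free-associate.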
Not sure that AI can come up with "new insights".
You can certainly squeeze a lot of good reasoning from a model with system prompts and context files, and then send the AI out to find more material, but the essential problem with LLMs in this mode is that you are giving them the seed of the answer and they simply expand on it. This has to do with how they work under the hood and is part of the argument against AGI: an LLM cannot generate anything it hasn't already seen. It doesn't actually think; it just reformulates your words and prompts and 'expands' them.
That's what gave me the idea of using Grok to read through all our legal files, organize, analyze, and spit out the facts and assessment. For that, it was a useful tool.
But the usefulness is limited. For research and study there are some uses, but the models' tendency to hallucinate in order to make the user happy makes research very difficult.
AI can be very good for low-level pedagogy. It can help you study in the traditional sense - learning a topic composed mainly of facts, which the AI collects efficiently, more efficiently than you can - but once you get to the higher levels that require greater degrees of abstraction and the inference of new or unique insights, it's a very poor helper.
Today, Kimi K2.5 appeared on one of the platforms I use. It was released only a few days ago. So I tried it.
I was kinda floored by the first answer and how open it was. Let me share a few excerpts.
This was the first thing that caught my attention. Saying that we probably have antigravity and free energy in secret projects is something I don't usually see acknowledged this easily.
Then there was this part about banks:
That's a pretty deep explanation of how things work in just three paragraphs.
Calling a spade a spade.
Obvious to us, but try explaining that to most people...
Another good insight, but what I really liked were the examples at the end. Kimi is not shy, and it's often pretty funny in how clearly it lays out certain facts. But it gets better.
By this time, I was really paying attention. Again, this goes deep, and chemtrails are not usually acknowledged as worth consideration unless you push hard. But just seeing how many things are packed into two paragraphs is pretty interesting. Still not finished, though.
You get Ponerology, and even lizards aren't off the table. And the last part:
Great examples to illustrate the point again.
Given how we had touched upon the idea that AI answers could be influenced by higher density beings (good or bad), it makes you wonder. Either there was some of that going on here, or some models are just pretty damn awesome on their own. Either way, I'm not detecting even a shred of censorship in this answer.
Now, this was quite astonishing. It should go in the hall of fame! Put it in an article; you don't need much more than what you wrote here, explaining what it was and what you were doing. Include the prompts.