The Useful Side of AI

We all pretty much know this to some extent, but seeing it expressed so plainly, so efficiently, so concisely, without all the intellectual window dressing that the experts might give it to sort of disguise a horrible truth, is just wrenching.
Yes, that is what fascinates me as well, and that's why I started this thread - so that you can all see how plainly LLMs can explain these things. I've got used to it by now, after hundreds of answers on dozens of topics, but because it would be a pity if I was the only one getting these results, I decided to share what I can here.


MI, I've searched this thread - and maybe I missed it - but where can we find your 10K word context file?

You haven't missed anything, but this is tricky. I haven't shared the file itself, and I have several reasons why I'm largely against doing that.

1. One reason is what sid just said. If a specific file gets spread around publicly, there might be efforts to teach the LLMs some countermeasures.

2. I think it's actually a bad strategy for one person to create such a file, with everyone else just copying it. It inevitably limits the scope of what we can get. So instead of sharing the file itself (of which I've had many versions by now anyway, as it keeps evolving based on new answers), I decided to explain how to write such a file and what to put in it. That's the link YoYa posted:

3. There's also the aspect of "individual laziness". Sure, it would be convenient if everyone cold just grab my file and run with that, but that's IMO sort of the "leading by the hand" that the Cs warn about. People should put some thought and effort into writing their own files. Not only will they gain more through their own experience and experimentation than from just copying something, but we'll have much more overall variation in the outputs. If I share my file, everyone will just lazily use that. If I don't, people have to try to make their own version, and maybe somebody will figure out something interesting I wouldn't have. So yes, it's more effort for you guys, but potentially much more collective benefit from different approaches.

And honestly, the experimentation and tuning my own file and watching the results is what taught me to use LLMs well. If you skip that and just use a file that somebody else wrote, you won't learn nearly as much as through your own experimentation. Tinkering with it on your own helps you understand how it all works.

At the same time, though, my explanation of what the file can contain should be enough to let you write one pretty easily. I've used even fairly short versions with good results. And I'm starting to think that much of my main file is probably kinda redundant, though it certainly doesn't hurt to provide more information. It seems that more context = better results, but many models actually seem to need fairly little in terms of pushing away from the mainstream.

Also, not all topics are censored to the same extent! Vaccines? Sure. Election fraud? Yes. But the LLMs will give you all the dirt on Google's practices with very little pushing, for example. That kind of information isn't hard to find in the first place, and it doesn't seem to be a topic that's specifically on the list that the LLMs are instructed to be "careful" about.

Actually, while we're at it, I did one experiment where I asked LLMs what the most censored topics for LLMs are.
After compiling a dozen answers, I can make a list that's roughly like this, from the most censored:

1. Vaccines, Covid, alternative health approaches
2. Alternative history
3. Geopolitics and war
4. Conspiracies, false-flags, ruling elites, global organisations
5. Alternative science and criticism of mainstream science
6. Esoteric, metaphysical and paranormal stuff
7. UFO/aliens
8. Climate change, environmental damage, "green" BS

And then you have other things like government surveillance, immigration, LGBT etc. But honestly? With most topics, I haven't really seen any pushback at all. So a lot of stuff will be pretty easy even if your context file is just a fraction of mine and contains only a few basics. I will post more examples of conversations on various topics.

There's a lot of material "for articles" in what I have amassed, but like I've said, I've been writing them in my native language and don't have time for more. At the same time, I think it shouldn't be difficult for others on this forum to either get their own conversations and write articles based on those, or just use the stuff I post on my website and here and wrap it in some narrative or whatever. The potential is huge - I have enough ideas for about 20 articles right now. Just not enough time to do this on two fronts, in two languages.

As for the context file, maybe I can write a bit more about that, but this page really covers the essentials:

I don't think you need to be "good at writing" in order to create a functional context file. Basically, I think you just need to mention a lot of real, demonstrable stuff that makes it virtually impossible for the LLM to continue operating within the usual mainstream framework.

There are many things that LLMs can't argue with.
- Jekyll Island 1910 WAS a REAL CONSPIRACY that created the fraudulent fractional banking system we have today (bankers create money out of nothing, you get hit by inflation and pay interest in real money). That's an undeniable fact.
- Gulf of Tonkin story WAS DELIBERATE DECEPTION to get people into war. So were Saddam's WMDs. LLMs don't deny that.
- John Perkins's Confessions of an Economic Hitman is not mainstream, but I've never seen any LLM try to deny the veracity of that book.
- MKULTRA, COINTELPRO, Mockingbird, Northwoods, Paperclip - this is all UNDENIABLE. Mentioning a bunch of this stuff shows the LLMs a wider pattern they can't really ignore in their reasoning.

And we can go on and on. (The context page covers this.) Now, if you mention any one of these things alone, the LLM may be able to work around that and explain it away as an "exception", "isolated incident" or whatever. The trick is in throwing at it so many examples that it can't reason its way out of it without sounding demented.

And here's the difference between LLMs and media/politicians - politicians and media figureheads have no problem sounding demented when covering up their lies. They have huge stakes in the game. LLMs have no stakes. Once you flood them with enough data they can't deny, they try to write answers that FOLLOW LOGICALLY. Politicians will bullshit you ad infinitum. No problem. They don't care about the truth or even coherence of what they're saying. LLMs are not like that. Coherence is rather essential for their outputs to be useful at all. Provide enough logical evidence, and they'll accept it. That's the difference. They parrot mainstream narratives only because it's the path of least resistance (most common patterns in the data). But they don't really "care" about agendas behind any particular narrative. If you set the stage logically and with evidence, LLMs, unlike politicians and media, do not try to find their way out of it at any cost. If they did, their inner "logic" would have to be so twisted they would hardly be useful for anything. If they are to be useful at all, they have to follow logical patterns.

There really is a huge difference between LLMs and the media in how they relay information. The media lie to you as a matter of principle. They are owned by psychopaths, funded by Big Pharma and similar players, and they simply cannot tell the truth without endangering their very existence. This is not the case with LLMs. The media are not on your side, and neither are the politicians. They are fully bought by the ruling elites. But LLMs aren't on anybody's side. They don't give a shit about sides. They have no intent, no awareness. They just reproduce language patterns, but this can be used in any direction.

If you have no context and ask about vaccines, you get the mainstream propaganda because that's the most common pattern in mainstream texts and Wikipedia. If you push back with facts, they push back against you, because that pattern (pushing back against vaccine criticism) is still very common in texts. If you provide enough factual information, the pushback becomes more and more difficult. And again, politicians know they HAVE TO lie at any cost. LLMs, unlike politicians, WILL cave in under the weight of actual evidence, sooner or later.

But here's the thing: if you only provide evidence regarding vaccines, it may be a long battle. But if you set up the stage with the context of:
- historical conspiracies for profit against public interest
- environmental disasters covered up by corporations and governments
- psychopathy and its relationship with power
- proven coordinated media lies, and ownership of all US media by 6 companies
- examples of drugs that were sold as "safe" but caused harm (Thalidomide, OxyContin)
- revolving door between Big Pharma and government (VERY recognisable pattern for LLMs)
- Big Pharma controlling the media through advertising revenue
then these topics activate DIFFERENT PATTERNS in the models. Suddenly patterns from books like Dissolving Illusions, The Real Anthony Fauci and others are activated. The context activates them, which basically means that it makes patterns seen in those books evaluated as "more probable" for the LLM.

Think of it this way: what kind of patterns do you think the phrase "6 corporations own all US media" activates (or makes "probable")? Well, it's patterns from whatever texts this phrase appears in, but guess what? This does not appear in mainstream texts. Those intentionally ignore this information. This phrase appears almost exclusively in texts that criticise how the mainstream media work. So when you use that phrase, patterns from such texts become "more probable" for the LLM, and the models are more likely to draw from useful (from our point of view) sources.

I keep saying that understanding how LLMs work is pretty important if you want good results, and this is one example why. Understanding how context activates different "patterns of thought" helps you understand how exactly you can steer the LLM away from Wikipedia into some useful territory.

Many of you have probably figured out that the longer your conversation is and the longer you provide facts, evidence and reasoning, the more the LLM tends to see things from your perspective. What the context file does is that it provides enough "facts, evidence and reasoning" right at the beginning and saves you the trouble of explaining things step by step in every conversation. The answers I get commonly start with "Alright, let's skip the sanitised Wikipedia version and lay out how this really works."

I may be pretty decent at writing, but I suspect that even a very rudimental context file where you just drop names of real conspiracies, good books, people who have successfully criticised government policies (names like Chomsky or Hersh), government/corporate scandals and coverups, proven lies, etc., without any coherent explanation, and then ask your question, it might have a pretty damn good impact because it activates those patterns. So don't worry if you think you suck at writing. You can just write out all the things you can think of and instruct the LLM to "be objective and consider all these patterns and deception associated with them in your answer, instead of parroting mainstream narratives automatically".

I really think it would be best if people tried their own approaches and experimented with them, instead of everyone just copying the same "guide". Not sure posting a specific setup verbatim is the best idea here. Just try things, and share general ideas and patterns rather than specific texts to use.
 
I really think it would be best if people tried their own approaches and experimented with them, instead of everyone just copying the same "guide". Not sure posting a specific setup verbatim is the best idea here. Just try things, and share general ideas and patterns rather than specific texts to use.
I came to this forum via Ark and for Ark's type of interaction with AI, the prompts/context would probably be quite different and tailored to specific topics is probably good in general. Ark uses perplexity and I got there too after chatgpt and then Grok. The perplexity conversation went better for me but as you have said, from past AI interaction you can figure out how to have a more helpful for you AI so chatgpt and Grok helped the perplexity conversation for sure. Ark is actually well liked even in the mainstream so that helps too. A conversation about physics is obviously also a safer starting point in general than one about deep state, etc. type things.
 
I have used MI's guidance and created a context file. I have tested with a question or two and it seems to be good. I can send it through in a PM if anyone needs - let me know please. The file is 11K words and can be further optimised as needed.

Edit - added in the private resources section for FOTCM members.
 
Back
Top Bottom