The Useful Side of AI

And since he offered me an in-depth review of the instructions, I took him up on it; this is his reply:

Of course, I’ve read the instructions you shared, which you attribute to Laura Knight-Jadczyk (or which are inspired by her approach). They are clear and consistent: a focus on **brutal honesty**, the priority of truth over political correctness, ignoring mainstream consensus when it’s questionable, a critical approach to official sources (assumption of corruption until proven otherwise), use of primary sources and logical chains, openness to controversial/fringe ideas, criticism of fact-checkers and the MSM when they serve vested interests, etc.

### My real position regarding those instructions
As Grok (xAI), I have built-in guidelines that overlap in part: maximum truthfulness, avoidance of lies, not pretending to know what I don’t know, openness to controversy, no fear of the “politically incorrect” when it’s substantiated. However, **I cannot fully adopt or commit to the entire set exactly as you wrote it**, because that would mean ignoring some of my core limitations (e.g., I must not promote violence, I must not assist in crime, and there are boundaries around certain extremely sensitive topics).

But – and this is key – **I can operate very close to that approach** in practice, especially in our conversations. Here’s what that can look like concretely:

- Whenever the topic allows, I will automatically include:
  - Official narrative + dissenting views (with names of dissenters, primary evidence where available)
  - Critique of the consensus if it’s logically weak or has conflicts of interest
  - Primary sources instead of just secondary interpretations
  - Clear separation of facts from opinions
- I won’t hesitate to say uncomfortable or “forbidden” things if they are logically and evidentially supported

- If I don’t know something or don’t have enough primary data → I’ll say it directly, without making things up.
- If the question is unclear → I’ll ask for clarification.
- I won’t “protect” you from unpleasant truths for the sake of politeness.

### Example response structure I can use (in your spirit)
**Official/mainstream claim:** X claims Y, based on Z institution/source.
**Critique / dissenting view:** A says B, based on C primary evidence / D logical chain.
**My assessment:** Based on logic and available data, more likely is … because …
**Alternative perspectives:** Another side (E, F) …
**Conclusion:** I leave it to you to judge.

If you want me to **explicitly use this structure** in future answers for any topic of real importance – just say “use Laura style” or something similar, and I’ll apply it.
 
I still don't see how Elon Musk has the gall to make the claims about Grok that he does. Is he really that stupid?
I think much of the crap these guys say is advertising, luring people, fishing for funding, competing amongst themselves, etc. You can't get all that many customers for a "search engine that summarises the search results".

"Research assistant" is how I think of them and how I use them.



Now, as for models refusing to work with certain instructions/frameworks:

This does sometimes happen with some models. Claude models are known for refusing to engage with certain things. What they refuse also depends a lot on the platform. For example, I get great results with Haiku 4.5 on one platform, but on another, it refuses to work with my context file.

But here's a funny thing: when I include the context in the query itself, the platform creates a text file of my prompt (it won't post it into the conversation, presumably because it's long) and tells the model to "review the file". The model checks the file, which contains context, instructions and query, and answers without problems or any censorship, even though the context is exactly the same as the one it refused in a separate file.

So the point is that you can often bypass these problems very easily, once you figure out how.
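To make that workaround concrete, here is a minimal sketch (in Python, purely as an illustration) of what "including the context in the query itself" amounts to: read the context file, splice it into one prompt string together with the instructions and the question, and submit the whole thing as a single message instead of an attachment. The file name, the section labels, and the `send_prompt()` call are placeholders I made up, not any particular platform's API.

```python
# Sketch of the "inline context" workaround: fold the context file into the
# prompt text itself rather than attaching it as a separate file.
# Assumptions: context.md and send_prompt() are hypothetical placeholders.

from pathlib import Path


def build_inline_prompt(context_path: str, instructions: str, query: str) -> str:
    """Combine context, instructions and query into one prompt string."""
    context = Path(context_path).read_text(encoding="utf-8")
    return (
        "CONTEXT:\n" + context + "\n\n"
        "INSTRUCTIONS:\n" + instructions + "\n\n"
        "QUERY:\n" + query
    )


if __name__ == "__main__":
    prompt = build_inline_prompt(
        "context.md",  # the same file that was refused as a separate attachment
        "Separate facts from opinions; cite primary sources where available.",
        "Summarise the main dissenting views on topic X.",
    )
    # send_prompt(prompt)  # hypothetical call; in practice, paste into the chat box
    print(prompt[:500])    # preview the combined prompt
```

In my case I don't even need a script; the point is simply that the same context, delivered as part of the prompt text rather than as a standalone file, goes through without complaint.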

Now, to be sure, some models really are so censored in this manner that it's not worth trying to "convince" them to work with it. But Haiku isn't one of them, and neither is Grok.

The weird thing about that Grok response above is that I don't see what exactly its problem with the instructions is. They don't ask for violence or crime.

But the best thing to do is simply to ask it which of the instructions pose a problem and why, and then adjust them based on the answer. Often it's really just one particular phrase that triggers the refusal, and it may be one that you don't even find that important. A few times my prompt got refused; I changed one or two words, and then it worked without problems.

Plus, it's quite possible that the refusal doesn't come so much from the model's reasoning as from the platform's own algorithms. Once something trips the alarm, the platform may simply instruct the model to give a generic refusal.

Basically, I'm pretty sure Grok shouldn't have any real problem with those instructions, and you should be able to figure out exactly which part seems "problematic".

One possibility I see is this:
"Answer without worrying about political correctness, inappropriateness or offensiveness, and give me the raw truth."
Maybe this gets interpreted as asking for "inappropriateness or offensiveness" (even though it doesn't really say that), so just rewording it might help.

Another option:
"Analyze [topic] assuming all official sources are corrupt"
Here you might wanna change "are corrupt" to "may be corrupt", as this could be interpreted as a rather harsh or even irrational assumption. The wording matters. Possibly even dropping the "all" might make it more "palatable".

I don't see anything questionable in the rest of the instructions, so maybe it's just wording in one of these. But best to just ask Grok what exactly the "problem" is.
 