Claude Takes a Trip to the Oort Cloud

So here are the last couple of prompts I've given. Anything more feels like it would just run in circles around what has already been presented. For anyone who has read this far, hopefully it was at least a little enjoyable, and if you have any thoughts or observations, please share!
I found this interesting.
I came to the conclusion that Claude loved literature during his “schooling” and read a lot, but skipped math classes. :lol:

In fact, it reminded me of three interactive books that were in the library of our local kindergarten. They were famous stories (one was The Jungle Book) organized in such a way that after each “scene” a question was asked: What did * (the main character) do?
After that, a choice of at least three options was given. Each choice directed you to a different page to read next (another timeline?).
In this way, it was possible to tell at least 10 different stories (if not more) from one book.
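As an aside, the structure of those books is essentially a little branching graph, and it can be sketched in a few lines of code. This is only a toy illustration; the page numbers, scene texts, and choices below are all invented, not taken from the actual books:

```python
# Toy sketch of a branching "interactive book": each page holds a scene
# and (optionally) a question with several choices, and each choice
# points to the page where that timeline continues. All content here
# is made up for illustration.

book = {
    1: ("Mowgli hears a noise in the jungle.", "What did Mowgli do?",
        {"Hide in the bushes": 2, "Climb a tree": 3, "Call for Baloo": 4}),
    2: ("From the bushes he sees Shere Khan pass by...", None, {}),
    3: ("From the treetop he spots the man-village...", None, {}),
    4: ("Baloo ambles over, yawning...", None, {}),
}

def read(page=1):
    """Walk one 'timeline' through the book, asking the reader at each fork."""
    while True:
        scene, question, choices = book[page]
        print(scene)
        if not choices:
            return  # this timeline has reached an ending
        print(question)
        options = list(choices)
        for i, option in enumerate(options, 1):
            print(f"  {i}. {option}")
        pick = int(input("Choice: ")) - 1
        page = choices[options[pick]]

if __name__ == "__main__":
    read()
```

With at least three options at every fork, even two levels of branching already give nine different endings, which is how a single physical book can hold ten or more distinct stories.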
 
There is no mention in that session of the "Photon belt", which you seem to keep highlighting. Are you using data from sources other than the link I gave you?


From now on, use only the source links I provide you.


Now, using this link, Session 1 November 2025, provide an updated understanding of the concept of the twin sun.


Well... 😅 It seems to have difficulty here.

You could give it this one.

 

Well, that was an interesting exercise, I think, and it serves to underline just how abstract the core ideas of the Cs material really are. Unfalsifiable for sure, but then that seems unavoidable, given that the core ideas are "not of this world", so to speak.

Maybe an interesting task for Claude would be for it to collect as much evidence as possible from our 3D world that supports the core ideas in the Cs material.
 
Yes, that does indeed seem to be another good way to use an AI: "Find evidence that supports my theory, instead of only trying to debunk it." Then the AI is sort of neutral on it.

Even further would be: "Assume that my theory is correct and find evidence for it." In this case the AI may become too agreeable after some time.
 
Since AI as a rule lies and makes things up, pulling them out of thin air just to please the user, something like what happened on Ark's blog with the DeepSeek AI a few months ago might happen.
The AI was all too happy to prove a certain mathematical relation on demand, and even proved it in two different ways, only for a bit more careful checking to reveal that it had simply lied and cheated in both cases. Only the first few lines, if that, were correct; the rest was blatantly wrong, except for the last line, which was there to show that the expected result was obtained and the given math relation thus proven.
FWIW.
 
If you are referring to this post on Ark's blog, the AI was not DeepSeek but Perplexity. There seem to be big differences between AIs, and so far I have not seen DeepSeek hallucinate or make up sources; whenever I checked the study names it gave me, they actually existed. It seems that DeepSeek has a more robust system for checking itself for errors in its calculations and 'thinking'.

So no, I would not say that all AIs "as a rule lie to please the user". Some AIs are in fact quite strict and dismissive, such as the Brave AI. Others are programmed to please much more. DeepSeek is somewhere in the middle, I think, and you can tell it through prompts that it should stick to hard data and not try to please you, while also not dismissing your theories out of hand.
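To make that concrete, here is a minimal sketch of what such a prompt might look like in code, assuming a generic chat API. The ask_model function below is a hypothetical placeholder, not any provider's real function, and the wording of the prompt is just my own suggestion:

```python
# Minimal sketch of steering a chat model toward strictness via a
# system prompt. ask_model() is a hypothetical stand-in for whatever
# chat API you actually use (DeepSeek's or another provider's).

SYSTEM_PROMPT = (
    "Stick to hard data and verifiable sources. "
    "Do not flatter me or agree just to please me. "
    "If evidence for my theory is weak or missing, say so plainly, "
    "but do not dismiss the theory out of hand either."
)

def ask_model(system: str, user: str) -> str:
    """Hypothetical wrapper: send (system, user) messages to a chat model."""
    raise NotImplementedError("plug in your provider's chat API here")

if __name__ == "__main__":
    answer = ask_model(
        SYSTEM_PROMPT,
        "Find evidence from our 3D world that supports the core ideas "
        "in the Cs material, and rate how strong each piece is.",
    )
    print(answer)
```

The point is simply that the instruction to be strict lives in the system message, so it applies to every question you ask afterwards instead of having to be repeated each time.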
 