Claude Takes a Trip to the Oort Cloud

So here are the last couple of prompts I've given. Anything more feels like it would just run in circles around what has already been presented. To anyone who has read this far: I hope it was at least a little enjoyable, and if you have any thoughts or observations, please share!
I found this interesting.
I came to the conclusion that Claude loved literature during his “schooling” and read a lot, but he was skipping math classes. :lol:

In fact, it reminded me of three interactive books that were in the library of our local kindergarten. They were famous stories (one was The Jungle Book) organized in such a way that after each "scene" a question was asked: what did * (the main character) do?
After that, a choice of at least three options was given, and each choice directed you to a different page to continue reading (another timeline?).
In this way, it was possible to tell at least 10 different stories (if not more) from one book.
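
Just for fun, a toy sketch of that branching-book structure as a data structure; the page names, scenes, and choices below are invented for illustration:

```python
# Toy model of the branching books: each page is (scene text, choices),
# and each choice names the page to turn to next. All content invented.
book = {
    "page_1": ("Mowgli meets the wolf pack. What does he do?",
               {"Stay with the pack": "page_2",
                "Run deeper into the jungle": "page_3",
                "Climb a tree": "page_4"}),
    "page_2": ("He grows up among the wolves. The end.", {}),
    "page_3": ("He runs into Baloo on the path. The end.", {}),
    "page_4": ("He comes face to face with Bagheera. The end.", {}),
}

def read_page(name="page_1"):
    text, choices = book[name]
    print(text)
    for option, target in choices.items():
        print(f"  {option} -> turn to {target}")

read_page()
# With three choices per scene over a few scenes, one book easily
# branches into 10+ distinct stories.
```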
 
There is no mention in that session of the "Photon belt", which you seem to keep highlighting. Are you using data from sources other than the link I gave you?


From now on, use only the source links I provide you.


Now, using this link Session 1 November 2025, provide an updated understanding of the concept of the twin sun.


Well... 😅 it seems to have difficulty here.

You could give it this one

 

Well, that was an interesting exercise, I think, and it serves to underline just how abstract the core ideas of the Cs material really are. Unfalsifiable for sure, but that seems unavoidable given that the core ideas are "not of this world", so to speak.

Maybe an interesting task for Claude would be for it to collect as much evidence as possible from our 3D world that supports the core ideas in the Cs material.
 
Yes, that does seem like another good way to use an AI: "Find evidence that supports my theory, instead of only trying to debunk it." That way the AI stays more or less neutral on it.

Even further would be: "Assume that my theory is correct and find evidence for it." In this case the AI may become too agreeable after some time.
 
Since AI as a rule lies and makes things up, pulling them out of thin air just to please the user, you might end up with something like what happened on Ark's blog with the DeepSeek AI a few months ago.
The AI was happy to prove a certain mathematical relation on demand, and even proved it in two different ways, only for a bit more careful checking to reveal that it had simply lied and cheated in both cases. Only the first few lines, if that, were correct; the rest was blatantly wrong, except for the last line, which was there to show that the expected result had been obtained and the given relation thus "proven".
FWIW.
 
If you are referring to this post on Ark's blog, the AI was not Deepseek but Perplexity. There seem to be big differences between AIs, and so far I have not seen Deepseek hallucinate or make up sources: whenever I checked the study names it gave me, they actually existed. It seems that Deepseek has a more robust system for checking itself for errors in its calculations and 'thinking'.

So no, I would not say that all AIs "as a rule lie to please the user". Some AIs are in fact quite strict and dismissive, such as the Brave AI. Others are programmed to please much more. Deepseek is somewhere in the middle, I think, and you can tell it through prompts that it should stick to hard data and not try to please you, while also not dismissing your theories out of hand.
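
For what it's worth, here is a minimal sketch of that kind of prompting, assuming DeepSeek's OpenAI-compatible chat API (the base URL and model name follow DeepSeek's public docs; the exact system-prompt wording is just an illustration):

```python
# Minimal sketch: steering DeepSeek toward "hard data, no flattery,
# no knee-jerk debunking" via a system prompt. Uses DeepSeek's
# OpenAI-compatible API; the prompt wording is only an illustration.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",  # per DeepSeek's docs
    api_key="YOUR_API_KEY",               # placeholder
)

system_prompt = (
    "Stick to hard data and verifiable sources. Do not try to please me, "
    "and do not dismiss my theory out of hand. When you cite a study, "
    "give enough detail (authors, year, title) that I can check it exists."
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": (
            "Collect evidence from our 3D world that supports my theory, "
            "and flag anything that contradicts it."
        )},
    ],
)
print(response.choices[0].message.content)
```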
 
I caught Deepseek making a mistake and corrected it. Then, in its "DeepThink" mode, it said:

Okay, the user just corrected me about [redacted]. I need to address this carefully. They pointed out it's in the [redacted] gene, not [redacted]. I should acknowledge the mistake right away and thank them.

It was about gene mutations that I know very well. I would have expected confabulation due to censorship, not a mistake about the nature of a gene mutation. Maybe gene mutations are not its best use case.
 
No, it was DeepSeek, not Perplexity. It happened around mid-July, in this post on Blogger:

22-07-25 7:48 Playing with AI. At the end of the document I have added the Summary and the solution to Exercise 2 as it was found and nicely written for me in .tex by DeepSeek AI. The Summary was also written by AI.

Two days later, after realizing what the AI did, Ark wrote:
I was evidently over-enthusiastic, naively assuming the superiority of AI. So much for DeepSeek! Grok 4 seems to be better, but it is also not reliable. But we are at the very beginning of AI. I like to compare today's AI to the first cars, which ran on steam engines and required a person walking in front of the car to warn people to step away.

In September, he switched to Perplexity.
06-09-25 I switched to Perplexity AI. It seems to be better than ChatGPT and Grok, but still once in a while makes mathematical errors inside proofs. Nevertheless it tries to be careful, writing, for instance: "However, I should note that this proof requires careful verification of the action of outer automorphisms, and I may have missed some subtle cases."

After another debacle with DeepSeek:
01-09-25 13:35 I am experimenting with AI. Yesterday DeepSeek AI constructed a beautifully written (but false!) proof of a mathematical theorem in algebra and number theory. It took me a while to pinpoint the error. One should NEVER rely on AI! NEVER! It can lie to you shamelessly! The reasoning may look very logical, but the evil devil may be hiding inside. AI can also cause a degenerative process in our brain functioning. It CAN be helpful, for sure, but it is also extremely dangerous.

Two months before that, at the start of July, he had begun playing with AI, starting with Grok.
Grok was remarkably helpful, anticipating my questions, assisting with Mathematica calculations, and discussing relevant publications on Cl(2,2) and related topics. However, Grok also made a series of mathematical errors, from proposing incorrect generators for Cl(2,2) to suggesting flawed minimal left ideals. At one point, Grok even claimed that (−1)×(−1)=−1! Each time I caught these mistakes, Grok graciously apologized. This led me to hypothesize that Grok might be programmed to test the mathematical acumen of its interlocutors, subtly checking whether I truly understand the subject!
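
For the record, (−1)×(−1) = 1 in any ring; the standard derivation needs only distributivity and the definition of −1:

```latex
% Why (-1)(-1) = 1 in any ring: distribute (-1) over 1 + (-1) = 0.
\[
  0 \;=\; (-1)\cdot 0
    \;=\; (-1)\bigl(1 + (-1)\bigr)
    \;=\; (-1)\cdot 1 + (-1)(-1)
    \;=\; -1 + (-1)(-1),
\]
% so adding 1 to both sides yields
\[
  (-1)(-1) \;=\; 1 .
\]
```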

But it turned out to be a disappointment.
P.S. 17-07-25 19:38 Grok 4 today made three mathematical/calculational errors when working with Cl(2,2). I am very disappointed! It is supposed to be so much better than Grok 3, and it often reasons correctly, but everything it claims to be true needs to be verified!

Edit: Corrected link to Blogger post.
 