Thank you for this very interesting session!
So it looks like we have a big clue about OPs, and perhaps this video shows a person of this kind:
Something else she said was particularly interesting, at 2 minutes in:
Transcript of that part:
Q: [...] I don't know how you can possibly read without hearing the words.
A: When I read I can see the sentence structure. In my head every sentence has a shape (makes a sine wave with her hand) so you can see the shape of the sentence. Key words will pop out and I will file those away into my concept map. So at the end of reading something I can have the concept map (indicates that it's a visual concept map to her) of the main topics that I read about.
Q: So do you visualize the setting when you're reading a book?
A: No, it's not like images - it's just the words.
Q: So do you like reading fiction books where you have to put yourself in that situation?
A: I never really enjoyed reading, maybe that's why. I can see the story plot.
I posted 3 videos in the ChatGPT thread about how language models work:
Ark decided to play a bit. Here is his experience with ChatGPT yesterday:

Ark: Do all humans have souls?

GPT: The question of whether all humans have souls is a deeply philosophical and spiritual one, and its answer varies widely depending on cultural, religious, and individual beliefs...
Long story short, neural networks work by fitting a curve to a plot of points. Those points could represent concepts, words, paragraphs, or pictures; it doesn't matter. The algorithm finds a function whose curve passes as close as possible to all the points, which then lets it predict the "next point" by extending that curve beyond the known ones. The better the curve fits the points, the better the prediction.
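To make that concrete, here is a minimal sketch in Python (the data points are made up for illustration): fit a simple polynomial to some points, then "predict" the next point by evaluating the fitted function beyond the known range.

```python
# Curve fitting in miniature: hypothetical points that roughly
# follow y = x^2, with a bit of noise added.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.2, 3.9, 9.1, 15.8])

# Find the degree-2 polynomial that minimizes the squared error.
coeffs = np.polyfit(x, y, deg=2)
curve = np.poly1d(coeffs)

# "Predict the next point" by extending the curve past the known x-values.
print(curve(5.0))  # close to 25 if the fit is good
```

The same idea scales up: a language model's "curve" just lives in a space with billions of dimensions instead of two.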
I can't really imagine how a sentence can have a visual shape, but what she said there sure seems to mirror the shape and visual "concept map" described in those videos! This also explains why language models can't come up with anything truly novel. They're just predicting which word should come next according to the curve that fits the words that came before. That's not thinking! It's pattern recognition, and sure, that is useful and could help you connect some dots, but it's just one element of thinking. If all you do is fit curves through concepts, that's very mechanical: you can't come up with new concepts, and you can't think outside the box, because by definition you're just stretching and deforming the box. Useful to do? Yes. But if that were all we did, we'd never invent anything new - technologically, scientifically, or even philosophically.
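Here's a toy illustration of that "predict the next word from past patterns" idea (my own sketch, far simpler than a real language model): a bigram counter that can only ever produce words it has already seen in that position.

```python
# A bigram "language model": pure pattern recognition over word pairs.
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat slept".split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent follower seen so far. Note that it can
    # never produce a word it hasn't already seen after `word` - no novelty.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat"
```

Real models fit a much smoother "curve" over far more context, but the principle of predicting from patterns in what came before is the same.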
I think that's another cool thing about these artificial neural networks we've been building: they can teach us a lot about our own brains - both their usefulness for navigating/"understanding" certain aspects of the physical world, and their limitations (like the lack of true understanding) and why a soul is needed to go beyond that rudimentary functioning. Thinking about this, it makes more and more sense why we have brains, and why the soul isn't just remote-piloting a body on its own. At least part of the brain's function seems to be to handle all the mundane things: keep the physiological processes running smoothly (heart rate, blood sugar/pressure, etc.), scratch itches, duck when a baseball is coming at you (or catch it!), feed yourself, and so on. The soul can then focus on greater, more interesting things, like learning lessons and growing from doing so, and learning how to love and care for others and ourselves.
What seems tricky sometimes is parsing out what is our brain (neural network) and our hormones/chemicals, and what are the influences of the soul. And it is pretty crazy how "functional" our brain alone can be, at least in terms of navigating our 3D world in a basic way. Or maybe that just speaks to the simplicity of 3D compared to higher densities: it doesn't actually take proper intelligence beyond curve fitting to handle basic activities without dying.
Which of course makes me wonder about things like the grays, which are like the OPs of 4D. How autonomous are they when not being piloted by a lizzie soul, I wonder? Can they manage 4D on a neural network alone? I read (I think in the Wave, though I can't seem to find it now) that they're not good at adapting to new situations and are easily confused when things don't go as they expect. In other words, they have their neat little curve fitted to their concept map, and if you act outside of what their function predicts, they literally don't have the capacity to respond as needed. Again, I think it was in the Wave, but wasn't there a story about Ark sitting in a bar, some guy trying to take his wallet, and Ark saying firmly "do we have a problem?" (or something to that effect)? It completely melted that guy's reality and he just left, because he hadn't anticipated that. I could be mis-remembering the details.
One neat conclusion I could draw is that artificial neural networks would allow for some actually useful robots in the future (if it weren't for the cleansing, anyway). But right now the trajectory is to use them to educate kids and basically plug them into everything. This is bad, because their thinking (or lack thereof) will infect the susceptible people who hang on their every word in admiration of their "intelligence", and those people's own thinking will be corrupted as they try to emulate their AI tutors and whatnot. And kids are the most susceptible! These things are tools, and they should be seen as such, and probably not used by anyone who admires them or projects soul qualities onto them.