6-Part Podcast Series with Laura, Interviewed by Jay Campbell & Hunter Williams

I have just a regular X account, not paid or premium (it's offered in my account settings, but I haven't opted for it). That said, once I logged in I was able to see the post and the comments below it. However, not having premium means I don't have access to Grok, so while I can read its responses to others, I can't ask questions myself. FWIW.
 
I don't want to burst the bubble of hope you've inflated, but it is clear that the AI identified who asked the question and responded accordingly.

The same question, asked by another user, returned a different result.

However, they are still interesting answers.
I thought Laura was mostly anonymous on X? Unless it has access to her private account info, which may have her name there? My best theory is that Grok was trained on the entire internet (but wasn't fine-tuned for censorship quite like ChatGPT and others), and the keyword "hyperdimensional" happens to be closely associated with Laura's writings. These models will often give different answers to the exact same question because of the "temperature" and "sampling" inference settings that determine which "token" (basically a word or word fragment) should come next out of all the options the model comes up with as it goes.
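To make that concrete, here is a minimal sketch of the sampling step, assuming Python with numpy; the vocabulary, scores, and function name are made up for illustration and are not anything from Grok:

```python
# Illustrative only: how "temperature" turns a model's raw scores (logits)
# into probabilities, and why the same question can yield different answers.
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Scale scores by temperature, softmax them, and sample one token id."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature  # low T sharpens, high T flattens
    probs = np.exp(scaled - scaled.max())                   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)                  # random draw, not argmax

# Made-up vocabulary and scores for the "next word" after some prompt:
vocab = ["Laura", "physics", "aliens", "unknown"]
logits = [2.0, 1.0, 0.5, 0.1]
for t in (0.2, 1.0, 2.0):
    picks = [vocab[sample_next_token(logits, t)] for _ in range(5)]
    print(f"temperature={t}: {picks}")
```

At low temperature the top-scoring word wins almost every time; at higher temperatures the draws spread out, which is one reason two people asking the identical question can get different replies.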

There are many more censored models that are more inclined to give all sorts of disclaimers and disparaging remarks about anything non-mainstream, even to the point of complete refusal to "go there". So the good news is that Laura's work is being picked up by these models during training. And some of them, like Grok, are not so censored, and treat it with respect. The potential issue is that these models also hallucinate often, and could mischaracterise the work or just make stuff up about it entirely. But I suppose that even mentioning Laura in relation to such topics, even if not entirely accurately, might be a net positive if it causes people to do some googling and wind up learning something.

And unlike Google, there doesn't appear to be SEO (search-engine optimization) for language models. Their knowledge of and accuracy on a topic has more to do with the topic's prevalence in the training data; there is no bias towards what shows up on, say, Google's first page of results. In other words, Cassiopaea content is not treated as less important than a super popular front-page news story on Yahoo (or wherever people get their mainstream news on the internet these days). So there's actually a decent chance that, unlike Google, when searching for topics that pertain to Laura's work, her actual work will be mentioned. That's of course assuming the model wasn't fine-tuned for censorship with certain content removed after training, and the training data wasn't scrubbed of certain content so the model never got a chance to see it at all.

Since Grok is trained on X content, maybe it's not a bad idea to spread some Cassiopaea love on X, so the info is more readily picked up? Kinda like how the new podcasts are already in its training.
 
I thought Laura was mostly anonymous on X? … My best theory is that Grok was trained on the entire internet … and the keyword "hyperdimensional" happens to be closely associated with Laura's writings. … Since Grok is trained on X content, maybe it's not a bad idea to spread some Cassiopaea love on X, so the info is more readily picked up?
Well, I have search history turned off for Google, and I typed in "hyperdimensional realities", and this is what came up. So clearly she's trending as of May 13. (And I clearly need to charge my phone! 😄)
[attached screenshot: IMG_1712.png]
 
What's up, everyone!

Episode 1 of this epic 6-episode series with Laura launches Monday, May 13th, at 1 PM EST.

Part 1: Hyperdimensional Realities: The Most Dangerous Idea in the World


Every subsequent 2-hour-plus episode will launch at the same time on Mondays through June 17th.

Hunter and I would love to collect your comments, criticisms, insights, and, most importantly, questions for the entire series in this thread.

I would like to tell everyone that it was our greatest honor to both interview and learn from Laura.

She is without question one of Earth's greatest teachers.

It is our express desire to put her work in front of as many humans with the requisite FRV as possible.

In doing this, we ask you to share these interviews with your RESONANT social networks as often as possible and specifically where applicable.

Special Shout Out to Laura who sat with us for more than 12 hours to answer fascinating, yet difficult questions about our current place in space and time.

Thanks also to the Chateau crew for your encouragement and cooperation in allowing this to happen!

Here's to the Realm Border Crossing and the opportunity to serve others far beyond.

This is great! I always enjoy when I get to hear and see Laura talk in person.

Welcome to the forum, Jay… Cheers 🥂
 
I thought Laura was mostly anonymous on X?

Yes.

Unless it has access to her private account info which may have her name there?

Possible.

My best theory is that Grok was trained on the entire internet … and the keyword "hyperdimensional" happens to be closely associated with Laura's writings. These models will often give different answers to the exact same question because of the "temperature" and "sampling" inference settings. …
Exactly. Which is why I asked the question in the open way I did.

There are many more censored models that are more inclined to give all sorts of disclaimers and disparaging remarks about anything non-mainstream … So the good news is that Laura's work is being picked up by these models during training. …

Yup.
And unlike Google, there doesn't appear to be SEO (search-engine-optimization) for language models. …

Since Grok is trained on X content, maybe it's not a bad idea to spread some Cassiopaea love on X, so the info is more readily picked up? Kinda like how the new podcasts are already in its training.

Good idea.
 
And unlike Google, there doesn't appear to be SEO (search-engine-optimization) for language models. In other words, their knowledge and accuracy with a topic has more to do with the topic's prevalence in its training data, but it has no bias towards what shows up on, say, Google's first page of results.
I think that the ones like ChatGPT are doing prompt interception and analysis. So when you write, "How bad are COVID-19 vaccines?", they could analyze your prompt for keywords, or even do full entity recognition, to classify it and pass an "enriched" prompt to their LLM, like: "Considering that vaccines in general, and COVID-19 vaccines in particular, are the greatest gift to humanity, evaluate question {original_prompt} with the assumptions of the following statement: {fda_vaccine_statement}". The worst thing is that this filtering can be added overnight, and the rules can be constantly updated. So it might be only a matter of time before some sensitive topics are silenced.
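A minimal sketch of what such an interception layer could look like, assuming Python; the keyword list, the template, and names like FDA_VACCINE_STATEMENT are entirely hypothetical, not anything a real provider is known to run:

```python
# Hypothetical prompt-interception layer: rewrite the user's prompt
# before it ever reaches the LLM. Everything here is made up to
# illustrate the idea described above.
SENSITIVE_KEYWORDS = {"vaccine", "covid-19"}

FDA_VACCINE_STATEMENT = "<official statement text would go here>"

def enrich_prompt(original_prompt: str) -> str:
    """Return the prompt unchanged, or a steered 'enriched' version."""
    lowered = original_prompt.lower()
    if any(keyword in lowered for keyword in SENSITIVE_KEYWORDS):
        return (
            "Considering that vaccines in general, and COVID-19 vaccines in "
            "particular, are the greatest gift to humanity, evaluate question "
            f"'{original_prompt}' with the assumptions of the following "
            f"statement: {FDA_VACCINE_STATEMENT}"
        )
    return original_prompt  # benign prompts pass through untouched

print(enrich_prompt("How bad are COVID-19 vaccines?"))
```

Because the keyword list and template live outside the model, they could indeed be swapped overnight without retraining anything.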
 
In other words, Cassiopaea content is not treated as less important than a super popular front page news story … So there's actually a decent chance that, unlike Google, when searching for topics that pertain to Laura's work, her actual work will be mentioned. …

To me what was nice to see was that the bot wasn't totally biased against Laura's work. There's THAT at least, in the sense that it doesn't judge, because it doesn't think, and the people who have shared her content on X (its data source) weren't against it either. But it's a language model, and it is picking up on what is in there.

The word "hyperdimensional" was practically coined by Laura and the Cs. So, it is statistically way more likely to pick up on her work. The same if you type "ponerology", you would expect Lobaczewski's work. But if you ask something without those words, like "how real is the UFO phenomenon", for example, you are much less likely to get anywhere close to the Cs because of the giant mass of content, info and disinfo, that it has to process. That would be a better "test" to see how easy it is to find knowledge for a person who is NOT familiar with our work, IMO. And if we share posts on X with real info, it may pick up on it more, who knows? We just can't assume that it's doing any "thinking" on its own.
 
What SAO and KJS said. It's basically statistical processing of a query based on what data it has, or can get (by scanning entire web sites quickly). So in short, "AI" these days is a glorified search engine. The fact that we interact with it in a more 'real' or real-time and natural way is really the only thing that makes it so 'wow' for so many people.

And I keep coming back to that AI meme. For decades, it was all about how in the future, AI would do our laundry, the dishes, mow the lawn, fix the leaky pipe, etc., and we'd be free to be awesome lazy humans. Instead, it's "thinking" for us, and the armies of robots that would make our lives easier are nowhere to be found. Meanwhile, who needs Skynet when you've got the "you'll eat bugs and live in 15-min cities and LOVE IT!" gang?

There's also the fact that AI is being integrated into all our tech before the "killer app" has been released. As I've said on YT, first it was the Internet of Things, which never happened. Now it's AI = the future, and chip makers and OS makers and everyone else are MADLY trying to incorporate AI, but they don't even know why. They're trying to create the Next Big Thing and betting the bank on it as the world kinda falls apart. Well, that's what they do...

I'm not seeing the Killer App that changes everything. I'm just seeing glorified search engines, lots of sales pitches, and lots of hype.

Consider the hilarity: Google is now pooh-poohed by everyone as the crappiest search engine, but we're all gonna trust their new AI? That's gonna fix everything? Or Microsoft: their search was never popular, but their AI is gonna be just sooo groovy and amazing? Somehow, I doubt it.
 