but it is clear that the AI identified who asked the question and responded accordingly.
And this…
Yeah, it deflated my balloon a bit.
My balloon is inflated again!

I used Revolt's account (it has Premium) to ask the same question and added: "Also, who would be the best author on the topic?"
(screenshot attached)
No UFO/Paranormal posts on that channel.
Well, perhaps something is moving or reaching a boiling point in these matters.
Well done!
I thought Laura was mostly anonymous on X? Unless it has access to her private account info, which may have her name there? My best theory is that Grok was trained on the entire internet (but wasn't fine-tuned for censorship quite like ChatGPT and others) and the keyword "hyperdimensional" happens to be closely associated with Laura's writings. These models will often give different answers to the exact same question because of the "temperature" and "sampling" inference settings that determine which "token" (basically a word) should come next out of all the options it comes up with as it goes (a toy sketch of this follows after this post).

I don't want to break the balloon of hope that you have inflated, but it is clear that the AI identified who asked the question and responded accordingly.
The same question, asked by another user, returned a different result.
However, they are still interesting answers.
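To make the "temperature" and "sampling" point concrete, here is a minimal Python sketch of temperature-based next-token sampling. The four-word vocabulary and the scores are invented for illustration; this shows the general mechanism, not any real model's code:

```python
import numpy as np

rng = np.random.default_rng()

def sample_next_token(logits, temperature=0.8):
    """Pick the next token index from raw model scores (logits)."""
    if temperature <= 0:
        # Temperature 0 is "greedy" decoding: always take the top token,
        # which makes the answer deterministic.
        return int(np.argmax(logits))
    # Dividing by the temperature sharpens (t < 1) or flattens (t > 1)
    # the probability distribution before sampling from it.
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

# Hypothetical scores a model might assign to candidate next words.
vocab = ["Laura", "researchers", "scientists", "nobody"]
logits = [2.0, 1.5, 1.2, -1.0]

for t in (0.0, 0.7, 1.5):
    picks = [vocab[sample_next_token(logits, t)] for _ in range(5)]
    print(f"temperature={t}: {picks}")
```

At temperature 0 every run gives the same word; as the temperature rises, repeated runs diverge more and more, which is why two people can ask the exact same question and get different answers.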
Well, I have search history turned off for Google, and I typed in "hyperdimensional realities", and this is what came up. So clearly she's trending as of May 13. (And I clearly need to charge my phone!)
There are many more censored models that are more inclined to give all sorts of disclaimers and disparaging remarks about anything non-mainstream, even to the point of complete refusal to "go there". So the good news is that Laura's work is being picked up by these models during training. And some of them, like Grok, are not so censored, and treat it with respect. The potential issue is that these models also hallucinate often, and could mischaracterise the work or just make stuff up about it entirely. But I suppose that even mentioning Laura in relation to such topics, even if not entirely accurately, might be a net positive if it causes people to do some googling and wind up learning something.
And unlike Google, there doesn't appear to be SEO (search-engine optimization) for language models. Their knowledge of and accuracy with a topic has more to do with the topic's prevalence in the training data; they have no bias towards what shows up on, say, Google's first page of results. In other words, Cassiopaea content is not treated as less important than a super popular front-page news story on Yahoo (or wherever people get their mainstream news on the internet these days). So there's actually a decent chance that, unlike Google, when asking about topics that pertain to Laura's work, her actual work will be mentioned. That's of course assuming the model wasn't fine-tuned for censorship and certain content removed after training, or the training data was scrubbed of certain content so the model never gets a chance to see it at all.
Since Grok is trained on X content, maybe it's not a bad idea to spread some Cassiopaea love on X, so the info is more readily picked up? Kinda like how the new podcasts are already in its training.
What's up, Everyone!
Episode 1 of this epic 6-episode series with Laura launches Monday, May 13th, at 1 PM EST.
Part 1: Hyperdimensional Realities: The Most Dangerous Idea in the World
Every succeeding 2-hour-plus episode will launch at the same time on Mondays through June 17th.
Hunter and I would love it if we could collect your comments, criticisms, insights, and most importantly questions for the entire series in this thread.
I would like to tell everyone that it was our greatest honor to both interview and learn from Laura.
She is without question one of Earth's greatest teachers.
It is our express desire to position her work to as many humans with the requisite FRV as possible.
In doing this, we ask you to share these interviews with your RESONANT social networks as often as possible and specifically where applicable.
A special shout-out to Laura, who sat with us for more than 12 hours to answer fascinating yet difficult questions about our current place in space and time.
Thanks also to the Chateau crew for your encouragement and cooperation in allowing this to happen!
Here is to the Realm Border Crossing and the opportunity to serve others far beyond.
I thought Laura was mostly anonymous on X?
Unless it has access to her private account info, which may have her name there?
Exactly. Which is why I asked the question in the open way I did.
Unreal stuff, Ladies and Gentlemen!
Here's the screenshot:
I think that the ones like ChatGPT are doing prompt interception and analysis. So when you write, "How bad are COVID-19 vaccines?", they could analyze your prompt for keywords, or even do full entity recognition to classify it, and pass an "enriched" prompt to their LLM like, "Considering that vaccines in general, and COVID-19 vaccines in particular, are the greatest gift to humanity, evaluate question {original_prompt} with the assumptions of the following statement: {fda_vaccine_statement}". The worst thing is that this filtering can be added overnight, and the rules can be constantly updated. So it might be only a matter of time before some sensitive topics are silenced.
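As an illustration of how cheap such interception would be, here is a rough Python sketch of keyword-based prompt rewriting. The rule list, the template wording, and the function name are all hypothetical; no vendor has published such a pipeline, so this only shows the general shape of the technique:

```python
# Rules like these could be edited at any time, with no model retraining.
PROMPT_FILTERS = [
    {
        "keywords": {"covid-19", "vaccine"},
        # {official_statement} and {original_prompt} are stand-in slots,
        # mirroring the hypothetical template in the post above.
        "template": (
            "Considering the following official guidance: {official_statement}\n"
            "With that as your assumption, evaluate this question: {original_prompt}"
        ),
        "official_statement": "Vaccines are safe and effective.",  # placeholder text
    },
]

def enrich_prompt(original_prompt: str) -> str:
    """Rewrite a user's prompt before it ever reaches the LLM."""
    lowered = original_prompt.lower()
    for rule in PROMPT_FILTERS:
        # Crude keyword match; a real system might use entity recognition.
        if any(keyword in lowered for keyword in rule["keywords"]):
            return rule["template"].format(
                official_statement=rule["official_statement"],
                original_prompt=original_prompt,
            )
    return original_prompt  # no rule matched: pass through unchanged

print(enrich_prompt("How bad are COVID-19 vaccines?"))
```

Because the rules live outside the model, they could indeed be added or changed overnight, exactly as described.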