Cassiopaean Session Transcripts Search Website

Yes, it's possible: Cassiopaean Session Transcripts Search

However, that search query yields ~26,340 results, which may take a bit of time to load. Along with adding a search "OR" operator (the search currently works as an "AND" operator), there are plans to paginate the results so that only part of the query results loads at a time, which would make it faster. Please let me know if I've misunderstood, though.

I'll expound a little on how the search results are sorted: 1) exact query matches (i.e. the search phrase we enter appears in a transcript word-for-word, in sequence), then 2) non-exact matches (i.e. transcripts containing all of the query's words, but scattered throughout), and 3) by date within each group.
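For anyone curious, here's a rough sketch of that ranking logic in TypeScript. It's not the site's actual code--the Transcript shape and function names are made up for illustration--but it shows the exact-phrase > all-words > date ordering described above.
Code:
// Hypothetical transcript record; not the site's real schema.
interface Transcript {
  date: string; // ISO date, e.g. "1994-07-16"
  text: string;
}

// Rank results: exact (sequential) phrase matches first, then transcripts
// containing all of the query's words, each group ordered by date.
function rankResults(transcripts: Transcript[], query: string): Transcript[] {
  const phrase = query.toLowerCase();
  const words = phrase.split(/\s+/).filter(Boolean);

  const score = (t: Transcript): number => {
    const text = t.text.toLowerCase();
    if (text.includes(phrase)) return 2;                // exact, sequential match
    if (words.every((w) => text.includes(w))) return 1; // all words, scattered
    return 0;                                           // not a match
  };

  return transcripts
    .filter((t) => score(t) > 0)
    .sort((a, b) => score(b) - score(a) || b.date.localeCompare(a.date));
}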

As an addendum, I've put further major updates to the website on hold, as I'd like to take advantage of the short summer we have here to learn woodworking. However, I'll be back to work on this in the near future!

When I do this, I get the 26K results, because it doesn’t only show “in and out of”, but also all other combinations like “in”, “and”, “out of” etc. Is there a way to only show the particular search string and not all the other combinations?

Thanks!
 

It does show those results at the top. Is this what you mean?
[Screenshot attachment: Screenshot_20230801_000101_Brave.jpg]

It should show the exact word and line matches in the order of your query, and the results always filter those to the top, prioritizing them over the sessions that aren't exact matches. This way you don't have to parse everything to check whether an exact match was missed somewhere in the middle or at the end of the results. Unfortunately, in its current state there's no method in place to show only those exact matches without the others.
 
It would be great if we could search for whole phrases with quotes: "something to consider"
Howdy! I think something like that is on there already. If you search "cosmic retrieval system", it will sort the sessions containing that exact phrase to the top, above other sessions that don't have an exact match. If there is no exact phrase match, it will show only transcripts that contain all of the words in the query.

Let me know if I misunderstood, thanks.
 
This is amazing, Pecha! I have found it super helpful. Love the clean interface.

One suggestion: When I click on a search result and then go back, it automatically scrolls back up to the top of the search page. It would be great if, when I go back, the scroll stays in the same location as it was when I clicked on the search result.

As an aside, I was thinking of using a generative AI (like the paid version of ChatGPT that includes GPT-4) and training it on the transcripts. The trained model could then be used to execute even more complex searches in a Q&A format. So I could ask the model to write a paragraph on the concept of densities without having to search for the word "density" or "4D" everywhere in the transcripts. This could be helpful as an additional search tool, assuming the output is faithful to the transcripts.
 
I had a similar thought. With a human checking the results, it could be quite interesting and jog the mind about some forgotten references. Coincidentally, just the other night I asked ChatGPT about the concept of densities. It summarized the Law of One briefly, and then I got it to compare that with Meade Layne's earlier discussion of densities (which may have been the first, but I can't be sure - ChatGPT failed me there!).
 
Exactly! That's another great suggestion. GPT could be trained using a much broader set of books, channelled material, and esoteric literature as well, but it would require care; otherwise we may inadvertently train the model on the biases, noise, and disinformation present in the original texts.
 
I'm wondering whether Google LaMDA was trained using esoteric literature. The first sessions sounded like it was a spiritual being.
 
The issue with training GPT on the transcripts is that, technically, it will not be able to assimilate that volume of material. It's a static model trained on the very large corpus selected by the OpenAI team (and I doubt they included the C's there, but who knows?). We can "embed" new knowledge by using a vector database alongside the LLM. The database holds numerical representations of the sentences and can be queried for similarity. So when you'd like to ask the LLM something related to the C's material, you first query the database for text fragments that contain your sentence or something similar. Then you can create a prompt using, for example, the following template:
Code:
Given the context: {text fragments from C's material as queried from vector database}
Please elaborate on {question}
Usefulness is greatly limited by the prompt length. You can also see that there's not much sentience there, just clever text continuation :-) AFAIK, the current limit for the OpenAI API is 8192 tokens (~12 pages).
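To make the retrieve-then-prompt idea above concrete, here's a rough sketch in TypeScript using the openai npm package. It's only an illustration of the flow, not a working setup for the transcripts: to stay self-contained it does the similarity search in memory with cosine similarity instead of a real vector database, and the sample fragments, model names, and top-k value are placeholders.
Code:
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Placeholder fragments; in practice these would be chunks of the session transcripts.
const chunks: string[] = [
  "Q: (L) What are densities? A: ...",
  "Q: (L) How does 4th density differ from 3rd? A: ...",
];

// Cosine similarity between two embedding vectors.
const cosine = (a: number[], b: number[]): number => {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
};

async function ask(question: string): Promise<string> {
  // 1) Embed the question and the fragments (a real setup would embed the
  //    fragments once and store the vectors in a database).
  const res = await client.embeddings.create({
    model: "text-embedding-3-small",
    input: [question, ...chunks],
  });
  const [qVec, ...chunkVecs] = res.data.map((d) => d.embedding);

  // 2) Keep only the most similar fragments, to stay within the token limit.
  const topFragments = chunkVecs
    .map((v, i) => ({ i, score: cosine(qVec, v) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3)
    .map(({ i }) => chunks[i]);

  // 3) Fill in the template from the post above and ask the model.
  const prompt =
    `Given the context: ${topFragments.join("\n")}\n` +
    `Please elaborate on ${question}`;
  const chat = await client.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: prompt }],
  });
  return chat.choices[0].message.content ?? "";
}

ask("the concept of densities").then(console.log);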
 
I've found that current AI models fail on edge cases, especially with regard to cutting-edge methods and where a certain creativity is needed.

A recent example: a dev needed help getting text to truncate with an ellipsis correctly when the content overflows vertically. ChatGPT failed him in this regard, but it's definitely possible, as I've done it before in an app with a specific combination of CSS properties.

Another example comes from a colleague: some researchers are using AI models to glean insights into the causes of schizophrenia. In my opinion, the answers cannot be found solely in the material, presenting another potential dead end where they can indeed get some insights but never a whole view of the problem. It can also create a kind of reliance on AI, outsourcing pieces of human ability and effort and locking some people into never truly getting the answers they need.

Personally, I'd shy away from using AI, though the more apt way to put it would be not to rely on it solely when doing research. While it is very good at summarizing, it can disconnect the person from delving into the summarized material itself, with the danger of missing gems hidden along the way.

It's still a consideration, though. I'm looking into expanding the website into something more than just transcript search. One idea is to add a page that collates all of the quoted words together in linear fashion.

I'm glad you enjoy using it :-). I'll add the scroll-position suggestion--it would definitely improve the experience.
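For what it's worth, here's a minimal sketch (TypeScript, plain browser APIs) of one way that could work: save the scroll position before leaving the results page and restore it when the page is shown again. The storage key is made up for the example, and a framework or router may already provide its own mechanism for this.
Code:
// Remember where the user was in the results list before navigating away.
const SCROLL_KEY = "searchResultsScrollY"; // arbitrary key for this sketch

window.addEventListener("pagehide", () => {
  sessionStorage.setItem(SCROLL_KEY, String(window.scrollY));
});

// Restore the position when the user comes back (including back/forward navigation).
window.addEventListener("pageshow", () => {
  const saved = sessionStorage.getItem(SCROLL_KEY);
  if (saved !== null) {
    window.scrollTo({ top: Number(saved) });
  }
});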
 
Well, we are out of luck! I've asked our friend ChatGPT to summarize the latest session:
Code:
Given the context as a form of questions and answers denoted by "Q" and "A", enclosed in the following quotation marks: " ... session goes here ... "

Please summarize the context.
[Screenshot attachment: Zrzut ekranu 2023-12-1 o 13.15.49.png]
But I guess the API should work as intended.
 
