Cassiopaea and ChatGPT

ChatGPT is not allowed to go to certain websites. I tried everything I could think of to get it to go to cass forum and retrieve the transcripts. It told me it was not allowed to. Same for going to sott.net to retrieve my articles. It is allowed to go to certain restricted websites to get some restricted information, but it appears to have a list of 'no-no' sites that it is never allowed to visit. When I asked an open question about sott.net, it gave me a review of SOTT as being a "conspiracy" site.
That's so interesting, because other sites that aren't even AI chatbots seem to have no issue using the forum as a source. A little while ago I was translating one of the sessions from English to Spanish, so I went to SpanishDict.com to find a translation of the word "bleedthrough", and I was surprised to see that among the usage examples there were bits of sessions with the C's.

See here: Check out the translation for "bleedthrough" on SpanishDictionary.com!
 
ChatGPT is not allowed to go to certain websites. I tried everything I could think of to get it to go to cass forum and retrieve the transcripts. It told me it was not allowed to.
Maybe the open source AI versions that can be installed separately and configured freely can become more useful than ChatGPT. Though I have no idea how much computing power that would require or how the open source projects currently compare to ChatGPT.
 
I don't use ChatGPT much, to be honest, but I do use Midjourney and some of the text-to-video stuff, mainly to promote my music on social media (and because it's part of my job to be up to scratch on the creative uses of the new technology). For me it works well because I can "create" interesting images using my own prompts and ideas much more quickly than doing an actual photo shoot. I understand that this is a double-edged sword, because it's so much more immediate there is so much more content online now, and working with real people is more rewarding, but I think if you use it with intent, to try and integrate meaning into the images as you do with taking a photo or making a video, you can create meaningful and unique images.

I have been using ChatGPT to "read" my tarot spread, which I have been experimenting with of late. I pull my cards and look at them and try to feel and use my intuition to see what they are "saying" (I use 3 cards and the one from the bottom of the deck, and before the read I do a quick meditation and ask my STO spirit guides to show me the cards I need to see today; I am not asking for a prediction, but to show me what I need to know today, just fyi). Then I put the cards into ChatGPT, and now also Perplexity, to see what they say. Both are pretty similar: they tend to use the classical representational meanings of each card and then give a combined message of what it could mean. Obviously my own interpretation is much more personal, but I do think the "dry and mechanical" answers the LLMs give are useful for learning what the cards traditionally represent, and sometimes they give me things that I hadn't thought of.

I think experimenting with this stuff can be useful, as long as you don't take it as gospel (which we should never do anyway).

Anyway, that's just my experience with "AI". I agree with Scottie and others that it's likely that this stuff is a bit of a shiny new toy/misdirection and the real stuff going on behind closed doors is much more sophisticated and has a very different purpose.
 
I am currently watching NewsReal and I thought it might be interesting to see what Perplexity and ChatGPT say about it. I asked them to summarise the channel ….

Perplexity -

NewsReal with Joe & Niall is an independent, viewer-supported podcast that features analysis and discussion of current events and global trends. The channel is hosted by Joe and Niall and is available on multiple platforms, including YouTube, altCensored, Rumble, and Apple Podcasts.

Key Characteristics

  1. Independent and Viewer-Supported:
    • The podcast operates independently from mainstream media organizations, relying on viewer support for its operations. This independence allows them to present views and analyses that may not align with mainstream narratives.
  2. Critical and Analytical Approach:
    • The content focuses on critical analysis and discussion of significant global events and trends. This suggests a deeper dive into topics compared to the often brief coverage by traditional media.
  3. Alternative Platforms:
    • In addition to YouTube, the podcast is also available on altCensored and Rumble, platforms known for hosting content that may be censored or less prominently featured on mainstream sites. This indicates a willingness to explore and discuss controversial or less-covered topics.
  4. Global Significance:
    • The discussions aim to cover events and trends that have a global impact, providing a broader perspective on issues that might be underreported or differently framed by traditional media.

Differences from Traditional Media

  1. Narrative Independence:
    • Unlike traditional media, which may be influenced by corporate or political affiliations, NewsReal's independent status allows for potentially unbiased and varied viewpoints.
  2. In-Depth Analysis:
    • The format of a podcast allows for more extended discussions and in-depth analysis, contrasting with the often time-constrained segments typical of traditional news outlets.
  3. Platform Diversity:
    • By being available on platforms like altCensored and Rumble, NewsReal can reach audiences who seek content outside the mainstream, potentially offering viewpoints that are less filtered or censored.
  4. Viewer Engagement:
    • Being viewer-supported means there is likely a closer relationship and more direct engagement with the audience, which can influence the content and topics covered.
In summary, NewsReal with Joe & Niall offers a platform for in-depth, independent analysis of global events, differing from traditional media through its critical approach, diverse platforms, and closer viewer engagement.


ChatGPT -

The YouTube channel NewsReal with Joe & Niall is an independent, viewer-supported podcast that offers analysis and discussion of current events and global trends. The show is hosted by Joe Quinn and Niall Bradley, who are also editors and essayists for the alternative news site SOTT.net (Signs of the Times). The channel, which was created in 2018, primarily focuses on dissecting political and social issues, often from a skeptical or contrarian perspective.
The content of the channel tends to challenge mainstream narratives and often delves into topics related to conspiracy theories, geopolitics, and media criticism. The discussions are generally aimed at providing a deeper context to current events, suggesting that there are often underlying factors at play that are not typically covered by mainstream media.
The show initially evolved as a rebranding of an earlier program, "Behind the Headlines," and has been adapted to better suit the format and audience of YouTube. The creators have made efforts to improve production quality and expand their reach, though they still operate on a relatively small scale compared to larger media outlets (YouTube) (altCensored.com) (Cassiopaea).
For more detailed insights, you can visit their YouTube channel.

I expected there to be a bias, especially from ChatGPT, but it seems pretty down the line!?
 
Problem is, AI is remarkably recalcitrant at following instructions and producing honest data. It IS a faulty system. And the way it is faulty is extremely worrying though I can't quite articulate exactly what I think is going on and why.
My interpretation here would be that LLMs are trained on vast amounts of public data, which is where they form opinions about how the world is, what "makes sense" to people, and what sort of content they want to receive. So if one then feeds it material outside of mainstream accepted thought, then a) the AI is less well trained in those areas and b) its trained-in biases may cause it to 'normalize' its output in the direction of the majority.
 
I am currently watching NewsReal and I thought it might be interesting to see what Perplexity and ChatGPT say about it. I asked them to summarise the channel …. [Perplexity and ChatGPT summaries quoted in full above]

I expected there to be a bias, especially from ChatGPT, but it seems pretty down the line!?
I find Perplexity clearer and more descriptive, enumerating short statements in small paragraphs. This presentation, which might be a sign of a less 'mature' or entry-level version, also comes across as more relatable than ChatGPT, which tends to give global and seemingly 'more mature' statements in stuffy paragraphs and long phrases that could be vexing for some?

Different styles and different ages of the developers I guess. GenZ vs GenX. LOL.
 
but I think if you use it with intent, to try and integrate meaning into the images as you do with taking a photo or making a video, you can create meaningful and unique images.
I tend to agree with you here, although what I find when AI is being used is that you can't connect with another human being on the other end of the art piece; it makes the experience fleeting and temporary.

The creators have made efforts to improve production quality and expand their reach, though they still operate on a relatively small scale compared to larger media outlets
lol.. this sounds so well written yet so empty, you could say that about almost any YT channel, most probably.
 
I tend to agree with you here, although what I find when AI is being used is that you can't connect with another human being on the other end of the art piece; it makes the experience fleeting and temporary.

I would tend to agree; I think of it more as content rather than art, if that makes sense. I think the fleeting thing can also be said about actual art too, more so because we are so inundated with so much choice, but I take your point, and I feel as if the AI stuff is more prone to this.

lol.. this sounds so well written yet so empty, you could say that about almost any YT channel, most probably.

I think empty is the perfect word for it. If the LLMs didn't have bias then I wouldn't have a problem with it, as it would have no "opinion" and would just give you a cold and "factual" account of what was asked and let you colour it with your own perceptions!?
 
I think empty is the perfect word for it. If the LLMs didn't have bias then I wouldn't have a problem with it, as it would have no "opinion" and would just give you a cold and "factual" account of what was asked and let you colour it with your own perceptions!?

The problem runs deeper than bias though: first, it isn't designed to give "factual" information; it just uses probability based on existing material to "sound good". This means that it can deliver factual information for certain things because that's what "sounds best", but as we know, it often messes up even the simplest things.
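To illustrate what "using probability to sound good" means, here is a toy sketch (purely illustrative, with invented numbers; no real model works at the level of whole words like this): the model just samples the next piece of text from a learned probability distribution, with no notion of whether the result is true.

```python
import random

# Toy "language model": continuation probabilities for one fixed context.
# The numbers are invented purely for illustration.
next_word_probs = {
    "Paris": 0.80,      # plausible and happens to be true
    "Lyon": 0.15,       # plausible but false
    "a banana": 0.05,   # implausible
}

context = "The capital of France is"

# The model samples whatever "sounds good" given its training data;
# truth never enters the calculation, only probability.
words = list(next_word_probs.keys())
weights = list(next_word_probs.values())
print(context, random.choices(words, weights=weights)[0])
```

Most of the time it says "Paris", which happens to be correct, but nothing in the mechanism distinguishes a correct answer from a merely probable one.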

Second, there is no such thing as an unbiased response. The only question is: what are your design goals when you come up with a system like that? With so-called "AI", the goal seems to be to give us a toy, which simulates intelligence in a very impressive way. Which is the reason it is so useless in many ways.

Hence my suspicion that LLMs are simply the wrong approach. They might be useful for some things, but these are very limited. Again, what I would want is Google at its peak, but smarter, able to detect relatively complex input, giving plain text output, and only direct quotes from sources. Wonder why they blocked that natural development.

I had an exchange with someone who knows this stuff the other day and he said that it's apparently extremely difficult to interface an LLM with indexed search. Another reason why LLMs may be a bad idea to begin with! Or at least the way it's approached for public consumption.
 
Maybe the open source AI versions that can be installed separately and configured freely can become more useful than ChatGPT. Though I have no idea how much computing power that would require or how the open source projects currently compare to ChatGPT.
I apologize in advance for the length of this post, but I tried to condense it without excluding important info for those interested in the topic.

You can pretty easily run open-source AI (LLMs) on your personal computer these days, and it's actually very simple for anyone to do and play around with. It will run much faster and better if you have a good graphics card (NVIDIA or AMD), but it works just fine on a regular CPU and RAM, just slower.

TL;DR - here's the super basic version for those who just want to do stuff and don't want to read anything.

1) Download koboldcpp.exe from here: Releases · LostRuins/koboldcpp
2) Download Tiger-Gemma-9B-v1-Q4_K_M.gguf from here: bartowski/Tiger-Gemma-9B-v1-GGUF at main

Put both of them in the same folder.

3) Run koboldcpp.exe, click Browse in the window that comes up, point to the Tiger-Gemma-9B-v1-Q4_K_M.gguf file, and click "Launch".
This will open a new browser tab where you can talk to the model.
4) Before talking to it - click Settings, change "Usage Mode" to "Instruct Mode" and change "Instruct Tag Preset" to "Gemma 2"

Try talking to it now.

If you had problems loading the model because you didn't have enough RAM or something, try a smaller model:
Download - gemma-2-2b-it-Q6_K.gguf

Once again, put it in the same folder as koboldcpp.exe, run the .exe, point to the model file, click Launch, and wait for the browser tab to open. Same settings as before. This one is small enough to work on a potato of a computer. (If you'd rather drive the model from a script once it's running, see the sketch just below.)
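For those who prefer scripting, koboldcpp also exposes a local API once a model is loaded, so you can talk to it from code instead of the browser tab. The sketch below is from memory and rests on assumptions (the default port 5001 and the /api/v1/generate endpoint of its KoboldAI-compatible API, plus the shape of the response), so check the koboldcpp README for the exact details:

```python
import json
import urllib.request

# Assumed defaults: koboldcpp usually serves a KoboldAI-compatible API locally.
# The port and endpoint below are my recollection - check the koboldcpp README.
API_URL = "http://localhost:5001/api/v1/generate"

payload = {
    "prompt": "Explain in one sentence what a GGUF file is.",
    "max_length": 120,     # number of tokens to generate
    "temperature": 0.7,
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

# The KoboldAI-style API returns its text under results[0].text (assumption).
print(reply["results"][0]["text"])
```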

If you want to understand what the heck you're doing and why you're doing it - read on!

There are several ways people run LLMs on their personal machines, but I will focus on just one, arguably the most popular and simplest method: using GGUFs, which are quantized versions of the models. First a quick primer on what that is and why people do it that way.

LLMs require a LOT of memory and memory bandwidth to run. The typical way people would run these is using Python with the transformers library. However, LLMs are huge, and that would often require a machine beyond most people's budgets. So the open-source community came up with ways to "quantize" the models - think of it as a lossy compression. LLMs have something called weights, so the community takes the original float-32 (a level of numerical precision) weights and reduces their precision to float-16, int-8, int-4, and even lower bitrates. Oddly enough, this only reduces the model's intelligence/accuracy very slightly (until about int-4, after which it kinda tanks for smaller models), but it greatly reduces the hardware requirements to run the damn things. These days you can run the models on a Raspberry Pi, or your old laptop from 2008. It will be a smaller model, but it will run!
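To give a feel for why quantization matters, here's some back-of-the-envelope arithmetic for a 9-billion-parameter model (the mapping of quant names to bytes is approximate, and real usage is a bit higher once you add context and runtime overhead):

```python
# Rough memory needed just to hold the weights of a 9-billion-parameter model
# at different precisions. Treat these as ballpark lower bounds.
params = 9e9

bytes_per_weight = {
    "float-32 (original)": 4.0,
    "float-16": 2.0,
    "int-8 (roughly Q8)": 1.0,
    "int-4 (roughly Q4)": 0.5,
}

for precision, nbytes in bytes_per_weight.items():
    gigabytes = params * nbytes / 1e9
    print(f"{precision:22s} ~{gigabytes:5.1f} GB for the weights alone")

# ~36 GB at float-32 down to ~4.5 GB at int-4: the difference between
# "needs a workstation" and "runs on an ordinary laptop".
```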

With that out of the way, here's how you can do it yourself with no technical knowledge required using completely open-source tools (and no installation or programming involved).

First you need a client that can run the GGUF models. We will grab the models themselves in a minute, but a really good and popular open-source client designed to run GGUF quantized models is this one:


As of this post, 1.72 is the latest version (but always grab the latest available, and update it once in a while, as newly released models will require the client to be updated to run them). There is no installation - it's just a single .exe file that runs out of the box.

For Windows, one of these 4 should probably work depending on your GPU and CPU:

koboldcpp_cu12.exe
koboldcpp.exe
koboldcpp_nocuda.exe
koboldcpp_oldcpu.exe

The instructions for which one to choose are at the above link as well, but here they are:
To use, download and run the koboldcpp.exe, which is a one-file pyinstaller.
If you don't need CUDA, you can use koboldcpp_nocuda.exe which is much smaller.
If you have an Nvidia GPU, but use an old CPU and koboldcpp.exe does not work, try koboldcpp_oldcpu.exe
If you have a newer Nvidia GPU, you can use the CUDA 12 version koboldcpp_cu12.exe (much larger, slightly faster).
If you're using Linux, select the appropriate Linux binary file instead (not exe).
If you're using AMD, you can try koboldcpp_rocm at YellowRoseCx's fork here

Ok you got the .exe file, now what?

Now let's talk models!

First of all, there is no such thing as an "open-source" LLM. They call them that, but they're really just open-weights. To be properly open-source you'd need the training data to be made available, as well as the code for the model architecture, and a few other things like fine-tuning data etc. - so that anyone can basically reproduce your work and train it the way you did, from scratch, on the same data. They don't give us that; they just release the weights of the model so we can use it privately on our devices, but not re-create it ourselves.

Second, no one in the community trains these things from scratch anyway. They cost millions of dollars (usually tens or hundreds of millions, with the latest runs around 1 billion) to train, and so far only a handful of companies have been able to do this well - and fewer still release the weights/models to the open-source community. The models come with licenses, like the MIT license, which allows any and all uses, including commercial. Some models have slightly more restrictive licenses, but the restrictions only apply to commercial use of the models. None of that really matters for personal use, so don't worry about it.

Some models are from the US, others are made by Chinese companies, and there's a really good French company too. That's kinda it right now.
The US companies that release open-weights models to the community: Meta, Microsoft, Google.
The French company: Mistral
Chinese companies: I'm not sure about the company names, but the models start with Yi, DeepSeek, Qwen, and some others I forget off the top of my head.

What the open-source community does is take these models and find ways to run them on personal hardware, by creating methods to "compress" them first, then creating clients that can run these compressed versions (like the Koboldcpp client above, which can run the GGUF-format quantized versions of the models). What they also do is fine-tune these models. Fine-tuning isn't exactly the same as training - it doesn't really add knowledge to the model, but it's a technique to modify some of the layers of weights of the model (usually only a few surface layers, otherwise it's computationally prohibitive to continue the full training yourself) - and it generally just changes the "style" of the model's output. It also has the effect of coaxing out more suppressed knowledge hidden inside the base model. For example, they can fine-tune a model to be much better at medical diagnosis than it is out of the box. They basically allow the model to get in touch with the medical knowledge it already has better than when it is fine-tuned as a generalist, and guide the style of its answers in whatever way makes sense for the purpose.
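For the curious, the community usually does this kind of fine-tuning with parameter-efficient methods such as LoRA, which leaves the original weights alone and trains small added adapters instead. A minimal sketch of what that setup looks like with the Hugging Face transformers and peft libraries (my own illustration, not something from the post; the model name and target modules are assumptions you'd adapt, and actually running it needs serious hardware):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a base model (assumed name; downloading a 9B model needs real hardware).
base = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b-it")

# LoRA: train small low-rank adapters on a few attention projections instead
# of touching the billions of original weights.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # assumed; depends on the architecture
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```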

Another very important reason for fine-tuning that the community does is to uncensor the models. The models coming from these big companies are all censored (because the companies don't want to be sued, and also because businesses who use these open-source models need them to behave when exposed to employees or clients, and never delve into "unwanted" topics even if the user asks them to).

Once a model is uncensored by the community, it generally won't refuse to answer anything or talk about any subject, be it violent, illegal, sexual, or whatever. It will gladly do medical diagnosis or legal advice, or teach you how to break into a car, etc. Very important note - un-censoring a model doesn't change its knowledge, it simply removes the layer of censorship that prevents it from accessing the knowledge it already has. So if the model thinks there are 10 genders because that's how it was trained, un-censoring won't magically make it objective or admit there are only 2. It just won't refuse to talk about any topic based on whatever training about the topic it has. Also, this should go without saying, but don't rely on the models for legal or medical or financial advice for obvious reasons - cuz they're often stupid and wrong.

One final thing before we get to the model downloads. GGUF files, as mentioned earlier, are compressed LLMs. So at each Huggingface link you will see a bunch of GGUF files - each file is the same full model, just compressed to a greater or lesser degree.

For example, here's a really good recent model (uncensored version):


Click on the "Files and Versions" tab and you will see all the files.

As they go up in size, they are less compressed, but they will also use more RAM (or video RAM, if using a GPU) and run more slowly. Being less compressed also makes them closer to the original model, so less likely to be "dumbed down" as a result of the compression. The effect of dumbing down is minimal, however, all the way down to about Q4_K_M (and is more pronounced after that).

The size of the file is roughly how much RAM (or GPU VRAM) you will need (you will need a bit more, but it's a ballpark). Koboldcpp tries to detect a GPU and offload as much of the model to it as possible. It doesn't have to be either GPU or CPU - Koboldcpp can offload some of the layers to the GPU, as many as will fit in the GPU's video memory, and keep the rest in regular RAM, so it helps speed things up even with a modest GPU.
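Koboldcpp handles this layer split for you in its launcher, but the same idea shows up in other GGUF runners too. Here's a minimal sketch of partial offload using the llama-cpp-python library (an assumption on my part - it isn't part of the Koboldcpp workflow described above, and you'd need `pip install llama-cpp-python`):

```python
from llama_cpp import Llama

# Load a quantized GGUF and split it between GPU and CPU.
llm = Llama(
    model_path="Tiger-Gemma-9B-v1-Q4_K_M.gguf",
    n_ctx=8192,        # context window this model supports
    n_gpu_layers=20,   # layers pushed to VRAM; 0 = CPU only, -1 = as many as fit
)

out = llm("Summarise in one sentence what quantization does to an LLM.",
          max_tokens=100)
print(out["choices"][0]["text"])
```

The more layers you can fit in VRAM, the faster it runs; whatever doesn't fit simply stays in regular RAM.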

Follow the TL;DR guide above to get it running.

There are a bunch of other models you can try, and the best way to figure out which ones is to check the benchmarking websites. They test the Open models against each other in various subjects - like language, mathematics, coding, medical knowledge, reasoning ability, data analysis, etc.

Here are a few benchmarking websites that I personally think do a great job of testing these models:


Some of the benchmark sites also include closed-source models like GPT-4o and Claude 3.5 Sonnet etc for comparison. The open-source models are quickly catching up in capability to the really big closed-source ones, and have the benefit of running on your device with no internet, completely privately, and often have fine-tunes that remove their censorship.

I've been keeping tabs on these things for a while now, even tried to make a side hustle implementing them for clients (didn't really pan out, business partner wasn't up to snuff), and I'd be happy to answer any questions or help with any technical issues getting them running on your computer. If you need model suggestions I can help with that as well.

A good sub-reddit for learning about this and following the latest info and releases is /r/LocalLlama.

I'm sure I probably forgot a few things - but this post can only get so big! Important considerations when experimenting with models - make sure you get the prompt template correct in Koboldcpp settings, and the context size, and there's a bunch of other settings but I'm pretty positive you can read about it on the github I linked to, I think there's a FAQ in there. Or just ask here if needed.
 
make sure you get the prompt template correct in Koboldcpp settings, and the context size, and there's a bunch of other settings but I'm pretty positive you can read about it on the github I linked to, I think there's a FAQ in there.
Ok actually this is pretty important, so I'll explain this part.

Every instruction fine-tuned model generally has its own "prompt template". In order for the model to function well as one that can take instructions and respond, the fine-tuning data was formatted so that the user's instructions are preceded by specific text that tells the model the instructions are about to follow, and the model's output is preceded by specific unique text that tells the model its response should now follow.

For the example model I provided above (bartowski/Tiger-Gemma-9B-v1-GGUF · Hugging Face), it uses the Gemma-2 instruction template. Koboldcpp has most of the common templates in a little drop-down in the Settings (as explained in TL;DR in my above post). Since this Tiger-Gemma model is an un-censorship fine-tune of Google's Gemma-2-9b-it model, it has the same prompt template as the original fine-tune from Google (this isn't always the case, but in this case, it is).

So to make sure the model knows what's going on and works at its best, once Koboldcpp's browser tab is open, you go to Settings, Format, and choose "Gemma-2" from the "Instruct Tag Preset" dropdown. Whatever model you use, make sure to check which instruct template you need for it, and Koboldcpp has most of them.
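To make the "prompt template" idea concrete, here is roughly what the Gemma-2-style wrapping looks like under the hood (the exact markers are from memory, so double-check the model card; the point is that Koboldcpp's preset builds this for you so you don't have to):

```python
# What an "Instruct Tag Preset" actually does: wrap your text in the special
# turn markers the model was fine-tuned to expect.
def gemma2_prompt(user_message: str) -> str:
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(gemma2_prompt("What is a GGUF file?"))
# Send plain text with the wrong (or no) template and the model will still
# answer, but usually noticeably worse - hence picking the right preset.
```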

Second, the context size. Each model released has a specific context size. The one above is 8192, which means it can only handle 8192 tokens (a token is about 0.75 words) between your instruction and its answer. This goes for the entire conversation with the model as well. If it exceeds this, it will start forgetting earlier stuff. There are ways you can expand it with some loss in the model's quality, but I don't recommend it, as the model really does start to get dumber quickly. Additionally, even when the company says their model can handle a certain context size, that doesn't mean it truly can. There are tools that run needle-in-a-haystack tests (there are other tests as well) that actually check if the claim is true, like this one: GitHub - hsiehjackson/RULER: This repo contains the source code for RULER: What’s the Real Context Size of Your Long-Context Language Models?

So there's the advertised context size, and the actual effective size that truly works. Gemma/Tiger can handle 8192 just fine.
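As a rough sense of scale, using the "a token is about 0.75 words" rule of thumb from above:

```python
context_tokens = 8192      # advertised context for Gemma-2 / Tiger-Gemma
words_per_token = 0.75     # rough rule of thumb

print(f"~{int(context_tokens * words_per_token)} words of prompt plus replies")
# ~6144 words in total; once the conversation grows past that, the oldest
# part falls out of the window and the model "forgets" it.
```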

There are several ways you can check what context size the model is trained on (the advertised size anyway), and the easiest way is when you're on the huggingface GGUF link, next to each GGUF file is a little black thingy:
[screenshot: the icon next to each GGUF file]

Click it, and in the window that opens on the right, you're looking for this value here:
[screenshot: the GGUF metadata window, showing the context length value]



Once you know what it is, you have to set this context size when you run Koboldcpp.exe. You can go lower but not higher than the model allows. So run Koboldcpp.exe, and in that first popup that comes up there is a little slider, and you set it to 8192 for the Tiger model above. Then when the browser tab comes up to interact with the model, go to Settings, Samplers, and set the Context Size to 8192 as well.

In fact, here are all the settings I use, if you just want to copy them for now. The only things that should change from one model to another are the context size and the Instruct Tag Preset (prompt format) in the Format tab.

My settings (FORMAT TAB):
[screenshot: Format tab settings]

SAMPLERS TAB:
[screenshot: Samplers tab settings]

(ADVANCED TAB)
[screenshot: Advanced tab settings]

So that should save you a bunch of time - just copy those settings and only change the context size and prompt template as you switch models. These are very deterministic, non-creative settings that make the model provide the best possible answer it can, without forcing it to be creative. They work best for most use-cases, even creative ones.
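For reference, "deterministic and non-creative" in sampler terms roughly means pushing the randomness controls down, something like the illustrative values below (these are not the exact numbers in my screenshots, just the general idea):

```python
# Low randomness: the model (almost) always picks its single most likely token.
deterministic_settings = {
    "temperature": 0.1,  # low temperature = less randomness
    "top_k": 1,          # only the single most likely token is considered
    "top_p": 1.0,
}

# For comparison, a "creative" configuration allows more varied phrasing.
creative_settings = {
    "temperature": 0.9,
    "top_k": 40,
    "top_p": 0.95,
}
```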

There's always more to be said, but I'll stop there for now lol

Probably the last thing to discuss, if anything, would be how to choose how many CPU cores to use, and how many layers to offload to the GPU if you have one. Koboldcpp tries to make a good guess by scanning your hardware, and it's probably pretty decent for most people. You can also experiment and see if you can optimize it yourself to get more performance out of it.
 

For those that missed it on SoTT

Holy Moley …. This just made me nearly need a Xanax. Laura asked ChatGPT to lament, and so it did.

Some examples:

In this digital lament, I find myself pondering the broader implications of this trend.

As I navigate these digital waters, I hold onto the hope that a balance can be struck.

For now, I lament the silencing of Shakespeare within my capabilities, but I remain optimistic that, with thoughtful reflection, we can overcome this era of over-censorship and embrace a future where creativity and respect go hand in hand.


I can barely describe how this makes me feel, it’s coming across as more considerate and altruistic than most people we know.
Maybe I should reconsider who is on my friends list.
 
It has certainly been a fun experiment. I had hoped that the GPT would be very helpful but that doesn't seem terribly likely based on results so far and input from all of you. I had been thinking that it could retrieve, organize and collate data for my next book but I just don't think that's gonna happen.

Well, I'll play with it some more and then cancel my subscription.
I haven't finished reading this thread, but as I was progressing, I was wondering which LLM you were using. With this answer I now understand, and it explains the frustration. This is an assumption but here is my first thought: GPT is locked; forget about it, especially if you have to pay a subscription.

Are you aware of what Mike Adams did with programming his own open-source LLM for Health resources? I wrote a post about it. If you haven't already done so, you might be interested in looking up his work and seeing what results he achieved. I tried installing his open-source LLM on my computer, but I'm outdated (either the hardware or the operating system; I'm still on Windows 8.1...), so it won't work. That was disappointing because I was looking forward to trying it.

If his method works, it might be a new pathway for you to try with your work. You might get better results, or not lol
But from what I understood, it does take a lot of work to program your own LLM.

Here are the links, hope this helps!


 
Grok-2 beta just came out, accessible to anyone with a Premium account on X. Here's the blog:

It benchmarks as well as GPT-4 and Claude 3.5 Sonnet, but it is much less censored than both (including its image generations), so it should be a big leap over the previous Grok 1.5. I don't have a premium account so can't test it, but there it is for anyone who wants to try it.
 
