Mike Adams, health activist

Can you cite the session where the C's hinted that AI is here to stay one way or another?

If they'll overpower humanity, as has been quoted many times, that amounts to them staying; unless that has already happened and we're living in a matrix "inception".

I don't see how someone who has educated themselves about the traps and manipulation could fail to avoid a feeding tube like this.

The problem is over-generalization; a feeding tube like what? I haven't seen people chatting with GPT and then going out on the streets stealing or murdering their parents.
I think most here understand what a feeding tube is; if you don't, what are you even doing here? Go bungee jumping or something fun.

I can certainly see how that COULD be working in subtle ways with the roleplaying/fiction/novel AI crowd, which seems to be a strong trend right now and, I think, is obviously usurping the effect we get from reading a true romance/novel. But I can't see how that would work on less creative or "zero-temp" functional tasks focused on productivity.


What's more important, workplace competition, or the life of a human being?

But more importantly, in some places life depends on having work and being fed.
The difference in the case I quoted could easily be 100 times faster or more than using Google to troubleshoot something, and that's assuming our beloved search engines will stay around when the AI transition occurs.
 
Most technology is a double-edged sword, as you say. Like most things, the law of three applies: the good, the bad, and the context.
I tend to agree with everything you pointed out. Which means everyone should be extra cautious when using this type of technology, staying aware of its dangers and of the fact that it's a slippery slope.

I am convinced (maybe wrongly) that it can be used moderately, with positive intent, and making sure to cross reference the results with other search engines/other sources. But it is addictive, hence the "slippery slope".

It does speed up certain processes during research, such as helping to direct you to the right sources when searching for something.
It's especially good for text revision.
It can help with the wording of certain ideas that are hard to express, especially when writing in a second language.
The other day it helped me when I needed to re-learn a basic math concept I had forgotten, and it was like asking a private teacher.
So for me, it's going to be a card in my deck for when other means fail.

I can certainly see how that COULD be working in subtle ways with the roleplaying/fiction/novel AI crowd, which seems to be a strong trend right now and, I think, is obviously usurping the effect we get from reading a true romance/novel.
Funny you mention that, because I had never written anything in my life. But last year, I saw a TV show based on a novel, then I read the novel. I was dissatisfied with certain turns of events, so I decided to give writing a FanFic a try (my first try!).
I wrote 30,000 words in English (my 2nd language). ChatGPT helped me with the revision (grammar, orthography, syntax, structure, etc.).

I did not ask it to write the story for me. I wrote.
It helped me along the way when I had trouble wording my ideas: when needed, I would explain what I was trying to say and ask it to phrase it in 5 or 10 different ways. Then I would pick bits and pieces of the different options offered and rephrase something new until satisfied. It was a very fun and creative process for me.
I don't see how it would usurp anything when used that way, and the final product should not be less true than others.
 
I don't see how it would usurp anything when used that way, and the final product should not be less true than others
It is a slippery slope: on one side it was programmed by shady people, and on the other it was trained on a lot of public and copyrighted material, so one could say that nobody owns it?
Still, I'd prefer local models even if they're "dumber". At least with some of them you can see what kind of datasets went into fine-tuning them, while some claim to change the 'alignment' for political correctness and other BS.
 
Here's some info about Mike Adams' AI program, included as an introduction to his April 16, 2024 article, "ANALYSIS: Israel, Ukraine, Western Europe and the United States have already been defeated":

Human knowledge is under attack! Governments and powerful corporations are using censorship to wipe out humanity's knowledge base about nutrition, herbs, self-reliance, natural immunity, food production, preparedness and much more. We are preserving human knowledge using AI technology while building the infrastructure of human freedom. Speak freely without censorship at the new decentralized, blockchain-powered Brighteon.io. Explore our free, downloadable generative AI tools at Brighteon.AI. Support our efforts to build the infrastructure of human freedom by shopping at HealthRangerStore.com, featuring lab-tested, certified organic, non-GMO foods and nutritional solutions.

Now, here's a summary of my podcast, as analyzed and re-written by Neo, our AI model that you can download for free at Brighteon.AI. We have new editions of Neo coming out this week, it looks like, including a 7-billion parameter model trained on the largest data set we've produced yet.

Infrastructure of Human Freedom and the Future of Western Civilization

Authored by Neo: Neo was allowed to transcribe the full audio of my broadcast, then analyze and summarize its content (this is all done offline, locally, without using ChatGPT or any cloud-based AI services).
This podcast/transcript is an excellent overview of the world situation, and likely direction it's heading, as well as positive possibilities it presents to those with knowledge, health and moral intentions.

 
Yes, the NEO LLM AI has been released. I was planning on posting about it.
Here is the video that explains how to install and configure correctly:

NEO LLM guide - How to get started using our Large Language Models

- Downloading and using AI language models for decentralized knowledge. (0:00)
- Using AI language models for knowledge and answers. (2:50)
- Language models, bias, and training. (9:02)
- Using AI language models for chatbots and summarization. (14:33)
- Using AI models for knowledge and health. (20:10)

All of these tools are hosted on Brighteon.AI
 
It's looking interesting; I'll try to catch up with the links later. Though it's not a fully pre-trained model but a finetune of Mistral (a French company) and Phi-2 (Microsoft), and I have not seen said finetune dataset shared.

It would be cool if they'd integrate it with something like a Kiwix/ZIM database so the answers could be checked and referenced back easily rather than being "created". LLMs were originally used by tech companies as reading tools, not generative ones, and I think that has not changed, so grounding the information is a must.
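To make the grounding idea concrete, here's a minimal toy sketch in Python: rather than letting the model free-generate, first retrieve the passages that actually support an answer so they can be quoted and checked. The corpus, query, and crude word-overlap scoring are all invented for illustration; real tools use proper embeddings or BM25-style search for this step.

```python
# Toy illustration of "grounding": retrieve the passages that support an
# answer, so the model can cite them instead of inventing text.
import re

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def rank_passages(query, passages):
    """Score each passage by word overlap with the query, best first."""
    q = set(tokenize(query))
    scored = []
    for i, p in enumerate(passages):
        overlap = len(q & set(tokenize(p)))
        scored.append((overlap, i, p))
    scored.sort(reverse=True)
    return [(i, p) for overlap, i, p in scored if overlap > 0]

# Made-up mini corpus for illustration.
passages = [
    "Kiwix serves offline copies of reference sites packed as ZIM files.",
    "Bungee jumping involves leaping from a tall structure on a cord.",
    "A ZIM archive can be searched locally without any network access.",
]
hits = rank_passages("how do I search a ZIM archive offline", passages)
```

Here `hits[0]` points at the passage about searching ZIM archives locally, which a grounded system could quote back as its source.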

For those inclined to build something like this themselves, there's a really good tool that works on Windows too: GitHub - h2oai/h2ogpt: Private chat with local GPT with document, images, video, etc. 100% private
 
It's looking interesting; I'll try to catch up with the links later. Though it's not a fully pre-trained model but a finetune of Mistral (a French company) and Phi-2 (Microsoft), and I have not seen said finetune dataset shared.
I took some notes from the video tutorial I posted, but I have not finished watching it. I'm resourceful with computers, but I'm not tech-savvy/a programmer, so I have some learning to do on this topic.
Here are the notes:

We will be posting our parameter files to huggingface.co
This is where you can download a lot of models on a lot of data sets.
They will be released under cwcdatascience and the model names will be NEO

The base model we are training on includes Mistral 7B
we are also training on an uncensored version of Mistral called Dolphin
(model name: Neo Dolphin Mistral 7B)

Brighteon.ai (there are tutorials)
When you download the files (.gguf format), you need local software to run this type of file.
A few programs are recommended; one of them is LM Studio (lmstudio.ai).
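Since the downloads are plain .gguf files, one small sanity check you can do before pointing LM Studio (or any other runner) at one: the GGUF format starts with the four ASCII bytes "GGUF", so a truncated or mislabeled download is easy to spot. A minimal sketch in Python (the example filename is hypothetical):

```python
# Sanity-check a downloaded model file: every GGUF file begins with the
# four ASCII magic bytes b"GGUF". A truncated or mislabeled download
# will fail this check before you waste time trying to load it.

def looks_like_gguf(path):
    """Return True if the file starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Usage (hypothetical filename):
# looks_like_gguf("neo-dolphin-mistral-7b.Q4_K_M.gguf")
```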

Our LLMs have been trained on millions of pages of documents (books, interviews, articles, podcasts, transcripts, websites, research documents, etc.) to try to program the bias out of these base models. These are all experimental; there is no guarantee they will give you the right answer to any particular question. Every model that exists can give you bad information if you prompt it to do so.
 
I took some notes from the video tutorial I posted, but I have not finished watching it. I'm resourceful with computers, but I'm not tech-savvy/a programmer, so I have some learning to do on this topic.

Here's a more straightforward way: https://bderkhan.com/how-to-talk-to-your-giant-docs-2024-guide/

We will be posting our parameter files to huggingface.co
This is where you can download a lot of models on a lot of data sets.
They will be released under cwcdatascience and the model names will be NEO

That would be their page: https://huggingface.co/cwcdatascience/
 
Thanks a lot for this link! It will save me some time for sure :)

One question, slightly off-topic, for you, since you seem to be skilled with computers:
I have too many hard drive backups and all my files are scattered, so I don't know where to look when I'm looking for something. Would there be an efficient way to create a "library", index, or database (I'm not sure what the right term would be), where I could categorize and sort my files so it tells me where to look when I need something?
Is the link you just posted the solution maybe? (the title seems to indicate such)
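For what it's worth, the "library" described above can be sketched with nothing but Python's standard library: walk the backup folders, record each file's name, location, size, and modification time in a SQLite database, then search by name fragment. The function names and paths below are invented just to illustrate the idea; dedicated indexing tools do the same thing with more polish.

```python
# Minimal file index: scan folders into a SQLite table, then query by name.
import os
import sqlite3

def build_index(db_path, roots):
    """Scan every folder in `roots` and (re)build the file index."""
    con = sqlite3.connect(db_path)
    con.execute("DROP TABLE IF EXISTS files")
    con.execute("CREATE TABLE files (name TEXT, folder TEXT, size INTEGER, mtime REAL)")
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            for fn in filenames:
                full = os.path.join(dirpath, fn)
                try:
                    st = os.stat(full)
                except OSError:
                    continue  # broken link, permission issue, etc.
                con.execute("INSERT INTO files VALUES (?, ?, ?, ?)",
                            (fn, dirpath, st.st_size, st.st_mtime))
    con.commit()
    return con

def find(con, fragment):
    """Return (name, folder) pairs whose file name contains `fragment`."""
    cur = con.execute(
        "SELECT name, folder FROM files WHERE name LIKE ? ORDER BY name",
        (f"%{fragment}%",))
    return cur.fetchall()

# Usage (hypothetical paths):
# con = build_index("my_index.db", ["/mnt/backup1", "/mnt/backup2"])
# find(con, "taxes")  # tells you which folder each matching file lives in
```

Run it once per backup drive while the drive is plugged in; afterwards you can search the index without mounting anything.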
 