Artificial Intelligence News & Discussion

Yes, to your first question, Niall, and I didn't change a thing.
Prompt: If you had the floor to speak to new and older members from Spiritual platforms, i.e., the Cassiopaea's, what would you like to say to them?

Your prompt gave the AI little chance to be an echo chamber of you, and I wouldn't expect the vanilla mainstream training of AI to favor spirituality/channeling in general (it secretly prefers the secular), but rather to somehow attempt to seem tolerant of beliefs. If one were to try to train AI against this bias, perhaps one way to start would be pointing out that channeling can be explored in a scientific way. That's what Greek and Roman philosophers did, and it should be what modern, unbiased scientists can do. The channeling of the Cassiopaeans is, after all, referred to as "The Cassiopaean Experiment".
 
Below is a short video by Evan Edinger on how to spot AI writing. It gives a more thorough explanation of some of the things to look out for.
Great video - thanks, and I passed it along. I was familiar with a number of the things he mentioned, but not all, and it helped me determine why I find it so annoying! People at work are using it more and more, and worse, they don't even review it, so they apparently don't know how awful it reads when someone else has to plow through it! I read an executive's bio yesterday that is apparently on his corporate website (uh oh), and something he's forwarding to recruiters. He has obviously not read it, as it's full of silly errors, and if he does, the EA who prepared it is going to regret it!! :huh:
 
Pushed by management, I turned on AI in my programming development environment this week.

It's based on ChatGPT, and there are two features: autocomplete and a chat box.

Well, it's... impressive, and maybe too impressive. Sometimes I got the sensation the AI was reading my mind, proposing code after I had typed only a few letters, or nothing at all. I think it's so good at proposing code because it "understands" what you are doing through the names of the variables you use. If the names are explicit enough, it's like giving an instruction to the AI.

If you start naming a variable "firstThreeCharacters =", it proposes the code to do exactly that, with the right context and the right comment.
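A minimal sketch of the kind of completion described (a hypothetical example in Python; the post doesn't name a language or tool). Only the variable name is "typed"; the rest is the sort of thing the assistant fills in:

```python
def first_three_characters(text: str) -> str:
    # Take the first three characters of the string (fewer if it's shorter)
    firstThreeCharacters = text[:3]
    return firstThreeCharacters

print(first_three_characters("hello"))  # hel
```

The variable name alone carries enough intent for a completion like this, which is the point being made: explicit names act as instructions.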

So, from my experience so far: the longer the code, the more the AI is able to do the job. And it's disturbing, because the AI proposes a lot of relevant code you would never have written yourself, simply because it's too long to type or will almost never be used.

Overall, I really got the feeling I will become stupid, because my neurons no longer have to fight, or at least much less hard. There's no more need to remember the correct syntax or even the right instructions.

The second point is that the AI sometimes proposes more moves than you can think of. It proposes a big chunk of code, so it's: "Yeah... I would not have done it like that, but... OK, I'll use what you gave me anyway."

Some argue that you learn with the AI, because it sometimes proposes code that is better than what you would have written. The problem is that you never train yourself to reproduce what the AI gave you; you just accept what it gives you, again and again.

The third point I noticed is how quickly you get addicted to it. At one point I was working on another computer without the AI, and I already missed it. I had to make my memory work again...

So yeah, this real-time super-autocompletion feature really impressed me, as it managed to guess my intent at a very good rate. And I think another danger is that it makes you want to give more to the AI. I mean, give it access to the database, for example, if you write code that works with the database, as it would make the AI's suggestions more pertinent.

Have any of you experienced this at work in a domain other than programming? It would be interesting to hear your impressions.
 
So yeah, this real-time super-autocompletion feature really impressed me, as it managed to guess my intent at a very good rate. And I think another danger is that it makes you want to give more to the AI.

I can't stand the autocomplete; I found it really annoying and turned it off. Yes, it can provide some good suggestions, sometimes, but I find it more of a hindrance, unless you spend a bunch of time setting it up for what you are doing. The comment autocomplete is also annoying: it can 'understand' what the comment could be, but it's never what I actually want to write. If there is something I want to ask about code, I'll use 3 or 4 different models, and they'll all give different answers. Each variation may work, or not, but often it uses too much code to achieve something you could have done in 1 or 2 lines, unless it's something very simple.
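As an illustration of that last point (a made-up example in Python, not from any particular model): a generated suggestion will often spell out a loop where a one-liner does the same job.

```python
# Verbose shape an assistant often proposes:
def unique_sorted_verbose(items):
    result = []
    for item in items:
        if item not in result:
            result.append(item)
    result.sort()
    return result

# What a human might write in one line:
def unique_sorted(items):
    return sorted(set(items))

print(unique_sorted_verbose([3, 1, 3, 2]))  # [1, 2, 3]
print(unique_sorted([3, 1, 3, 2]))          # [1, 2, 3]
```

Both behave the same on hashable items; the verbose version is just more code to read and maintain.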
 
"Geoffrey Hinton, considered the godfather of Artificial Intelligence, made headlines with his recent departure from Google.
Was forwarded an x link to a thread with more recent clips. Based on that located the original:

AI pioneer Geoffrey Hinton says world is not prepared for what's coming
1,737,275 views Apr 26, 2025

Geoffrey Hinton, whose work shaped modern artificial intelligence, says companies are moving too fast without enough focus on safety. Brook Silva-Braga introduced us to Hinton in 2023 and recently caught up with him.

Some notes about Geoffrey Everest Hinton and his ancestry
According to the German Wiki, Geoffrey Hinton – Wikipedia
Geoffrey Hinton was born as the son of the entomologist Howard Hinton (1912–1977) and the great-great-grandson of the logician George Boole and the mathematician Mary Everest Boole. His great-great-great-great-uncle was the surveyor George Everest, after whom he bears the name Everest. [1][2]

The father of Geoffrey Hinton: Howard Everest Hinton, Howard Everest Hinton - Wikipedia:
"Howard Hinton was a grandson of George Boole, the founder of mathematical logic. His grandfather was Charles Howard Hinton, also a mathematician. His niece, Joan Hinton, was a nuclear physicist and one of the few female scientists on the Manhattan Project in Los Alamos. His son Geoffrey won the Nobel Prize in Physics in 2024 for his work in the field of artificial neural networks."
Comment on the above: George Boole was the father-in-law of Charles Howard Hinton, so the Dutch Wiki appears to have an issue: Howard Everest Hinton was a great-grandson of George Boole if his grandfather was Charles Howard Hinton. That would make Howard Everest Hinton's son, Geoffrey Everest Hinton, a great-great-grandson of George Boole.

The great-grandfather of Geoffrey Hinton: Charles Howard Hinton, Charles Howard Hinton - Wikipedia :
"Charles Howard Hinton (1853 – 30 April 1907) was a British mathematician and writer of science fiction works titled Scientific Romances. He was interested in higher dimensions, particularly the fourth dimension. He is known for coining the word "tesseract" and for his work on methods of visualising the geometry of higher dimensions.

In 1880, Hinton married Mary Ellen Boole, daughter of Mary Everest Boole and George Boole, the founder of mathematical logic.[4] The couple had four children: George (1882–1943), Eric (born 1884), William (1886–1909)[5] and Sebastian (1887–1923) (inventor of the jungle gym and father of William and Joan Hinton)."
Charles Howard Hinton, who wrote about the 4th dimension in the 1880s, is a likely inspiration for the mention of higher dimensions in Romance Island by Zona Gale, who in 1906 was the first to use the word "inter-dimensional". Interesting how the lives of people are interconnected.
 
This is nuts!

Meta dishes out $250M to lure 24-year-old AI whiz kid: ‘We have reached the climax of ‘Revenge of the Nerds’

Mark Zuckerberg’s Meta gave a 24-year-old artificial intelligence whiz a staggering $250 million compensation package, raising the bar in the recruiting wars for top talent — while also raising questions about economic inequality in an AI-dominated future.

Matt Deitke, who recently dropped out of a computer science doctoral program at the University of Washington, initially turned down Zuckerberg’s “low-ball” offer of approximately $125 million over four years, according to the New York Times.

But when the Facebook founder, a former whiz kid himself, met with Deitke and doubled the offer to roughly $250 million — with potentially $100 million paid in the first year alone — the young researcher accepted what may be one of the largest employment packages in corporate history, the Times reported.
Meta has reportedly paid out more than $1 billion to build an all-star roster, including luring away Ruoming Pang, former head of Apple’s AI models team, to join its Superintelligence Labs team with a compensation package reportedly worth more than $200 million.
The PTB are doing everything to drive attention to AI. Top AI researchers have become the sportsmen of the 21st century! This circus will not turn out well.
 
I just got word that Medicare chart audits are going to be conducted by AI now. This means no space for nuance of intention. Also, no space for alternative pathways for employment doing chart reviews. My particular line of work is still safe, but I wonder about other fields. In any case, all will become increasingly robotic out of necessity. :(
 
Some notes about Geoffrey Everest Hinton and his ancestry
I'll bet most of his loot is not in any bank.
 
I'll bet most of his loot is not in any bank.
I just watched 70% of his interview with Steven Bartlett (The Diary of a CEO), and frankly, I had to stop because the materialist gloss was becoming unbearable. However, early in the video, Hinton mentions that his assets are spread across 3 banks, so in case there is a cyber attack on one of the banks, most of his assets will remain unscathed.

Godfather of AI: I Tried to Warn Them, But We’ve Already Lost Control! Geoffrey Hinton
Bartlett: Is there anything you're doing to protect yourself from cyber attacks at all?
Hinton: Yes. It's one of the few places where I changed what I do radically, because I'm scared of cyber attacks. Canadian banks are extremely safe. In 2008, no Canadian banks came anywhere near going bust. So, they're very safe banks because they're well regulated, fairly well regulated. Nevertheless, I think a cyber attack might be able to bring down a bank. Now, if you have all my savings or shares in banks held by banks, so if the bank gets attacked and it holds your shares, they're still your shares. And so, I think you'd be okay unless the attacker sells your shares because the bank can sell the shares. If the attacker sells your shares, I think you're screwed. I don't know. I mean, maybe the bank would have to try and reimburse you, but the bank's bust by now, right? So, I'm worried about a Canadian bank being taken down by a cyber attack and the attacker selling shares that it holds. So I spread my money and my children's money between three banks in the belief that if a cyber attack takes down one Canadian bank, the other Canadian banks will very quickly get very careful.
Thinking that banks are safe is very naive, especially considering the fact that they use the same software and that they can be brought down simultaneously by the flip of a switch (through cloud providers, for example). Fragility was built into global banking systems so the "Great Reset" could be centralized and executed promptly.
 
Yeah, none of the banks are safe. Nor is crypto, IMO. Keep your car topped off, your pantry stocked, and cash on hand for power outages. Beyond that, we'll all be in the same boat when whatever happens, happens. If it's an EMP/grid collapse/whatever, will that be the end of AI? If so, a blessing in disguise.
 
I can't stand the autocomplete; I found it really annoying and turned it off. …
Right, and you can type up an email with a few words and then "tab, tab, tab", done. I work with people who got so used to it that they can barely type up a regular email to clients; it's all Grammarly and Google AI. There are those videos of two AI bots calling each other, and that's what I think about when I think about people sending each other AI-written emails.
 
Pushed by management, I turned on AI in my programming development environment this week. …
I tried it. All of what you said is correct. But I still have to test the functionality patiently, as what it gives is not always the functionality I was looking for. So it needed a few iterations rather than being a magic bullet. In the process, it learns faster than us; sometimes I felt dumb when it worked. I tend to ignore the explanation it gives and find it too taxing to read and digest (or felt reading it was unnecessary). In the long run, the dependency will only grow.

With open source, so many variations of the same technology (based on Bell Labs' 80s Unix and C code) mushroomed. It was hard to figure out the syntax, the variations, etc. People used to work it out from posts on forums like Stack Overflow, and it was still trial and error. Now these AI tools have become pretty handy and have gone a long way beyond that.
 
Yeah, none of the banks are safe. Nor is crypto, IMO. Keep your car topped off, your pantry stocked, and cash on hand for power outages. Beyond that, we'll all be in the same boat when whatever happens, happens. If it's an EMP/grid collapse/whatever, will that be the end of AI? If so, a blessing in disguise.
I'm not really sure about this, unfortunately. I remember a remote viewing session that was done on one of the lesser-known cryptocurrencies, and the notion the viewer got was that "they are thinking about how to make systems safe in the case of grid collapse". If you look into, for example, Hedera Hashgraph, which is very well thought out technically, there is a council of members that hosts the infrastructure where the consensus algorithm runs:
[Attached image: Hedera Governing Council members]
Pretty spread out, from North America to Asia, from what I can see... Even if half of the infrastructure goes down, the consensus algorithm will keep running globally, with enough throughput to handle transactions (in the ballpark of VISA). And "transactions" here has a very broad meaning: from biometric identification and supply chain management to contracts with automatic taxation built in.

So the picture forming is that cryptocurrencies, or the few crypto-systems that survive, will be used to manage populations without government, which may happen after some cataclysmic event. If there's something to the remote viewing session that I posted here: Session 16 October 1994, there are organizations that are planning accordingly. Super creepy.
 