Artificial Intelligence News & Discussion

I also think that, on Musk's part, part of the problem is that he seems to hold a more or less materialistic view of what the mind is, and that makes him think we can actually merge with a machine in a reductionistic manner. So perhaps the 'merging' requires people to be a certain way in order for it to be more effective in that reductionist sense.
Yes, I think that's a problem with tech gurus in general, both cheerleaders and critics of AI. When they talk about AI and compare it with human intelligence, they reveal not only their purely materialistic view, but also their lack of understanding of human psychology. Klaus Schwab's mini-me Harari is a good example of a very poor understanding of human nature.

Leaving the spiritual aspect aside for a moment, they overlook that AI mimics the type of intelligence of the left brain, but not of the right brain, as I said before. It is the right brain that gives us, for example, the human experience of this reality as a whole. Even if a robot had all sorts of cameras, microphones, and sensors, those would just be separate streams of data, not an integral subjective experience.

The former Google executive Mo Gawdat said in the interview that he thought AI would develop even more complex emotions than humans, because intelligence is what determines emotional complexity - and he thinks so because the more primitive the animal, the less emotional complexity it shows. This is another example of a shallow understanding of human nature, because he totally ignores that emotions have a biological and hormonal component - and, again, the right hemisphere of the brain.

So I totally agree that Elon Musk is oversimplifying the matter in dangerous ways with his brain chips, and so are all the transhumanists.
 
Fox News: AU has trained AI to determine people's political views from photos of smiles.

According to a study conducted at Aarhus University (AU) in Denmark, artificial intelligence has achieved a new breakthrough by learning to recognize people's political views by their smiles.

According to Fox News, scientists have noticed that right-wing politicians smile more often in photos, while people with a neutral facial expression tend to belong to the left political spectrum. Interestingly, artificial intelligence can predict political ideology with up to 61% accuracy from one photo.
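As a rough illustration of what a "61% accuracy" figure amounts to, here is a one-feature threshold classifier evaluated on a balanced binary task. All numbers are invented for the sketch, not taken from the study; the only borrowed idea is the reported signal that right-leaning subjects smile more.

```python
# Toy illustration of classification accuracy on a balanced binary task.
# Labels: 1 = right-leaning, 0 = left-leaning (invented data, not the study's).

def classify(smile_intensity, threshold=0.5):
    """Predict 1 (right-leaning) when the smile is above the threshold."""
    return 1 if smile_intensity > threshold else 0

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

# Invented evaluation set: (smile_intensity, true_label) pairs.
samples = [(0.9, 1), (0.7, 1), (0.4, 1), (0.8, 0), (0.2, 0),
           (0.3, 0), (0.6, 1), (0.1, 0), (0.55, 0), (0.35, 1)]

preds = [classify(x) for x, _ in samples]
labels = [y for _, y in samples]
print(f"accuracy: {accuracy(preds, labels):.0%}")  # modestly above the 50% coin-flip baseline
```

The point of the toy numbers: on a balanced two-class problem, random guessing already scores 50%, so 61% is a weak signal, far from reliable mind-reading.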


Interesting - if we smile all day, or do the opposite, will we trick the system?! 🤔
Only mind-reading tech is left for the PTB to impose...🤦‍♂️
 
Well, no comment on this one...

Text With Jesus


A Divine Connection in Your Pocket 🙏


Discover a new, interactive way to engage with your faith through "Text With Jesus," a revolutionary AI-powered chatbot app for iPhone and iPad, designed for devoted Christians seeking a deeper connection with the Bible's most iconic figures.


From elsewhere we learn about this app:


You can chat with Satan too​

But it's not only the "good guys" that users can chat with. The app comes with a "Chat with Satan" setting as well which is off by default but can be turned on by the user.


"Satan is included in the app to provide a comprehensive understanding of biblical narratives, reflecting the character's role as described in the Bible," says the app's website in a Q&A section. "The portrayal is rooted in Christian teachings, and users have full control over their engagement with all figures within the app."


Contrary to what some people might expect from certain biblical characters, Peter told Religion News that the characters are designed to be inclusive and tolerant and avoid offending users.


For instance, the app tells its users to “prioritize love and respect for all people regardless of their sexual orientation or gender identity.”
 


This New AI Technology Could Give You Almost Telepathic Communication Abilities​

Unbabel’s Halo is paired with an app on the user’s phone, as well as wireless headphones. The app can receive messages, which are delivered to the user through the headphones, similar to how many people get notified of their texts these days. In the example used by the TechCrunch reporter who witnessed a real-time test of the Halo, CEO Vasco Pedro was asked what kind of coffee he had that morning, through a text that was relayed to his earpiece.

When Pedro thought of the phrase “black coffee,” his arm muscles physically responded in a way the wearable EMG could interpret. This is where the AI comes in. Because the large language model AI (LLM) had previously gotten to know Pedro’s physical responses to specific words and phrases, it was able to then ask him, through the headphones, if he was thinking of “Americano.” When Pedro confirmed, that response was sent out through Telegram — all without typing, or even moving his body.

The “personalized LLM” uses ChatGPT 3.5 and its experience with the user to facilitate communication between the user, their inner thoughts, and the person they are communicating with. So while other similar devices have been built or are in development, by incorporating an LLM, Unbabel may have unlocked the true potential of this “telepathic” type of communication.
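The loop the article describes - EMG reading, personalized guess, spoken confirmation, then sending - can be sketched very roughly. Everything below is invented for illustration (the feature vectors, phrases, and nearest-neighbor matching rule); Unbabel has not published how Halo actually maps muscle signals to words.

```python
import math

# Hypothetical sketch of a Halo-style loop: a per-user profile maps an EMG
# feature vector to the closest calibrated phrase, then asks the user to
# confirm before the message is "sent". All data here is invented.

profile = {                      # phrase -> calibration EMG features (toy values)
    "black coffee": [0.9, 0.1, 0.4],
    "espresso":     [0.2, 0.8, 0.5],
    "no thanks":    [0.1, 0.2, 0.9],
}

def nearest_phrase(emg, profile):
    """Return the calibrated phrase whose feature vector is closest (Euclidean)."""
    return min(profile, key=lambda phrase: math.dist(profile[phrase], emg))

def relay(emg, confirm):
    """Guess the phrase, ask for confirmation, and 'send' it if confirmed."""
    guess = nearest_phrase(emg, profile)
    if confirm(guess):           # in the real device: a yes/no via the earpiece
        return f"sent: {guess}"
    return "discarded"

# Simulated reading close to the "black coffee" template:
print(relay([0.85, 0.15, 0.35], confirm=lambda g: g == "black coffee"))
```

The confirmation step matters: because the matching is probabilistic, the system proposes a candidate ("Americano" in the demo) rather than committing to it, which is why the reporter's account includes Pedro confirming before the Telegram message goes out.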

 
Artificial intelligence right inside your computer⚡

Intel will implement AI across its products. Intel CEO Pat Gelsinger had earlier expressed optimism about the use of artificial-intelligence technologies in the company's products; now it has become known that Intel plans to bring AI to almost all of them.

Later this year, Intel will release Meteor Lake, its first consumer chip with a built-in neural processor for machine learning tasks. AMD has recently done the same, following Apple and Qualcomm.

 
Just wanted to point out the real-world performance of the Neural Engine in Apple Silicon processors:
A quick survey of the thread seems to indicate the 7b parameter LLaMA model does about 20 tokens per second (~4 words per second) on a base model M1 Pro, by taking advantage of Apple Silicon’s Neural Engine.

Keep in mind also that at least 8GB of RAM is needed to load even highly quantized LLaMA models, so we need a serious breakthrough in technology to pack in even more transistors and more memory while keeping manufacturing costs low.
I've been testing local LLaMA models, and most of the time they are a half-baked toy compared to ChatGPT, which probably does a lot of natural-language processing outside the model itself. So I really feel that these press releases are just marketing material, and the real use case will be offloading real-time image-processing tasks, like the upcoming macOS "presenter overlay":
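A quick back-of-envelope calculation supports the 8GB point: just holding the weights of a 7-billion-parameter model takes several gigabytes even when heavily quantized, before counting the KV cache, the OS, and everything else competing for unified memory.

```python
# Approximate memory needed just for the weights of a 7B-parameter model
# at different quantization levels (ignores KV cache and runtime overhead).

def weight_gb(params_billion, bits_per_weight):
    """Weight storage in decimal gigabytes."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: ~{weight_gb(7, bits):.1f} GB")
```

At 16-bit that is ~14 GB, at 8-bit ~7 GB, and even at 4-bit still ~3.5 GB of weights alone - which is why an 8GB machine is the practical floor for quantized 7B models and has no headroom for larger ones.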
 
8GB of RAM on a graphics card is quite common today, and there are other (somewhat more) interesting tasks for AI than ChatGPT: tasks on pictures, voice...

And eventually we can imagine that networks of those local AI processors will emerge, so the power will add up outside centralized servers.
 
The humans generator: create hyperrealistic full-body photos of people in real time.

Agree, but you have to upload the model to VRAM and keep it there all the time if you intend to have a responsive "AI assistant", and pump some serious watts on inference. The human generator using GANs (posted above) is indeed very fast, but very limited in what it can do - very different from the computationally heavy "AI art" we've seen utilizing Stable Diffusion. LLaMA-based voice recognition struggles to run in real time, even when I use a highly optimized build for Apple Silicon's Neural Engine on my Apple M2 laptop. Just wanted to share my doubts about "local AI" - I just don't see it happening without some breakthrough in regard to Alien Technology™.
 

The A.I. Surveillance Tool DHS Uses to Detect ‘Sentiment and Emotion’​

AUG 24, 2023
Internal DHS and corporate documents detail the agency’s relationship with Fivecast, a company that promises to scan for “risk terms and phrases” online.

Customs and Border Protection (CBP), part of the Department of Homeland Security, has bought millions of dollars worth of software from a company that uses artificial intelligence to detect “sentiment and emotion” in online posts, according to a cache of documents obtained by 404 Media.

CBP told 404 Media it is using the technology to analyze open-source information related to inbound and outbound travelers who the agency believes may threaten public safety, national security, or lawful trade and travel. The company in this case, Fivecast, also offers “AI-enabled” object recognition in images and video, and detection of “risk terms and phrases” across multiple languages, according to one of the documents.


Marketing materials promote the software’s ability to provide targeted data collection from big social platforms like Facebook and Reddit, but also specifically name smaller communities like 4chan, 8kun, and Gab. To demonstrate its functionality, Fivecast promotional materials explain how the software was able to track social media posts and related Persons-of-Interest starting with just “basic bio details” from a New York Times Magazine article about members of the far-right paramilitary Boogaloo movement. 404 Media also obtained leaked audio of a Fivecast employee explaining how the tool could be used against trafficking networks or propaganda operations.

The news signals CBP’s continued use of artificial intelligence in its monitoring of travelers and targets, which can include U.S. citizens. In May, I revealed CBP’s use of another AI tool to screen travelers, which could link people’s social media posts to their Social Security numbers and location data. This latest news shows that CBP has deployed multiple AI-powered systems, and provides insight into what exactly these tools claim to be capable of, while raising questions about their accuracy and utility.

“CBP should not be secretly buying and deploying tools that rely on junk science to scrutinize people's social media posts, claim to analyze their emotions, and identify purported 'risks,'” Patrick Toomey, deputy director of the ACLU's National Security Project, told 404 Media in an email.

404 Media obtained the documents through Freedom of Information Act requests with CBP and other U.S. law enforcement agencies.

One document obtained by 404 Media marked “commercial in confidence” is an overview of Fivecast’s “ONYX” product. In it, Fivecast says its product can be used to target individuals or groups, single posts, or events. As well as collecting from social media platforms big and small, Fivecast users can also upload their own “bulk” data, the document says. Fivecast says its tool has been built “in consultation” with Five Eyes law enforcement and intelligence agencies - those being agencies from the U.S., United Kingdom, Canada, Australia, and New Zealand. Specifically on building “person-of-interest” networks, the tool “is optimized for this exact requirement.”

Related to the emotion and sentiment detection, charts contained in the Fivecast document include emotions such as “anger,” “disgust,” “fear,” “joy,” “sadness,” and “surprise” over time. One chart shows peaks of anger and disgust throughout an early 2020 timeframe of a target, for example.

The document also includes a case study of how ONYX could be used against a specific network. In the example, Fivecast examined the Boogaloo movement, but Fivecast stresses that “our intent here is not to focus on a specific issue but to demonstrate how quickly Fivecast ONYX can discover, collect and analyze Risks from a single online starting point.”

That process starts with the user inputting Boogaloo phrases such as “civil war 2.” The user then selects a discovered social media account and deploys what Fivecast calls its “‘Full’ collection capability,” which “collects all available content on a social media platform for a given account.” From there, the tool also maps out the target’s network of connections, according to the document.
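The workflow the document describes - seed phrases, flagged accounts, then network mapping - can be caricatured in a few lines. Everything below is invented (the accounts, posts, follow graph, and naive substring matching); the real product is obviously far more elaborate, but the shape of the pipeline is the same.

```python
# Hypothetical sketch of seed-phrase discovery plus network mapping.
# All accounts, posts, and edges are invented toy data.

posts = {  # account -> list of post texts
    "acct_a": ["getting ready for civil war 2", "range day"],
    "acct_b": ["garden update", "sourdough starter"],
    "acct_c": ["civil war 2 memes", "new patch arrived"],
}
follows = {  # account -> accounts it follows
    "acct_a": ["acct_c", "acct_b"],
    "acct_c": ["acct_a"],
    "acct_b": [],
}
seed_phrases = ["civil war 2"]

def flagged_accounts(posts, phrases):
    """Accounts with at least one post containing a seed phrase."""
    return {a for a, texts in posts.items()
            if any(p in t for t in texts for p in phrases)}

def network(accounts, follows):
    """Edges from each flagged account to every account it follows."""
    return {(a, b) for a in accounts for b in follows.get(a, [])}

flagged = flagged_accounts(posts, seed_phrases)
print(sorted(flagged))                   # ['acct_a', 'acct_c']
print(sorted(network(flagged, follows)))
```

Note what even this toy version makes visible: the network step pulls in accounts (here, `acct_b`) that never matched any seed phrase at all, purely by association - which is part of what critics like the ACLU object to.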

Lawson Ferguson, a tradecraft advisor at Fivecast, previously showed an audience at a summit how the tool could be used against trafficking networks or propaganda operations. “These are just examples of the kind of data that one can gather with an OSINT tool like ours,” he said. Jack Poulson, from transparency organization Tech Inquiry, shared audio of the talk with 404 Media.

Ferguson said users “can train the system to recognize certain concepts and types of images.” In one example, Ferguson said a coworker spent “a huge amount of time” training Fivecast's system to recognize the concept of the drug oxycontin. This included analyzing “pictures of pills; pictures of pills in hands.”

Fivecast did not respond to a request for comment.

CBP’s contracts for Fivecast software have stretched into the millions of dollars, according to public procurement records and internal CBP documents obtained by 404 Media. CBP spent nearly $350,000 in August 2019; more than $650,000 in September 2020; $260,000 in August 2021; close to $950,000 in September 2021; and finally almost $1.17 million in September 2022.

Full article:
 