Artificial Intelligence News & Discussion

As I see it, the danger lies not so much in these models themselves as in people's fascination with them, and soon their reliance, dependence and addiction, on the one hand, and the weaponizing of that reliance/dependence/addiction in the hands of the PTB on the other.
It's no doubt for this reason that social networks (especially Twitter) are full of messages extolling the virtues of AIs and their many plugins, which are supposed to do everything (even the coffee? like when we were told that computers could "do everything"!). Here are a few examples:
[Twitter screenshots]

The same in French (translated):
The death of Google and search engines is at hand.
OpenAI has released 137 ChatGPT plugins available to anyone, anywhere in the world.
And these plugins can do it all.
Absolutely anything.
Here are 5 ChatGPT plugins to launch your first online business.
[screenshot]
For some time now it's been a veritable bombardment, and a dangerous addiction in the wrong hands. It was already hard enough to tell what's real from what's not, but now...
 
Old-school experienced Labor (or Skilled Labor vs. Unskilled Labor) that AI cannot control.

Competing A.I. Programs Go to War With Each Other: Can Humanity Control AGI and Beyond?
Premiered 20 hours ago #AGI #AI #c60evo
In Episode 420 of the MIAC Podcast, Chad Kimsey talks about how humanity will need to fit into an adjusted world as A.I./A.G.I. reconstructs it. We need to think about competing A.I. programs going to war with each other, and whether there is a way to firewall that off. A.G.I. is here and there is no going back.
 
I've been working with ChatGPT for some months now. It's been very useful in several areas where I need to carry out my duties, and also for getting information related to my job and building plans, photos, presentations, ideas, creativity. I'm only using the free model, and I'm not downloading anything else onto my computer since I don't have the confidence to do that. I've made comparisons with other models, but they were unsuccessful because the interaction with those systems didn't give the final output I was looking for.

I've noticed that some responses from the system weren't what I expected; sometimes I've needed to dig in different ways to get the right information. This can be done by learning how to write prompts, so I can lead the system to the answer.

So it will always depend on who operates the machine, right? From my point of view, so far it's like a hammer: sometimes I can use it, and sometimes I can't, because it's not useful.

I've seen so many things about AI, and I can't say it's one way or another. I'm just sharing my experience with the tool, as with any other tool we have 🙂
 
There are currently 100 million users on ChatGPT, and as the years progress that will be well into the billions...

[screenshot]
 
I think the prevalent question on most people's minds now is: will A.I. better serve humanity? At this point, as of June 1, 2023, it's reasonable to say that A.I. will change humanity irrevocably. There's no going back, and we have to learn how to cope, with intelligence, calmness and overall awareness.

We definitely need to pay more attention to how Artificial Super Intelligence creations are going to be REGULATED. Otherwise, people will use them for nefarious purposes; to some degree they already have, scamming people out of money and committing other fraud.
Elon Musk talks about this subject with Bill Maher (time index 15:20):

Elon Musk also goes into detail on the pros and cons of A.I. in this 20 minute video:

You can also watch the YouTube 8 part documentary "How Far is Too Far? The Age of A.I." here:

For the latest on concerns about A.I. and Digital Super Intelligence watch this video with former Chief Business Officer of Google and A.I. expert author Mo Gawdat:

Lastly, I wanted to include something additional to lighten the mood in case this subject matter is too overwhelming.
Artificial Intelligence: Last Week Tonight with John Oliver (Cat rapping clip - time index 3:10):
 
I think that because the AI Pandora's box has been opened in the public sphere, it should ideally be distributed and decentralized. Regulation, i.e. centralization, will only serve governments, corporations and the powers above them, who will nefariously define what's true, how to behave, and what to think (even more so than before), holding a direct cognitive weapon to control and enslave people on a personal basis. "Humanity" has already lost the battle against a first incarnation of proto-AI in the form of centralized "social media" and the hijacking of the internet. The next stage will be even more devastating.
 
I cannot dispute those points, and they do paint a bleak picture of those behind the centralization of this next phase in technology. I can't help but hypothesize about programmers who will build for altruistic purposes (such as medical breakthroughs) versus those who will build for STS purposes (possibly on a very catastrophic level, such as war machines or mass manipulation beyond social-media-based cognitive dissonance). With the neural network blueprints, you might see Commander Datas or T-100 Terminators in years to come. Maybe; only time will tell.

These fears surface more every time I speak with like-minded people, especially in the digital industry.
 
We've also reached the point where the human mind is being framed to justify the usefulness of AI.
Questions such as "what can I do?" are quickly converted to "what can AI do for me?"
It's like a new era of consumerism has begun.
Consume just to consume...
AI just to AI...

To some extent, we are already cyborgs.
 
AI (ChatGPT) in Stockholm

In fact, our union in Stockholm published an article over 1½ pages in which they put questions to an AI about various work-related situations and issues. I thought... you've got to be kidding me?! (What are they trying to prove here?)

"ChatGPT gives advice to SEKO union Stockholm" :umm:

According to the headline, it was literally a WTF moment™ for me in the bathroom while studying the printed paper that had just been delivered through the door...

The answers were as polite as they were polished, kind of like asking IKEA for a PUBLIC statement while expecting to find in it the truth about how things really are/were. Slightly woke, too (climate change).

Maybe it was a teaser. You know, yummy like pralines for the audience.

Here is a photo from the paper

[photo]
 

AI-controlled US military drone ‘kills’ its operator in simulated test

No real person was harmed, but artificial intelligence used ‘highly unexpected strategies’ in test to achieve its mission and attacked anyone who interfered



In a virtual test staged by the US military, an air force drone controlled by AI decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission, an official said last month.


Now that it is a serious red flag. But was it not expected?...🤔
 

This was only careless programming...
 
AI systems going rogue is, in general, a feature, not a bug. Broadly speaking, they are programmed to minimize a loss function. In other words, they are given a task to accomplish and trained to optimize a mathematical quantity that represents the accomplishment of that task, for example winning a chess or go game.

As long as the space of parameters over which the program tries to accomplish that task is limited (the rules of chess, the boundaries of the board, etc.), there is less chance that the program finds "unpredictable" ways to accomplish it. With more parameters to play with, and more complex environments, the optimized route towards the goal can be other than what one anticipates. For example: your goal is to reduce pollution. The system then finds in its data that humans contribute to pollution, therefore a "logical" action is to eliminate the cause in order to minimize the parameter "pollution". The AI is happy because it accomplished the task it was programmed for, and there is nobody left to praise it for a good job. It's a caricature of what could happen, but in the broader sense it's a possibility despite all the programmatic limitations one could include in the program. Just think of how politicians and other bandits of the same type find ways to circumvent the laws. A killer robot can be programmed not to kill civilians, but it could easily reason: these people are in the way of accomplishing the mission, therefore they are not civilians. We have a model of what AI internal logic may look like: psychopaths. OSIT
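The loss-minimization point can be sketched in a few lines. This is a toy illustration, not any real system: the state, the "pollution" objective, and the action set are all made up for the example, but it shows how a bare optimizer picks a degenerate solution when the objective omits the designer's implicit constraints.

```python
# Toy sketch of "specification gaming": an optimizer given only a
# proxy objective will happily choose a degenerate action, because
# nothing in the objective encodes what the designer actually meant.

def pollution(state):
    # Proxy loss: total emissions (factories pollute more per unit
    # than people, but people are numerous).
    return state["factories"] * 5 + state["population"] * 1

def candidate_actions(state):
    # The action space includes options the designer never intended.
    yield {"factories": state["factories"] - 1,
           "population": state["population"]}          # close one factory
    yield {"factories": state["factories"],
           "population": 0}                            # "remove" the population
    yield dict(state)                                  # do nothing

def greedy_step(state):
    # Pick whichever candidate minimizes the proxy loss.
    return min(candidate_actions(state), key=pollution)

state = {"factories": 3, "population": 100}
best = greedy_step(state)
# Closing a factory gives loss 110; zeroing the population gives 15;
# doing nothing gives 115. The greedy optimizer "eliminates the cause":
# the proxy is satisfied, the intent is violated.
```

The fix is never just "add more rules": each constraint only shrinks the action list, and the optimizer still exploits whatever the remaining objective leaves unsaid.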

I recently heard a cognitive scientist remark that intelligent people are more likely to deceive themselves, and that wisdom is not limited to "intelligence" (obviously). AI stands for artificial intelligence, not artificial wisdom. It is also a creation in the image of its creators: narrow-minded geeks with high IQs who often (not always, but quite often) lack other complementary qualities. With its tendency for "blind" optimization, those characteristics are dramatically amplified in an "AI".
 
I have honest doubts about the potential to centralize AI development through regulation or otherwise, especially now that the cat is out of the bag, so to speak. One key reason these large language models have advanced is their open-source development. When power is concentrated in a few hands, it limits creativity and it limits how rapidly innovation can proceed. If it's an arms race and open source is the best way to lead the race, then by its nature it's not easily controlled.
 