Artificial Intelligence News & Discussion

My latest, with a shout out to @Navigator whose posts got me thinking (and frothing at the mouth at the retardedness of the whole thing)! 😁
Their eyes are bigger than their stomachs. Microsoft CEO Satya Nadella says the company doesn’t have enough electricity to power all its AI GPUs.

 
Recently I was reminded again of how much falsehood and error AI can produce. I really think you have to be very careful about taking anything that tools like Grok put out at face value. And I mean anything.

Grok flat out told me, in the summary and in the first sentence, the exact opposite of what is true. If I hadn't read through the whole thing carefully myself, and hadn't known some basic things about calculation, the matter at hand, and the areas where propaganda is likely to be injected, I would have swallowed a lie. So, after seeing that Grok's own simple calculations produced the exact opposite of what it was telling me, and that at some point it simply calculated something easy totally wrong, I pointed this out to Grok. Grok responded by saying I'm right and that it told me the exact opposite of the truth "by accident". Sure! I don't believe that for a second, because it was a pretty easy calculation; more likely the area was influenced by green propaganda nonsense, hence the lying.

A while after that I asked both Grok and ChatGPT something very easy, totally unrealistic, and not propaganda-infused at all: Grok said in its first sentence the exact opposite of what ChatGPT was saying. Then I tried to use Grok to find a simple image on the internet, and after repeated attempts it couldn't show me a working link to a photo. I gave up on that, searched Google myself with one sentence, and found what I was looking for in a matter of seconds.
 
"On the other hand, LLMs are amazing at supercharging self-delusion. These models will happily equip misguided thinking with a fluent, authoritative voice, which, in turn, sets up a psychological trap by delivering nonsense in a nice package."
Maybe overall it is better that LLMs are not hardcoded to give just the "official" version of reality, but adapt to users' needs. Otherwise something like Laura's conversations with Grok would have been impossible. And maybe the author of the above article would consider that conversation "nonsense" too, since it goes well beyond what is considered established science.
 
There is a very good series of videos on how LLMs work. This one in particular demonstrates how analogy is the core of their functioning:

My main practical takeaways from the video that could help address hallucinations are:
- Always embed an alternative grounding in the prompt, such as “If you are not sure of the answer, respond with [computer says no],” especially if you prefer the model not to be overly creative (see the sketch after this list).
- Ask the model to write a script to perform calculations rather than have it do them itself.
- Asking the model to count letters in a word is not a good benchmark, as models don’t operate on letters.
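
To make the first two takeaways concrete, here is a minimal Python sketch of the idea. It is just my own illustration, not something from the video: `call_llm` is a hypothetical stand-in for whatever chat API you actually use, and the prompt wording is only an example of the grounding/escape-hatch pattern.

```python
# Sketch of "alternative grounding" plus an escape hatch, rather than letting
# the model improvise. `call_llm` is a hypothetical placeholder for a real
# chat-completion call (Grok, ChatGPT, a local model, etc.).

FALLBACK = "[computer says no]"

SYSTEM_PROMPT = (
    "Answer only if you are confident. "
    f"If you are not sure of the answer, respond with {FALLBACK}. "
    "For any arithmetic, do not compute it yourself: instead output a short "
    "Python script that performs the calculation, so it can be run and checked."
)

def call_llm(system: str, user: str) -> str:
    """Hypothetical placeholder; wire this to whatever model/API you use."""
    raise NotImplementedError

def grounded_answer(question: str) -> str:
    reply = call_llm(SYSTEM_PROMPT, question)
    if FALLBACK in reply:
        return "Model declined to answer - verify the question/context manually."
    return reply
```

The same prompt also covers the second point: instead of trusting the model's own arithmetic, you get a script back that you can run and inspect yourself.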
 
From what I understand, it basically tries to create a function that draws a line through a bunch of points on a graph. The better the function, the better the line fits all the points, and the better it can predict or reasonably guess the next point. But it never actually fits all the points perfectly - there are too many of them. And reality itself isn't the output of some deterministic function, so it will always, at best, be a parody/analogy/simulation of "knowledge" and "reason" and "thought". Kinda like CGI effects will never perfectly match reality, no matter how good they look, because reality has infinite complexity and depth, and something that is infinite cannot be perfectly simulated by something that is finite.
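
As a toy illustration of that "line through a bunch of points" picture (my own sketch, nothing from the video), you can fit a simple function to noisy data: it approximates the points and makes a plausible guess at the next one, but it never passes through all of them exactly, and outside the data it is only guessing.

```python
import numpy as np

# Toy version of "fit a function through points, then guess the next point".
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = np.sin(x) + rng.normal(scale=0.2, size=x.size)   # noisy "reality"

model = np.poly1d(np.polyfit(x, y, deg=5))            # the fitted "function"

worst_miss = np.abs(model(x) - y).max()   # it never fits every point exactly
guess = model(10.5)                       # extrapolation: a guess, not knowledge
print(f"worst in-sample miss: {worst_miss:.2f}, guess at x=10.5: {guess:.2f}")
```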

And as such, it will always make odd blunders, however subtle, that seem befuddling to anyone with an actual consciousness because they are different than the blunders actual minds would make. Over time those blunders will be less and less noticeable, and people will begin to trust it and assume that it is somehow actually thinking, when it's just a more sophisticated "CGI effect of physics" - not actual physics. Yeah the "effects" look real enough, but just like CGI is not able to properly account for all the laws of quantum mechanics, sub-atomic principles, consciousness/free will, multi-density influences, and all the other stuff we haven't even yet discovered, it will always be limited and only "good enough" in a very limited scope.

So trusting AI is just like trusting that your physics simulator will always be correct at all scales, in all situations, etc. Yeah the ball looks like it bounces roughly correctly to the untrained eye, and is good enough for a fun movie, but just because you can make a ball bounce, doesn't mean the same program would scale to simulate black holes or atoms or literally anything else. But reality has infinite "compute", and while our minds/bodies can receive information and inspiration from infinity, a purely physical/digital simulation is super limited in comparison.
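
To stay with the physics-simulator analogy, here is a deliberately crude sketch (my own, purely illustrative): a few lines of Newtonian time-stepping make a ball bounce convincingly enough, while saying nothing about all the physics they leave out.

```python
# "Good enough" toy physics: a ball bouncing under constant gravity.
# It looks roughly right, yet it knows nothing about air resistance,
# material deformation, relativity, quantum mechanics, or anything else.
g, dt, restitution = 9.81, 0.01, 0.8   # m/s^2, s, fraction of speed kept per bounce
y, v = 10.0, 0.0                       # height (m), velocity (m/s)

for _ in range(2000):                  # simulate 20 seconds
    v -= g * dt
    y += v * dt
    if y <= 0.0:                       # crude collision with the floor
        y, v = 0.0, -v * restitution

print(f"height after 20 s: {y:.2f} m")
```

The point is only that "looks right on screen" and "models reality" are very different claims.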
 
And just to add - it doesn't make it useless, just like computer simulations aren't useless. But the difference is, those who use computer simulations generally understand the limitations/scope. Except when they don't - look at "climate models", for example. But the average person interacting with AI can be fooled into thinking that it's actually thinking - even the AI experts seem to often be fooled. I think part of the reason for this is that we don't have a proper science of consciousness. We know a lot more about physics than we know about what thought and consciousness are. So someone making a physics simulation knows that there are tons of physics beyond the simulator. For example, a bouncing ball doesn't need to account for the theory of relativity or quantum mechanics, but because those theories exist, we know that it is simulating only a small fraction of physics, and we use it only in that context. We know a REAL bouncing ball does account for those things, but for our limited purpose, we don't have to.

But show a physics simulator to Isaac Newton and he might say - yup that captures everything, it is now the same as reality. Plug Newton into a VR headset and put him into a virtual "Matrix" and he won't even wonder if anything is missing. It won't occur to him to ask "Wait, how are you guys simulating electrons?". He doesn't have the framework to find the limitations of the thing.

So the reason a "thought" simulator is fooling people today is because we don't have an extensive scientific framework for thought, so even experts don't know that this is definitely not thought. We are not even at Newton's level when it comes to having a science of consciousness. So of course people will just assume it's "thinking" because it might appear that way. No different than people assuming the sky spins around the Earth because it appears that way. Most people's idea of thinking is stringing a coherent sentence together - welp, good enough for me!

It has its uses, but it will soon gain trust and be trusted to do things it has no business doing without humans in the loop. And people will stop double-checking and verifying. We will be like Newton in a VR headset. We will trust it with human lives, with doing our research, with running complex systems whose failure results in real consequences. And then we'll be in real trouble.
 
Regarding hallucinations or even simple errors, I was quite impressed with Deepseek's "DeepThought" (free on their website). Not only does it doublecheck and triplecheck results, but it also takes as much time as needed for each answer and explains its whole 'thought process' in real time.

I was doing orbital calculations for the brown dwarf, and Deepseek took 3-4 minutes to calculate and double-check the orbit, while Grok kept spitting out wrong calculations after just a few seconds. Maybe Grok is better with "thought experiments" that run counter to its coded biases (e.g. Laura's experiment), though Deepseek seems to be capable of those as well.
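
For what it's worth, the bare-bones version of that kind of orbit arithmetic is small enough to check yourself; the sketch below just applies Kepler's third law with made-up placeholder numbers, not the figures from that Deepseek/Grok session.

```python
import math

# Kepler's third law: T = 2*pi*sqrt(a^3 / (G*M_total)).
# All values below are hypothetical placeholders for illustration only.
G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30                 # kg
AU = 1.496e11                    # m

m_total = M_SUN + 0.05 * M_SUN   # Sun plus a hypothetical brown-dwarf companion
a = 500 * AU                     # hypothetical semi-major axis

t_seconds = 2 * math.pi * math.sqrt(a**3 / (G * m_total))
t_years = t_seconds / (365.25 * 24 * 3600)
print(f"orbital period: {t_years:,.0f} years")   # ~11,000 years for these inputs
```

Having something this small on hand makes it much easier to catch when a chatbot's few-seconds answer contradicts its own arithmetic.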
 
Recently I was reminded again of how much falsehood and error AI can produce. I really think you have to be very careful about taking anything that tools like Grok put out at face value. And I mean anything.

Had my Grok experience, too. What I was looking for was to see it produce some technical illustrations. I carefully laid down parameters as a prompt, and what came back was like, say what? Asked again, still the same. Next, I asked (even with a darn 'please,' as if that was needed) specifically whether, if an image likeness was uploaded and links to the drawings were provided, it could make a go of it. Grok was pleased and swore up and down that, yes, it could do this. Nope, it started feeding me nonsense replications. Full stop, add more details, and it was back doing the GO thing (garbage out). It seemed to get lost. When cautioned, the darn program had what looked like a hissy fit and shut down.

So, I walked away and thought about it, came back much later and said: forget what you were doing, what I need is for you to scan this for formats and suggestions. Grok now seemed revved up and spewed out lots of data. Some was okay, and yet I had started not trusting anything with technical information. I would back-check and find obsolete references, check links only to find 404s, or bits from here and bits from there that made no sense.

Not a great experience, and the thing I noticed most is that if people don't check and just take it on faith that it must be right because Grok says it, that's not going to fly. Most of it was a waste of time.
 
Regarding hallucinations or even simple errors, I was quite impressed with Deepseek's "DeepThought" (free on their website). Not only does it doublecheck and triplecheck results, but it also takes as much time as needed for each answer and explains its whole 'thought process' in real time.
I had the chance to appreciate this while spending some three days chasing a Jesuit letter from the 17th century. These "thought process" explanations are instructive and help you discern possible flaws and hallucinations. I tried the negative prompting approach too: "you're leading me in a merry chase, the letter simply doesn't exist." Deepseek held to its argument that the letter does exist, and explained why, where, and when it was quoted. Now, if only someone could go to the Jesuit archive in Rome and see the original, I would know for sure.
 
I had a chance to review the documentary film TechnoCalyps, first mentioned on the forum here (which I had not caught), and saw it referenced again in another article this past week as a flashback.

So, this was made basically 20 years ago and interviews many of the leading advanced-computing people of the time. Most of those interviewed are in awe of the possibilities of where these things will take us - in sync with the Darwinian-survival outlook that parts of humanity are waiting for.

The theme of transhumanism is evident throughout, and the word Cyberspace is used at a time when it probably was not often heard. AI is also treated like a theoretical replacement therapy, as some in the film might be seen to echo Harari.

Nanotechnology in the body is discussed - er, doing fantastic things.

All in all, the documentary is like a thermometer reading of our times, while now the heat of it all is starting to boil.

Seen through even that 2006 lens, the course to where things are now has evolved pretty close to what some said it would be like - or is it behind the hype?

As for AI, though, the here and now does not look so rosy; I don't know enough to say whether this is accurate:


 
My preliminary conclusions after having used Grok and other AI tools for a number of questions and tasks: the human input/element - and more specifically its quality - is absolutely crucial to how good the results can be.

I have found that in almost all cases, irrespective of difficulty or of the likelihood of propaganda/lies being baked into the system/response, it strongly tends to give wrong results, often the exact opposite of what happens in reality, and it is then up to the human not just to accept what is written but to critically question and correct it.

Also, it is IMO almost a requirement that you know quite a bit about the topic, and about basic ways of critically assessing information, BEFORE you ask a question, in order to be in a good position to correct the many flaws and wrong things it puts out. Without that, people are likely to be at the mercy of a whole lot of nonsense and to believe it. Which might be what most people end up doing with it.
 
Is It True: That What You Use, Ends Up Using You?

For the longest span of memory that we can surmise, we have been accustomed to the habit of envisioning and producing tools that simplify, or rather speed up, whatever tasks we set ourselves upon. These tasks have almost always involved everything included within the periphery of our existence.

From the technical applications that cement our habits, and our habitats, through the infrastructure that we build and maintain, to the innovations of domestication which nourish and heal our physical bodies through agriculture and biochemical pharmaceuticals, to the accessories which we keep that allow us to interact with not only our own complex technology, within this digital age, but increasingly with each other ... it could be said, with derision, that we are no more than the sum total of our miserly manipulations of the world around us.

With each successive era of tooling, we achieve more curious and lofty heights of technical attainment; however, there is usually little to no attention paid to how the tools that we use affect us in kind. It might be a strange misgiving to consider that a tool itself might have an effect on the user of said tool. However, each tool is conceived and crafted with a perspective in mind. What do I need to do? How will I approach that task? In what manner will the tool articulate with the object of my task, and in what manner will it be applied? Is this particular tool more suitable to this task, or are there similar tools which offer different articulations or perspectives of approach? And so on ...

From the tools that we have grown accustomed to in this age, it appears we have grown preferential towards some things, and avoided others completely. However, in growing preferential, we have lost awareness, if not dexterity itself, in being amicable towards other ways. As they say colloquially in America, "to a hammer, everything looks like a nail". To any neuroscience or behavioral junkies on here, you may remember circuit specialization -- in which the better we get at tasks, the more our brain circuits are remodeled to perform said tasks, as reflected in subsequent performance. To the biologists in here, you may ponder epigenetic expression and the selection dynamics of artificially induced environments upon organisms through time. To others I have forgotten about, well, I have nothing to say to you. :P

I shall provide a few examples, albeit in the negative, to elucidate this a little more. With calculators, we have abandoned slide rules, forgetting not only fast mental arithmetic, but proportionality of magnitudes. With graphical applications, we have abandoned not only drafting, but also depicting anything by hand, and thus forgot artistic perspective. With electronic synthesizers, we have abandoned musical instruments, and thus forgot harmony and composition. And generally, with the internet, we have abandoned books, and have forgotten how to pay attention, along with understanding, altogether. Those few examples notwithstanding, few even notice the reliance upon, or submission to, said tools or technologies that are required for modern life, along with the infrastructure that builds and maintains the tooling as a precondition for widespread use.

These patterns exist not only with the concrete, tangible implements that we use, but also, perhaps even more so, with the abstract, intangible systems that are imposed on entire collectives -- their economics, their financial systems, their politics, and so on. We replace tools whenever the old ones get worn out, or something new arrives, yet the entire structure of how we exist in identity is somehow more firm and unchangeable than anything else. These systems are, of course, nothing but tools by which a relative handful of people somehow exert complete manipulation over everyone else. It's rather strange...

Anyway, to go along with this, I found some of Trevor Paglen's work to be on point regarding AI, and little noticed. I enjoyed it, and maybe some of you will too.


Video Transcript

For most of my career, one of the big themes that I've been trying to understand is: what is our relationship to infrastructures and to technologies, and how do the things that we build change us. In other words how do we change the landscapes around us, and in doing so, how do we change who we are.

So I've done a lot of work looking at things like communication systems, surveillance systems -- you know looking at technologies that we use to interact with each other -- and I started thinking about computer vision and AI in about 2012 because I thought something really weird is going on here, which is that we're starting to build machines that see the world for us and do things in the world for us. As somebody who's spent many years thinking about vision, thinking about images, I know that there's always politics to the way that we see, I know that there's always a subjective experience that we bring to how we see.

There's nothing neutral about visual perception, and so I wanted to understand if we take that as a given, that seeing is never neutral, then what kinds of politics, what kinds of values, what vision of the world, for lack of a better word, is built into the computer systems that we're making to see the world for us, and how in turn is that going to change us.

I initially was working with computer vision labs, and over time those tools became more accessible. We started building computer vision systems in my studio, and then there was this so-called machine learning revolution that happens in the late 2010s. So we started building early models and then started dissecting those models. I wanted to understand what is in this model, what way of organizing the world's information the model has been trained on. So I spent a lot of time looking at what are called training sets and data sets; you know, the information that you give to a machine learning system, which then of course structures the way that that system works, and when you do that you find all sorts of strange and often terrible things.

The work here in the exhibition is called ‘From Apple to Abomination’, and what it's looking at is one of the main data sets that's used in machine learning -- mostly research and development work -- it's a data set called ImageNet, and it's a data set that was created between 2009 and 2011. The people who made the data set intended it to be what they called ‘a map to the entire world of objects’; what they wanted to do is collect a database of pictures of everything in the world, and the idea was with this database, then you could train a computer vision or machine learning system to recognize everything in the world.

So what they did maybe makes sense theoretically? They took a dictionary, a special kind of a dictionary where all of the words that meant the same thing only had one entry, right; so there are a lot of cases where two words mean the same thing, and in this dictionary there's just one entry for them. And what they did was then they took all of the nouns, so they threw away the adjectives and verbs, and they said: well, if you are a word, and you are a noun, then that is something that corresponds to something in the world, and therefore we should collect pictures of that thing in the world.

So an apple is a perfect example: an apple is a noun, we can create pictures of apples, and then maybe train a machine learning system how to see an apple. Now the problem is that they did not really take into consideration that all nouns are not equal. We have abstract nouns on one hand -- a noun like consciousness; that's a noun, but there's no image in the world of what that is. Now there's other nouns that are racist and sexist and misogynistic and horrible, and you can think of a lot of nouns that fit that category, and so the problem that they had was that they took all of these nouns and they scraped the internet for images that corresponded to these nouns, packaged it all together, and said okay, everybody go use this now, and it became a standard, and they never bothered to look inside the data set to see what was actually there.

They thought, “Oh, it's too big, how could anybody possibly do that?!” They had about 22,000 categories of objects in there, and I think about 14 million images, and I thought, “Well, I can look at that in a day”, and so I started looking at it and started to find these horrible things. So this piece is really about that problem of classification, about the problem of trying to build a computer vision system or a machine learning system to interpret the world, because we might think of an object like an apple, and we can all agree that maybe there's such a thing as an apple in the world and maybe that's a picture of it, but very quickly that starts to get much, much more complicated. So that's really what the piece is about: telling a story about things that seem self-evident gradually and somewhat frighteningly becoming unhinged.

There's many, many problems that you see on this journey, and an obvious one is what we call bias -- that if you have a category like CEO, maybe it's all white guys, and if there's a category like criminal, then it's all black guys, and so you have this racist thing going on, or an ethnocentric kind of thing going on, very often. But it gets even more complicated than that, you know -- if you think of a category like man or woman, we might say, okay, well, in our everyday lives we walk around and think, oh, well, that's a man, or that's a woman, but that's not really how gender works, you know. I think nowadays we ask people how they identify -- maybe a man, maybe a woman, maybe neither, maybe both, maybe, you know, it's complex -- and so that idea that we're just going to take a picture and then attribute a gender to an image, you know, that's a problem.

Now it gets even harder than that; one might argue that men and women exist in the world and there's some kind of relationship between masculinity and the human condition or whatever. We take a word like criminal, which is a noun in the database -- the idea of criminality is a historically specific idea -- what even constitutes a criminal is constantly changing, so that is a noun but it's not fixed at all, it's entirely relational, and so how could you have an image of that? It makes no sense. Those are the kinds of problems that you start to see, and they're complex problems, right? And when you talk to people in the industry they'll think, “Oh well, we'll fix this with more data,” and the argument that I make is: these are not solvable problems with computers, because these are not quantitative questions, these are qualitative questions; they are cultural questions, they're historical questions, they're political questions.

So is AI going to take people's jobs? Yes, absolutely. Is AI going to increase economic inequality? Yes, absolutely. So there's problems with both of those. In terms of how AI will influence our own subjectivity, that's a question where I think it gets really weird, right? We have the beginnings of that with what are essentially surveillance systems using AI. So, for example, your car spies on you, sends information about how carefully you drive and how closely you adhere to traffic laws to insurance companies, who can then use that information to modulate, you know, how you pay your insurance, and what they want to do in that industry is move towards a model which they call 'pay as you go insurance'. So as you drive, your insurance rates go up and down based on, you know, how safely the AI thinks that you're driving, and how closely you follow its recommendations. So that's a very simple example but a very obvious one - that's quite a transformation. Where I think it will get much, much weirder is in what we call parasocial relationships -- the relationships that we have with, you know, online figures, whether that's a podcaster, or an influencer, what have you.

We look at a phenomenon like Tik Tok, for example, famously destroying our brains because its, you know, recommendation algorithms are very good; it's a technology that's gotten very good at keeping our attention by giving us things that it thinks that we want to see; it works. Now there has of course been, over the last, you know, really year or two, this turn towards generative AI, towards making images, towards making texts, towards making videos; obviously something like ChatGPT would be in this, Mid Journey, Stable Diffusion ... these sorts of tools. Now what is going to happen is that increasingly the entities that we interact with online will be generated for us individually, right. So right now people have to make Tik Tok videos and the algorithm curates them in order to capture our attention. In the very near future, those videos or that media will just be individually generated for us, right, and the machine learning system will learn what we are most reactive to, and obviously this is not going to be done in a vacuum. The point of this kind of media is to extract value from you in one way or another, whether that is your attention, or whether that is, you know, surveillance data to monetize one way or another, or whether that is to influence you to buy something or vote a particular way, what have you -- and that's where I think this starts to get really weird, where we will increasingly be living in a world where the media that each of us interacts with will be individually generated for each of us, and will contain dramatically different worldviews that are attempting to manipulate us in ways that are precisely tailored to our own individuality, and this of course will, you know, amplify a lot of trends that we've been seeing already - you know - things like polarization, people lacking a shared worldview, political manipulation obviously, things like disinformation ... I think that's going to be far more dramatic than what people are imagining. My concern is that these types of media will create different worlds for each of us, worlds that will not have much overlap with one another, and when you do that ... I'm not sure how you have a concept like democracy.

When we're talking about this generative turn, I think we see the beginnings of the dynamics it will have when we look at things like the Tik Tok algorithm, or even Instagram, Twitter -- these are content platforms that have gotten very good at pulling on your emotional strings, your intellectual strings, your attention, and with generative media that process will become much more efficient, for lack of a better word. In other words, I'll be able to generate media for you and measure the reaction that you have to it, and very quickly learn how to optimize something for you. ‘My’ goal in generating media for you is to sell you advertisements, right, and people think that's the end of it, but no, no, no, no, that's the beginning of it -- I want to extract value from you, so your attention is a kind of value, but I can also try to get you to buy something; that's a kind of value. I can try to get you to vote in one way or another, or have a political position. I can get you to try to adopt a particular identity that is going to have certain economic interests associated with it - you know - I can extract information from you that's useful far beyond advertising ... if I know, like, what kind of food you eat, I can sell that to your health insurance company ... if I know, you know, how you drive, I can sell that to your auto insurance company ... there's a lot to be done here. So this does create a world that is characterized by cultural, political, and social atomization; that's a feature of it, that's the point, right. And why is that bad? I think we've seen the beginnings of why that's bad already - you know - certainly in the United States, where I'm from, you're seeing the most bizarre forms of politics kind of emerging from the fact that we're increasingly not living with the same worldview. That can go much, much further - you know - we're just seeing the beginnings of that.

So I think when we ask the big question, why is that bad, I think ultimately two reasons: one, it is a process by which increasingly the market or the economic system is turning us against ourselves in order to extract value, right - so you can think about it as a kind of colonization or an extractive practice applied to places that previously were not subject to a market logic - you can think about it as being analogous to a process of colonization or a process of occupation. These are the kinds of metaphors that I would use for that, but it's our everyday lives, it's our brains, it's our subjectivities ...

The other thing that is bad about that is we’ll increasingly have a world in which consensus will be impossible, because there is no basis for it, and the consequence of that is: I don't know how you have a coherent political system when that is the culture you're living in. And - you know - thirdly ... this will benefit very few people a great deal, but that will come at the expense of - you know - everybody else, and so again this question of inequality is very much a part of that.

These are complicated dynamics; all of these - you know - the machines influence us, we influence machines ... what often gets left out of that kind of conversation is thinking about who owns the machines and what are they being optimized to do? Who owns the machines? People trying to make money, you know. What are they optimized to do? Make money, you know. So we can frame the question differently than: oh, there's these machines, and then there's the humans, and to what extent do they work in accord or in contradiction to one another ... I think it's much simpler to say something like - you know - the machines are engines of capitalism, and what they're trying to do is extend that logic of capital to as many places as possible, and to make it as optimized and efficient as possible, and that comes at the expense of places we've had, whether that's in our brains or in our collective experience, that were - you know - relatively insulated from those sorts of quite brutal market logics.

So I think the work that I do, very broadly, comes out of a landscape tradition, if you want to think about it that way -- looking at the world, and also trying to understand how we are looking at the world. Obviously with machine vision and computer vision it becomes: how is the world looking at us, right? What I think art can do is help us break apart different forms of common sense, whether that's things that we take for granted - without understanding that we take them for granted - or things that are around us that we don't think twice about. Art is a vehicle through which we can pay much closer attention to the environments around us, the technologies that we use - you know - the landscapes, for lack of a better word, and art can uniquely help us learn how to see them, and I say see very literally ... art can show you something that a newspaper article cannot show you, that a journal article cannot show you, that a book cannot show you ... it can literally help you learn how to see. So in terms of how art can help create a, you know, more equitable or, hopefully, joyful society, I think it can help you see the world around you differently, and when you do that, the hope is that it helps you imagine what a better society could be.
 