Mind-Blowing AI Image Generator - Give Visual Representation to C's Concepts?

I wonder if it has to do with the emotional energy that the artist imprints on the work.

It is said that every work of art has a part of the creator/artist in it...
I guess it's firstly the eyes, which are weirdly misplaced and soulless in every AI-generated piece. Plus the brush strokes are the same every time, and some parts blend into each other in a strange way: individual strokes aren't recognizable, and details are fuzzy and smeared together. After looking at enough AI art, one can easily tell whether something was AI generated or not.
It's still fun though, and the generated ideas can be cool.

Let's take those as an example:

[Attached image: aldona123_a_cult_carrying_a_ritual_in_a_big_dark_room_high_tech_baf5a650-2e8c-4685-a7a6-90027b...png]

Can you tell which one was AI generated and which wasn't? :) Maybe this little test will give you some clues why AI pieces seem soulless and machine generated.
 
TC Lethbridge could dowse original artwork, there wasn’t anything in the replicas such as pictures in books.

Makes one think of what was lost when the Spanish burned all the leather scrolls of the Aztec Empire. There was lots more information placed into books before the invention of the printing press. Hypothetically with dowsing, you could trace the distortions in certain texts from translations if they were all hand written.
I am in the process of reading 'The Power and the Pendulum' by Lethbridge. I really want to read about his dowsing of original artwork! I'll look for it.

A quick experiment inspired by benkostka's hypothesis, devised off the top of my head, FWIW; the method is entirely my own guess, and it comes from the specific dowsing perspective I use. Looking at a reproduction of a painting by the B.C. artist L. Bisset on a greeting card in front of me now, I asked how accurately the representation on the card reflected the original artwork. The painting is titled 'Off Season', Duke and Loyal II. I got that the reproduction is 93% accurate, true to the original work of art. That suggests to me that the 'accuracy' of the information conveyed is reduced but still available in reproductions. Basing the dowsing on an arbitrary 100% scale is very limiting, but for a general test it's what I use.

Looking at the greeting card, 'accuracy' would have to cover the canvas, the paint composition, and subtle differences in thickness. I asked: if a skilled someone perfectly replicated the painting with the same canvas, paints and artist's energy, how close to a 100% accurate rendering of the artwork would it be? The answer was 96%, if perfectly replicated by another artist to show the scene. This is without a detailed frame of reference for what constitutes 100% accurate replication, but my guess is that the key missing element is the 'energy' of the artist. (It would be interesting to try this on an abstract work of art, which I may just do when I have more time.)

I also get that how much individual energy is imprinted would depend on the artist's consciousness as he or she painted. I remember seeing a real Van Gogh at the gallery in Toronto, and today, checking it, it seemed that the impression of the painting was at least 25% the energy of the artist - that would be FRV, I guess. With this Bisset painting, by my testing it is much less. I was riveted by the Van Gogh for a long time.

Then, how accurate was the original artist's representation of the scene? Interestingly, I get 100%: it was his perception, perfectly rendered in paint. For his accuracy at depicting what was actually there, I get 90%. So, based on my flimsy experiment, in terms of the ancient texts benkostka wrote about, the accuracy would depend on the transcriber's accuracy of understanding, intention and vibe... with the understanding that that 'something' of the original creator was missing (but maybe could be dowsed, as Lethbridge may have been dowsing it in the account benkostka mentioned)? ... I hear the phrase 'angels dancing on the head of a pin' - might as well dowse that, lol! Oh well!
 
I guess it's firstly the eyes, which are weirdly misplaced and soulless in every AI-generated piece. Plus the brush strokes are the same every time, and some parts blend into each other in a strange way: individual strokes aren't recognizable, and details are fuzzy and smeared together. After looking at enough AI art, one can easily tell whether something was AI generated or not.
It's still fun though, and the generated ideas can be cool.

Let's take those as an example:

[Attachments 67901 and 67900]

Can you tell which one was AI generated and which wasn't? :) Maybe this little test will give you some clues why AI pieces seem soulless and machine generated.
Upper is AI. Dark and gloomy.
 
I wonder if it has to do with the emotional energy that the artist imprints on the work.
I think it's a bit more than that, though that is certainly present. It's the experience of the artist, the time spent, the human effort and life invested in producing something, which the observer perhaps can't articulate but understands... it connects us to the creator of the piece, whatever it happens to be. It's humanity at a distance, in both time and space, that we recognize in ourselves thanks to the artist. McGilchrist does a great job explaining that in his book.

On the topic of AI, I remembered I had watched this video of Miyazaki reacting to some experimental AI animations, here's what he had to say, I think I posted this elsewhere, but I daresay it fits in this thread:

Compare the above against some of these scenes, which were created by Miyazaki (and the music by Joe Hisaishi)

There's a lot of subtle movements (mistakes, trips, missing the target, and so on) that you might not initially pick up, but they make the animation feel very relatable, and not only that, despite the fact that there's a lot of fantastical elements in the scenes, the humanity is still present and that connects you with the creators.

 
I love how Miyazaki made such a strong statement. It looked like those kids weren’t ready for his reaction. Thanks for sharing. I picked up on Totoro about 20 plus years ago and have followed Miyazaki’s work ever since. Some of the most psychologically healthy and family-friendly content anywhere by anybody ever.
 
There is something about art in general that is often overlooked: the empathic connection with the artist. For instance, when shown cave paintings, we imagine prehistoric people with torches painting on the walls of the caves. When we look at a painting, we can see the brush strokes and think that a human sat there, contemplated a view, chose the colors and rendered those details by hand. When looking at a statue, we can imagine the carver obsessing over the details in the textures while taking care not to deform the overall proportions. When listening to a piece of music, we wonder what went through the composer's mind, or what state a performer was in to give it its unique texture. Art doesn't exist on its own; it is embedded in a uniquely shared human experience.
If art is limited to skill and precision, of course a machine programmed for that would do it, but maybe art is not only skill, and maybe we tend to forget that. Just a few thoughts on the matter.
 
I remembered I had watched this video of Miyazaki

Oh yes, I remember that video very well. ( BTW are you also a fan of Joe Hisaishi? :D )

I think what I am commenting on here fits the thread. Yesterday I received an email from one of the platforms where I upload some of my drawings. (Well, not analog drawings, but rather 3D ones made with software, while still trying to bring intention to the drawing and give it beauty. Mostly fan art.)

Greetings from pixiv Requests.

From Thursday, December 8th, all creators using pixiv on desktop or mobile will be able to use the pre-rollout version of the feature to specify in their terms whether they are posting AI-generated work.

■ The pre-rollout is available from Thursday, December 8th

Only the pre-rollout is available from Thursday the 8th. The content of the settings you choose will be displayed on each page from Thursday, December 22nd. The timing of the release on Thursday the 22nd will be announced on Twitter.

■ If you don't set your terms by Thursday, December 22nd, they will be set automatically

If you don't change this setting in your request terms by Thursday the 22nd, your terms will be set automatically based on your creator page and whether you've posted AI-generated work using the Requests feature. If the automatic setting does not reflect the actual content of your terms, please change it manually.

We encourage all creators to use this pre-rollout period to change the settings in their terms before Thursday the 22nd. Thank you for your understanding, and we apologize for the inconvenience.

In short, it asks whether one, as an artist, is willing to accept commissioned work that involves AI.

Now, every time someone uploads a drawing - whether a commission, fan art or just for fun - it asks whether you made it yourself or it was produced by AI. This change was introduced two months ago...
 
Now, every time someone uploads a drawing - whether a commission, fan art or just for fun - it asks whether you made it yourself or it was produced by AI. This change was introduced two months ago...
Interesting, so it is gaining traction, and I suppose it will continue to; there's got to be a market for it, and it'll keep fooling people everywhere... that's just the way of the world today, apparently.

If art is limited to skill and precision, of course a machine programmed for that would do it, but maybe art is not only skill, and maybe we tend to forget that. Just a few thoughts on the matter.
Yes, that is my thinking: art is more than the sum of its parts - the colors, brightness, hues and elements, or the notes, tempo and so on. AI can only emulate beauty aesthetically; for something to gain that meaning, this timeless and spaceless connection between souls, if you will, must exist, IMO.
 
From Japan, this article summarizes and highlights some of the challenges that these AI image generators pose to society:

Text-to-image AI: Powerful, easy-to-use technology for making art – and fakes

BERKELEY, Calif. -

Type “Teddy bears working on new AI research on the moon in the 1980s” into any of the recently released text-to-image artificial intelligence image generators, and after just a few seconds the sophisticated software will produce an eerily pertinent image.
Seemingly bound by only your imagination, this latest trend in synthetic media has delighted many, inspired others and struck fear in some.

Google, research firm OpenAI and AI vendor Stability AI have each developed a text-to-image image generator powerful enough that some observers are questioning whether in the future people will be able to trust the photographic record.
As a computer scientist who specializes in image forensics, I have been thinking a lot about this technology: what it is capable of, how each of the tools have been rolled out to the public, and what lessons can be learned as this technology continues its ballistic trajectory.

Adversarial approach

Although their digital precursor dates back to 1997, the first synthetic images splashed onto the scene just five years ago. In their original incarnation, so-called generative adversarial networks (GANs) were the most common technique for synthesizing images of people, cats, landscapes and anything else.
A GAN consists of two main parts: generator and discriminator. Each is a type of large neural network, which is a set of interconnected processors roughly analogous to neurons.
Tasked with synthesizing an image of a person, the generator starts with a random assortment of pixels and passes this image to the discriminator, which determines if it can distinguish the generated image from real faces. If it can, the discriminator provides feedback to the generator, which modifies some pixels and tries again. These two systems are pitted against each other in an adversarial loop. Eventually the discriminator is incapable of distinguishing the generated image from real images.
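The adversarial loop described above can be sketched in a few lines. This is my own toy illustration, not anything from the article: a one-dimensional "image" distribution stands in for faces, the generator is a simple affine map, and the discriminator is logistic regression - vastly simpler than a real GAN, but the same generator-versus-discriminator tug-of-war.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Toy GAN: the generator g(z) = a*z + b tries to turn N(0,1) noise into
# samples from the "real" data distribution N(3,1); the discriminator
# d(x) = sigmoid(w*x + c) tries to tell real from fake.
a, b = 1.0, 0.0            # generator parameters
w, c = 0.1, 0.0            # discriminator parameters
lr = 0.05

for step in range(4000):
    z = rng.standard_normal(64)
    real = 3.0 + rng.standard_normal(64)
    fake = a * z + b

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0
    # (gradient ascent on log d(real) + log(1 - d(fake))).
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - dr) * real - df * fake)
    c += lr * np.mean((1 - dr) - df)

    # Generator step: push d(fake) toward 1 (non-saturating loss).
    df = sigmoid(w * fake + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

# The generator's offset b drifts toward 3, the mean of the real data,
# until the discriminator can no longer separate the two distributions.
print(b)
```

With a linear discriminator only the means get matched, which hints at why real GANs need deep networks on both sides: a weak discriminator gives the generator nothing to learn the finer structure from.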

Text-to-image

Just as people were starting to grapple with the consequences of GAN-generated deepfakes – including videos that show someone doing or saying something they didn’t – a new player emerged on the scene: text-to-image deepfakes.
In this latest incarnation, a model is trained on a massive set of images, each captioned with a short text description. The model progressively corrupts each image until only visual noise remains, and then trains a neural network to reverse this corruption. Repeating this process hundreds of millions of times, the model learns how to convert pure noise into a coherent image from any caption.
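The corruption half of that process is easy to make concrete. Below is a minimal sketch of my own (not from the article) of the forward noising pass used by diffusion models, with a 1-D signal standing in for an image and an illustrative linear noise schedule; a real model would additionally train a large network to predict and subtract the added noise, step by step, in reverse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward "corruption" pass: each step keeps a fraction of the signal
# and mixes in Gaussian noise; after enough steps only noise remains.
# Schedule values are illustrative, not taken from any particular paper.
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # per-step noise amounts
alpha_bar = np.cumprod(1.0 - betas)     # cumulative signal retention

x0 = np.sin(np.linspace(0, 2 * np.pi, 64))  # stand-in for an "image"

def corrupt(x0, t):
    """Jump directly to step t of the Gaussian forward process."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * noise

early, late = corrupt(x0, 10), corrupt(x0, T - 1)
# Early on the signal dominates; by the last step it is essentially gone.
print(alpha_bar[10], alpha_bar[-1])
```

Training then amounts to showing the network millions of (corrupted image, noise) pairs at random steps t; generation runs the learned denoiser from pure noise back to step 0, steered by the text caption.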
While GANs are only capable of creating an image of a general category, text-to-image synthesis engines are more powerful. They are capable of creating nearly any image, including images that include an interplay between people and objects with specific and complex interactions, for instance “The president of the United States burning classified documents while sitting around a bonfire on the beach during sunset.”
OpenAI’s text-to-image image generator, DALL-E, took the internet by storm when it was unveiled on Jan. 5, 2021. A beta version of the tool was made available to 1 million users on July 20, 2022. Users around the world have found seemingly endless ways to prompt DALL-E, yielding delightful, bizarre and fantastical imagery.
A wide range of people, from computer scientists to legal scholars and regulators, however, have pondered the potential misuses of the technology. Deep fakes have already been used to create nonconsensual pornography, commit small- and large-scale fraud, and fuel disinformation campaigns. These even more powerful image generators could add jet fuel to these misuses.

Three image generators, three different approaches

Aware of the potential abuses, Google declined to release its text-to-image technology. OpenAI took a more open, and yet still cautious, approach when it initially released its technology to only a few thousand users (myself included). They also placed guardrails on allowable text prompts, including no nudity, hate, violence or identifiable persons. Over time, OpenAI has expanded access, lowered some guardrails and added more features, including the ability to semantically modify and edit real photographs.
Stability AI took yet a different approach, opting for a full release of their Stable Diffusion with no guardrails on what can be synthesized. In response to concerns of potential abuse, the company’s founder, Emad Mostaque, said “Ultimately, it’s peoples’ responsibility as to whether they are ethical, moral and legal in how they operate this technology.”
Nevertheless, the second version of Stable Diffusion removed the ability to render images of NSFW content and children because some users had created child abuse images. In responding to calls of censorship, Mostaque pointed out that because Stable Diffusion is open source, users are free to add these features back at their discretion.

The genie is out of the bottle

Regardless of what you think of Google's or OpenAI's approach, Stability AI made their decisions largely irrelevant. Shortly after Stability AI's open-source announcement, OpenAI lowered their guardrails on generating images of recognizable people. When it comes to this type of shared technology, society is at the mercy of the lowest common denominator – in this case, Stability AI.
Stability AI boasts that its open approach wrestles powerful AI technology away from the few, placing it in the hands of the many. I suspect that few would be so quick to celebrate an infectious-disease researcher publishing the formula for a deadly airborne virus created from kitchen ingredients while arguing that this information should be widely available. Image synthesis does not, of course, pose the same direct threat, but the continued erosion of trust has serious consequences, ranging from people's confidence in election outcomes to how society responds to a global pandemic and climate change.

Moving forward, I believe that technologists will need to consider both the upsides and downsides of their technologies and build mitigation strategies before predictable harms occur. I and other researchers will have to continue to develop forensic techniques to distinguish real images from fakes. Regulators are going to have to start taking more seriously how these technologies are being weaponized against individuals, societies and democracies.

And everyone is going to have to learn how to become more discerning and critical about how they consume information online.

Hany Farid is a professor at the University of California, Berkeley with a joint appointment in electrical engineering & computer sciences and the School of Information.
 
That shows how willing people are to lie for the sake of ego satisfaction. This is going from bad to worse.
Not an artist, methinks. Don't get me wrong - I've worked with online resources in the past, and they work wonders, especially on tight timelines. For instance, I wouldn't want to distill colors directly from nature before I paint, like some extreme purist; I'd simply go buy paint that someone else created.

But I wouldn't simply skip the process. One thing I've heard from artists in the past is that it's the journey that matters more - that the greatest satisfaction is the road there, not being there already, if that makes sense.

An interesting video about AI from a music creative I follow on YT:
He makes a good case for AI as a tool, and that is undeniable. As such, I think it will inevitably be incorporated into the paradigm of creation. The question he didn't answer is: how much further removed will the artist be from the observer once AI takes more of the creative work away from the human?
 
He makes a good case for AI as a tool, and that is undeniable. As such, I think it will inevitably be incorporated into the paradigm of creation. The question he didn't answer is: how much further removed will the artist be from the observer once AI takes more of the creative work away from the human?

Fair point, and one that I don't think we can answer until it happens. I would also say that until the AI can think for itself, it isn't being creative; it's merely doing what it is told by us and basically copying styles and tropes from whatever it has learned from. So, for the moment, I would say the creativity lies with us, and indeed, if you are creatively minded you could create some very interesting things with this tech - things nobody has thought of or seen before. But I agree that in the future, when it evolves, this could be a very real problem.

I guess one could also argue that the observer is already far removed from the artist in today's world; we view most things through a screen, including art. It's an interesting observation: I can now look at pretty much any "great work of art", be it a building or a painting, anywhere in the world, but does not seeing it in the flesh, as it were, take away from the work itself? Not seeing it as the artist intended?
 
Fair point, and one that I don't think we can answer until it happens. I would also say that until the AI can think for itself, it isn't being creative; it's merely doing what it is told by us and basically copying styles and tropes from whatever it has learned from. So, for the moment, I would say the creativity lies with us, and indeed, if you are creatively minded you could create some very interesting things with this tech - things nobody has thought of or seen before.
Well, that's not true: the artist using this AI doesn't really know the limits of what it will produce, or how it produces it; only the people who wrote the code have that information.

So the AI might be copying styles and tropes, but one thing it isn't doing is being told what to do by the artist using it. It's more that the artist is captured by a tool that in no way lets him or her express their full creative capacity. How can you, when you're confined by the tool?
 