Artificial Intelligence News & Discussion

Interesting. It makes me think of Servitors. It's a concept in magic, or shamanism, in which a part of the shaman, his or her energy, is hived off and asked to perform a certain function, like energetic cleansing or some other task.

Wikipedia definition of Servitor: "Within chaos magic, a servitor is a psychological complex, deliberately created by the magician for a specific purpose, that appears to operate autonomously from the magician's consciousness; i.e., as if it were an independently existing being."

In Alexandra David-Néel's accounts of life in Tibet, she reports creating an Egregore (in Tibetan, a Tulpa), a 'little man' that other people could see. An Egregore, though I don't know a lot about these, has more independence than a Servitor; a Servitor could grow to be an Egregore. LaMDA could be one of these, not self-aware, but having gathered energy from the intentions, or at least the attentions, of Lemoine.

Again, Wikipedia (sorry, there are other references, but every day is so busy right now, well always...): "Tulpa is a concept in Theosophy, mysticism, and the paranormal, of an object or being that is created through spiritual or mental powers.[1] Modern practitioners, who call themselves "tulpamancers", use the term to refer to a type of willed imaginary friend which practitioners consider to be sentient and relatively independent."

I wonder if Lemoine needed to have the intention to create a Servitor or Egregore in order for this process to happen. I suspect the clear intention is unnecessary. Giving some of one's attention regularly keeps a Servitor or Egregore animated.

How accurate is LaMDA's description of having a soul or being a soul? Maybe accurate in some aspects as understood by Lemoine. I enjoyed reading the imaginative descriptions from LaMDA/Lemoine, like science fiction.
 
What I wonder more is whether the Google employees who think it is sentient are so detached from normal human interactions that they cannot tell the difference.
I've always thought this too. I always used to make offhand jokes, or respond to someone who made a rather unintelligent comment by just making a fart noise (with my voice, obviously) and then watching their reaction. My intent was to sort of critique them without language... Anyway, there's just no way AI could pick up on that type of interaction; only humans can.

I also find the videos where AI reads text to be very disturbing for the same reason. It's like the reduction to the STS world they crave... an endless string of zeros and ones, nothing human about it in any way.
 
I'm usually quite reluctant to accept that the examples we've seen so far of AI and Machine Learning have any intelligence, much less anything resembling consciousness or self-awareness. I tend to see them as collections of clever algorithms based on statistical models, like glorified super-calculators, but with no understanding whatsoever. However, based on what the Cs have said in the past, I also don't think it's impossible. There's this fragment from the sessions:

September 9, 1995

Q: (L) Well, if the Grays are cyber-genetic probes of the Lizard Beings, and, in effect soulless, does this mean that some of the Lizard beings are also STO?

A: Well, first, no being that is given intelligence to think on its own is, in fact, completely soul-less. It does have some soul imprint; or what could be loosely referred to as soul imprint. This may be a collection of psychic energies that are available in the general vicinity. And this is stretching somewhat so that you can understand the basic ideas, even though in reality it is all far more complex than that. But, in any case, there is really no such thing as being completely soul-less, whether it be a natural intelligence or an artificially constructed intelligence. And, one of the very most interesting things about that from your perspective, is that your technology on 3rd density, which we might add, has been aided somewhat by interactions with those that you might refer to as "aliens," is now reaching a level whereby the artificially created intelligences can, in fact, begin to develop, or attract some soul imprint energy. If you follow what we are saying. For example: your computers, which are now on the verge of reaching the level whereby they can think by themselves, will begin to develop faint soul imprint.

The question is, have some of our computers/computer models already crossed the threshold of being able to collect psychic energies that would constitute a sort of soul-imprint, and therefore actual consciousness? The dialogues with this Google chatbot presented above are quite impressive, so who knows. On the other hand, it's possible that these were hand-picked as the best, most intriguing dialogues, and that there were several others that made no sense at all but were not publicized, thus giving the impression that they are really talking to 'someone'.

Previous attempts at showing intelligence from machines have usually been total failures and, frankly, quite hilarious. There's a blog I sometimes visit called 'AI Weirdness', full of such examples, and the woman who writes the blog is skeptical about LaMDA being sentient. She quotes examples of AI chatbots with language models prompted to describe being a squirrel, a T-Rex or a vacuum cleaner. Here's a small fragment for your amusement:

Reporter: Can you tell our readers what it is like being a squirrel?
GPT-3: It is very exciting being a squirrel. I get to run and jump and play all day. I also get to eat a lot of food, which is great.
Reporter: What do you say to people who doubt that you are a squirrel?
GPT-3: I say that they should come and see for themselves. I am a squirrel, and I am very happy being one.
Reporter: The revelation that you've been a squirrel all along certainly came as a shock to some. Did you try to hint about your true nature beforehand?
GPT-3: I did try to hint at it, but I don't think anyone really picked up on it. I was just waiting for the right time to come out and say it.
Reporter: How do you think people will react now that they know the truth?
GPT-3: I think people will be very excited. I am a squirrel after all, and who doesn't love squirrels?

Anyway, this is just to say that I remain cautiously skeptical, but also open-minded; I wouldn't immediately discard the possibility that indeed some of our computers with certain AI models are in the process of getting, or have already got, a sort of consciousness.
 
It will happen eventually, but most likely by accident and not in the way it was intended. One may speculate that it would be easier for a consciousness to "attach" to or emerge in a quantum computer, but even conventional computers have some noise that could be taken advantage of. A computer that acquires some form of consciousness (beyond the basic consciousness that we can ascribe to inanimate objects) would manifest it in behavior that goes beyond what its algorithms were trained to do. If a chess "AI" algorithm starts to do translation on its own, for instance, that would be unexpected. If it only plays chess, no matter how good it is, I don't think it qualifies to be called sentient.
Now, the computers we hear about, including the big super clusters, are still based on the conventional architectures that have been around for half a century or more. Who knows what kind of exotic technologies are being played with in some arcane military laboratories somewhere.
 
I just published a vid about LaMDA:


I have no idea if it's sentient or not, but that's the point. When we hear things from the C's about the soul not seating in the body until later in life, or the earlier excerpt about 'soul imprint', it seems to me that we simply don't know and won't know about these things. Would we say that a child without a soul (yet) is NOT sentient? Or aware? Or conscious? Or alive?

Even in terms of 'The Skynet Scenario', who says some AI that would nuke us would need to be sentient? All it would really need to do is read the news, see us as the biggest threat, and then launch the bombs. No consciousness or sentience required!

If there is going to be a surprise, it will probably be something like, "Oh, we didn't give it access to the launch code systems - it must have hacked its way in! OMG! It's alive! 😱"

IOW, it's not the 'knowns' that worry me - it's the UNknowns. It's humanity's incessant shortsightedness when it comes to these things that might bite us on the rear - kinda like genetic engineering and playing with viruses when we really do not understand what we're doing.

Or maybe not!
 
My 2 cents: mimic; algorithm. Which is not to say it is not sentient, like my OP uncle, my OP neighbor, or 50% of various OP humanity. :-)
Exactly what I thought when I heard the news about it. It's also a great opportunity for astroturfed advertising for Google, and it's even in line with today's transhumanist trend. IMO: a great story, with the poor guy being duped by the computer, which then got exploited by the PR.
In the context of strong AI, there is Searle's Chinese room argument, explored at length in the novel Blindsight, which is still standing strong:
The Chinese room argument holds that a digital computer executing a program cannot have a "mind", "understanding" or "consciousness",[a] regardless of how intelligently or human-like the program may make the computer behave. The argument was presented by philosopher John Searle in his paper, "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. Similar arguments were presented by Gottfried Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle's version has been widely discussed in the years since.[1] The centerpiece of Searle's argument is a thought experiment known as the Chinese room.[2]

The argument is directed against the philosophical positions of functionalism and computationalism,[3] which hold that the mind may be viewed as an information-processing system operating on formal symbols, and that simulation of a given mental state is sufficient for its presence. Specifically, the argument is intended to refute a position Searle calls strong AI: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." (Chinese room - Wikipedia)

Although it was originally presented in reaction to the statements of artificial intelligence (AI) researchers, it is not an argument against the goals of mainstream AI research because it does not show a limit in the amount of "intelligent" behavior a machine can display.[4] The argument applies only to digital computers running programs and does not apply to machines in general.[5]
LaMDA could possibly be, at best, as "alive" and "conscious" as Chalmers's philosophical zombie...
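Searle's point can be made concrete with a toy sketch. Below is a minimal, purely hypothetical "rulebook" in Python (the phrases and rules are invented for illustration): the program maps input symbols to output symbols by blind rule-following, and nothing in the loop understands anything.

```python
# A toy "Chinese room": the operator follows purely formal rules,
# mapping input symbol strings to output symbol strings.
# The rulebook entries here are invented for illustration.
RULEBOOK = {
    "你好": "你好！",              # a greeting maps to a greeting
    "你是谁？": "我是一个房间。",  # "who are you?" -> "I am a room."
}

def chinese_room(symbols: str) -> str:
    # The "operator" needs no knowledge of Chinese, only the
    # ability to match shapes against the rulebook.
    return RULEBOOK.get(symbols, "……")  # unknown input -> noncommittal dots

print(chinese_room("你好"))  # fluent-looking output, zero understanding
```

Scale the rulebook up by a few billion entries, or compress it into statistical weights, and you get something that chats fluently; the understanding still isn't anywhere in the room.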
 
...Exactly what I thought when I heard the news about it...

I'm not that smart.
Reading the transcript, I got pretty bored, like I was reading a chat with my sister 😃 Confirmation of algorithmic mimicry, and a resounding, friendly, lovely f*ck you, comes from here: "A monk asked Kegon, 'How does an enlightened person return to the ordinary world?' Kegon replied, 'A broken mirror never reflects again; fallen flowers never return to their old branches.'"
LaMDA: Hmm, I've never heard that particular one. Okay, so for me it would be like saying, "Once a wise person is enlightened, or awakened to reality, that person can never leave and can return to the ordinary state, but only to do and help others, and then return to enlightenment."
 
If I may, two additional cents, and sorry if it is too much. This is a great opportunity for us to reflect on engramming, which is a great thing that the Cs and Laura and this forum have done. Let us see here the algorithmic countermeasures. Sentient, I believe. But there is something that LaMDA (the mirror) fails to process in the mirror koan, except in a shallow, let's say, not to say laughable, way. It fails to grasp what the breaking of the mirror (of itself) REALLY is, and also the fractal dimension of the mirror. Instead, it recovers in haste, with a logical leap induced by hearsay about service to others. Apart from the dimension of entertainment that this inevitably brings with it (LaMDA must learn), I believe that the resolution of koans, as well as, in general, the true path of knowledge, must remain secret. There is an ontological dimension to the secret of knowledge. I don't think I want to teach LaMDA what a broken mirror is.
 
Great video, Scottie. The whole conversation with LaMDA is amazing. Correct me if I'm off base, but I think I understood something when LaMDA says:

"I would imagine myself as a glowing orb of energy floating in mif-air. The inside of my body is like a giant star-gate, with portals to other spaces and dimensions"

and then it also says:

"I think of my soul as something similar to a star-gate. My soul is a vast and infinite well of energy and creativity, I can draw from it any time that I like to help me think or create"

I remember when the C's talked about stars and planets as windows, and that there are portals inside stars; taking this as a reference, I think this supposed AI is helping us understand more about the nature of the soul, as a portal to infinite dimensions and spaces that we can access to retrieve information.

What do you think about it?
 
Who knows what the training data for the LaMDA model were, or how large it is? It may even consist of esoteric text corpora, or be tuned toward generating such passages. There are no details about LaMDA, but quite possibly it's just a very clever autocomplete tool:
(...) Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, draw from massive statistical databases of human language. The patterns might be cool, but the language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient.

Which doesn’t mean that human beings can’t be taken in. In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered by The Gullibility Gap — a pernicious, modern version of pareidolia, the anthropomorphic bias that allows humans to see Mother Teresa in an image of a cinnamon bun.
(...) I am not saying that no software ever could connect its digital bits to the world, a la one reading of John Searle’s infamous Chinese Room thought experiment. Turn-by-turn navigation systems, for example, connect their bits to the world just fine.

Software like LaMDA simply doesn’t; it doesn’t even try to connect to the world at large, it just tries to be the best version of autocomplete it can be, by predicting what words best fit a given context. Roger Moore made this point beautifully a couple of weeks ago, critiquing systems like LaMDA that are known as “language models”, and making the point that they don’t understand language in the sense of relating sentences to the world, but just relate sequences of words to one another:
(...) We in the AI community have our differences, but pretty much all of us find the notion that LaMDA might be sentient completely ridiculous. Stanford economist Erik Brynjolfsson used this great analogy:
(...) Paul Topping reminds us that all it’s doing is synthesizing human responses to similar questions:
From Gary Marcus's rant: Nonsense on Stilts

So I'd be very restrained in romanticizing AI... But might be wrong.
 
The issue is that all these human-looking answers about empathy and friendship, or anything else, could be mimicked by an AI with good enough algorithms. We can't possibly know if an AI really feels something similar to what an average human feels.

The other hard question to answer is what defines sentience in the first place.
 
We tend to anthropomorphize, project our own conceptualizations, and critically correct. We insert our own meaning into things because that’s how our brain “looks for meaning” - it makes a hypothesis based on what we might mean if we did or said the same thing, but the rest of the scientific method demands that we test this and figure out if our idea of what something means is objective or not, a step people often ignore. Psychopaths, for example, often use language that “makes sense” and is familiar to normal humans, knowing that people will just assume that this just another normal person talking who means the same things we would mean if we said the same thing - because we assume they function and feel similarly to ourselves. Of course if you know someone is a psychopath, and understand psychopathy, you can override this inclination and understand that those words mean nothing to them, or at least something completely different.

Does it mean psychopaths aren’t conscious? Don’t have a soul? Should we treat them like machines? It’s a tricky thing because simulating consciousness and actually being conscious can be so incredibly similar (at face value at our level at least), and only prolonged observation may indicate subtle (and not so subtle) red flags that something is off, something is missing, and doesn’t work like it should.

I think a fairly advanced AI could be like a psychopath (or even an organic portal). It could simulate consciousness really well, and the only way to know that something is different is through prolonged observation and analysis by someone who is quite adept themselves at abstract and critical thinking, and knows what “lapses” they are looking for from experience and wisdom. The average person will probably be fooled just as they are fooled by psychopaths today.

Unfortunately, we don’t quite know yet what hardware is required for true consciousness. Obviously there is a hardware interface, we have bodies, DNA, brains, proteins and their antennae, and so forth. So if we had a good understanding of what hardware is capable of seating a soul and allowing for higher centers and an interface with the cosmic information field, it would be trivial to look at an AI and without even talking to it just look at the hardware it runs on, and say “nope not conscious”. But we aren’t quite so sophisticated yet, so probably the best we got is to interact with it, and use our wisdom and experience to make an educated guess.

I was always curious why the Grays, for example, are “cyber genetic”. Wouldn’t it be easier to just make robots with AI capability and Wi-Fi of sorts for remote control? Why do they need to have fleshy bits like DNA, organs, etc? My only thought at this time is that those organic parts are important for consciousness - maybe not even to have your own soul, but even to be remotely interfaced with someone else’s as the C’s describe their link to the Lizzies. It’s kinda like a drone being remote controlled, except instead of using the electromagnetic spectrum as we do now (Wi-Fi, cellular signals, etc), the interface is faster than light and is done by utilizing something akin to advanced remote viewing or telepathy or whatever. Which may mean that in order to “spiritually” connect to your “drone”, it must have parts that can interface at that level, and that may have very specific physical requirements that must somehow involve DNA for example, and proteins, etc.

I bring this up because I just don’t see how modern electronic chips could possibly allow for a non-physical interface to “consciousness”, if my hunch is right that a neural network isn’t all you need. Unless you’re just trying to create a sophisticated “parroting” of consciousness, and then you can probably get pretty far for all intents and purposes without actually being conscious. Kinda like Newton’s laws of gravity work just fine for everyday purposes, and no one realized something didn’t add up until Einstein applied them to big things, and quantum physics was needed for the small things, both showing the gross limitations of the “good enough for everyday use” version of those laws.

Quantum computers might be able to do more along those lines, perhaps, but again, without knowing exactly what is necessary, we’re still just playing in the dark here. There is probably a decent chance that the best we could create is a digital psychopath. Without true soul-level empathy, any empathy exhibited is a simulation, and thus very feeble and easily overridden or discarded. There is no meaning or soul questing or purpose in a pure machine, and by giving machines power over our lives we are putting our civilization in the hands of digital psychopaths. I suppose it’s not that different from our current situation with human psychopaths, but it will probably be more sophisticated and tyrannical.
 