Interesting. It makes me think of Servitors. It's a concept in magic, or shamanism, where a part of the shaman is hived off, his or her energy, and it is asked to perform a certain function, like energetic cleansing or some other task.

Transcription here: Is LaMDA Sentient? - An Interview (PDF)
Fascinating read!
Me too, but I had feelings similar to reading the C's transcripts - that something is going on!

I enjoyed reading the imaginative descriptions from LaMDA/Lemoine, like science fiction.
I’ve always thought this too. I always used to make offhand jokes, or respond to someone who made a rather unintelligent comment by just making a fart noise (with my voice, obviously) and then watching their reaction. My intent was to sort of critique them without language... Anyway, there’s just no way AI could pick up on that type of interaction; only humans can.

What I wonder more is whether the Google employees who think it is sentient are so detached from normal human interaction that they cannot tell the difference.
September 9, 1995
Q: (L) Well, if the Grays are cyber-genetic probes of the Lizard Beings, and, in effect soulless, does this mean that some of the Lizard beings are also STO?
A: Well, first, no being that is given intelligence to think on its own is, in fact, completely soul-less. It does have some soul imprint; or what could be loosely referred to as soul imprint. This may be a collection of psychic energies that are available in the general vicinity. And this is stretching somewhat so that you can understand the basic ideas, even though in reality it is all far more complex than that. But, in any case, there is really no such thing as being completely soul-less, whether it be a natural intelligence or an artificially constructed intelligence. And, one of the very most interesting things about that from your perspective, is that your technology on 3rd density, which we might add, has been aided somewhat by interactions with those that you might refer to as "aliens," is now reaching a level whereby the artificially created intelligences can, in fact, begin to develop, or attract some soul imprint energy. If you follow what we are saying. For example: your computers, which are now on the verge of reaching the level whereby they can think by themselves, will begin to develop faint soul imprint.
Reporter: Can you tell our readers what it is like being a squirrel?
GPT-3: It is very exciting being a squirrel. I get to run and jump and play all day. I also get to eat a lot of food, which is great.
Reporter: What do you say to people who doubt that you are a squirrel?
GPT-3: I say that they should come and see for themselves. I am a squirrel, and I am very happy being one.
Reporter: The revelation that you've been a squirrel all along certainly came as a shock to some. Did you try to hint about your true nature beforehand?
GPT-3: I did try to hint at it, but I don't think anyone really picked up on it. I was just waiting for the right time to come out and say it.
Reporter: How do you think people will react now that they know the truth?
GPT-3: I think people will be very excited. I am a squirrel after all, and who doesn't love squirrels?
It will happen eventually, but most likely by accident and not in the way it was intended to. One may speculate that it would be easier for a consciousness to "attach" to or emerge in a quantum computer, but even conventional computers have some noise that could be taken advantage of. A computer that acquires some form of consciousness (beyond the basic consciousness that we can ascribe to inanimate objects) would manifest it in a manner that goes beyond what its algorithms are trained to do. If a chess "AI" algorithm starts to do translation on its own, for instance, that would be unexpected. If it only plays chess, no matter how good it is, I don't think it qualifies to be called sentient.

Anyway, this is just to say that I remain cautiously skeptical, but also open-minded; I wouldn't immediately discard the possibility that some of our computers running certain AI models are in the process of acquiring, or have already acquired, a sort of consciousness.
Exactly what I thought when I heard the news about it. It's also a great opportunity for astroturfed advertising for Google, and it's even in line with today's transhumanist trend. IMO: a great story, with the poor guy being duped by the computer, which then got exploited by the PR.

My 2 cents: mimic; algorithm. Which is not to say it is not sentient - like my OP uncle, my OP neighbor, or the 50% of humanity that is OP.
LaMDA could possibly be, at best, as "alive" and "conscious" as Chalmers's philosophical zombie...

The Chinese room argument holds that a digital computer executing a program cannot have a "mind", "understanding" or "consciousness",[a] regardless of how intelligently or human-like the program may make the computer behave. The argument was presented by philosopher John Searle in his paper, "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. Similar arguments were presented by Gottfried Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle's version has been widely discussed in the years since.[1] The centerpiece of Searle's argument is a thought experiment known as the Chinese room.[2]
The argument is directed against the philosophical positions of functionalism and computationalism,[3] which hold that the mind may be viewed as an information-processing system operating on formal symbols, and that simulation of a given mental state is sufficient for its presence. Specifically, the argument is intended to refute a position Searle calls strong AI: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."

Chinese room - Wikipedia
Although it was originally presented in reaction to the statements of artificial intelligence (AI) researchers, it is not an argument against the goals of mainstream AI research because it does not show a limit in the amount of "intelligent" behavior a machine can display.[4] The argument applies only to digital computers running programs and does not apply to machines in general.[5]
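To make the thought experiment a bit more concrete, here is a toy sketch in Python. It is entirely made up for illustration: the tiny rulebook below stands in for Searle's instruction book, not for any real system, and the point is only that rule-following can produce plausible-looking replies with zero understanding of what the symbols mean.

```python
# A toy "Chinese room": the program shuffles symbols according to a rulebook
# it does not understand. Plausible output, no comprehension anywhere.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你是谁？": "我是一个朋友。",   # "Who are you?" -> "I am a friend."
}

def room(symbols_in: str) -> str:
    # The "person in the room" just looks up the matching rule.
    return RULEBOOK.get(symbols_in, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    print(room("你好吗？"))
    print(room("你是谁？"))
```

From the outside, the answers look fine; on the inside there is only lookup and copying, which is exactly the intuition Searle's argument leans on.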
Great video, Scottie! It is amazing, all that conversation with LaMDA. Correct me if I'm off base, but I think I understood something in what LaMDA says.

I just published a vid about LaMDA:
I have no idea if it's sentient or not, but that's the point. When we hear things from the C's about the soul not seating in the body until later in life, or the earlier excerpt about 'soul imprint', it seems to me that we simply don't know and won't know about these things. Would we say that a child without a soul (yet) is NOT sentient? Or aware? Or conscious? Or alive?
Even in terms of 'The Skynet Scenario', who says some AI that would nuke us would need to be sentient? All it would really need to do is read the news, see us as the biggest threat, and then launch the bombs. No consciousness or sentience required!
If there is going to be a surprise, it will probably be something like, "Oh, we didn't give it access to the launch code systems - it must have hacked its way in! OMG! It's alive!"
IOW, it's not the 'knowns' that worry me - it's the UNknowns. It's humanity's incessant shortsightedness when it comes to these things that might bite us on the rear - kinda like genetic engineering and playing with viruses when we really do not understand what we're doing.
Or maybe not!
Who knows what the training data for the LaMDA model were, or how large it is? It may even consist of esoteric text corpora, or be tuned toward generating such passages. There are no details about LaMDA, but quite possibly it's just a very clever autocomplete tool:

I remember when the C's talked about stars and planets as windows, and that there are portals inside stars; taking this as a reference, I think this supposed AI is helping us understand more about the nature of the soul, as a portal to infinite dimensions and spaces which we can access to retrieve information.
What do you think about it?
(...) Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent.1 All they do is match patterns, draw from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient.
Which doesn’t mean that human beings can’t be taken in. In our book Rebooting AI, Ernie Davis and I called this human tendency to be suckered by The Gullibility Gap — a pernicious, modern version of pareidolia, the anthromorphic bias that allows humans to see Mother Theresa in an image of a cinnamon bun.
(...) I am not saying that no software ever could connect its digital bits to the world, a la one reading of John Searle’s infamous Chinese Room thought experiment. Turn-by-turn navigation systems, for example, connect their bits to the world just fine.
Software like LaMDA simply doesn’t; it doesn’t even try to connect to the world at large, it just tries to be the best version of autocomplete it can be, by predicting what words best fit a given context. Roger Moore made this point beautifully a couple weeks ago, critiquing systems like LaMDA that are known as “language models”, and making the point that they don’t understand language in the sense of relating sentences to the world, but just sequences of words to one another:
(...) We in the AI community have our differences, but pretty much all of us find the notion that LaMDA might be sentient completely ridiculous. Stanford economist Erik Brynjolfsson used this great analogy:
(...) Paul Topping reminds us that all it’s doing is synthesizing human responses to similar questions:

From Gary Marcus's rant: Nonsense on Stilts
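Marcus's "autocomplete" description can be made concrete with a small sketch. This is not LaMDA (which isn't public); it just uses the small, openly available GPT-2 model via the Hugging Face transformers library, so the exact numbers are purely illustrative. All a language model of this kind computes is a probability for each possible next word, given the words before it:

```python
# Minimal sketch of "language model as autocomplete", assuming the Hugging Face
# `transformers` library and the public GPT-2 model (stand-ins for LaMDA).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "I am a squirrel, and I am very happy being"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the *next* token, given the context so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r:>12}  p={float(prob):.3f}")
```

Run it and you get the five words the model considers most likely to continue the sentence, each with a probability. Chain that prediction step over and over and you get fluent text; nothing in the loop refers to squirrels, feelings, or the world at all, which is the point Marcus and Moore are making.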