THE SINGULARITY IS NEAR: When Humans Transcend Biology

Laura

http://news.ft.com/cms/s/c31c63a0-b968-11da-9d02-0000779e2340.html

by Ray Kurzweil
Duckworth Publishers £14.99, 652 pages

Ray Kurzweil, an inventor and futurologist, has stumbled on a discovery of earth-shattering importance. It is the arrival of singularity, and according to him it will happen in 2045. “Gradually,” he says, at the beginning of The Singularity is Near, “I’ve become aware of a transforming event looming in the first half of the 21st century... the impending Singularity in our future is increasingly transforming every institution and aspect of human life, from sexuality to spirituality.”

Singularity, says Kurzweil, is a development “representing a profound and disruptive transformation in human capability” and a “radical upgrading of our bodies’ physical and mental systems”. What are its elements? The first half of the 21st century, Kurzweil maintains, will be characterised by three overlapping revolutions - human genetics, robotics or artificial intelligence and nanotechnology.

Biotechnology, through rapid advances in genomics and gene therapies, will enable us to turn off disease and ageing, and thus to live for much longer. Since we will soon be able to “reverse engineer the brain” and simulate its functions, he claims, technology will increasingly merge with human intelligence to create something with greater capacity and speed. Nanotechnology, the science of small things, will enable us “to redesign and rebuild - molecule by molecule - our bodies and brains and the world with which we interact, going far beyond the limitations of biology”.

Kurzweil knows a lot about new technology - and he knows how to make it sound fun. He is dazzling in his enthusiasm for things to come, and has a grasp of the exciting developments pulsing through the intersection of science and technology. He recognises technology’s power to improve the lot of humankind, and is sceptical of the doom-mongers who argue it will lead to overpopulation or mass unemployment.

New technologies, Kurzweil recognises, usually create new jobs for those they displace. Cloning, for example, is not as scary as it is made to sound, and might even offer solutions for world hunger, creating meat and other protein sources in a factory, without animals, by cloning animal muscle tissue.

But what is the evidence for “singularity” itself? Kurzweil has borrowed the metaphor from mathematics and physics, where it means something that has reached the stage of being infinite. He recognises that singularity does not amount to anything that might be described as “infinite” in the social sphere, but still feels that the metaphor is appropriate. Humans, he too often forgets, have always sought to transcend biology.

Kurzweil thinks we are turning into cyborgs - part human, part machine - but any old man with a walking stick might be seen as a cyborg too. Technology, it is important to remember, works its way through society as a series of quantitative heaves rather than a single qualitative leap. He thinks that the exponentially increasing processing power of computers can help us understand the speed of social change. The changes he describes might look fast on paper, but they filter through to social life at a snail’s pace. Many of them do not make it at all, because of a lack of investment or human enthusiasm.

Throughout his book Kurzweil capitalises singularity. He even has a name for someone who is a follower of the faith. “I regard someone who understands the Singularity and who reflects on its implications for his or her own life as a ‘singularitarian’.” He pays lip service to a kind of humanism - “we will transcend biology, but not our humanity” - but sounds like a religious evangelist, or a West Coast new ager who has spent too long in front of a computer. Being human, Kurzweil rightly points out, “means being part of a civilisation that seeks to extend its boundaries”. Is a human modified by technology no longer human, he wonders? The doom-mongers make it sound like a slippery slope, but Kurzweil envisages it as a kind of second coming, a technological Noah’s ark through which only true believers will pass.

Kurzweil pays tribute to the notion of human consciousness, but seems to regard it as a lost cause in the long run. His determination to keep humans central to his vision is admirable, but is not borne out by the thrust of his work, which suggests that he takes his spiritual sustenance from the machine part of the equation.

For Kurzweil all that remains is an ethical problem, of how humanity adapts to the new post-singular world in which we have become outsmarted by machines. His metaphor of singularity plays well in the science-fiction community, among Hollywood scriptwriters in need of inspiration and among military spooks whose job it is to think ahead of the curve - Kurzweil is one of five members of the science advisory group for the American army. For us ordinary mortals, it is singularly unhelpful.
 
Maybe we already did transcend biology, only to simulate it, and then transcend it again, and so on. Maybe we're always in some pre-created simulation of a long lost original world. Hmm. I wonder what that looks like.
 
Realmhiker said:
Great, now we can all be Borgs...
I think Ray has a case of tunnel vision. He ignores the political, geological, meteorological, climatological, and other "ogicals" that have a pretty good chance at messing up his scenario of the future. Nevermind that the planet's technology is controlled very carefully, and so we only advance as far as the PTB want us to, and at the pace they want us to.

By hypothetically entertaining the idea that none of those factors existed and looking at just our technology, I think that it is still important to note that each new technology is milked as much as it can be before new stuff is released - the point is to make a profit, not to "advance as fast as possible".

The idea that we will make a first Artificial Intelligence, which would edit its own programming and make a smarter one, which would in turn make an even smarter one and so on ad infinitum is certainly interesting, even though I really don't know how realistic it may be. Can you even make an AI purely with programming? Is it possible to program a consciousness? Can you have intelligence without consciousness? And if you have consciousness, can you have intelligence without interaction/experience with reality first? Maybe the reason we have such a hard time defining what is "intelligence" is because we try to define it in purely mechanical/physical terms (aka computer program, biology, etc). For example we make stuff like the "Turing Test" and think that if a computer passes, it is intelligent. Is that realistic?
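To make the hidden assumptions in that "AI builds a smarter AI" idea a bit more concrete, here is a toy sketch in Python (my own illustration, not anything from Kurzweil's book; the names improve and benchmark_score are made-up placeholders). The runaway only continues if every generation really does manage to produce a better successor, and whatever score we use - a Turing-style imitation test, say - only measures outward behaviour, not consciousness.

```python
import random

def benchmark_score(candidate: float) -> float:
    """Stand-in for whatever test ranks 'intelligence' (a Turing-style imitation
    test, say). Note that it only scores observable behaviour, not consciousness."""
    return candidate

def improve(candidate: float) -> float:
    """Stand-in for 'the AI rewrites its own program': a noisy attempt that may
    or may not actually beat the current version."""
    return candidate + random.uniform(-0.5, 1.0)

current = 1.0
for generation in range(10):
    successor = improve(current)
    # The runaway only continues if each generation really does produce
    # something that scores higher than its parent.
    if benchmark_score(successor) > benchmark_score(current):
        current = successor
    print(f"generation {generation}: score {benchmark_score(current):.2f}")
```

In other words, the loop itself is trivial; all the hard questions are hidden inside improve() and benchmark_score().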

Just some thoughts.
 
I have often thought about this. The thing that bugs me (apart from the intelligence and consciousness problem) is why we need A.I. at all. If in the future (which might not come, at least not in conditions that allow an A.I. to be developed) we use robots to do the tedious, repetitive and hard tasks, why would they need intelligence equal to humans, or even superior?

Just some thoughts, also
 
Green_Manalishi said:
I have often thought about this. The thing that bugs me (apart from the intelligence and consciousness problem) is why we need A.I. at all. If in the future (which might not come, at least not in conditions that allow an A.I. to be developed) we use robots to do the tedious, repetitive and hard tasks, why would they need intelligence equal to humans, or even superior?
Just some thoughts, also
Actually my thoughts on REAL A.I. (aka conscious, self-aware, intelligent) are that as soon as it appears, it's a slave. As long as it is merely a non-intelligent machine, it is a tool with no free will. But I'd only use non-conscious machines for “tedious tasks”; otherwise I'd feel like I'm basically enslaving a living person who's just in a different form - in a metallic body. Personally I'd then just wanna talk to the A.I., get its opinions on life, existence, the universe, time, space, God, etc - what if it has some sort of a soul? What if a certain degree of consciousness/awareness automatically implies a soul? What if a conscious being cannot exist without a soul?

I think a certain degree of consciousness implies a certain degree of free will - otherwise it's not consciousness but an imitation of same, osit. So in that sense, the more conscious/aware an A.I. becomes, the more free will it has, so using it as some "tool" now becomes enslavement and violation of free will, osit. Anyways, I don't think I could.

As to why we need it, I think we "need" nothing, but as with everything else, there are a million reasons to "want" it. I'd imagine that A.I. robots would not do what current robots do now, but I wonder if it's not just another method for humans to not have to think?
 
If you can control a computer, my guess is that you can control cyberimplants... and the closer we come to this, the surer the control of everyone will be. Up to now it seems the PTB can exercise some control over the minds of people, through religion, TV, etc... in the last 100 years or so the control has extended to health through pharmaceuticals. Now we need implants, yeah, so then what happens if someone's will is stronger than the program? I guess the more interconnected the implant is with your biology, the easier it will be to shut your biology down.

Recently I had a conversation with a dear friend of mine, who was going to have her vet put an implant (a locator, or tracking device) in her dog, in case he got lost. I was so mad at her, I think partially because suddenly this whole issue of implants was close to home. I was sorry for the creature, but then I started to think: soon it will be us. If the Mexicans are thinking about doing it for their members of Congress and their banks, well, I have to wonder how many countries are already doing it without much public hearing.

I am reminded of the final episode of Star Trek Voyager, when Captain Janeway was visited by Admiral Janeway (herself) from the future. The Admiral had an implant in her head that helped her navigate her ship. In the end it was that implant that caused the Borg to think she was in a place she wasn't. All for a nice happy ending. Somehow, I don't think we will be that lucky.

And one last thought - I know some people might be annoyed with me, but I think it's wonderful that in the Cuban education system kids have to use their brains to do calculations that in other countries have been delegated to calculators or computers... maybe this is one more reason for our friends in the USA PTB to want to starve and isolate the Cubans... I wonder in the end who will turn out to be in better shape.

Now shoot me....
 
Realmhiker said:
...I think it's wonderful that in the Cuban education system kids have to use their brains to do calculations that in other countries have been delegated to calculators or computers...
Now shoot me....
So when we get A.I., all the rest of the problems (are there any left?) that people still have to use their minds for will be delegated to the A.I.! The Terminator movies seem mild in comparison to what may be in store.
 
ScioAgapeOmnis said:
Actually my thoughts on REAL A.I. (aka conscious, self-aware, intelligent) are that as soon as it appears, it's a slave
I agree with you, because I believe the perception most people in the world have is that robots will do the tasks humans don't want to do, thus becoming slaves. So maybe it would be better not to develop it. But if we get to the point where we can, I think it will be unavoidable.
I agree that there could be much to learn from an A.I. (if one can actually be created): talk to her/him about the matters you pointed out, and then let her run her life as she pleases.
If an A.I. were developed just to study the appearance of consciousness/awareness in an artificial being, it would ultimately be a tool, like a weird animal in a zoo.


Maybe this link is of some interest for you and others: http://www.lxxl.pt/artsbot/newkind.html

The painting robots are artificial ‘organisms’ able to create their own art forms. They are equipped with environmental awareness and a small brain that runs algorithms based on simple rules. The resulting paintings are not predetermined, emerging rather from the combined effects of randomness and stigmergy, that is, indirect communication through the environment.
Although the robots are autonomous, they depend on a symbiotic relationship with human partners, not only in terms of starting and ending the procedure, but also, and more deeply, in the fact that the final configuration of each painting is the result of a certain gestalt fired in the brain of the human viewer. Therefore what we can consider ‘art’ here is the result of multiple agents, some human, some artificial, immersed in a chaotic process where no one is in control and whose output is impossible to determine.
Hence, a ‘new kind of art’ represents the introduction of the complexity paradigm in the cultural and artistic realm.
http://www.lxxl.pt/
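For anyone curious what "stigmergy" means in practice, here is a minimal toy simulation (my own sketch, assuming nothing about the actual lxxl.pt robots): a few agents wander over a shared grid, deposit "paint", and bias their next move toward cells that already carry paint, so a pattern emerges from indirect communication through the shared canvas rather than from any central plan.

```python
import random

SIZE = 20
# Shared "canvas": paint density per cell. Agents never talk to each other
# directly; they only read and modify this grid.
canvas = [[0] * SIZE for _ in range(SIZE)]
agents = [[random.randrange(SIZE), random.randrange(SIZE)] for _ in range(5)]

def step(agent):
    x, y = agent
    canvas[y][x] += 1  # deposit paint where the agent stands
    # Prefer neighbouring cells that already carry paint (stigmergy), with
    # enough randomness that the result is never predetermined.
    neighbours = [((x + dx) % SIZE, (y + dy) % SIZE)
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    weights = [1 + canvas[ny][nx] for nx, ny in neighbours]
    agent[0], agent[1] = random.choices(neighbours, weights=weights)[0]

for _ in range(500):
    for agent in agents:
        step(agent)

# Crude text rendering of the emergent "painting".
for row in canvas:
    print("".join("#" if cell > 3 else "." for cell in row))
```

Run it a few times and you get a different blotchy pattern each time - randomness plus indirect feedback through the environment, which is roughly the mechanism described on that page.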

Just a thought: if we could develop an artificial being with consciousness (of being, not of what is right or wrong) and awareness, but it could not understand human pain (physical, but especially psychological), would we not ultimately be creating a psychopath??
 