Computers/Robots to cause economic collapse/starvation soon?

Wow. I've just had my perspective expanded. Thanks SAO!

SAO said:
However, the whole robots taking people's jobs thing without replacing them with new jobs is currently happening and that's not going away.

Yeah, I understand and agree; it's just that I'm not yet in full agreement with all the perceived implications. Probably because I feel like there is some fear-mongering being pushed in various media, and important questions are being ignored in order to hide implausibilities in stories like "Manna" (for instance). I've been in the chain-store food service industry. Where is the district manager in that management hierarchy? His/her job is to find out what's going right in the successful stores and clone that in the poorly performing stores.

Anyway, that's just a mundane example of stuff that's being left out here and there and when I'm reading I can notice it. It's irritating, but hopefully in a good way. Maybe like how the 'irritant' of a grain of sand can result in an oyster pearl?

Maybe my dominant impression is that before some of the proposed scenarios become a crisis of Matrix proportions, the pressure to 'turn on the brain' will have positive results for some people. I'm just not sure of the magnitude of positive changes or where these results may show up first. But this is not exactly a conviction--more like a fingers-crossed attitude. Maybe that's too optimistic?
 
As far as artificial intelligence having a "soul imprint" I think that doesn't mean anything on the ground. Perhaps psychopaths have a soul imprint as well, because they are, after all, intelligent (in some ways). But that intelligence and whatever soul imprint it attracts doesn't attract any conscience with it, so it doesn't do the rest of us any good. Clearly something else other than just pure intelligence is needed for conscience. Computers can become intelligent, but I would suspect that unless we understand what conscience is and exactly how it works and what's needed for it to exist, we may be getting some psychopaths with a hard drive.

Yeah it seems there's more to it than simply being "intelligent" or conscious. I think the technology will continue to be entropic unless it (intentionally or accidentally) develops some ability to absorb intelligence/information/creativity from "the future" to sustain its connection to the creative thought-center, the way the living system does and humans with "trained minds" do. But we're still trying to figure out how to do that with regular humans, so the odds of robots figuring that out, well... :lol:

There is a saying in electronics: if a circuit board becomes too complex, the only feasible way to make it work is to replace it with another circuit board. Our society's interdependence is like a circuit board.

Ain't that the truth.
 
whitecoast said:
As far as artificial intelligence having a "soul imprint" I think that doesn't mean anything on the ground. Perhaps psychopaths have a soul imprint as well, because they are, after all, intelligent (in some ways). But that intelligence and whatever soul imprint it attracts doesn't attract any conscience with it, so it doesn't do the rest of us any good. Clearly something else other than just pure intelligence is needed for conscience. Computers can become intelligent, but I would suspect that unless we understand what conscience is and exactly how it works and what's needed for it to exist, we may be getting some psychopaths with a hard drive.

Yeah it seems there's more to it than simply being "intelligent" or conscious. I think the technology will continue to be entropic unless it (intentionally or accidentally) develops some ability to absorb intelligence/information/creativity from "the future" to sustain its connection to the creative thought-center, the way the living system does and humans with "trained minds" do. But we're still trying to figure out how to do that with regular humans, so the odds of robots figuring that out, well... :lol:
Yes, without conscience, it is very entropic and self-destructive. If machines learn, some machines will probably learn to be protectors to balance it, but that may not happen in the near future.

It is interesting how fast the news trends toward different technologies. Two or three years back, we were suddenly flooded with green technologies (of course, that is the new red); now it's robots.
 
whitecoast said:
Yeah it seems there's more to it than simply being "intelligent" or conscious. I think the technology will continue to be entropic unless it (intentionally or accidentally) develops some ability to absorb intelligence/information/creativity from "the future" to sustain its connection to the creative thought-center, the way the living system does and humans with "trained minds" do. But we're still trying to figure out how to do that with regular humans, so the odds of robots figuring that out, well... :lol:

I suspect that if this connection to "information/creativity" could be established on a large scale, the face of technology would change so completely, we wouldn't even recognize it.

And yeah, when I think about a puter or robot with a "faint soul imprint", I think psychopathic Terminator, not Wall-E.

As for our technology overpowering us, it's already happened. Go into any city, hop on a bus or the underground/metro, and what do you see? Most people are sitting there, staring at their tiny little screens, and either playing games or being "social".

In the meantime, the fanciest technology is possessed by the military and other groups higher up in the power structure. And they're certainly not using it for the benefit of humankind.

So, to me, it seems that the overpowering has already occurred. We don't even need terminators to overthrow humanity.
 
seek10 said:
It is interesting how fast the news trends toward different technologies. Two or three years back, we were suddenly flooded with green technologies (of course, that is the new red); now it's robots.

Oh god, don't get me started again on "Green" stuff...

http://scottiestech.info/2012/01/08/seagate-agrees-with-scottie-about-green-hard-drives/

:mad:
 
Mr. Scott said:
[...]
In the meantime, the fanciest technology is possessed by the military and other groups higher up in the power structure. And they're certainly not using it for the benefit of humankind.
[...]

I wholeheartedly agree. Even as a child in the '60s, when TV shows like Star Trek, The Time Tunnel, Voyage to the Bottom of the Sea, and The Invaders came on the air, I thought to myself that those sci-fi technologies already existed (hidden) or were being worked on by TPTB.
:whlchair: :wow: :whlchair:

Forgive me for being lazy and not pointing out the reference, but didn't the C's say the hidden technologies are 300-400 years advanced of what we see publicly nowadays?

edit: clarification/grammar
 
Al Today said:
Forgive me for being lazy and not pointing out the reference, but didn't the C's say the hidden technologies are 300-400 years advanced of what we see publicly nowadays?

I believe the number was closer to 50-100 years in advance at the time of the session.
 
Heimdallr said:
Al Today said:
Forgive me for being lazy and not pointing out the reference, but didn't the C's say the hidden technologies are 300-400 years advanced of what we see publicly nowadays?

I believe the number was closer to 50-100 years in advance at the time of the session.

I found that session, it is from January 25th, 1997 :

A: Assumptions. Awareness needs to be increased. And, we must tell you that "secret world government" technologies are approximately 150 years in advance of anything that you have access to.
 
In session 7/18/98 they said:

Q: (T) So, we are capable of "Star Trek" right now?
A: In a sense, but there is so much more than that.
Q: (T) Of course. Most people would say that 'cutting edge' science is 25 years ahead of what we see, and I say it is more like a hundred years, and I am even off? Cutting edge science on this planet is more like 3 or 4 hundred years ahead?
A: More like 30 to 40,000 years "ahead!"
Q: (L) Is that because of 4th density influence and information?
A: Yes.
Q: (T) 30 to 40 thousand years? Let me get that number right...
A: Yes, at least.
So there's that too. Perhaps the 150-years-ahead guys are a different level. And Buddy, I agree that the implications of the robot proliferation are ultimately uncertain, but it doesn't seem to bode well in the short term, especially when everyone jumps on the bandwagon rather blindly.

And the C's did warn us multiple times about the dangers of cell phones, for which we now have good evidence. As Scottie said, it's a serious addiction which allows people to "tune out" from reality 24/7, everywhere they go, all the while turning their brains into tapioca with microwaves. I'd say that's overpowered on two levels!
 
LQB said:
Computers are already in control of many weapon systems. If many of these computer control systems end up under a centralized network, then I could see wholesale destruction with just a little sentient-like mind developing on the computer's part.

What concerns me about this is that we now have a situation where there is very possibly a critical mass of machines talking to machines, instead of mostly human operators controlling the machines as it was years ago. That might allow for the conditions in which a kind of sentience could enter the entire system.

Now, it seems, human beings are becoming secondary to all this machine-to-machine interaction and are at the mercy of the algorithms programmed into these systems. That means, for example, that people who depend on government services, including the people who work in those agencies to help them, are all at the mercy of the programming. The people who work at these agencies (assuming they could be contacted!) could only help to the degree the machine programming allows. If the program won't allow something, then that's it, since there's no human being with more authority than the programming itself.
 
kenlee said:
LQB said:
Computers are already in control of many weapon systems. If many of these computer control systems end up under a centralized network, then I could see wholesale destruction with just a little sentient-like mind developing on the computer's part.

What concerns me about this is that we now have a situation where there is very possibly a critical mass of machines talking to machines, instead of mostly human operators controlling the machines as it was years ago. That might allow for the conditions in which a kind of sentience could enter the entire system.

Now, it seems, human beings are becoming secondary to all this machine-to-machine interaction and are at the mercy of the algorithms programmed into these systems. That means, for example, that people who depend on government services, including the people who work in those agencies to help them, are all at the mercy of the programming. The people who work at these agencies (assuming they could be contacted!) could only help to the degree the machine programming allows. If the program won't allow something, then that's it, since there's no human being with more authority than the programming itself.

Absolutely! It's like being stuck in a loop with a recording in which the right option is unavailable. The connectivity is scary.

The only blessing (if you can call it that) is that the worst of the weapon systems are classified at many different levels and any networking between them is verboten.
 
Buddy said:
Psalehesost said:
I guess the idea would be that they would embody human knowledge and civilization, and come to live a life of their own. If a sufficiently complex biological being can interface with consciousness, then perhaps the same holds for a sufficiently complex computer system. (The C's suggested that computers would begin to develop "faint soul imprint".)

Why perhaps? That's the missing piece that's bugging me. The domain of computer technology is so much less ponerized than just about any other area, so I would think it possible to have much clearer vision while thinking in this area. One thing that hasn't been touched on yet: Do you think this topic, or its derivative concern of 'computers taking over humanity' might better relate to biology-based neural net development? These living, organic nets aren't 'computers' strictly speaking, but by the time that fact becomes relevant the term may be used informally. Kind of like how we say "xerox" when we mean "make a copy of..."

What do you think?

In addition to what others have said (which goes beyond the scope of the thoughts I initially had) on the other aspects, this is not merely a question of understanding computers - it is also a question of understanding life and consciousness.

To understand what would make computing systems sentient, we would need to understand exactly what it takes for life in general to become so, and why it becomes so. More is needed than merely a vague idea of complexity - complexity, in life, goes along with more, and some of this "more" must be at the root of the explanation.
 
Psalehesost said:
[...]
To understand what would make computing systems sentient, we would need to understand exactly what it takes for life in general to become so, and why it becomes so. More is needed than merely a vague idea of complexity - complexity, in life, goes along with more, and some of this "more" must be at the root of the explanation.

I began to think of the human body as a mechanical device. A bio-machine if you will. And then my mind went crazy. What/How did these bio-machines we inhabit become sentient? Methinks of a few schools of thought. We have the evolutionist theories. We have the gawd theories. We have theories from the Cs. And there are many more...

Evolution does not consider our bodies as built by other sentient creatures. Gawd theories are mostly about some deity creating these bodies in a likened image and providing sentience. And then we have some cosmic beings in a laboratory manipulating DNA so "souls" can drop in.

Now, computers are being created by mankind. These machines are evolving, with our help. Methinks 'tis only a matter of time until they do become sentient. SkyNet, here we come...
 
SAO said:
And Buddy, I agree that the implications of the robot proliferation are ultimately uncertain, but it doesn't seem to bode well in the short term, especially when everyone jumps on the bandwagon rather blindly.

You like to write for others, no? If everyone is jumping on this bandwagon blindly and it doesn't bode well, and you don't approve, then why not write something to challenge them? If you like, I can post links to some useful material. There is material on the cognitive bias called 'anchoring' and its effects. In fact, Daniel Kahneman's own work in this area demonstrates that once an anchor has taken hold, you can't even pay the person to come up with the right answer. Some of that info can be read here: _http://singularity.org/files/CognitiveBiases.pdf

You could even have some fun with that, since it's very easy to demonstrate the effects. Afterwards, to counter any reaction like "OK, so you're right about the anchor, but not about the 'robots causing economic collapse and mass starvation' thing," you could present a picture that reflects the state of people (in the U.S., for example) in the various contexts in which they live, to see what effect that might have.

The data for this picture can come from many sources, including research firms like IBIS World, and you can investigate the data to confirm or refute it. The available data shows changing demographics, shifts between one technology and another and explains why some jobs disappear and never reappear: entrepreneurial start-ups that take advantage of new technology. Heck, a person might even want to make some money supplying 'hot sauce', since hot sauce production now looks like a fast growing industry due to demographic consumption trends, immigration and international demand from Canada, the United Kingdom and Japan (IBIS World, 2010).

There are a lot of changes going on, and some things are changing so fast that by the time I gather enough info to satisfy myself that I know the story, something significant has happened that changes the picture. This very uncertainty is one way I discovered my "it can't happen" bias, so I'm no longer making that case. I just want to focus on actual data and actual phenomena. Like how Laura says it's necessary to identify the "third man" to find out what "he's" doing (see LQB's siggy).

ATM, the Third Man I seem to see operating in certain segments of society could be attributed to the collective effects of sci-fi anchors driving a sort of anti-technology hysteria. This is obviously a subjective impression. And even if it is an impression of a current truth, it's certainly understandable in societies where a 'caring link' between people is missing. :)
 
[quote author=Psalehesost]
To understand what would make computing systems sentient, we would need to understand exactly what it takes for life in general to become so, and why it becomes so. More is needed than merely a vague idea of complexity - complexity, in life, goes along with more, and some of this "more" must be at the root of the explanation.[/quote]

Speaking of the root, I went back to review the origins of biological-template neural nets, or at least the work that Jeffrey Satinover describes as ongoing in the '50s. I think that, similar to this process of going back to the root as it relates to computing systems, is the process of coming to realize the roots of our own thought on the matter: our starting assumptions about our own thought and what we're doing, for instance. Assuming we were the researchers, I mean.

Referring to The Quantum Brain here: during experimentation with organic nets, many questions arose around the inability of these nets to come up with consistently correct responses given the inputs. Researchers expected that binary computation was natural and that, with sufficient exposure to either/or setups, a steady progression of correct answers would soon begin to accumulate. But that's not what happened. Instead, compared to expectations, the nets performed haphazardly; Marvin Minsky described it with the phrase "in a drunken stagger". For all practical purposes, Minsky collapsed the field, and further interest and funding were redirected toward the development of the machines we make use of today.

The few researchers left in the field of biological neural net development eventually discovered that the solution to the drunken stagger was to add extra layers to the net. With more nodes and more connections possible, there was enough complexity that the problems posed could be processed toward an eventual reduction to a choice between two possible values. Interesting, no? Binary was not basic! With the addition of a backward-propagation algorithm, everything started to go right, but by then, where was anyone who was interested, and where was any funding to make all this work socially relevant? Or even to matter to someone or something?
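The "extra layers" point can be sketched with the textbook XOR case (a toy illustration of mine, with hand-picked weights, not something from Satinover's book): no single threshold unit can compute XOR however its weights are set, but adding one hidden layer of two units makes it trivial.

```python
def step(z):
    # Binary threshold unit: fires (1) only if its weighted input exceeds 0.
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: two threshold units computing OR and NAND of the inputs.
    # A single such unit cannot represent XOR on its own; the extra layer
    # gives the net enough room to do so.
    h_or = step(x1 + x2 - 0.5)
    h_nand = step(-x1 - x2 + 1.5)
    # Output unit: AND of the two hidden activations = XOR of the inputs.
    return step(h_or + h_nand - 1.5)

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

In real nets these weights would be found by the backward-propagation algorithm mentioned above rather than set by hand, but the structural point is the same: the hidden layer is what lets the net reduce the problem to a correct binary choice.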

When I traced all this out to its logical conclusion, it seems we eventually wind up with an independently functioning human being with no need for external input from a programmer and no need for a wireless uplink to a controller, like the operators who remotely fly drone planes. In fact, these bio-robots would be so life-like that the only way to tell them from real humans would be to watch them over a period of time. Kind of like how an assessment of 'psychopathy' would be made today.

Interesting result: psychopath as ultimate achievement of organic neural net development starting from a petri dish. And we complete another circle back to our starting point of identifying psychopathology, no? At least it seems to add another dimension of meaning to the topic question and in that dimension, the answer is probably yes.
 