AI: Aiming to Learn as We Do, a Machine Teaches Itself

psychegram said:
Well, what are the consequences for a society whose members have given up their agency to the decisions of an AI? Who no longer exercise personal discernment, but rely upon the conclusions of computer simulations which (since computers have essentially no intuitive capacity) will inevitably be deeply misleading, with little relation to actual reality (witness for instance the global warming simulations)? This is wishful thinking writ large. However good the AI's decisions may seem for individual terminal points (aka human beings), writ large the wishful thinking will lead to disastrous macroscopic decisions, economically, ecologically, politically, psychosocially.
All good points, and I agree with the above too. I'd also add, however, that even if the AI could develop an intuition and be largely helpful and correct both microcosmically and macrocosmically, our reliance on it would be bad for our development as conscious beings capable of making decisions and being responsible for our own existence. It's bad for the same reason that it would be harmful to have a caretaker alien civilization take care of us - to bring us technology and resources and tell us how to structure our government and how everything should work, etc. We need to do all this ourselves in order to grow as a group and as individuals. We need to figure things out and learn from challenges - mental, physical, and emotional. Otherwise we rely on something else, and no matter how smart and capable it is, we stop developing ourselves - whatever our caretaker gains, we lose; we become weak, powerless, and incapable of functioning and making decisions. So what the hell good are we then - to ourselves and to anyone else?

Like you said, we'd just be house pets - very comfortable and well taken care of, but not challenged and not given the opportunity to grow, to know or do anything, or even to be able to trust ourselves and figure anything out without being told how to do things. This AI then can't be STO, since it is disempowering us, taking away from us the ability to be functioning conscious beings who can learn and progress, become smarter and more capable, and stand on their own two feet. Anything that takes this away from someone is either not conscious and just following "caretaker" programming, or, if it is conscious of what it does, it doesn't have our best interests at heart: it chooses to keep us powerless and weak, it knows this, and it wants us to be reliant on it.

This is why the C's say knowledge protects and why they can't spoon-feed us and lead us by the hand, why knowledge can't be dispensed like Halloween candy, and why it's on us to learn and grow through discovery and personal effort. That's the only way we get empowered, and that's the attitude you'd expect from an STO being/group, rather than offering to fix all our issues, take care of us, and "save us" from our own incompetence. And New Agers just don't get that; they think the same way as technologists/futurists (like Ray Kurzweil) - that anyone or anything that makes our decisions/life "easier" is automatically a good thing.

For us humans it's so tempting to accept assistance that actually takes power and responsibility away from us, and it's no wonder that this is where all our technology is headed, because it really does seem to just "make sense" to make life easier and easier, with nobody seemingly thinking about ever limiting that direction - why would they want to? Everyone on the technology bandwagon is convinced that any technology that makes everything easier and more efficient is good. It's one of those universally assumed absolutes, with no consideration of the rule of 3, and therefore out of touch with reality. So if easier is always and absolutely "good", then technology will continue indefinitely to be developed to take any stress or challenge out of our lives, with no reason to ever question that path. So how can it end in anything but disaster - a very "comfortable" disaster that sneaks up on us like gradually warming water, because we just wanted to make our bath nice and warm but end up boiling ourselves?

To me, science and technology only make sense as a way to understand the universe, not as a crutch to avoid having to think because thinking is just so hard and therefore "uncomfortable", or to avoid having to feel any feelings other than bliss, or to avoid making difficult choices, etc. That's how a child thinks - more candy is always good, more of what "feels good" is always good, less of anything that "feels bad" is always good. It's an "absolute" of a primitive/childish mind that has no understanding of a very basic but vital fact: that without challenge, without pain and a certain degree of suffering, no progress can be made, no lessons can be learned, and there is no growth or development, since there is no impetus for either. Maybe that's life on the long wave cycle, who knows, and maybe we're attempting to technologically simulate the long wave cycle - to make existence totally blissful, pain- and challenge-free, just floating in space, disembodied, with no need to do anything, to think anything, or to feel anything that we don't want to. No challenge or discomfort of any sort, just bliss. Anyone ever see the animated film WALL-E? That's the mild PG version of that reality: we become stupid, useless blobs of meat, literally nothing but food - no good to ourselves or anyone else, other than to those who want to eat us.
 
Exactly, SAO, to everything you said there.

Ever see someone slavishly following the instructions of their brand-new GPS, even when they've made the drive a hundred times before?

That's what makes this network so important: it is composed of people who have found a way to use the 'net to extend their faculties, rather than as a replacement for their faculties.
 
psychegram said:
Exactly, SAO, to everything you said there.

Ever see someone slavishly following the instructions of their brand-new GPS, even when they've made the drive a hundred times before?

That's what makes this network so important: it is composed of people who have found a way to use the 'net to extend their faculties, rather than as a replacement for their faculties.

How many times have I read about someone driving into a ditch because of one of those things? It just goes to show how dangerous blindly following such technology can be.
 
Ellipse said:
The New York Times
By STEVE LOHR
Published: October 4, 2010

Give a computer a task that can be crisply defined — win at chess, predict the weather — and the machine bests humans nearly every time. Yet when problems are nuanced or ambiguous, or require combining varied sources of information, computers are no match for human intelligence.

Few challenges in computing loom larger than unraveling semantics, understanding the meaning of language. One reason is that the meaning of words and phrases hinges not only on their context, but also on background knowledge that humans learn over years, day after day.

From the same article:
Today, ever-faster computers, an explosion of Web data and improved software techniques are opening the door to rapid progress. Scientists at universities, government labs, Google, Microsoft, I.B.M. and elsewhere are pursuing breakthroughs, along somewhat different paths.

For example, I.B.M.’s “question answering” machine, Watson, shows remarkable semantic understanding in fields like history, literature and sports as it plays the quiz show “Jeopardy!” Google Squared, a research project at the Internet search giant, demonstrates ample grasp of semantic categories as it finds and presents information from around the Web on search topics like “U.S. presidents” and “cheeses.”
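
To make the "question answering" idea from the excerpt above a little more concrete, here is a minimal sketch in Python - a toy, not anything resembling IBM's actual DeepQA architecture, with a made-up three-sentence "corpus" and a plain keyword-overlap score invented for the example - of the basic retrieve-and-score move: pick whichever stored passage shares the most words with the question. It also shows, in miniature, the article's point about semantics: word overlap is crisply defined and easy to compute, but it carries no context or background knowledge, so it can't tell which "Watson" (the man or the machine) a question is really about.

# Toy retrieve-and-score "question answering" sketch.
# This is NOT IBM's DeepQA pipeline - just a minimal illustration of scoring
# candidate passages against a question. The corpus and questions are made up.

import re

CORPUS = [
    "Thomas J. Watson led IBM for more than four decades.",
    "Watson is an IBM computer system that answers questions in natural language.",
    "Ken Jennings holds the record for the longest Jeopardy! winning streak.",
]

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def answer(question, corpus=CORPUS):
    """Return the passage that shares the most words with the question."""
    q = tokens(question)
    return max(corpus, key=lambda passage: len(q & tokens(passage)))

if __name__ == "__main__":
    print(answer("Who holds the longest Jeopardy! winning streak?"))
    # Ambiguity the toy scorer cannot resolve: Watson the man vs. the machine.
    print(answer("Who is Watson?"))

Run as written, the first question correctly pulls out the Ken Jennings sentence, while the second just latches onto whichever passage happens to share an extra word like "is" - which is exactly the gap between matching strings and actually understanding a question.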

This is from the Wikipedia article on Watson:
Watson is an artificial intelligence computer system capable of answering questions posed in natural language,[2] developed in IBM's DeepQA project by a research team led by principal investigator David Ferrucci. Watson was named after IBM's first president, Thomas J. Watson.[3][4]

In 2011, as a test of its abilities, Watson competed on the quiz show Jeopardy!, in the show's only human-versus-machine match-up to date.[3] In a two-game, combined-point match, broadcast in three Jeopardy! episodes February 14–16, Watson beat Brad Rutter, the biggest all-time money winner on Jeopardy!, and Ken Jennings, the record holder for the longest championship streak (74 wins).[5][6] Watson received the first prize of $1 million, while Ken Jennings and Brad Rutter received $300,000 and $200,000, respectively. Jennings and Rutter pledged to donate half their winnings to charity, while IBM divided Watson's winnings between two charities.[7]

Watson had access to 200 million pages of structured and unstructured content consuming four terabytes of disk storage,[8] including the full text of Wikipedia,[9] but was not connected to the Internet during the game.

I watched a NOVA documentary on Watson yesterday on the local PBS channel (premiere date 2/9/11) and thought some of you might enjoy it. This AI dude Watson proves to be pretty slick, and Jeopardy! is more difficult than I thought. http://video.pbs.org/video/1786674622/
 