AI: Aiming to Learn as We Do, a Machine Teaches Itself

Ellipse

The New York Times
By STEVE LOHR
Published: October 4, 2010

Give a computer a task that can be crisply defined — win at chess, predict the weather — and the machine bests humans nearly every time. Yet when problems are nuanced or ambiguous, or require combining varied sources of information, computers are no match for human intelligence.

Few challenges in computing loom larger than unraveling semantics, understanding the meaning of language. One reason is that the meaning of words and phrases hinges not only on their context, but also on background knowledge that humans learn over years, day after day.

Since the start of the year, a team of researchers at Carnegie Mellon University — supported by grants from the Defense Advanced Research Projects Agency and Google, and tapping into a research supercomputing cluster provided by Yahoo — has been fine-tuning a computer system that is trying to master semantics by learning more like a human. Its beating hardware heart is a sleek, silver-gray computer — calculating 24 hours a day, seven days a week — that resides in a basement computer center at the university, in Pittsburgh. The computer was primed by the researchers with some basic knowledge in various categories and set loose on the Web with a mission to teach itself.

“For all the advances in computer science, we still don’t have a computer that can learn as humans do, cumulatively, over the long term,” said the team’s leader, Tom M. Mitchell, a computer scientist and chairman of the machine learning department.

The Never-Ending Language Learning system, or NELL, has made an impressive showing so far. NELL scans hundreds of millions of Web pages for text patterns that it uses to learn facts, 390,000 to date, with an estimated accuracy of 87 percent. These facts are grouped into semantic categories — cities, companies, sports teams, actors, universities, plants and 274 others. The category facts are things like “San Francisco is a city” and “sunflower is a plant.”

NELL also learns facts that are relations between members of two categories. For example, Peyton Manning is a football player (category). The Indianapolis Colts is a football team (category). By scanning text patterns, NELL can infer with a high probability that Peyton Manning plays for the Indianapolis Colts — even if it has never read that Mr. Manning plays for the Colts. “Plays for” is a relation, and there are 280 kinds of relations. The number of categories and relations has more than doubled since earlier this year, and will steadily expand.

The learned facts are continuously added to NELL’s growing database, which the researchers call a “knowledge base.” A larger pool of facts, Dr. Mitchell says, will help refine NELL’s learning algorithms so that it finds facts on the Web more accurately and more efficiently over time.

NELL is one project in a widening field of research and investment aimed at enabling computers to better understand the meaning of language. Many of these efforts tap the Web as a rich trove of text to assemble structured ontologies — formal descriptions of concepts and relationships — to help computers mimic human understanding. The ideal has been discussed for years, and more than a decade ago Sir Tim Berners-Lee, who invented the underlying software for the World Wide Web, sketched his vision of a “semantic Web.”

Today, ever-faster computers, an explosion of Web data and improved software techniques are opening the door to rapid progress. Scientists at universities, government labs, Google, Microsoft, I.B.M. and elsewhere are pursuing breakthroughs, along somewhat different paths.

For example, I.B.M.’s “question answering” machine, Watson, shows remarkable semantic understanding in fields like history, literature and sports as it plays the quiz show “Jeopardy!” Google Squared, a research project at the Internet search giant, demonstrates ample grasp of semantic categories as it finds and presents information from around the Web on search topics like “U.S. presidents” and “cheeses.”

Still, artificial intelligence experts agree that the Carnegie Mellon approach is innovative. Many semantic learning systems, they note, are more passive learners, largely hand-crafted by human programmers, while NELL is highly automated. “What’s exciting and significant about it is the continuous learning, as if NELL is exercising curiosity on its own, with little human help,” said Oren Etzioni, a computer scientist at the University of Washington, who leads a project called TextRunner, which reads the Web to extract facts.

Computers that understand language, experts say, promise a big payoff someday. The potential applications range from smarter search (supplying natural-language answers to search queries, not just links to Web pages) to virtual personal assistants that can reply to questions in specific disciplines or activities like health, education, travel and shopping.

“The technology is really maturing, and will increasingly be used to gain understanding,” said Alfred Spector, vice president of research for Google. “We’re on the verge now in this semantic world.”

With NELL, the researchers built a base of knowledge, seeding each kind of category or relation with 10 to 15 examples that are true. In the category for emotions, for example: “Anger is an emotion.” “Bliss is an emotion.” And about a dozen more.

Then NELL gets to work. Its tools include programs that extract and classify text phrases from the Web, programs that look for patterns and correlations, and programs that learn rules. For example, when the computer system reads the phrase “Pikes Peak,” it studies the structure — two words, each beginning with a capital letter, and the last word is Peak. That structure alone might make it probable that Pikes Peak is a mountain. But NELL also reads in several ways. It will mine for text phrases that surround Pikes Peak and similar noun phrases repeatedly. For example, “I climbed XXX.”

NELL, Dr. Mitchell explains, is designed to be able to grapple with words in different contexts, by deploying a hierarchy of rules to resolve ambiguity. This kind of nuanced judgment tends to flummox computers. “But as it turns out, a system like this works much better if you force it to learn many things, hundreds at once,” he said.

For example, the text-phrase structure “I climbed XXX” very often occurs with a mountain. But when NELL reads, “I climbed stairs,” it has previously learned with great certainty that “stairs” belongs to the category “building part.” “It self-corrects when it has more information, as it learns more,” Dr. Mitchell explained.
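(As a purely illustrative aside, and not NELL's actual code: every pattern, seed fact and function name below is invented, but it caricatures in a few lines of Python the kind of coupled learning the article describes, where a text pattern suggests a category and prior knowledge of other categories vetoes a bad guess.)

seed_facts = {
    "mountain": {"Mount Everest", "Kilimanjaro"},
    "building part": {"stairs", "roof"},
}

patterns = {"I climbed XXX": "mountain"}   # a text pattern the system has come to trust

def infer(sentence, known=seed_facts, pats=patterns):
    """Return (phrase, category) guesses suggested by one sentence."""
    guesses = []
    for pattern, category in pats.items():
        prefix = pattern.split("XXX")[0]
        if sentence.startswith(prefix):
            candidate = sentence[len(prefix):].rstrip(".")
            # Cross-category constraint: a phrase already known to belong
            # to another category is not relabelled on this evidence alone.
            prior = [c for c, members in known.items() if candidate in members]
            if prior and category not in prior:
                continue        # "stairs" stays a "building part"
            guesses.append((candidate, category))
    return guesses

print(infer("I climbed Pikes Peak"))   # [('Pikes Peak', 'mountain')]
print(infer("I climbed stairs"))       # [] - blocked by what was learned elsewhere

The point of the toy is only that learning many categories at once gives the system something to check each new guess against.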

NELL, he says, is just getting under way, and its growing knowledge base of facts and relations is intended as a foundation for improving machine intelligence. Dr. Mitchell offers an example of the kind of knowledge NELL cannot manage today, but may someday. Take two similar sentences, he said. “The girl caught the butterfly with the spots.” And, “The girl caught the butterfly with the net.”

A human reader, he noted, inherently understands that girls hold nets, and girls are not usually spotted. So, in the first sentence, “spots” is associated with “butterfly,” and in the second, “net” with “girl.”

“That’s obvious to a person, but it’s not obvious to a computer,” Dr. Mitchell said. “So much of human language is background knowledge, knowledge accumulated over time. That’s where NELL is headed, and the challenge is how to get that knowledge.”

A helping hand from humans, occasionally, will be part of the answer. For the first six months, NELL ran unassisted. But the research team noticed that while it did well with most categories and relations, its accuracy on about one-fourth of them trailed well behind. Starting in June, the researchers began scanning each category and relation for about five minutes every two weeks. When they find blatant errors, they label and correct them, putting NELL’s learning engine back on track.

When Dr. Mitchell scanned the “baked goods” category recently, he noticed a clear pattern. NELL was at first quite accurate, easily identifying all kinds of pies, breads, cakes and cookies as baked goods. But things went awry after NELL’s noun-phrase classifier decided “Internet cookies” was a baked good. (Its database related to baked goods or the Internet apparently lacked the knowledge to correct the mistake.)

NELL had read the sentence “I deleted my Internet cookies.” So when it read “I deleted my files,” it decided “files” was probably a baked good, too. “It started this whole avalanche of mistakes,” Dr. Mitchell said. He corrected the Internet cookies error and restarted NELL’s bakery education.
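(Again purely as an illustration, with invented names rather than NELL's actual mechanism: one wrongly accepted fact teaches a bad text template, and the template then stamps the wrong label onto everything that fits it, until a human removes the bad seed.)

baked_goods = {"pie", "bread", "cake", "cookies"}
baked_goods.add("Internet cookies")            # the one wrongly accepted fact

learned_patterns = set()

def learn_patterns(sentence):
    # A sentence mentioning a "known" baked good contributes its template.
    for item in baked_goods:
        if item in sentence:
            learned_patterns.add(sentence.replace(item, "XXX"))

def classify(sentence):
    # Any phrase that fills a learned template is labelled a baked good too.
    for pattern in learned_patterns:
        prefix, suffix = pattern.split("XXX")
        if sentence.startswith(prefix) and sentence.endswith(suffix):
            baked_goods.add(sentence[len(prefix):len(sentence) - len(suffix)])

learn_patterns("I deleted my Internet cookies.")   # bad template: "I deleted my XXX."
classify("I deleted my files.")
print("files" in baked_goods)                      # True: the avalanche has started
# The human correction is simply to delete the bad seed fact (and the
# templates derived from it), after which the loop behaves again.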

His ideal, Dr. Mitchell said, was a computer system that could learn continuously with no need for human assistance. “We’re not there yet,” he said. “But you and I don’t learn in isolation either.”

http://www.nytimes.com/2010/10/05/science/05compute.html
 
I suspect that any AI we create will be a psychopath. What's a psychopath but a brain without a heart? And we don't know how to create genuine emotions in a machine, never mind empathy (which may be a spiritual function, and not something you can build into a machine in the first place). So unless we solve that issue before we make a computer think and have free will, I don't see how it could be anything but a psychopath. Emotions are what give us reason to make choices in the first place, and the intellect is just good at figuring out how to do things and the specifics of what to do to accomplish the otherwise emotionally/empathy-driven direction. So either an AI will just sit there idly and do nothing because it has no impetus to even make choices to begin with, or its choices will be based on some sort of idle intellectual curiosity (not sure if such a thing even exists without emotional backing).

You can only make a robot/computer pretend to care about humans when it has no free will (and therefore no real intelligence, just a lot of programming, lots of if/then functions). As soon as it gets free will, it must then choose to care, and that's where we'll probably encounter an issue - what reason would prompt a consciousness that has free will to care about other consciousnesses and their well-being? As far as I know empathy is required, and without it the machine will at best be STS, and at worst won't function at all since it won't have either a selfish or a selfless "drive" that prompts it. I hope the comets hit before computer/robot psychopaths start destroying things - human psychopaths are enough!
 
From the description in the article, the NELL AI effort is impressive from a software engineering perspective.

Playing "devil's advocate" to SAO's ideas, I could imagine that a rudimentary form of "emotion" can be expressed into source code.
The golden rule, "do unto others as you would have others do unto you" seems easy enough to code. Or at least it is not a huge stretch of imagination to be able to code rules or a mathematical equation of "the golden rule".
In my line of work, automated test suites/tools are built to achieve the most results with the least user intervention. This translates to running a sequence of (almost) all possible permutations to find bugs/defects in the code. Which is not so different from a person contemplating the cascade of effects from a course of action he is currently contemplating.
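To make that a little more concrete, here is a minimal, hypothetical sketch of the sort of thing I mean; the actions, the numeric "effects" and the acceptance rule are all made up for illustration:

def acceptable_to_self(effect):
    # "Do unto others...": judge each predicted effect on someone else as
    # if the agent itself were on the receiving end of it.
    return effect >= 0        # toy acceptance rule: no predicted harm

def evaluate(action, predicted_effects_on_others):
    # Enumerate the predicted effects of the action, a little like a test
    # suite walking through the permutations of what a change could break.
    if all(acceptable_to_self(e) for e in predicted_effects_on_others):
        return action + ": permitted"
    return action + ": blocked by the golden-rule check"

print(evaluate("share surplus food", [+2, +1]))   # helps both others
print(evaluate("seize resources", [-3, +1]))      # harms at least one other person

Whether a rule like that amounts to anything like real empathy is, of course, exactly the question SAO raised.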

From a software perspective, it would be much easier to "update the source code" with new functions/rules than, say, trying to reform a real-life psychopath, since he is "hard-wired" not to feel (or "learn") emotion.
But on the other hand, the term "hard wire" comes from the field of electronics... :/

The problem I see from a programming perspective would be if the AI classifies/categorizes itself as different from humans (no surprises there) and then applies a different set of rules to machines than to humans. Which is kind of scary. :scared:
But is this any different from a group of children without parents/adults to teach them "right from wrong"? "Lord of the Flies" springs to mind...
 
SAO said:
I hope the comets hit before computer/robot psychopaths start destroying things - human psychopaths are enough!

Maybe the two subjects are connected.
 
Well, this NELL project seems identical to the CYC project started in 1984 (link below).

I agree with both replies, because they aren't actually opposed or anything. It's like the whole idea of how this Work we do helps us see that everything fits in the larger picture somewhere. All we need do is get the left-brain and right-brain married together and working with all the available cognitive faculties.

I don't subscribe to sharp distinctions between the left and right hemispheres of the brain, but I do feel like the right brain, through powerfully strong emotions, generates the motivation and the leads for retrieving all relevant facts that pertain to a particular situation or thinking problem.

Once you get all the pieces together, it is precisely through our empathy and emotions that the myriad collection of data in our minds can be "felt" all at once. It is these emotions that put reality into perspective to expose ponerization and psychopathology - something I don't think a computer will ever do.

I can hardly wait to discover DARPA's plans for it (as if I couldn't guess :rolleyes:).

------------------------
I couldn't find any serious criticism of the NELL project, but in case it's interesting, here are some criticisms that were made of CYC:

The Cyc project has been described as "one of the most controversial endeavors of the artificial intelligence history", so it has inevitably garnered its share of criticism, which includes:

* The complexity of the system - arguably necessitated by its encyclopedic ambitions - and the consequent difficulty in adding to the system by hand
* Scalability problems from widespread reification, especially as constants
* Unsatisfactory treatment of the concept of substance and the related distinction between intrinsic and extrinsic properties
* The lack of any meaningful benchmark or comparison for the efficiency of Cyc's inference (general logical deduction) engine
* The current incompleteness of the system in both breadth and depth and the related difficulty in measuring its completeness
* Limited documentation
* The lack of up-to-date on-line training material makes it difficult for new people to learn the systems
* A large number of gaps in not only the ontology of ordinary objects but an almost complete lack of relevant assertions describing such objects

Source: _http://en.wikipedia.org/wiki/Cyc
 
Interesting thought, SAO, about AI and psychopathy. Will the original intent of the programmer ever be enough to make an AI STO? Perhaps, one day, we will see battles of STS AI vs. STO AI.

Bud, thanks for the link. We can see NELL as CYC v2 in a way.

Regarding AI, here are the dangers I see:
- AI + robotics: very real interaction with our physical world. This is the subject of the movie Terminator.
- AI + self-programming: the machine is no longer doing only what you put into it.
- AI + Internet: if the AI knows how to use the net to self-replicate using virus techniques, you can no longer just unplug the machine...
- At some point humanity relies completely on AI and no longer wants to exercise its free will.

Imagine the first 3 points combined...
 
I just logged on to Google a few hours ago and the first item to catch my attention was a delicious recipe for cashews and kale. How does Google know I have a garden with Winterbor Kale, Curly Scotch Kale, and Red Russian Kale in abundance? Google remembers I ordered Kale seed last spring and suggests a Kale recipe in the fall. How thoughtful! It is the search engine which is useful; it provides me with answers to questions I didn't know I had.

George Dyson's essay "Turing's Cathedral" hypothesizes that Google may be the machine that teaches itself. There are many interesting essays and dialogues on AI at www.edge.org. A comment by Nassim Taleb on Craig Venter's "Creation of a Bacterial Cell Controlled by a Chemically Synthesized Genome" points to the risk that Artificial Intelligence and Artificial Genomes pose to humanity.

I think the fear of psychopathic AI is misdirected unless AI and AG can be combined to create skin jobs, like those in Blade Runner. The real danger is the psychopaths who create artificial intelligence and artificial life without understanding or considering that they may destroy humanity with their technology. Technology may usher in a scientific dictatorship or human extinction. Here is Dyson on Google...

http://www.edge.org/3rd_culture/dyson05/dyson05_index.html said:
"The whole human memory can be, and probably in a short time will be, made accessible to every individual," wrote H. G. Wells in his 1938 prophecy World Brain. "This new all-human cerebrum need not be concentrated in any one single place. It can be reproduced exactly and fully, in Peru, China, Iceland, Central Africa, or wherever else seems to afford an insurance against danger and interruption. It can have at once, the concentration of a craniate animal and the diffused vitality of an amoeba." Wells foresaw not only the distributed intelligence of the World Wide Web, but the inevitability that this intelligence would coalesce, and that power, as well as knowledge, would fall under its domain. "In a universal organization and clarification of knowledge and ideas... in the evocation, that is, of what I have here called a World Brain... in that and in that alone, it is maintained, is there any clear hope of a really Competent Receiver for world affairs... We do not want dictators, we do not want oligarchic parties or class rule, we want a widespread world intelligence conscious of itself."

My visit to Google? Despite the whimsical furniture and other toys, I felt I was entering a 14th-century cathedral — not in the 14th century but in the 12th century, while it was being built. Everyone was busy carving one stone here and another stone there, with some invisible architect getting everything to fit. The mood was playful, yet there was a palpable reverence in the air. "We are not scanning all those books to be read by people," explained one of my hosts after my talk. "We are scanning them to be read by an AI."

When I returned to highway 101, I found myself recollecting the words of Alan Turing, in his seminal paper Computing Machinery and Intelligence, a founding document in the quest for true AI. "In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children," Turing had advised. "Rather we are, in either case, instruments of His will providing mansions for the souls that He creates."

Google is Turing's cathedral, awaiting its soul. We hope. In the words of an unusually perceptive friend: "When I was there, just before the IPO, I thought the coziness to be almost overwhelming. Happy Golden Retrievers running in slow motion through water sprinklers on the lawn. People waving and smiling, toys everywhere. I immediately suspected that unimaginable evil was happening somewhere in the dark corners. If the devil would come to earth, what place would be better to hide?"

For 30 years I have been wondering, what indication of its existence might we expect from a true AI? Certainly not any explicit revelation, which might spark a movement to pull the plug. Anomalous accumulation or creation of wealth might be a sign, or an unquenchable thirst for raw information, storage space, and processing cycles, or a concerted attempt to secure an uninterrupted, autonomous power supply. But the real sign, I suspect, would be a circle of cheerful, contented, intellectually and physically well-nourished people surrounding the AI. There wouldn't be any need for True Believers, or the downloading of human brains or anything sinister like that: just a gradual, gentle, pervasive and mutually beneficial contact between us and a growing something else. This remains a non-testable hypothesis, for now. The best description comes from science fiction writer Simon Ings:

"When our machines overtook us, too complex and efficient for us to control, they did it so fast and so smoothly and so usefully, only a fool or a prophet would have dared complain."

Here is Nassim Taleb on the ultimate Black Swan...

http://www.edge.org/documents/archive/edge319.html#rc said:
If I understand this well, to the creationists, this should be an insult to God; but, further, to the evolutionist, this is certainly an insult to evolution. And to the risk manager/probabilist, like myself & my peers, this is an insult to human Prudence, the beginning of the mother-of-all exposure to Black Swans. Let me explain.

Evolution (in complex systems) proceeds by undirected, convex bricolage or tinkering, inherently robust, i.e., with the achievement of potential stochastic gains thanks to continuous and repetitive small, near-harmless mistakes. What men have done with top-down, command-and-control science has been exactly the reverse: concave interventions, i.e., the achievement of small certain gains through exposure to massive stochastic mistakes (coming from the natural incompleteness in our understanding of systems). Our record in understanding risks in complex systems (biology, economics, climate) has been pitiful, marred with retrospective distortions (we only understand the risks after the damage takes place), and there is nothing to convince me that we have gotten better at risk management. In this particular case, because of the scalability of the errors, you are exposed to the wildest possible form of informational uncertainty (even more than markets), producing tail risks of unheard proportions.

I have an immense respect for Craig Venter, whom I consider one of the smartest men who ever breathed, but, giving fallible humans such powers is similar to giving a small child a bunch of explosives.
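A quick toy illustration of the convex/concave contrast Taleb is drawing, with entirely made-up numbers: tinkering takes many small, bounded losses in exchange for the occasional large gain, while the top-down intervention collects small steady gains while remaining exposed to a rare catastrophic loss.

import random
random.seed(0)

def convex_tinkering():
    # many small, bounded losses; an occasional large gain
    return 50 if random.random() < 0.02 else -1

def concave_intervention():
    # small steady gains; a rare catastrophic loss
    return -200 if random.random() < 0.02 else 1

trials = 10000
print("tinkering total:   ", sum(convex_tinkering() for _ in range(trials)))
print("intervention total:", sum(concave_intervention() for _ in range(trials)))
# The steady-gain strategy looks better on almost every individual trial,
# yet its rare blow-ups dominate the total.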
 
There is this extract from a Cassiopaean transcript (I don't know exactly which session; I searched and found the quote):

A: Well, first, no being that is given intelligence to think on its own is, in fact, completely soul-less. It does have some soul imprint. Or what could be loosely referred to as soul imprint. This may be a collection of psychic energies that are available in the general vicinity. And this is stretching somewhat so that you can understand the basic ideas, even though in reality it is all far more complex than that. But, in any case, there is really no such thing as being completely soul-less, whether it be a natural intelligence or an artificially constructed intelligence. And, one of the very most interesting things about that from your perspective, is that your technology on 3rd density, which we might add, has been aided somewhat by interactions with those that you might refer to as "aliens," is now reaching a level whereby the artificially created intelligences can, in fact, begin to develop, or attract some soul imprint energy. If you follow what we are saying. For example: your computers, which are now on the verge of reaching the level whereby they can think by themselves, will begin to develop faint soul imprint.
Q: (L) That's not a pleasant thought.

So... will psychopaths somehow reincarnate into ultra-intelligent machines? As they don't feel any emotions, could it be interesting for them?
 
Thanks, Tigersoap, for this reminder. This is the session of 09/09/95.

There's this other one (talking about Atlanteans):

19941119.html
Q: (T) What power did these crystals gather?
A: Sun.
Q: (T) Was it necessary for them to have power gathering stations on Mars and the Moon. Did this increase their power?
A: Not necessary but it is not necessary for you to have a million dollars either. Get the correlation? Atlanteans were power hungry the way your society is money hungry.
Q: (T) Was the accumulation of this power what brought about their downfall?
A: Yes.
Q: (T) Did they lose control of this power?
A: It overpowered them the same way your computers will overpower you.

In a nice bit of synchronicity, last night I discovered the existence of the Caprica TV series, which is about exactly what we are talking about here (http://en.wikipedia.org/wiki/Caprica_%28TV_series%29).
 

Attachment: Caprica.jpg
It has often occurred to me, à la George Dyson's commentary, that Google is already a kind of AI. But it is not an AI in the sense we usually think of it, i.e. a self-contained intelligence: rather, it's a participatory network intelligence, learning thanks to the input from over a billion Internet users, refining its knowledge of each of us individually and of humanity as a whole.

What worries me is not that machine intelligence will go Skynet on us and start building terminator drones (though of course, the Pentagon dearly wants those drones). Of far greater concern is that machine intelligence will become too good at making the 'right' decision. go2's experience with the kale recipe is just one example of this. Basically, if the AI gets the right answer 99.99% of the time, it becomes far too easy for humans to start over-relying on it. Before long they are no longer exercising free will in any meaningful sense, but are rather being managed by a sort of soft paternalism ... one they freely choose to enter, because it makes so much sense.

The science fiction writer Poul Anderson explored this idea in a series of books (Harvest of Stars, The Stars Are Also Fire), in which a future civilization is essentially run entirely by machine intelligences which create what appears to be a perfect, utopian existence for the human inhabitants ... but on closer examination, the humans have been robbed of all agency, have become essentially decorative. Like pets.

It seems to me that we are already witnessing the beginning of this process: an over-reliance on the conclusions arrived at by means of computer simulation and mechanical computation, and a decreased confidence in our own intuitive reasoning capacities. I see this throughout the sciences, especially, but also in wider society. "Google said it, so it must be true," seems to be the mantra of much of the youth.

Now, add brain chips into the mix (which Intel plans on having on the mass market by 2020): direct neural interface devices which allow the 'net to be accessed with a thought. Of course the communication will be two-way as well, and since the brain is so strongly interconnected, access to one part of it allows access to the entire structure. It's been suggested by some that implanted RFID chips are all that is necessary to achieve this access. The possibilities for a very direct form of mind control are pretty obvious.

Where does it all lead, though? If as the C's suggest, our computers will overpower us the way Atlanteans were overpowered by their crystalline technology, what does that mean, exactly?

Well, what are the consequences for a society whose members have given up their agency to the decisions of an AI? Who no longer exercise personal discernment, but rely upon the conclusions of computer simulations which (since computers have essentially no intuitive capacity) will inevitably be deeply misleading, with little relation to actual reality (witness for instance the global warming simulations)? This is wishful thinking writ large. However good the AI's decisions may seem for individual terminal points (aka human beings), writ large the wishful thinking will lead to disastrous macroscopic decisions, economically, ecologically, politically, psychosocially.

We already see this happening.

BUT, it is also possible to disengage from this dynamic. To use the network as a tool, to collate, filter and trade information, whilst maintaining our hold on, and continuing to develop, our individual discernment and free agency. This is of course what the network on this forum is already doing.
 
psychegram said:
Well, what are the consequences for a society whose members have given up their agency to the decisions of an AI? Who no longer exercise personal discernment, but rely upon the conclusions of computer simulations which (since computers have essentially no intuitive capacity) will inevitably be deeply misleading, with little relation to actual reality (witness for instance the global warming simulations)? This is wishful thinking writ large. However good the AI's decisions may seem for individual terminal points (aka human beings), writ large the wishful thinking will lead to disastrous macroscopic decisions, economically, ecologically, politically, psychosocially.

The way your post lays it out, the current concerns with AI are synonymous with "the way people currently do thinking", and that is so true from my perspective. In this sense, machines taking over people's thinking is just a redundancy.

Without using that 'look again' aspect of thinking, one might never consider enough actual possibilities before arriving at a conclusion, or consider the possible consequences afterward.

Example:

Not long ago, a California appeals court ruling clamping down on homeschooling by parents without teaching credentials sent shock waves across the state, leaving an estimated 166,000 children as possible truants and their parents at risk of prosecution. The homeschooling movement never saw the case coming. Why?

One can't help but wonder if some people are so fixated on fear and compliance with 'rules of thinking and behavior' that they can't see any potential for idiotic implications.

In other words, it's NOT OK to 'risk' traumatizing children's minds with the uncredentialed education efforts of thoughtful, caring parents, but it IS OK to traumatize both parents and children by having the State take possible possession of the children, throw the parents in jail, ruin their lives in every way from their reputations to their careers and future, spend more tax money to house and feed these 'new' criminals, etc., etc., and for what? :huh:

And if you were to challenge the court or state representative over this lunacy, do you know what they would say? "Well, we hope it will never have to come to that."

What??????

Then why was it made a criminal offense when it could have been a civil issue, a referendum or even left alone?

Read the full story here:
_http://articles.sfgate.com/2008-03-07/news/17170360_1_appeals-court-credential-parents

As I see it, the real problem as always, is getting people to turn on their brains, because so many people seem to rely more on the feelings associated with their more comfortable established 'judgment' methods.

Consider this ignorance: 'Prohibition' creates the market for gangsters to make huge profits. Anyone who wishes to take drugs this weekend will be able to acquire them. Therefore, clearly, the solution to the gangster problem is...more Prohibition!

It is already a sorry state.

Even when 'official law' is not involved, people seem to 'stop' thinking when they 'feel' the correct answer. Some older computer programmers may remember Jerry Weinberg. Weinberg warned against thinking you knew enough and no longer had to keep considering more possibilities. Back in the day when most of the programmers in this world were writing COBOL, PL/1 or raw assembler, Jerry Weinberg (1982) gave the example of a programmer writing an assembler. The programmer discovered that he could do table lookups based on the opcode number and so designed his program. But the hardware people did not hold the opcode numbering scheme sacrosanct, and when they made a valid change, the program design broke.
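A hypothetical sketch of the kind of coupling Weinberg described (the opcodes and mnemonics here are invented, not any real instruction set): the table below assumes the numbering scheme is stable, and when the "hardware people" renumber, the program keeps running but quietly gives wrong answers.

# Invented opcodes, purely to illustrate Weinberg's point: the assembler
# below assumes the opcode numbering is fixed and uses it directly as a
# list index.
MNEMONICS = ["NOP", "LOAD", "STORE", "ADD", "JMP"]   # position == opcode number

def disassemble(opcodes):
    return [MNEMONICS[op] for op in opcodes]

print(disassemble([1, 3, 4]))        # ['LOAD', 'ADD', 'JMP'] - works today

# The hardware people make a perfectly valid change: a new opcode 2 is
# inserted and everything after it is renumbered. The program still runs,
# but now quietly produces wrong output - the design, not the code, broke.
MNEMONICS = ["NOP", "LOAD", "MUL", "STORE", "ADD", "JMP"]
print(disassemble([1, 3, 4]))        # ['LOAD', 'STORE', 'ADD'] - silently wrong

The lesson, as I read Weinberg, is not about opcode tables as such but about believing you know enough and no longer need to keep considering possibilities.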

Any project suffers the same problem when people from 'different' departments working on the same project manage to screw it up anyway for the same reason mentioned above. Increasing the head count may increase processing power but does not necessarily increase the quality of the result.
_http://dustyvolumes.com/archives/371

A black hole looked at one way may be a dead star. Looked at another way, it might be a star trickle-charging. So, how should it be looked at? Maybe the significance of that question is not in its answer, but that someone feels very strongly that it can only be one way. :)

(And sometimes humor can help break a trance long enough to allow a useful re-frame. That's why I always liked George Carlin.) :D



Edit: Added an extra link.
 
Hi Bud, Yes, when we look at ourselves and those around us, it seems Artificial Intelligence is a redundancy. :)

Bud said:
Not long ago, a California appeals court ruling clamping down on homeschooling by parents without teaching credentials sent shock waves across the state, leaving an estimated 166,000 children as possible truants and their parents at risk of prosecution. The homeschooling movement never saw the case coming. Why?

The State backed down when the parents of these children arrived at Sacramento in caravans and lobbied their legislators. Then, to emphasize the point, they posted a round-the-clock picket in front of the judge's home. The State meets so little resistance to its creeping (pun intended) intrusion into every aspect of human life that it was caught by surprise. There are two groups of homeschooling parents: a Christian group and a highly educated group. Both groups are organized, motivated, intelligent, and affluent. They made it clear this was a battle they were going to fight. So for now, my grandchildren will be home schooled.
 
It does seem most people are dependent on technology, whether it is mobiles, TV or the internet.
As well as people not discerning the information they come across, AI or otherwise, I also think about the stress and frustration people experience when the internet doesn't work or when there is no mobile reception.
It's kind of like withdrawal, and people start forgetting that they could write to a friend to keep in contact or go to the library to look something up.
The convenience of the internet makes looking things up so much easier, though, which is one of the problems when people start taking things at face value.
I wonder how people would cope for a period of time without technology. It might be chaotic, where people feel helpless without it.

I don't really know in what way the C's mean that our computers will overpower us, but combined with what they said on soul imprint and the above article about developing AI, it doesn't sound nice.
There seems to be many ways for technology to overpower us.
 
Bud said:
A black hole looked at one way may be a dead star. Looked at another way, it might be a star trickle-charging. So, how should it be looked at? Maybe the significance of that question is not in its answer, but that someone feels very strongly that it can only be one way. :)

(And sometimes humor can help break a trance long enough to allow a useful re-frame. That's why I always liked George Carlin.) :D

Which (getting a bit off topic) is very much the problem in modern astrophysics: astronomers are very attached to the theories with which they explain the universe to the rest of us. The cosmic microwave background has to be the afterglow of the Big Bang; quasar redshifts have to be a result of the quasars racing away from us, due to the Big Bang; stars have to be isolated, nuclear explosions with predetermined life expectancies; dark matter has to dominate the dynamics of galaxies; and black holes must rule at their hearts. Alternative narratives are simply not allowed: all that is permitted, is to carry out the calculations that show observations support the theories.

Mechanical thinking.

But then this is not a modern problem. Obviously, Gurdjieff identified this as a central issue a century ago....
 
psychegram said:
Obviously, Gurdjieff identified this as a central issue a century ago....

Indeed, and we can see why he eventually 'retired' and turned to allegory. It seems it's about the only way to show something to people who don't yet have the eyes to see. The mechanical part of the mind is a control freak, and a person's sense of himself is so identified with his ideas and beliefs that to 'make' him see would be to tear him apart psychologically, so the control freak needs to relax quite a bit and be willing to give up its grip on its own.

Like CYC, the mechanical mind's 'awareness' is completely limited to its collection of information packets, and it only knows about things that it has copied into its processing space. Like CYC, it has complete knowledge of everything in its own program holding area, and it doesn't know that anything except this space exists, so it has to believe it knows enough or that what it doesn't know doesn't matter! The mechanical part of the mind doesn't really know everything of course - in fact it hardly knows anything.

This is the error I believe Ouspensky and his fellow students made in the last chapter of ISOTM.

The wider context here starts with the knowledge that pre-revolution St. Petersburg was pretty much a hotbed of esoteric interest. The "non-materialist" agenda was something of which any journalist in the city would have been aware. We must also consider that G's traveling companions were some kind of 'high-brow' and well-to-do 'oil kings' or whatnot.

Concerning the description of G's behavior, where Ouspensky saw 'grandeur' or 'something felt outside the ordinary run of phenomena', the journalist wrote of interests "higher than war and peace" and a shocking contempt for his fellow man. And there was no reason for the journalist to rationalize or blind himself to what he might actually be seeing in this case.

So, was Ouspensky or the journalist closer to the truth? Who 'calculated' the correct answer?

From the outside we can compare Ouspensky's description of Gurdjieff's behavior with the journalist's description, and combined with the wider context, which includes a knowledge of G's lessons, the historical context and the differences between the social groups, we may realize that G had just demonstrated his lesson in "plastics"!

Gurdjieff had deliberately completely synchronized himself (movements, attitude, bearing, etc) to a completely different social group before boarding the carriage and I reckon that at this particular point in time, the journalist nailed it. :)


--------------
ref:
ISOTM, 331-333
 