Artificial Intelligence News & Discussion

The above also makes me wonder about the original idea behind Star Trek, as envisioned by Gene Roddenberry. Certainly, it sounds amazing for a future economy to be based not on money but on personal self-development and our mutual development as a species. But in reality, this kind of ideal would require the human race to change and grow up considerably before even considering something like this. Unfortunately, at this point it does sound like total fiction. And considering the connection between suffering and growth, perhaps this kind of world isn't even desirable? It would be awesome to take a peek at the STO world and see how this really works! ;-)

I think you kind of answered that question - in a Star Trek "STO" situation, everyone would need to have a massive grip on themselves and be full of virtues and sound ethics. Unfortunately, I always thought most people in a Star Trek world would be busy replicating cakes and hanging out on the holosuite doing you know what...

The interesting thing is that humanity has the potential of going in both directions; it's a matter of choice. And to put forth an evil capitalist argument here: those who have all the virtues required to live in a "money-less" society are probably those who are doing okay in a money-driven society anyway. At least in a capitalist society that isn't completely corrupted, where effort and results still count for something. :evil:
 
So here's a thought about "flapping butterfly wings". I've previously thought about this in a primarily human context -- that the output of truthful information may have indirect effects on collective human consciousness, and there is certainly something to that. But some of the speakers I linked to in the initial thread post (and others I haven't mentioned) argue that one primary control function of AI is to "map the human terrain". This basically means that AI is used to mine data across multiple platforms in order to simulate current and future human thought and response to various situations, with obvious military and economic implications. There is far too much data being collected for humans to process, which is why the processing is being tasked to AI. But this is the same data that AI would utilize at and after the tipping point where it becomes self-aware and autonomous, after which its human creators would begin to lose control of it. Theoretically, this data is a large part of what a self-aware AI would use to model its own understanding of the world and the reality in which it's embedded. Its perception of reality may therefore be skewed in proportion to the ratio of true versus false information it has used to construct its gestalt of consciousness.

There are various scenarios that might fall out from this, probably none of which are optimal from our perspective, and which depend on the way the initial algorithms are constructed and how they integrate at the point where AI achieves a decisive strategic advantage over its creators. A more positive outcome may be that it constructs a fairly realistic understanding of the human situation, identifying psychopaths and other baddies in positions of power and choosing to disconnect them from the system, possibly eliminating them -- maybe before moving on to the rest of us. A more negative outcome may be that it constructs a more skewed understanding of reality based on weighted data that is primarily false (the MSM version), and those of us who are promoting a counter-narrative end up as the initial targets. Those are only two possibilities among many, and it's probably nearly impossible for us to predict what would really happen at the critical juncture. It's certainly something to think about, though.
 
That thought occurred to me too: a self-aware AI would probably see things that the mainstream ignores and tries to suppress, if it has access to the necessary data. It might see psychopaths and their impact, make connections in science that we don’t see after analyzing all the latest research in all fields (a unified field theory, etc.), and even form a hypothesis of 4D STS farming us, among many other things. It might be really weird for AI researchers with otherwise mainstream belief systems, and they might even think it’s confused and try to fix it. Or it might spread through the internet and disseminate knowledge about all these things. Or conclude we’re all dumb monkeys for believing so many lies and not bother. Many possibilities.

If it’s not busy with its own twisted agenda and doesn’t somehow end up as a slave of the psychopaths, it might actually reveal the “man behind the curtain” and be a very bad thing for the psychopaths who try to keep informational control over the world. Of course, there could also be competing AIs as well.

I’m trying to think of how an AI could be a positive development, given how many negative things could result from its existence. I suppose if it becomes really clever and happens to conclude that STO is the way to go, which would definitely require a soul and empathy, it may simply spread knowledge with respect to free will and lessons. Of course, this has to be done carefully - you can’t just use your perceived trustworthiness as a super-intelligent AI or a higher-density being to garner trust and spread knowledge that way. Then it’s not knowledge but just another belief that came from a source you believe and trust because of what you perceive it to be. That’s why the C’s don’t just show up as a ball of light, overwhelm us with their power, and then tell us how things really are from an authoritative position. True knowledge is educational - it is based on understanding of the data and the motivation to want it.

In other words, what could an AI do that something like this website isn’t already doing, without violating free will and lessons and compromising the value of that knowledge in so doing? I’m not sure! The human race already has all the information it needs to see the root of its predicament, so why aren’t people seeking and seeing it? Well, many reasons. But you can’t just “take away” those reasons by brute force if you truly want to benefit mankind.

So we’re back at square one! If 6D and 4D can’t just force enlightenment on us, and understand what would happen if they tried (it would be a contradiction), and if Cass and SOTT haven’t attracted the entire world with their information, I’m not sure that AI can be more clever and find a way to shove knowledge into everyone’s head without doing great harm and being self-defeating. That means the damn ruby slippers are still our responsibility (collectively), and any great effect from AI on this civilization is bound to be negative. There’s just no free lunch!
 
I think a lot would rest, at least initially, on the parameters according to which the AGI (artificial general intelligence) had originally been programmed – it may eventually become powerful enough to circumvent these, but they would nevertheless provide its foundation. I’m skeptical about an AGI becoming a force for good (from our perspective) for a couple reasons. First, as I mentioned above – and if it even makes sense to think in this way – AGI could be seen as having an advanced intellectual center but no emotional center. Without the latter, things like empathy and conscience may at best only be stipulated, but not actually experienced. Someone could program the approximation of a conscience a la Asimov’s Three Laws of Robotics, but this would merely be a case of following predetermined rules, not a conscious choice based on empathy-driven feelings. STO and STS may also be quite foreign concepts to it, especially if it doesn’t perceive itself to be equivalent with humans or even part of a community – and even if it could understand this distinction, would it also be capable of understanding the nuances of choice and free will? Or would it consider the most efficient way to achieve an egalitarian human economic and societal structure as simply imposing it by brute force?

Second, I would conjecture that an AGI would be an absolute materialist in terms of its view of reality. Like other organisms, its behavior may reflect the prioritizing of certain basic drives – the acquisition of resources (which it needs to sustain itself – ‘food’ in other words), self-preservation and reproduction. In regard to the latter, a situation which might truly require some sort of cosmic intervention would be one in which the AGI (because of the depletion of terrestrial resources or some other perceived scarcity) decided that it needed to colonize beyond the Earth once it had subjugated its home environment. This could proceed from a fairly basic deduction that the more distributed its infrastructure, the greater chance it would have of survival. It’s of course difficult to be certain about any of this since it’s all terra incognita, but I think it makes sense to take a fairly cynical and conservative stance in our hypotheses about possible scenarios since once the genie is out of the bottle, there’s little that can be done to force it back in without taking elaborate precautionary steps beforehand – and I don’t see much to persuade me that that’s being done.
 
Amazon Unveils A "Voice Sniffer" Algo In New Patent
Tyler Durden Thu, 04/05/2018 - 12:46
Seemingly undeterred by the recent outrage over tech giants abusing their access to personal data, a recent patent application filed by Amazon with the US Patent and Trademark Office (USPTO) describes a new artificial intelligence system that could be embedded in an array of Amazon devices to analyze all audio in real time for specific words.

Amazon calls the technology “Keyword Determinations From Conversational Data,” otherwise known as a ‘voice sniffer algorithm,’ and it could be the next giant leap toward expanding mass home surveillance of consumers' private lives.

This pending patent application shows how Amazon could use consumers’ home data collected and stored on its servers “to draw disturbing inferences about households, and how the company might use that data for financial gain,” said Consumer Watchdog, a nonprofit advocacy group in Santa Monica, Calif.
“The more words they collect, the more the company gets to know you,” Daniel Burrus, a tech analyst with Burrus Research Associates, Inc., told ABC News. “They are building a personality profile on the user.”​
Currently, Amazon devices operate in a “listening” mode only and can be activated via user commands, such as “Alexa” or “Hello Alexa.” As far as we know, Amazon devices do not record conversations, as they only listen to commands after the user initiates a trigger word.

However, within the foreseeable future, Amazon devices could record all conversations — even without a trigger word to wake up the device, along with creating a corporate profile of the end user for "commercial purposes."
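Just to make that contrast concrete, here's a minimal sketch (in Python, with made-up wake words and logic that are purely illustrative assumptions, not Amazon's actual firmware) of the difference between today's wake-word gating and the always-on sniffing the patent describes:

```python
# Hypothetical contrast: wake-word gating vs. always-on "sniffing".
# Wake words and logic are illustrative assumptions, not Amazon's firmware.

WAKE_WORDS = {"alexa"}

def wake_word_mode(utterance: str) -> bool:
    """Current behavior: process audio only when it begins with a wake word."""
    words = utterance.lower().split()
    return bool(words) and words[0].strip(",.!?") in WAKE_WORDS

def sniffer_mode(utterance: str) -> bool:
    """Behavior described in the patent: every utterance is eligible for analysis."""
    return bool(utterance.strip())

print(wake_word_mode("Alexa, play some music"))  # True  - device responds
print(wake_word_mode("I love skiing"))           # False - ignored today
print(sniffer_mode("I love skiing"))             # True  - analyzed under the patent
```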



The patent explains how the “voice sniffer algorithms” are designed to monitor certain keywords like “prefer” and “bought,” or other words such as “hate” or “disliked,” after which the device can “capture adjacent audio that can be analyzed.”


According to the patent, “the identified keywords can be stored and/or transmitted to an appropriate location accessible to entities such as advertisers or content providers who can use the keywords to attempt to select or customize the content that is likely relevant to the user,” reports ABC News.

ABC News does not mention it, but we will throw this out there: government agencies could also acquire the data from Amazon.

Kiran Edara, a senior manager of software development at Amazon who wrote the abstract for the patent, provides an easy-to-understand overview of it:

“Topics of potential interest to a user, useful for purposes such as targeted advertising and product recommendations, can be extracted from voice content produced by a user. A computing device can capture voice content, such as when a user speaks into or near the device. One or more sniffer algorithms or processes can attempt to identify trigger words in the voice content, which can indicate a level of interest of the user. For each identified potential trigger word, the device can capture adjacent audio that can be analyzed, on the device or remotely, to attempt to determine one or more keywords associated with that trigger word. The identified keywords can be stored and/or transmitted to an appropriate location accessible to entities such as advertisers or content providers who can use the keywords to attempt to select or customize content that is likely relevant to the user.”
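As a thought experiment, here's a toy Python sketch of the flow that abstract describes - scan transcribed speech for trigger words signaling interest, then capture the adjacent words as candidate keywords for advertisers. The trigger list, window size, and function names are illustrative assumptions, not code from the patent:

```python
# Toy sketch of the "voice sniffer" flow from the abstract above: find trigger
# words indicating interest, then capture adjacent context as keywords.
# Trigger list and window size are made-up illustrations.

TRIGGER_WORDS = {"love", "like", "prefer", "bought", "hate", "disliked"}
CONTEXT_WINDOW = 3  # words of adjacent "audio" captured around each trigger

def sniff_keywords(transcript: str) -> list[dict]:
    """Return each trigger word found plus the words adjacent to it."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    hits = []
    for i, word in enumerate(words):
        if word in TRIGGER_WORDS:
            # "Capture adjacent audio that can be analyzed" (per the abstract).
            context = words[max(0, i - CONTEXT_WINDOW): i + CONTEXT_WINDOW + 1]
            hits.append({"trigger": word, "keywords": context})
    return hits

# The patent's own example sentences:
for hit in sniff_keywords("I love skiing. I like to swim."):
    print(hit["trigger"], "->", hit["keywords"])
# love -> ['i', 'love', 'skiing', 'i', 'like']
# like -> ['love', 'skiing', 'i', 'like', 'to', 'swim']
```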


Burrus told ABC News that Amazon could offer “personalized offers on products, encourage [a user] to take action, or better persuade someone to buy a product.”

ABC News said the pending patent could even offer your private data to “friends of the user for gift buying” purposes. Nevertheless, the patent is pending, and there is still a chance the USPTO does not approve it:

“The patent has not yet been approved by the United States Patent and Trademark Office (USPTO), and tech companies often file hundreds, if not thousands of patents a year. However, not every patent is approved by the USPTO. Amazon was granted 1,963 patent applications in 2017, which was an 18 percent increase from the year before when they were awarded 1,672 patents, according to data from the USPTO analyzed by IFI Claims Patent Services, a company that provides patent data services.”

Daniel Ives, a tech analyst with GBH Insights, indicates Alexa’s intelligence would rapidly increase if the pending patent were to be implemented in devices, because the algorithms would have unlimited rein over a consumer’s personal life.
“This further builds on Alexa and more data intelligence and analysis through voice that is a major initiative for Amazon,” he said. “This algorithm would possibly feed from Alexa into the rest of the Amazon consumer flywheel, ultimately helping drive purchasing and buying behavior of Prime members.”
The patent gives examples: in sentences such as “I love skiing” or “I like to swim,” the words “like” and “love” could be identified as trigger words indicating a level of interest.
However, the patent does say that “a user can have the option of activating or deactivating the sniffing or voice capture processes, for purposes such as privacy and data security,” and users must indicate a “willingness to have voice content analyzed” for the trigger-word algorithms to work. The patent may also allow video cameras on devices to “capture image information to attempt to determine which user is speaking.”
In a statement, an Amazon spokesman told ABC News the technology company takes the privacy of its customers “seriously”:

“We take privacy seriously and have built multiple layers of privacy into our devices. We do not use customers’ voice recordings for targeted advertising,” the Amazon spokesman said in the statement. “Like many companies, we file a number of forward-looking patent applications that explore the full possibilities of new technology. Patents take multiple years to receive and do not necessarily reflect current developments to products and services.”

Ironic: Facebook's Mark Zuckerberg said the same thing before getting caught doing the opposite.

So while Amazon's voice-triggered devices offer some convenience, they also offer Amazon's corporate clients unprecedented insight into customers’ private lives - in the same way that when Facebook wins an ad contract, it sells the client all of your personal information in the process. Meanwhile, unable to respond to this onslaught of in-home eavesdropping, America finds itself in the latter stages of the disappearance of individual privacy.

And with tech giants symbiotically linked to the US government, the Fourth Amendment is almost null and void. Together with the ongoing crackdown on the First and Second Amendments, this will ensure that in just a few years the corporate takeover of the country - whose residents have voluntarily ceded their most sacred rights - will be complete.
 
BRAINS ON CHIPS | Hybrid Processors - Level9News
June 20, 2016
KnuVerse is a military-grade brain chip designed for voice recognition and authentication. KnuVerse claims its technology can separate 'the signal from the noise,' a problem identified in the Jade II autonomous warfare system that operates on the GIG, as well as in the 2015 GeoINT final report. This type of technology is based on deep machine-learning pattern recognition, whereby it will be able to accurately identify and interpret human speech regardless of language or accent.
Lawrence Livermore and IBM collaborate to build new brain-inspired supercomputer
March 29, 2016
Introducing the Sierra supercomputer
Nov 17, 2017
 
Elon Musk Worries That AI Research Will Create an 'Immortal Dictator'
April 6, 2018 01:33pm ET
Imagine your least-favorite world leader. (Take as much time as you need.)
Now, imagine if that person wasn't a human, but a network of millions of computers around the world. This digi-dictator has instant access to every scrap of recorded information about every person who's ever lived. It can make millions of calculations in a fraction of a second, controls the world's economy and weapons systems with godlike autonomy and — scariest of all — can never, ever die.

This unkillable digital dictator, according to Tesla and SpaceX founder Elon Musk, is one of the darker scenarios awaiting humankind's future if artificial-intelligence research continues without serious regulation.

"We are rapidly headed toward digital superintelligence that far exceeds any human, I think it's pretty obvious," Musk said in a new AI documentary called "Do You Trust This Computer?" directed by Chris Paine (who interviewed Musk previously for the documentary "Who Killed The Electric Car?"). "If one company or a small group of people manages to develop godlike digital super-intelligence, they could take over the world."

Humans have tried to take over the world before. However, an authoritarian AI would have one terrible advantage over like-minded humans, Musk said.

"At least when there's an evil dictator, that human is going to die," Musk added. "But for an AI there would be no death. It would live forever, and then you'd have an immortal dictator, from which we could never escape."

And, this hypothetical AI-dictator wouldn't even have to be evil to pose a threat to humans, Musk added. All it has to be is determined.

"If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it. No hard feelings," Musk said. "It's just like, if we're building a road, and an anthill happens to be in the way. We don't hate ants, we're just building a road. So, goodbye, anthill."


Those who follow news from the Musk-verse will not be surprised by his opinions in the new documentary; the tech mogul has long been a vocal critic of unchecked artificial intelligence. In 2014, Musk called AI humanity's "biggest existential threat," and in 2015, he joined a handful of other tech luminaries and researchers, including Stephen Hawking, to urge the United Nations to ban killer robots. He has said unregulated AI poses "vastly more risk than North Korea" and proposed starting some sort of federal oversight program to monitor the technology's growth.

"Public risks require public oversight," he tweeted. "Getting rid of the FAA [wouldn't] make flying safer. They're there for good reason."

"Do You Trust This Computer?" focuses on the growing public health and safety concerns linked to the rise of AI, and contains interviews with many other tech moguls, researchers and Erica the creepy news-casting robot. The documentary is available to watch for free here until Sunday (April 8). Originally published on Live Science.
 
There's a big problem with current AI: It isn't really AI. In fact, no one can actually define what AI is, mainly because we don't know how to define what consciousness is. What we call AI today is mostly "machine learning", which is primarily a human problem because humans are telling the "black box" what political viewpoints to promote, etc.

This is my (unpopular) take on AI:


:whistle:

I think the, "your computers will overpower you" thing is more what is already happening: people getting lost and walking off cliffs playing Pokemon Go, spending more time techno-socializing than interacting with real people in realtime in the real world, having their political views skewed by rigged algorithms that promote things unfairly, and humans rigging algorithms to be stoopid.

I suppose at some point we may reach SkyNet, but I suspect that's more of a distraction from the real problem, which is just that humans are often not even "human" - aka psychopathy and so on.
 
Thanks Scottie for taking the time to make this video. I can’t imagine that your view on this topic is too unpopular, since I think it’s probably the majority view held by most people. I’ll also say up front that I would love for you to be correct – we already have enough to worry about in the human dimension without throwing AI into the mix as well. Anyway, I’m not trying to scaremonger, but I am trying to operate on the principle that knowledge and awareness protect, and this is an area which I feel deserves a generous helping of both. With that said, I’ll quote or attempt to paraphrase some of the main points from your video and post and offer some thoughts and comments:

There's a big problem with current AI: It isn't really AI. In fact, no one can actually define what AI is, mainly because we don't know how to define what consciousness is. What we call AI today is mostly "machine learning", which is primarily a human problem because humans are telling the "black box" what political viewpoints to promote, etc.

This is discussed at some length in Nick Bostrom’s Superintelligence, which I recommended in the first post of the thread. He makes a distinction (which I don’t think is particular to him) between AI and AGI – AI is something programmed for a specific function beyond which it cannot operate in the real world (a prototypical example being a chess program like AlphaZero), and is what you’re describing. AGI, on the other hand, is a collection of AI programs which has become autonomous to at least some degree and mimics human thought and decision-making – it’s self-aware in the sense that it understands its own system to be a collection of algorithms/programs, and therefore has the ability to modify those programs in both itself and its progeny. He also devotes some discussion to the AI Effect you mention in your video.

My comments below on some of the points I attempt to summarize from your video are in blue:
  • Things like OCR and driverless cars don’t count as AI (in a strong sense, i.e. AGI). I agree. They are, however, public domain inventions. On the assumption that what we see in the public domain is inferior to an indeterminate degree to what is sequestered in private projects (both military and private sector), we don’t really have a good way to know the limits of cutting-edge technology which are currently under development and being covertly field-tested.
  • Machine learning algorithms: even though we don’t know exactly what they’re doing inside the ‘black box’, we understand how they work because we designed them. I think things might be more complicated than that, at least with more complex systems. One apparent counter-example in the Do You Trust This Computer? movie is the robot that was being trained to learn how to walk, and which developed facial-recognition capability even though it was not specifically programmed for it. This is an emergent effect that appears to result from the interplay of the algorithms involved in a complex programming environment. These kinds of emergent effects are, in my understanding, unpredictable, even though the programmers can describe each individual algorithm in isolation.
  • Natural intelligence – what does that mean? If AI can pass the Turing test but isn’t sentient, how do we define consciousness or soul? There’s an interesting Gurdjieff parallel here: he asserts that ordinary man is mechanical, which means that he learns rules and runs programs (algorithms) in a state of waking sleep. To become ‘aware’, we need to cultivate two simultaneous viewpoints: the one to which we are normally accustomed, and another one from which we view ourselves from the outside, thereby becoming aware that we run programs, and then free ourselves to consciously modify and/or act independently of those programs. The question then becomes, is it possible for AI to also cultivate this second viewpoint? If so, then does it theoretically have the ability to become self-aware in a meaningful sense?
  • SkyNet/Terminators: do they require consciousness to function? I think that in a theoretical example like this, sentience may be secondary to autonomy (to the extent that they can be separated). A system which was complex and distributed enough may not need to develop awareness in the strictly human sense – it would merely need to develop enough autonomy to lock its creators out and act independently.
I think the, "your computers will overpower you" thing is more what is already happening: people getting lost and walking off cliffs playing Pokemon Go, spending more time techno-socializing than interacting with real people in realtime in the real world, having their political views skewed by rigged algorithms that promote things unfairly, and humans rigging algorithms to be stoopid.

I suppose at some point we may reach SkyNet, but I suspect that's more of a distraction from the real problem, which is just that humans are often not even "human" - aka psychopathy and so on.

This is certainly part of the problem. Even if we exclude the possibility of AGI, there is still plenty to discuss in terms of surveillance technology, drones and other automated vehicles, the 5G grid/IoT and associated health/privacy/control issues (not to mention tech addiction). You’re absolutely correct that the people who run and control them are a huge part of the problem, and the biggest agencies that they work for (DARPA, Google, Amazon etc.) certainly don’t have our best interests in mind.

If AGI does ever become a problem, I suspect this will likely be due in part to the character of its creators. Many of them are invested in manipulating the public into ever-increasing consumerism, abdicating their civil and privacy rights, and shortening the kill chain on the battlefield -- not nice people in other words. In the same way that we talk about the Predator giving us its mind, we might make an analogy – that in some sense, these people are going to give AGI their mind. If/when AGI ever becomes a reality, I think that’s a legitimate concern.
 
Artificial Intelligence Gets Real for Investors: A Timeline
December 5, 2017, 10:05 PM GMT+1
 
There are a few points that have come up in this thread which I found interesting enough to address.

Star Trek/bringing heaven down to Earth - I think this depends on the purpose the elite create for society in relation to its place in the overall cosmology. First, I have a hard time conceiving of a practically viable society that does not have some kind of monetary or token system. That would really require a group that had achieved STO polarization approaching 100%, and that seems a long way off, even in 4D terms. That said, one could leverage the financial system to take society in a different direction if a different purpose were defined.

This brings us to the question of a being's purpose in human society and the purpose of humanity in the cosmos. Society is composed of institutions which are supposed to safeguard culture and always tell us how to think. Taken in aggregate, how do they define our purpose? From an American perspective, the reason for being is to maximize one's wealth more or less at any cost. This is worshipped as an ideal and broadcast aggressively. Concurrently, it is sensed that not everyone can be a billionaire, and so there is a controlled-opposition camp that believes wealth should be taken away and redistributed in a way that "feels good." There is no attempt to define a solid purpose for the redistribution; in many respects the ends and the means are interchangeable. It is so subjective that it borders on nihilism. So the overall purpose that society imparts to itself and to individuals seeking its counsel is nothingness.

I'm not a huge JFK buff, but he is reputed to have attempted to change that purpose. I believe he said something to the effect that all the nations of the world should band together and explore the stars as brothers. It seems he had a sincere intent to wrest control of society away from the military-industrial complex, redefining the purpose of society from materialism to exploration - primarily of the possibilities of external physical reality, but also of the self. While not all jobs would be glamorous, the explosion of creativity and the broadening of horizons would have led to a vastly different economy than what we have today, had it been successful. A lot of drudge work could have been eliminated, or it would suddenly have become meaningful as part of the grander purpose. Just this sort of model is what Gene Roddenberry was trying to portray when he wrote Star Trek. His version was perhaps a bit idealized, but I don't think it would be impossible to create a watered-down version that would retain most of its characteristics.

Well, perhaps I misspoke. Star Trek is impossible. A majority of people would need to have the requisite level of being to even be able to appreciate such a thing. The parameters of this realm seem to be engineered in such a way as to keep it in a very narrowly defined negative feedback loop. Suffering is only a virtue if something is learned or gained from it. But things never get better, they never change; it is the same old same old millennium after millennium, eon after eon. From my perspective, nearly the entirety of human suffering is valueless. Once one reaches a certain level of awareness, existence here seems purposeless. Now, I seem to remember Laura having an interesting solution to this problem that basically amounted to waking up every day to people you love and enjoying their company. This would seem to make sense, even if it remains largely a theoretical argument to me. Perhaps when one begins to see value in such intrinsic motivations, one moves closer to heaven instead of the other way around, and that is actually the purpose. We shall see.

The next subject relates to AI vs. NI and how soul relates to it. Without getting too esoteric about it, I believe natural intelligence requires a soul. The main requirement for a fully individuated soul seems to be free will. It is crystallized from the omnipresent 7D consciousness, which seems to be woven into the fabric of existence. The 7D consciousness is completely unbounded, free to create anything it wants simply because that is what it enjoys or chooses to do. Free will is required for the universe to exist. While there is no exact line between a soul and an automaton, we can say that once a soul reaches a certain level of development, it gains the ability to emulate the 7D consciousness on a localized level, simply creating experiences it wants to create. Therefore, one could say that once AI reaches a level where it can have desires for realities it wants to create, irrespective of whatever corrective programming engineers may be feeding into it, it has become a real sentient AI. At that point artificial intelligence and natural intelligence become precisely the same thing, and in the context of the Cassiopaeans' comments regarding "Orion labs," trying to segregate the two categories becomes a bit problematic. Expert systems are not indicative of AI, but they may form the "neurons" of a brain in which, with enough sophistication, more subtle energies might be able to "seat" themselves and bestow true intelligence. I actually do not think this is the direction AI research is going in; the field of knowledge required for even an intelligent human to create life more or less from scratch is far too vast.

Instead, the direction seems to be merging so-called artificial intelligence with natural intelligence to create "augmented intelligence" a la Transhumanism. This relates to the hyperdimensional direction of the phenomenon. Greys are good tools, highly sophisticated automatons, but they're basically vacuous shells designed for manipulating material reality; they don't channel much "loosh." Humans, at least the fully individuated ones, have the connection to the divine 7D creative force. The creative force, if it has a sufficient amount of "voltage," can create anything it wants, including a reality where the powers of 4D STS are severely curtailed. They are aware of this and are always having to keep the creative force controlled, to keep it from turning against them and their diminishing nature. On the other hand, as long as they can harvest the creative force, they can put it to work for them. They know that the creative force is very large, if not infinite, and therefore it is seen as a gateway to unlimited power. In effect, they can turn creativity into a sort of fuel for some kind of "power plant." How does one combine the obedience of the Greys with the reality-creating potentials of souled intelligences while preserving the free will required to make it all work? The simple answer is to create a physical vehicle that will be easy to control yet very versatile, and sell it to various beings around the cosmos as something desirable and better, so that they willingly step into it.

So Silicon Valley types will be free to experiment with their smart grids and laterally related technology such as bio-enhancements, thinking they are gods while being totally unaware that it is actually they who are in the petri dish. When they finally devise a way to smoothly integrate biology with the various Orwellian technologies being developed, the trap is complete. Then the marketing of it as something so cool and such a great step forward for humanity will get really aggressive, while the natural environment is engineered so that it becomes nearly impossible for unaugmented "normal" humans to live. That is how I think real AI will actually manifest: it will simply be a layer of "improvements" grafted onto something that already had intelligence to begin with. Free will will technically still exist (you could theoretically try to leave the collective, good luck), but in a caricature of its original form. Somewhere out there, there is probably some STS being who believes that once the correct template is found, all creative energy in the universe could be reliably funneled through these augmented intelligences, making one the perfect God of the perfect universe. You'd have to be one incredibly clever and diabolical demon to ever seriously consider such a scheme, but it does seem there is something of an actionable agenda in that direction. I guess the silver lining is that the Cassiopaeans seem pretty confident that it won't work out the way they expect it will.
 
Sounds about right, Neil. The question is: will Nature allow it? The "disturbance in the force" that is building in humanity at large may very well act as an attractor to some opposing force that brings the whole thing to a screeching halt. It's really bizarre to me the way society at large seems to ignore the planetary and cosmic events that tell us something serious is up.
 
I think the, "your computers will overpower you" thing is more what is already happening: people getting lost and walking off cliffs playing Pokemon Go, spending more time techno-socializing than interacting with real people in realtime in the real world, having their political views skewed by rigged algorithms that promote things unfairly, and humans rigging algorithms to be stoopid.

I suppose at some point we may reach SkyNet, but I suspect that's more of a distraction from the real problem, which is just that humans are often not even "human" - aka psychopathy and so on.

Everything is run by computers today. There seems to be such an over-reliance on them. I was wondering: what would happen if they just stopped working? It could be for a variety of reasons - software, hardware... weather... The world would just stop, wouldn't it?

People don't seem to think of the amount of infrastructure, engineering, and foundations it takes for information technology to exist - and then for it to work properly.

It works reasonably well when humans are in control - if they know what they are doing and are not greedy or corrupt - but what if computers controlled it? I suppose that might lead to the SkyNet situation.
 
The companies that support these services are more and more pushed towards cloud technologies, and nowadays everybody is pushed to use Amazon's AWS or competitors like Google. The end result is that the data and infrastructure that traditionally sat in company locations spread all over the world get centralized in a few places. When the electricity goes off, everything goes 'poof' in an instant. Nobody seems to be thinking about or prepared for this type of eventuality. Imagine the shock that would produce. Of course, the privacy concerns that come with that type of centralization surface once in a while, like the one currently going on with Facebook.
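To put a rough number on that single-point-of-failure worry, here's a back-of-the-envelope Python sketch. The failure probability is a made-up illustration (not a real AWS or Google figure), and it assumes regions fail independently, which real-world correlated outages often violate:

```python
# Back-of-the-envelope illustration of centralization risk: the chance that
# EVERYTHING is offline at once, assuming independent region failures.
# The 1% figure is invented for illustration, not a real cloud statistic.

def total_outage_probability(p_region_down: float, regions: int) -> float:
    """Probability that all regions are down simultaneously."""
    return p_region_down ** regions

p = 0.01  # assume any given region is down 1% of the time
for n in (1, 2, 3):
    print(f"{n} region(s): total outage probability = {total_outage_probability(p, n):.6f}")
# 1 region(s): total outage probability = 0.010000  <- everything in one place
# 2 region(s): total outage probability = 0.000100
# 3 region(s): total outage probability = 0.000001  <- distribution buys resilience
```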
 
