How Google & Facebook Censor Content & Demonetize Independent Media

April 12, 2018 - Coincidence? On February 4, 2004, the Pentagon killed a project to amass the personal browsing and viewing habits of American citizens. On the same date, Facebook launched. – Investment Watch Blog

Back-dated 02.04.04 - Pentagon Kills LifeLog Project

The Pentagon canceled its so-called LifeLog project, an ambitious effort to build a database tracking a person's entire existence.

Run by Darpa, the Defense Department's research arm, LifeLog aimed to gather in a single place just about everything an individual says, sees or does: the phone calls made, the TV shows watched, the magazines read, the plane tickets bought, the e-mail sent and received. Out of this seemingly endless ocean of information, computer scientists would plot distinctive routes in the data, mapping relationships, memories, events and experiences.

LifeLog's backers said the all-encompassing diary could have turned into a near-perfect digital memory, giving its users computerized assistants with an almost flawless recall of what they had done in the past. But civil libertarians immediately pounced on the project when it debuted last spring, arguing that LifeLog could become the ultimate tool for profiling potential enemies of the state.

Researchers close to the project say they're not sure why it was dropped late last month. Darpa hasn't provided an explanation for LifeLog's quiet cancellation. "A change in priorities" is the only rationale agency spokeswoman Jan Walker gave to Wired News.

However, related Darpa efforts concerning software secretaries and mechanical brains are still moving ahead as planned.
(Source: https://www.wired.com/2004/02/pentagon-kills-lifelog-project/)
LifeLog is the latest in a series of controversial programs that have been canceled by Darpa in recent months. The Terrorism Information Awareness, or TIA, data-mining initiative was eliminated by Congress – although many analysts believe its research continues on the classified side of the Pentagon's ledger. The Policy Analysis Market (or FutureMap), which provided a stock market of sorts for people to bet on terror strikes, was almost immediately withdrawn after its details came to light in July.

"I've always thought (LifeLog) would be the third program (after TIA and FutureMap) that could raise eyebrows if they didn't make it clear how privacy concerns would be met," said Peter Harsha, director of government affairs for the [URL='http://cra.org/']Computing Research Association
.

"Darpa's pretty gun-shy now," added Lee Tien, with the Electronic Frontier Foundation, which has been critical of many agency efforts. "After TIA, they discovered they weren't ready to deal with the firestorm of criticism."

That's too bad, artificial-intelligence researchers say. LifeLog would have addressed one of the key issues in developing computers that can think: how to take the unstructured mess of life and recall it as discrete episodes – a trip to Washington, a sushi dinner, construction of a house.

"Obviously we're quite disappointed," said Howard Shrobe, who led a team from the Massachusetts Institute of Technology Artificial Intelligence Laboratory which spent weeks preparing a bid for a LifeLog contract. "We were very interested in the research focus of the program ... how to help a person capture and organize his or her experience. This is a theme with great importance to both AI and cognitive science."

To Tien, the project's cancellation means "it's just not tenable for Darpa to say anymore, 'We're just doing the technology, we have no responsibility for how it's used.'"

Private-sector research in this area is proceeding. At Microsoft, for example, minicomputer pioneer Gordon Bell's program, MyLifeBits, continues to develop ways to sort and store memories.

David Karger, Shrobe's colleague at MIT, thinks such efforts will still go on at Darpa, too.

"I am sure that such research will continue to be funded under some other title," wrote Karger in an e-mail. "I can't imagine Darpa 'dropping out' of such a key research area."

Related Wired coverage:
  • Pentagon Wants to Make a New PAL
  • Pentagon Alters LifeLog Project
  • A Spy Machine of DARPA's Dreams
  • Hide Out Under a Security Blanket

 
April 8, 2018 - To Facebook, You're Not the Customer, You're the Product (WhoWhatWhy)

Imagine a stranger walking up to you on the street and saying: “If you give me your personal information and that of your friends and family, I’ll tell you which character from Harry Potter you are.” What sounds like a ridiculous proposition is, in a nutshell, the business model of Facebook.

The social media giant reportedly now has more than 2.2 billion monthly active users, i.e., people who log on at least once per month. But that’s a bit misleading. Because individuals are “users” in the same way that chickens are “users” of gigantic battery farms. Those hens aren’t customers, they are the product.

Just as the chickens don’t know where their eggs disappear to, Facebook users have no clue in whose hands their personal information will end up. Most of them probably don’t even realize how much data they are forking over when they “agree” to the terms of service (which virtually no one reads) before playing even the dumbest of games or taking quizzes telling them which type of pudding they are.

They certainly don’t seem to be asking themselves why they would need to grant the makers of Candy Crush access to their friends list just to play a game a 5-year-old could master.

And, because Facebook is essentially a monopoly, the people who rely on it to stay in touch with friends and family do not have a lot of choice about whether they want their data harvested.

Case in point: buried in a little-noticed statement the company released earlier this week is an admission that “most people on Facebook could have had their public profile scraped” by “malicious actors,” who are using phone numbers and email addresses to take information from these profiles.

That sounds as though these “malicious actors” are doing pretty much what app makers and Facebook advertisers are doing — with the key difference being that they aren’t paying Facebook.

So they’re kinda like the NSA. Speaking of … those guys must feel like idiots for violating the Constitution to illegally spy on American citizens when all they had to do was create a bunch of apps and quizzes to get us to hand over our information voluntarily.

That brings us to the users themselves. While a lot of people complain about their data being taken — for example, the 87 million Facebook users whose information was used by Cambridge Analytica to help elect Donald Trump — many of them are not taking the steps available to protect their information and communications. On the other hand, Facebook doesn’t make it easy to protect users from having their data harvested. Still, we now live in a world in which convenience seems to be more important to many Americans than privacy. If you don’t believe me, just ask Alexa.

However, and here’s the key, there are a lot of people who value privacy but also want to take advantage of social media. They don’t have a lot of options because Facebook is essentially a monopoly, and granting its users a maximum of privacy runs counter to its business model.

This is one area in which government has an obligation to step in. Facebook being used by foreign and domestic actors to subvert democracy is another.

It is apparent that Mark Zuckerberg and his crew won’t do anything they don’t have to. For example, Facebook will strengthen data protection provisions this month, but only because the EU has mandated it. And there might even be some protections that only apply to Europeans.

Zuckerberg also announced that Facebook would start verifying the identity of people who purchase political ads on the platform. Whoopdeefuckingdoo! The only big deal about this step is that it has taken so long. Foreign nationals are prohibited from spending money on US elections, so, for example, with regard to Russians purchasing Facebook ads or promoting Facebook pages in the past, Zuckerberg is merely saying that the company will now start following the law.

Naturally, this “concession” comes a week before he will appear before Congress to explain how Facebook is making a buck on ruining US democracy.

Like the internet itself, Facebook was originally designed to facilitate connections, which it has done. But like most nice things, people have found ways to screw it up big-time. And — Surprise! Surprise! — it turns out we can’t rely on the people who are getting rich off the screw up to fix things.
 
Monday, April 23, 2018 - More information, faster removals, more people: an update on what we're doing to enforce YouTube's Community Guidelines (Official YouTube Blog)

In December we shared how we’re expanding our work to remove content that violates our policies. Today, we’re providing an update and giving you additional insight into our work, including the release of the first YouTube Community Guidelines Enforcement Report.

Providing More Information -
We are taking an important first step by releasing a quarterly report on how we’re enforcing our Community Guidelines. This regular update will help show the progress we’re making in removing violative content from our platform. By the end of the year, we plan to refine our reporting systems and add additional data, including data on comments, speed of removal, and policy removal reasons.

We’re also introducing a Reporting History dashboard that each YouTube user can individually access to see the status of videos they’ve flagged to us for review against our Community Guidelines.

Machines Helping to Address Violative Content -
Machines are allowing us to flag content for review at scale, helping us remove millions of violative videos before they are ever viewed. And our investment in machine learning to help speed up removals is paying off across high-risk, low-volume areas (like violent extremism) and in high-volume areas (like spam).

Highlights from the report -- reflecting data from October - December 2017 -- show:

We removed over 8 million videos from YouTube during these months.

The majority of these 8 million videos were spam or attempts to upload adult content, and they represent a fraction of a percent of YouTube's total views during this time period.
  • 6.7 million were first flagged for review by machines rather than humans
  • Of those 6.7 million videos, 76 percent were removed before they received a single view.
For example, at the beginning of 2017, 8 percent of the videos flagged and removed for violent extremism were taken down with fewer than 10 views. We introduced machine learning flagging in June 2017. Now more than half of the videos we remove for violent extremism have fewer than 10 views.
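To put those figures in proportion, here is a quick back-of-the-envelope check using only the numbers quoted above (the totals are approximate):

# Rough arithmetic on the figures in the report excerpt above (Oct-Dec 2017).
total_removed = 8_000_000         # videos removed in the quarter
machine_flagged = 6_700_000       # removals first flagged by automated systems
zero_view_share = 0.76            # share of machine-flagged removals with no views

print(f"machine-flagged share of all removals: {machine_flagged / total_removed:.0%}")  # ~84%
print(f"removed before a single view: {machine_flagged * zero_view_share:,.0f}")        # ~5,092,000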

The Value of People + Machines -
Deploying machine learning actually means more people reviewing content, not fewer. Our systems rely on human review to assess whether content violates our policies. You can learn more about our flagging and human review process in this video:

Last year we committed to bringing the total number of people working to address violative content to 10,000 across Google by the end of 2018. At YouTube, we've staffed the majority of additional roles needed to reach our contribution to meeting that goal. We’ve also hired full-time specialists with expertise in violent extremism, counterterrorism, and human rights, and we’ve expanded regional expert teams.

We continue to invest in the network of over 150 academics, government partners, and NGOs who bring valuable expertise to our enforcement systems, like the International Center for the Study of Radicalization at King’s College London, Anti-Defamation League, and Family Online Safety Institute. This includes adding more child safety focused partners from around the globe, like Childline South Africa, ECPAT Indonesia, and South Korea’s Parents’ Union on Net.

We are committed to making sure that YouTube remains a vibrant community with strong systems to remove violative content and we look forward to providing you with more information on how those systems are performing and improving over time.

-- The YouTube Team
 
02.05.2018 - Google Co-Founder Cautions About Dark Side of Artificial Intelligence

Co-founder of Google Sergey Brin raised concerns over the issue of ethics in AI in his annual founder’s letter.

As part of his letter to shareholders, the president of Google's parent company Alphabet cautioned that great power brings great responsibility, and that humans should prepare to address hazards coming with the current "technological renaissance".

Quoting the opening of Charles Dickens' novel "A Tale of Two Cities" ("It was the best of times, it was the worst of times"), Brin said that although technology has touched "nearly every segment of modern society" and it is hard to overestimate the usefulness of AI, the current era of "great inspiration" also needs "tremendous thoughtfulness and responsibility".

"Such powerful tools also bring with them new questions and responsibilities. How will they affect employment across different sectors? How can we understand what they are doing under the hood? What about measures of fairness? How might they manipulate people? Are they safe?", Brin pointed out.

He praised the benefits of AI, which not only boosts things like image recognition, language translation and self-driving cars, but can also provide solutions that help diagnose diseases or discover new planetary systems.

Still, people should take into consideration the potential dangers of very smart tech, including unemployment or the technologies being misused to spread fake news and propaganda, he said.

Brin noted that he remains optimistic about artificial intelligence and technology but urged everyone to "tread with deep responsibility, care, and humility," as we march towards the future.


16.04.2018 - Researchers Create New Type of Qubit for Quantum Computers

Scientists from the National University of Science and Technology MISiS (NUST MISiS) and the Russian Quantum Center have recently created a new qubit based on using a solid superconducting nanowire, rather than the traditional Josephson junctions, which are usually obtained by introducing a barrier between two superconductors.

The qubit has been created in cooperation with a team of researchers from the Moscow Institute of Physics and Technology, the Skolkovo Institute of Science and Technology, the University of London and the National Physical Laboratory in Teddington (UK), the Karlsruhe University and the Institute of Photonic Technology (Germany).

Although a universal quantum computer has not been built yet, the principle of calculation it would be based on already allows researchers to tackle highly complex problems. For example, some laboratories are using qubits for modeling chemical compounds and materials and recreating the mechanisms of the photosynthesis process. This is why it is so important to perfect the key elements of quantum computers, such as the main computational cells — qubits — as soon as possible.

There are several approaches to creating qubits. For example, researchers have created qubits operating in the optical range. But they are much harder to scale up, unlike superconducting qubits, which operate in the radio-frequency spectrum and are based on so-called Josephson junctions. These junctions are formed by introducing a barrier between two superconductors — a section of dielectric through which electrons tunnel.

The new qubit is based on the effect of coherent quantum phase slips — controlled recurring interruption and restoration of superconductivity in a superfine (about 4 nanometers thick) nanowire, which normally has a high level of resistance.
The project's lead researcher, Oleg Astafiev (head of the Artificial Quantum System Lab at the Moscow Institute of Physics and Technology and a professor at the University of London and the National Physical Laboratory in Teddington), was the first to observe this predicted effect experimentally. That pioneering work was published in Nature in 2012.
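As background (these are textbook relations, not taken from the article): a Josephson junction is usually summarized by two equations, and a coherent quantum phase-slip element is often described as their exact charge-flux dual, with the charge passed along the wire playing the role of the phase:

% Josephson relations for a junction with phase difference \varphi
I = I_c \sin\varphi, \qquad V = \frac{\Phi_0}{2\pi}\,\frac{d\varphi}{dt}, \qquad \Phi_0 = \frac{h}{2e}

% Commonly quoted dual relation for a coherent quantum phase-slip wire,
% with q the charge transported and 2e taking the place of the flux quantum \Phi_0
V = V_c \sin\!\left(\frac{2\pi q}{2e}\right)

Roughly speaking, in this dual picture the tunneling of magnetic flux quanta across the wire plays the role that the tunneling of Cooper pairs plays in an ordinary junction.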

Alexei Ustinov, one of the co-authors of the new research (head of the Russian Quantum Center research team, head of the Superconducting Metamaterials laboratory at NUST MISiS and a professor at the Karlsruhe Institute of Technology), explained that the scientists also managed to create a new type of superconducting device, very similar to a SQUID (superconducting quantum interference device).

Basically, a SQUID is an ultrasensitive magnetometer for measuring weak magnetic fields, built around Josephson junctions. But instead of using a magnetic field, the researchers created interference by applying an electric field that alternates the electric charge on the section between two nanowires. In the device, these wires serve as Josephson junctions, except that they do not require fabricated barriers and can be manufactured from a single layer of superconducting material.

During this research, the scientists managed to prove that this system can function as a charge interferometer, Ustinov said. "If we divide the wire into two sections and create a thickening in the center, then by alternating the charge on this thickening we can achieve periodic modulation of the tunneling of magnetic flux quanta through the wire, which is what was observed during the research," the scientist explained.

This is a crucial result, proving that a controlled and coherent effect has been obtained, one that can be used to create a new generation of qubits.

SQUID technologies are already being implemented in the production of a range of medical scanning devices, such as magnetocardiographs and magnetoencephalographs, as well as nuclear magnetic resonance detection devices. They also found an application in geophysical and paleogeologic surveying.

Scientists still have to address a range of fundamental problems related to how the new qubits can be applied, Professor Ustinov notes. But it is already clear that they will be studying new qubits with similar or maybe even greater functionality. It is also important to mention that the new qubits are much easier to produce. Perhaps the discovered principle will serve as a basis for producing a range of elements for superconducting electronics.


02.05.2018 - Twitter Data Scandal: 'Biggest Threat for Us Is Being Brainwashed' - Lawyer

Twitter has reportedly sold a massive amount of data to a researcher behind the Cambridge Analytica uproar. According to reports, the information sold was based on tweets made from December 2014 to April 2015.

Radio Sputnik discussed the issue with Yair Cohen, a social media lawyer with Cohen Davis Solicitors.

Sputnik: Cambridge Analytica researcher Alexander Kogan has already called massive data harvesting a pretty usual thing. How widespread is the practice now and what are the legal mechanisms behind it?

Yair Cohen: It seems to be a widespread practice among some social media companies, because what we are seeing is that their business model is not, as we previously thought, based on selling advertising, but rather on selling huge amounts of data. The fact that the data being sold is not, strictly speaking, private is neither here nor there.
We know that email harvesting, for example, is being outlawed in many countries around the world, even though email addresses are in the public domain. It’s not that the information is private or public that makes the difference, but it’s rather putting that data together, it’s the accumulation of data, categorizing the data, putting it into a very specific use. You need some very sophisticated software in order to do that.

I think that is where the real risk is, because once you can accumulate a huge amount of data and categorize it into groups, then you can use your influence or sell that influence to all sorts of other organizations, from political parties, as we know, to commercial organizations, in order to effectively brainwash those users.

Sputnik: I’ve been getting many emails from companies that are abiding by the new UK law which has to do with data protection of personal information, but the average man on the street doesn’t understand what the implications are, what they are signing and getting involved in. How can they go about educating the general public?

Yair Cohen: You are absolutely correct, and there is an argument, which I clearly support, that social media companies have deliberately made reading their terms and conditions extremely difficult, precisely to make it very hard for people to spend the time to read them. So I think one of the things we need to do is make those terms and conditions very simple: tell people what you do, tell people what is likely to happen with the data. I think that would be a starting point, and people do get suspicious now. People are starting to think: do I really want to share that piece of information?

On the other hand, some people are saying: look, you know, everything is out there anyway, what do I care. But I think that ultimately the biggest threat for us is being brainwashed. It is having someone, unbeknownst to us, changing the way we think about things, changing the way we feel about things, and doing this on an industrial scale without us having the ability to object to it.

You have the ability to object to watching advertisements on television, for example: you can just switch it off, so you don't have to expose yourself to that form of brainwashing. But when it comes to fake news, that is very difficult to do.


01.05.2018 - UK Parl't Invites Zuckerberg to Give Evidence on Cambridge Analytica Case – MP

The UK House of Commons Digital, Culture, Media and Sport Committee is inviting Facebook CEO Mark Zuckerberg to give evidence in the Cambridge Analytica case by May 24, and it may issue a formal summons for him to appear when he is next in the United Kingdom if he declines the request, the head of the committee, Damian Collins, said Tuesday.

In March, media reported that Cambridge Analytica used up to 87 million Facebook users' data obtained illegally from a third-party application in order to better target political ads. The committee has invited Zuckerberg to give evidence on the issue several times.

"Following reports that he will be giving evidence to the European Parliament in May, we would like Mr Zuckerberg to come to London during his European trip. We would like the session here to take place by May 24 … We hope that he will respond positively to our request, but if not the Committee will resolve to issue a formal summons for him to appear when he is next in the UK," Collins said in a letter to Facebook.

Last week, Mike Schroepfer, the social media company's chief technology officer (CTO), sent a written submission to the Digital, Culture, Media and Sport Committee of the UK House of Commons, which was dedicated to the Cambridge Analytica scandal.

"Mr. Schroepfer failed to answer fully on nearly 40 separate points. This is especially disappointing to the Committee considering that in his testimony to Congress Mark Zuckerberg also failed to give convincing answers to some questions," Collins said.

The letter also includes a list of 39 questions, which the Committee wants to be answered.

Since the data breach scandal erupted, Facebook has been facing lawsuits and investigations in a number of countries. In April, Zuckerberg testified to the US Congress about the data breach issue.
 
02.05.2018 - Cambridge Analytica Closing Operations After Facebook Data Controversy - Reports

The company took the decision to close its doors after losing scores of clients and with growing legal fees pertaining to the Facebook investigation, the Wall Street Journal reported, citing a source familiar with the matter.

Cambridge Analytica is closing its US offices in the wake of the disclosure of its misuse of Facebook data, media reported. The company has filed for insolvency in the UK and will start bankruptcy proceedings in the US soon.

All employees have been instructed to promptly turn in their computers and return their keycards, with the firm announcing that it will officially close its doors on Wednesday, the report said.

Explaining the decision to close the offices, the chairman of the SCL Group, Cambridge Analytica's parent company, Julian Wheatland, reportedly cited the ongoing investigations into Cambridge Analytica's massive data harvesting scandal, damage to the company's reputation and loss of clients.

Facebook has acknowledged that Cambridge Analytica harvested the personal information of up to 87 million users of the social media site without their permission.

Since the data breach scandal erupted, Facebook has been facing lawsuits and investigations in a number of countries. Earlier in April, Zuckerberg testified to the US Congress about the data breach issue.

Cambridge Analytica is a private UK-based analytical company that uses in-depth data analysis technology to develop strategic communication during election campaigns on the Internet. The company has worked on various political campaigns, using the data to improve a mechanism that would predict and influence the behavior of voters.
 
04.05.2018 - Scammers Take Advantage of Google Maps Redirection Flaw

Security company Sophos has warned that Google Maps users are at risk of being tricked by scammers using an open redirect vulnerability.

According to Sophos researcher Mark Stockley, attackers can exploit a flaw in the mapping software to lure users to shady websites.

Security experts say links to dodgy sites are being disguised to look like safe shortcuts to Google Maps. Clicking on such a link, people expect to be sent to Google Maps but instead get redirected to a malicious page offering to buy, for instance, diet pills.

Linking directly to a scam site would cause Google's automated checks to refuse the link, so cybercriminals bypass the URL shortening service's tests by using Google Maps as a legitimate-looking middleman, which then redirects visitors to a completely different website from the one they expected.

"The crooks have turned a service designed for shortening and sharing Google Maps URLs into an impromptu redirection service for sharing whatever the heck they like, thanks to an open redirection vulnerability in the maps.app.goo.gl service", Stockley said.

Last month, Google announced plans to shut down the goo.gl URL shortening service and replace it with Firebase Dynamic Links. But until that happens, scammers can still take full advantage of short links that go through Google Maps.


04.05.2018 - Twitter Urges Users to Change Passwords After They Were Stored Unmasked

Twitter said in a statement on Thursday that it has advised users to change the passwords to their accounts after a glitch caused passwords to be stored unmasked in the company's internal log.

"We recently found a bug that stored passwords unmasked in an internal log," Twitter said. "We fixed the bug and have no indication of a breach or misuse by anyone. As a precaution, consider changing your password on all services where you’ve used this password."

Recently, it was reported that Twitter sold massive data access to Aleksandr Kogan, the man behind the sale of data on 87 million Facebook users to the infamous Cambridge Analytica firm.

In March, media reported that personal information of about 50 million Facebook users had been harvested by the Cambridge Analytica consultancy firm without their permission through an application.


04.05.2018 - Cambridge Analytica 'Did What Wasn't Allowed by Terms of Service' - Specialist

The political consultancy firm Cambridge Analytica has shut down amid a scandal over data it mined from Facebook and used in political campaigns, namely the 2016 US election and Brexit referendum. Radio Sputnik discussed the situation with Yul Bahat, a member of Cyan’s cybersecurity advisory network.

Cambridge Analytica has published a statement in which it said it had been “vilified” for legal activities.

When asked whether this particular situation is a witch hunt, whether Cambridge Analytica has some questions to answer about the way it obtained and utilized information, and whether he personally was for or against the company, Yul Bahat said that the information it used was publicly available.

“The question is how they got it: got it by lying; they got it by misleading; they got it by doing something which was specifically not allowed by certain terms of services. So, I think it's a lot more about how they did it and what their motives were than the information that they actually got. I think they definitely have some things to answer for,” Bahat noted.

Investigators said that they will pursue Cambridge Analytica’s staff and directors despite the firm’s closure. Bahat believes that investigators should go after the company’s directors and the CEO, the people who actually were at the highest levels of the firm.

When asked about the mechanism of acquiring data from social media users for political campaigns, and how this data is used in those campaigns, Yul Bahat said that in 2016 Cambridge Analytica created a Facebook app, which was actually a personality test.

“You fill out some form, you answer some questions and they tell you what type of personality you have based on actual scientific research. Now the quiz itself was pretty harmless, that didn’t make any change. What people didn’t know, what they didn’t notice when they signed the fine print is that they actually allowed Cambridge Analytica full access to all their Facebook information including their list of friends,” Yul Bahat noted.

Facebook has acknowledged that Cambridge Analytica acquired personal data from some 87 million Facebook profiles without users’ permission.

In April, Facebook CEO Mark Zuckerberg went before a Congressional panel in Washington to testify about the data breach issue.
 

SQUID technologies are already being implemented ...

Morpheus: Did Zion send a warning?
Dozer: No, another ship.. Squiddies sweeping in quick.
Neo: Squiddy?
Trinity: A sentinel. A killing machine designed for one thing.

Sentinel
 
09.05.2018 - Facebook Eyes Major Reorganization Following Data Breach Scandal - Reports

Following scandals around Russia-sponsored content and the breach of personal data, Facebook plans to reorganize the company on a large scale into three main groups, local media reported on Wednesday.

According to the Recode technology news portal, the first group will deal with Facebook apps, namely Instagram, Messenger and WhatsApp, and will be headed by Chief Product Officer Chris Cox.

The second group under the direction of Chief Technology Officer Mike Schroepfer will be responsible for new platforms and infrastructure, in particular, augmented reality, virtual reality and artificial intelligence, the news portal added.

Finally, the third group led by Vice President of Growth Javier Olivan will focus on advertisement, analytics, development and product management.

The reports about the company's shake-up were later confirmed by a Facebook representative to another news portal, Business Insider.

Facebook has recently become embroiled in a personal data breach scandal. In late March, media reported that the personal information of about 50 million Facebook users had been harvested by Cambridge Analytica without their consent during the 2016 US presidential campaign. While reportedly working for multiple political campaigns, the firm gathered data from these millions of social media accounts to develop a mechanism that would predict and influence the behavior of voters.

Last year, Facebook became involved in an inquiry into alleged Russian interference in the 2016 Brexit referendum. In December, Facebook said that the St. Petersburg-based Internet Research Agency, which is suspected of interfering in the 2016 US election, spent only $0.97 on referendum-related advertisements delivered to audiences in the United Kingdom.

Russia has repeatedly denied interfering in foreign elections and domestic affairs, saying that such actions go against the principles and conduct of Russian foreign policy.

Google CEO Sundar Pichai has announced that the company is relaunching its news service, which helps users find information from credible sources, as part of its fight against the spread of fake news, local media reported on Wednesday.


According to the Business Insider news portal, the Google News service will use artificial intelligence (AI) to highlight articles a user might be interested in and to help them find detailed information on particular subjects.

"We are using AI to bring forward the best of what journalism has to offer… We want to give users quality sources that they trust," Pichai said at Google developer conference on Tuesday as quoted by the news outlet.

Following the launch of the updated service, every user will have a personalized news channel based on the personal information available to Google, Business Insider added.

The relaunched news service is expected to operate in 127 countries.
 
10.05.2018 - US Senate Rushes to Restore Net Neutrality as FCC Announces Death Date

Net neutrality is set to come to an end in the United States on June 11, the Federal Communications Commission (FCC) announced Thursday at an open meeting.

Web users decried the move when the FCC considered it late last year, accusing telecommunications companies like AT&T, Verizon and Comcast of trying to monopolize the internet and imposing a pay-to-play system on the web, which critics argued should instead operate as an open marketplace of ideas.


Twitter Public Policy (@Policy), 12:23 PM - May 9, 2018:
"Twitter has long supported strong #NetNeutrality rules and an open Internet for all. Twitter supports the Congressional Review Act resolution and will continue to work with lawmakers to ensure an open Internet that fosters job creation, innovation, and free expression."

As netizens' objections mounted online, protesters descended on Washington, DC, prior to the December net neutrality vote. Activists demonstrated outside the White House and at FCC and other meetings in the nation's capital, as well as at more than 600 Verizon stores across the US — to no avail.

The FCC voted on December 14, 2017, to rescind the 2015 Open Internet Order, which safeguarded internet users from the whims of large telecommunication companies, which could otherwise play favorites with some websites and throttle or outright block others. The vote was 3-2 along party lines, with Democrats dissenting in favor of keeping the 2015 rules.

Specifically, the 2015 order prevented telecoms from blocking or slowing internet access or websites, and banned paid prioritization. FCC Chairman Ajit Pai argued that the rules should be removed because the FCC had overstepped its authority by imposing the restrictions on telecoms in the first place.


Lee Camp [Redacted] (@LeeCamp), 4:51 PM - May 9, 2018:
"As you know, the FCC's corrupt stooges decided to destroy #NetNeutrality in order help large corporations further own + censor the infrastructure of the internet. Now the Senate may try to overrule that decision. Contact your Senator. https://www.battleforthenet.com/"

Senate Democrats introduced legislation Wednesday that would override the FCC and reinstate the 2015 regulation. Lawmakers invoked the Congressional Review Act to force a vote, which lets Congress use an expedited legislative process to review new federal regulations. If Congress can pass a joint resolution on net neutrality with a simple majority in both the House of Representatives and the Senate, the measure would then get passed to US President Donald Trump for him to sign or veto.

The Senate is expected to vote next week on the bill. So far, 48 of the 100 Senators are on board and activists are now looking to moderate Republicans to help swing the vote.
 
14 May, 2018 - Ultrasonic Attacks Can Trigger Alexa and Siri With Hidden Commands, Raising Serious Security Risks (Stock Board Asset)

Over the last two years, academic researchers have identified various methods of transmitting hidden commands, undetectable by the human ear, to Apple's Siri, Amazon's Alexa, and Google's Assistant.

According to a new report from The New York Times, scientific researchers have been able "to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites." This could, perhaps, allow cybercriminals to unlock smart-home doors, control a Tesla car via its app, access users' online bank accounts, load malicious browser-based cryptocurrency mining websites, or access all sorts of personal information.

In 2017, Statista projected around 223 million people in the U.S. would be using a smartphone, which accounts for roughly 84 percent of all mobile users. Of these 223 million smartphone users, around 108 million Americans are using the Android operating system, and some 90 million are using Apple's iOS. A new Gallup poll showed that 22 percent of Americans are actively using Amazon Echo or Google Assistant in their homes.

With much of the country using artificial intelligence systems on smartphones and smart speakers, a new research paper from the University of California, Berkeley indicates that inaudible commands can be embedded "directly into recordings of music or spoken text," said The New York Times.

For instance, a millennial could be listening to their favorite song, 'The Middle' by Zedd, Maren Morris & Grey. Embedded in the audio file could be several inaudible commands that trigger Apple's Siri or Amazon's Alexa to complete a task the user never requested, such as buying merchandise from the music performer on Amazon.
“We wanted to see if we could make it even more stealthy,” said Nicholas Carlini, a fifth-year Ph.D. student in computer security at U.C. Berkeley and one of the paper’s authors.
For now, Carlini said, this is only an academic experiment, but it is likely only a matter of time before cybercriminals figure out the technology. "My assumption is that the malicious people already employ people to do what I do," he added.

The New York Times said Amazon "does not disclose specific security measures" to thwart ultrasonic attacks on its devices, but the company has taken precautions to protect users from unauthorized human use. Google told The New York Times that security development is ongoing and that it has developed features to mitigate undetectable audio commands.

Both companies’ [Amazon and Google] assistants employ voice recognition technology to prevent devices from acting on certain commands unless they recognize the user’s voice.

Apple said its smart speaker, HomePod, is designed to prevent commands from doing things like unlocking doors, and it noted that iPhones and iPads must be unlocked before Siri will act on commands that access sensitive data or open apps and websites, among other measures.​
Yet many people leave their smartphones unlocked, and, at least for now, voice recognition systems are notoriously easy to fool.

"There is already a history of smart devices being exploited for commercial gains through spoken commands," said The New York Times.

Last year, there were several examples of companies and even cartoons taking advantage of weaknesses in voice recognition systems, ranging from Burger King's Google Home commercial to South Park's episode featuring Alexa.

There are currently no American laws against broadcasting subliminal or ultrasonic messages to humans, let alone to the artificial intelligence systems on smartphones and smart speakers. The Federal Communications Commission (FCC) warns against the practice, calling it "counter to the public interest," and the Television Code of the National Association of Broadcasters bans "transmitting messages below the threshold of normal awareness." However, The New York Times points out that "neither says anything about subliminal stimuli for smart devices."

Recently, the ultrasonic attack technology showed up in the hands of the Chinese. Researchers at Princeton University and China’s Zhejiang University conducted several experiments showing that inaudible commands can, in fact, trigger voice-recognition systems in an iPhone.
“The technique, which the Chinese researchers called DolphinAttack, can instruct smart devices to visit malicious websites, initiate phone calls, take a picture or send text messages. While DolphinAttack has its limitations — the transmitter must be close to the receiving device — experts warned that more powerful ultrasonic systems were possible,” said The New York Times.​
DolphinAttack could inject covert voice commands into 7 state-of-the-art speech recognition systems (e.g., Siri, Alexa) to activate the always-on system and carry out various attacks, including activating Siri to initiate a FaceTime call on an iPhone, activating Google Now to switch the phone to airplane mode, and even manipulating the navigation system in an Audi automobile. (Source: Guoming Zhang)
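Conceptually, the researchers describe amplitude-modulating an ordinary voice command onto an ultrasonic carrier; the non-linear response of a device's microphone then recovers the audible command even though a human standing nearby hears nothing. A toy numpy sketch of just the modulation step (illustration only; it interacts with no device, and real attacks additionally depend on specialized transmitters and particular microphone hardware):

import numpy as np

fs = 192_000                   # sample rate high enough to represent a 25 kHz carrier
t = np.arange(0, 1.0, 1 / fs)  # one second of signal

# Stand-in for a recorded voice command: a band-limited 1 kHz tone.
command = 0.5 * np.sin(2 * np.pi * 1_000 * t)

carrier_hz = 25_000            # above the ~20 kHz limit of human hearing
carrier = np.cos(2 * np.pi * carrier_hz * t)

# Classic amplitude modulation: the command's spectrum is shifted up around the
# carrier frequency, so the transmitted signal has no energy in the audible band.
modulated = (1 + command) * carrier

print(modulated.shape)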

DolphinAttack Demonstration Video
With the number of smart devices in consumers' pockets and homes on the rise, it is only a matter of time before the technology falls into the wrong hands and is unleashed against them. Imagine cybercriminals accessing your Audi or Tesla via ultrasonic attacks against the voice recognition technology on a smart device. Maybe these so-called smart devices are not so smart after all, as their dangers are starting to be realized. Millennials will soon be panicking.
 
08.06.2018 - Facebook Admits to Massive Posting Malfunction

A glitch in Facebook privacy settings may have led 14 million users to unknowingly share personal posts with a wider audience than intended, the network’s chief privacy officer said.

"We recently found a bug that automatically suggested posting publicly when some people were creating their Facebook posts," Erin Egan said in a statement seen by CNN.

The company said the bug had affected posts made between May 18 and May 22, when Facebook was testing a new feature.

Users are given a range of options for the audience they want to share a post with, and that choice is then applied automatically to all subsequent posts if left unchanged.
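The mechanism described above is a "sticky default": each new post is supposed to inherit the audience chosen for the previous one. A tiny sketch of the intended behavior versus the reported failure mode (hypothetical names, not Facebook's code):

from typing import Optional

DEFAULT_AUDIENCE = "friends"

def suggested_audience(previous_choice: Optional[str], buggy: bool = False) -> str:
    """Return the audience to pre-select for a new post."""
    if buggy:
        # The reported failure mode: the composer suggested 'public'
        # regardless of what the user picked last time.
        return "public"
    # Intended behavior: reuse the user's last choice, falling back to a conservative default.
    return previous_choice or DEFAULT_AUDIENCE

print(suggested_audience("friends"))              # friends
print(suggested_audience("friends", buggy=True))  # public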

Affected users will begin receiving notifications to review the posts they made during that period of time to see if the audience was selected correctly.

This is the latest in a string of mishaps at the social media giant after it came under fire over a scandal involving a UK consultancy, Cambridge Analytica, which illegally harvested private data to use it for political profiling.


07.06.2018 - Cambridge Analytica Ex-Official Denies Channeling Money to WikiLeaks - Reports

Brittany Kaiser, the former business development director at the UK consultancy Cambridge Analytica, has denied reports that she channeled donations from third parties to WikiLeaks, after the media revealed that she visited the organization's founder, Julian Assange, last year, The Guardian reported on Thursday.

Earlier in the week, The Guardian said that it had obtained the visitor logs from Ecuador's Embassy in London, where Assange is currently residing, which reveal that the former Cambridge Analytica executive visited Assange on February 17 last year and discussed the US elections with him. The paper also claimed that Kaiser allegedly received money in the form of "gifts and payments" and channeled it as cryptocurrency to WikiLeaks, which she had called her "favorite charity."

According to the Financial Times, Kaiser confirmed the visit but denied that she had discussed any business deals with WikiLeaks or transferred money from third parties to the whistleblowing organization. The visit took place with no particular agenda and long after the election, Kaiser claimed, as cited by the media outlet.

The executive said that the money referred to in The Guardian investigation was a $200 donation she had made to WikiLeaks in connection with research she did while studying human rights in law school.

WikiLeaks is facing criminal and congressional investigations in relation to the hacking of e-mails of the Democratic Party and its former presidential candidate Hillary Clinton during the 2016 US presidential election.

Cambridge Analytica, which is also reported to have illegally harvested Facebook users' data, is being probed in the United States by the team of Special Counsel Robert Mueller over the alleged collusion between the team of US President Donald Trump and Russian businessmen and political figures. The firm is known for working on Trump's campaign during the 2016 presidential election, which he won.

The ties between the two organizations became the subject of an inquiry when media reports emerged in October saying that Cambridge Analytica CEO Alexander Nix had contacted Assange and offered him help in publishing the hacked e-mails. Media reports suggested that the e-mails had been of interest to the Trump campaign. Moreover, the US Republican Party, which had nominated Trump as its presidential candidate, reportedly tried to hire hackers to obtain these e-mails.

Though Assange confirmed his contacts with Nix and said that he had turned down Nix's offer, in April Nix told the UK lower house's Digital, Culture, Media and Sport Committee that no Cambridge Analytica employee had ever contacted WikiLeaks. Nix also argued that any interactions were held via an agency representing the whistleblowing website.


06.06.2018 - Facebook Confirms Sharing Users' Data With at Least 4 Chinese Firms - Statement

Facebook has admitted that it has data-sharing agreements with at least four Chinese electronics firms, including Huawei, one of the world's largest smartphone makers, which has been flagged as a security threat by US intelligence.

"Huawei is the third largest mobile manufacturer globally and its devices are used by people all around the world, including in the United States. Facebook along with many other US tech companies have worked with them and other Chinese manufacturers to integrate their services onto these phones," Facebook Vice President of mobile partnerships Francisco Varela said in a statement, as quoted by Politico.

The New York Times reported on Monday that Facebook established partnerships with a number of mobile device manufacturers over the past decade that granted them access to large amounts of user data. On Tuesday, US Senators John Thune and Bill Nelson said that Facebook CEO Mark Zuckerberg had been asked to provide additional information regarding his company's handling of private user information in the wake of the new media report on the practice.

Varela noted that Facebook kept the use of the information under its control and approved it, and that the users' data stayed on the devices themselves, not on Huawei's servers.

Varela added that Facebook had similar agreements with China's Lenovo, OPPO and TCL.

Facebook has faced widespread outrage since it emerged in March that the personal data of about 50 million of its users had been harvested by the Cambridge Analytica consultancy firm without their permission through a special app. The information was allegedly used to help target political advertising. In early April, Facebook estimated that the number of users affected by the data leak was around 87 million.

25.05.2018 - Facebook May Be Fined Over Violating Russia's User Data Storage Policy

Russia’s communications regulator Roskomnadzor will look into Facebook later this year to determine whether it has complied with local data storage rules, the agency’s head said.

"We will inspect Facebook this year," Alexander Zharov told Russia's Izvestia newspaper. "Depending on the findings, we may fine the company or request legal documents about their plans to comply with Russian laws."

The watchdog is concerned that the US-based social media giant is not storing Russian users' data locally or acting fast enough to delete illegal content. There are no plans yet to block the website in Russia, Zharov stressed.

The warning comes after Roskomnadzor last month began blocking the popular messaging app Telegram over its refusal to hand over encryption keys. Zharov acknowledged that this has been a lengthy process, with the agency working on new methods to cut the app off, so far with mixed results.
 
09.06.2018 - Facebook Reportedly Kept Sharing Private Data With Firms After Promising to Stop

The Facebook administration has provided information about its users to a number of companies since 2015, despite the social network saying that it had stopped doing so, the Wall Street Journal reports.

According to the Wall Street Journal, Facebook had agreements with a select group of companies, some of which had special access to user information. As the publication emphasizes, such agreements remained in force well after 2015, when the social network said it had completely cut off developers' access to this information.

The article notes that the new information provided by its sources is corroborated by court documents the journalists reviewed.

Information on the existence of these agreements, they claim, had never previously been made public. Among the Facebook partners were the financial company RBC Capital Markets and the Nissan Motor Company.

Facebook, in particular, allowed them to receive information about users' phones, as well as statistics from which it was possible to determine how close users' relationships with each other were.

The publication notes that many of the agreements in question were separate from the agreements that Facebook had with at least 60 major phone manufacturers and makers of other electronic devices.
 
10.06.2018 - Watching the Watchers: Facebook Hires 'Credibility Specialists' Due to Fake News

Facebook is looking to hire people to help remove the fake news that its users post to its website.

As of Thursday, Facebook had two positions for "news credibility specialists" on its job site, as reported by Gizmodo.

"We're seeking individuals with a passion for journalism, who believe in Facebook's mission of making the world more connected," one of the two listings gushed.

"As a member of the team, you'll be tasked with developing a deep expertise in Facebook's News Credibility Program. You'll be conducting investigations against predefined policies," the listing continued in the same vein.

The two job listings have since been removed from the social media giant's career page, although whether there is a connection between the deletion and persistent accusations that Facebook passes along false information clothed in authentic-seeming reports is unclear.

"We're working to effectively identify and differentiate news and news sources across our platform," company spokesman Adam Isserlis asserted to Business Insider.

In April, Facebook announced that it would allow users to appeal decisions to remove questionable content from the social media platform.

"For the first time we're giving you the right to appeal our decisions on individual posts so you can ask for a second opinion when you think we've made a mistake," Facebook global policy management exec Monika Bickert declared in an April blog post.

The same month, Facebook announced that it would use machine learning algorithms to spot misinformation on the platform.

"We will start using updated machine learning to detect more potential hoaxes to send to third-party fact checkers," the social media giant stated.

"If an article has been reviewed by fact checkers, we may show the fact checking stories below the original post. In addition to seeing which stories are disputed by third-party fact checkers, people want more context to make informed decisions about what they read and share. We will continue testing updates to Related Articles and other ongoing News Feed efforts to show less false news on Facebook and provide people context if they see false news."

Facebook continues to be in the crosshairs of lawmakers and privacy advocates around the world after it was revealed in March that millions of users saw their personal information shared with former White House adviser Steve Bannon's right-wing political disinformation firm Cambridge Analytica.

Cambridge Analytica — now rebranded as Emerdata Limited — reportedly swept up the personal data of Facebook users through the use of a personality app developed by Alexander Kogan, a Cambridge University researcher, and went on to use the information to predict and influence the actions of US voters.
 
YouTube will display links to Wikipedia and other "fact-based" sites alongside videos about "conspiracy theories".

June 12, 2018 - YouTube Will Fight “Conspiracy” Videos Using Wikipedia
https://vigilantcitizen.com/latestnews/youtube-will-fight-conspiracy-videos-using-wikipedia/


After demonetizing thousands of channels (many of which were “truther” and “conspiracy”-related), YouTube is now taking further steps to fight undesirable videos. YouTube CEO Susan Wojcicki announced this week that the video platform will soon begin displaying links to “fact-based” sites alongside conspiracy videos. Called “information cues”, these snippets of information will link to “reputable” articles in order to combat “hoaxes” and “fake news” stories (gotta use lots of quotation marks to highlight mass media’s biased vocabulary).

Here’s how it will work:

If you search and click on a conspiracy theory video about, say, chemtrails, YouTube will now link to a Wikipedia page that debunks the hoax alongside the video. A video calling into question whether humans have ever landed on the moon might be accompanied by the official Wikipedia page about the Apollo Moon landing in 1969. Wojcicki says the feature will only include conspiracy theories right now that have “significant debate” on the platform. “Our goal is to start with a list of internet conspiracies listed on the internet where there is a lot of active discussion on YouTube,” Wojcicki said at SXSW.

– Wired, YouTube Will Link Directly to Wikipedia to Fight Conspiracy Theories

The announcement comes shortly after YouTube was blamed for the distribution of “conspiracy theory” videos concerning the Florida shooting. In the wake of the event, the top trending video on YouTube was about crisis actors (notably David Hogg) appearing on camera. The video was quickly removed from the platform.

Using Wikipedia, a text-based, volunteer-run encyclopedia site that can be edited by anyone, is a rather perplexing choice. While college and university students are prohibited from using the site as an information source due to reliability issues, Wikipedia will be used as a "fact-checking" site by YouTube. Maybe that's because most Wikipedia articles completely deny most conspiracies. Although not officially announced, expect to see Snopes as another "fact-checking" site … despite the fact that the site has a clear pro-elite agenda.

While YouTube’s new measure still allows the viewing of conspiracy videos, elite-owned sites such as Wired are already pushing for the outright banning of conspiracy theories on the platform.

YouTube has also still yet to decide and implement clear rules for when uploading conspiracy theory content violates its Community Guidelines. Nothing in the rules explicitly prevents creators from publishing videos featuring conspiracy theories or misleading information, but lately YouTube has been cracking down on accounts that spread hoaxes anyway.
– Ibid.
Needless to say, media giants have spent the past months heading down an extremely dangerous slippery slope, where the line between "truth" and "conspiracy", "facts" and "fake news" can be arbitrarily determined by outside agents. While journalists used to be champions of free speech and information, they are now spineless hacks cheering for the coming of an Orwellian thought police.
 
Another strike(out) for Facebook?
Facebook bans Alzheimer's, Dementia prevention education videos


Thursday, June 21, 2018, by Mike Adams


(Natural News) In yet another example of tech giants protecting Big Pharma’s evil monopolies by censoring content that helps people heal, Facebook has banned advertising of an upcoming Alzheimer’s and Dementia summit.
According to Facebook, offering educational videos for improved health is not a “business model” that Facebook tolerates. Of course, promoting toxic pharmaceuticals and antidepressant drugs to children is perfectly acceptable to Facebook, Google, YouTube and other tech giants that are now almost entirely driven by Big Pharma profits. But teaching people how to avoid toxic prescription medications and protect their health using nutrition and healthy living is utterly disallowed.
This is because the monopolistic tech giants have all thrown in with Big Pharma, and they are abusing and exploiting their positions of power to suppress public education about natural health and prevention. After all, a person who avoids a chronic disease is lost revenue for the drug companies and even the tech giants themselves, as they earn money from drug advertising.
“We don’t support ads for your business model”
The organizers of the Alzheimer’s and Dementia Summit, which begins July 23rd, shared the following screen grab with Natural News, where Facebook says, “We don’t support ads for your business model. Please consider this decision final.”
[Screenshot: Facebook-bans-Dementia.jpg]

This is yet more proof that Facebook, Google and YouTube are colluding with Big Pharma to censor not just free speech, but speech that’s important for human health, longevity and quality of life.
This is exactly why those of us who believe in free speech and healthy living are forced to build our own platforms like REAL.video, launching in July, which will feature thousands of video channels discussing natural health, self-reliance, botanical medicine, CBD oil and other topics that are now increasingly banned by Google and Facebook. (You can request your own video channel now at REAL.video.)
Isn’t it time the U.S. government charged Facebook, Google, Big Pharma and the corporate-run media with RICO Act racketeering and fraud?
To watch the banned Alzheimer's and Dementia Summit, register at this link.
 