Pokemon Go.

Woodsman said:
...
Who needs subliminal 25th frame manipulations when the primary message there for all to see is already an expression of true darkness? What other programming is necessary after that?

I can't argue with that--just look (if you can bear) at all those moronic "comedy" shows, complete with laugh tracks to cue viewers to think that drivel is actually amusing. Beyond that, the bulk of TV programming features shows scaring/numbing viewers with crime, murder and mayhem; or promoting self-indulgence and materialism in all forms; or disseminating "scientific" disinformation; and of course, there's plenty of "breaking news" and "commentary" obsessively drilling the appropriate narrative into the minds of the masses--interrupted regularly by pharmaceutical ads convincing folks to take yet one more of their poison pills.
 
Source: http://www.dutchnews.nl/news/archives/2016/10/pokemon-gone-pikachu-to-be-banned-from-railways-and-kijkduin/

Pokémon Gone: Pikachu to be banned from railways and Kijkduin

October 10, 2016

Game maker Niantic has agreed to stop characters from its augmented reality game Pokémon Go appearing on or around railway tracks and in the dunes near Kijkduin after threats of lawsuits.

Railway firm ProRail had repeatedly requested a ban and said it has now been told by Niantic’s lawyer that the characters will be stopped from appearing in places where they can create ‘dangerous situations or are a nuisance’.

In addition, Niantic has agreed to remove them from the protected dune area near the seaside resort of Kijkduin altogether and in the town itself overnight.

Kijkduin dubbed itself Pokémon capital of the Netherlands this summer but quickly came to regret the move after thousands of people turned up to play the game.

The Hague city council had planned to challenge Niantic in court on October 11 and has now suspended the legal action.
 
Palinurus' quoted article said:
In a lawsuit due in court on October 11th, the municipality demands that no Pokémon appear in the area between 11:00 p.m. and 7:00 a.m.

So people were apparently chasing digital fantasies between those hours; otherwise there would be nothing to complain about, right? lol, get real. Some people really have their priorities messed up. Get some sleep and leave Pikachu alone already.
 
This would be hilarious if it wasn't so pathetic:

https://sputniknews.com/science/201710131058186253-pokemon-go-cnn-russian-interference/ said:
So to sum up, CNN has accused Russia of using a mobile game to influence the results of the 2016 presidential election. Their evidence of this is that a Black Lives Matter affiliate used that mobile game as part of a contest to publicize police violence. This same BLM group was banned from Facebook sometime in late 2016 and a single Russian word appeared in an interview document from the group's alleged founder.

Unless CNN's Butterfree has used Confusion on the Sputnik news office, that is literally all the evidence they have presented to support the charge that those nefarious Russians used Pokémon Go to help rig the US election.
 
A Jay said:
This would be hilarious if it wasn't so pathetic:

https://sputniknews.com/science/201710131058186253-pokemon-go-cnn-russian-interference/ said:
So to sum up, CNN has accused Russia of using a mobile game to influence the results of the 2016 presidential election. Their evidence of this is that a Black Lives Matter affiliate used that mobile game as part of a contest to publicize police violence. This same BLM group was banned from Facebook sometime in late 2016 and a single Russian word appeared in an interview document from the group's alleged founder.

Unless CNN's Butterfree has used Confusion on the Sputnik news office, that is literally all the evidence they have presented to support the charge that those nefarious Russians used Pokémon Go to help rig the US election.

So stupid and hilarious at the same time. They don't have "the pokeBALLS" necessary to admit, once and for all, that Russia has nothing to do with anything US election related.
 
A Jay said:
This would be hilarious if it wasn't so pathetic:

https://sputniknews.com/science/201710131058186253-pokemon-go-cnn-russian-interference/ said:
So to sum up, CNN has accused Russia of using a mobile game to influence the results of the 2016 presidential election. Their evidence of this is that a Black Lives Matter affiliate used that mobile game as part of a contest to publicize police violence. This same BLM group was banned from Facebook sometime in late 2016 and a single Russian word appeared in an interview document from the group's alleged founder.

Unless CNN's Butterfree has used Confusion on the Sputnik news office, that is literally all the evidence they have presented to support the charge that those nefarious Russians used Pokémon Go to help rig the US election.

I know this is months old, but I just found footage CNN may use as evidence for the Russian/Pokémon connection that supposedly helped rig the US election.

Pikachu soviet march (0:58 min)
 
Well, this is a thread I didn't think would be revived - but apparently they've admitted the quiet part out loud.

Here it is in video format:

In short - the CIA-backed company was:
1) grabbing every piece of data it could from people playing the game (images from both cameras, locations, speed, probably audio too), and
2) turning it into a giant AI 3D 'map' of the world (one that probably includes plenty about the people in it too).

Back in July of 2016, I wrote a short article for Network World entitled “The CIA, NSA, and Pokémon Go."

While the title was certainly viewed as a bit “over the top” and “conspiracy theorist-y”, it was really just a collection of (in my opinion, rather bizarre) facts that – even without any sinister connection – were worth documenting. I am republishing it here, with some additional (increasingly odd) details added at the end (including radio and TV appearances related to this article).

Some of the details relating to the exact permissions and capabilities of the Pokémon application have changed over the last few years… but everything else remains correct, factual, and up to date.


The CIA, NSA, and Pokémon Go

With Pokémon Go currently enjoying what I would call a wee bit o' success, now seems like a good time to talk about a few things people may not know about the world's favorite new smartphone game.

This is not an opinion piece. I am not going to tell you Pokémon Go is bad or that it invades your privacy. I’m merely presenting verifiable facts about the biggest, most talked about game out there.

Let’s start with a little history

Way back in 2001, Keyhole, Inc. was founded by John Hanke (who previously worked in a “foreign affairs” position within the U.S. government). The company was named after the old “eye-in-the-sky” military satellites. One of the key, early backers of Keyhole was a firm called In-Q-Tel.

In-Q-Tel is the venture capital firm of the CIA. Yes, the Central Intelligence Agency. Much of the funding purportedly came from the National Geospatial-Intelligence Agency (NGA). The NGA handles combat support for the U.S. Department of Defense and provides intelligence to the NSA and CIA, among others.

Keyhole’s noteworthy public product was “Earth,” renamed “Google Earth” after Google acquired Keyhole in 2004.

In 2010, Niantic Labs was founded (inside Google) by Keyhole’s founder, John Hanke.

Over the next few years, Niantic created two location-based apps/games. The first was Field Trip, a smartphone application where users walk around and find things. The second was Ingress, a sci-fi-themed game where players walk around and between locations in the real world.

In 2015, Niantic was spun off from Google and became its own company. Then Pokémon Go was developed and launched by Niantic. It’s a game where you walk around in the real world (between locations suggested by the service) while holding your smartphone.

Data the game can access

Let’s move on to what information Pokémon Go has access to, bearing the history of the company in mind as we do.

When you install Pokémon Go on an Android phone, you grant it the following access (not including the ability to make in-app purchases):

Identity

  • Find accounts on the device
Contacts

  • Find accounts on the device
Location

  • Precise location (GPS and network-based)
  • Approximate location (network-based)
Photos/Media/Files

  • Modify or delete the contents of your USB storage
  • Read the contents of your USB storage
Storage

  • Modify or delete the contents of your USB storage
  • Read the contents of your USB storage
Camera

  • Take pictures and videos
Other

  • Receive data from the internet
  • Control vibration
  • Pair with Bluetooth devices
  • Access Bluetooth settings
  • Full network access
  • Use accounts on the device
  • View network connections
  • Prevent the device from sleeping

Based on the access to your device (and your information), coupled with the design of Pokémon Go, the game should have no problem discerning and storing the following information (just for a start):

  • Where you are
  • Where you were
  • What route you took between those locations
  • When you were at each location
  • How long it took you to get between them
  • What you are looking at right now
  • What you were looking at in the past
  • What you look like
  • What files you have on your device and the entire contents of those files

I’m not going to tell people what they should think of all this.

I’m merely presenting the information. I recommend looking over the list of what data the game has access to, then going back to the beginning of this article and re-reading the history of the company.
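If you'd rather verify that permission list yourself than take my word for it, here is a minimal sketch, assuming you have an APK file of the game on hand and the aapt tool from the Android SDK build-tools on your PATH; the filename pokemon_go.apk is just a placeholder:

```python
# Minimal sketch: dump the permissions an APK declares, assuming the
# Android SDK build-tools (which provide `aapt`) are installed and on PATH.
# "pokemon_go.apk" is a placeholder filename, not a real download.
import subprocess

def list_permissions(apk_path: str) -> list[str]:
    """Return the permission names declared in an APK's manifest."""
    output = subprocess.run(
        ["aapt", "dump", "permissions", apk_path],
        capture_output=True, text=True, check=True,
    ).stdout
    perms = []
    for line in output.splitlines():
        line = line.strip()
        # aapt prints either: uses-permission: name='android.permission.CAMERA'
        # or (older versions): uses-permission: android.permission.CAMERA
        if line.startswith("uses-permission"):
            perms.append(line.split("'")[1] if "'" in line else line.split()[-1])
    return perms

if __name__ == "__main__":
    for permission in list_permissions("pokemon_go.apk"):
        print(permission)
```

On a stock, non-rooted phone you can see much the same list under Settings > Apps > Permissions; the point is simply that everything above is verifiable.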

Update: April 14th, 2020

In March of 2017, a little less than a year after this article was originally published, WikiLeaks released what they called “Vault 7,” a series of documents purported to be a large leak of CIA-related material focused heavily on hacking and electronic surveillance.

Among those documents was a list of code names, descriptions, and various details around Android-specific exploits.

Of the code names listed… almost a third of them were Pokémon names. Between that and the CIA investment (via In-Q-Tel) in Niantic (the company behind Pokémon Go)… I mean, that's just a heck of a lot more Pokémon than one would expect from the CIA.


One other little tidbit:

The original CEO of In-Q-Tel was a man named Gilman Louie. Louie received multiple awards for his work with In-Q-Tel (work that included the investment in Keyhole): CIA Agency Seal Medallions, the Director's Award from the Director of the Central Intelligence Agency, and the Director of National Intelligence Medallion.

Louie now sits on the board of directors of Niantic.


In 2019 alone, Pokémon Go earned $1.4 billion (USD). As of February 2019, the game had been downloaded over one billion times.

Update: June 15th, 2024

After this article was originally published, back in 2016, I made a few radio guest appearances to talk about it -- my favorites being Coast to Coast AM and Fade to Black, both of which remain available online.

This was followed by an episode of a TV show for The History Channel called "Breaking Mysterious". That show only received a limited run in the USA, but it remains available via streaming in many other countries, in case you want to look it up.

Here are a few snapshots from that episode (Season 1, Episode 1 - "The Watchers"), just for good measure.

The article from Niantic (this is some of what they did with all that data from people playing the game):
November 12, 2024

Building a Large Geospatial Model to Achieve Spatial Intelligence

Eric Brachmann and Victor Adrian Prisacariu

At Niantic, we are pioneering the concept of a Large Geospatial Model that will use large-scale machine learning to understand a scene and connect it to millions of other scenes globally.

When you look at a familiar type of structure – whether it’s a church, a statue, or a town square – it’s fairly easy to imagine what it might look like from other angles, even if you haven’t seen it from all sides. As humans, we have “spatial understanding” that means we can fill in these details based on countless similar scenes we’ve encountered before. But for machines, this task is extraordinarily difficult. Even the most advanced AI models today struggle to visualize and infer missing parts of a scene, or to imagine a place from a new angle. This is about to change: Spatial intelligence is the next frontier of AI models.

As part of Niantic’s Visual Positioning System (VPS), we have trained more than 50 million neural networks, with more than 150 trillion parameters, enabling operation in over a million locations. In our vision for a Large Geospatial Model (LGM), each of these local networks would contribute to a global large model, implementing a shared understanding of geographic locations, and comprehending places yet to be fully scanned.

The LGM will enable computers not only to perceive and understand physical spaces, but also to interact with them in new ways, forming a critical component of AR glasses and fields beyond, including robotics, content creation and autonomous systems. As we move from phones to wearable technology linked to the real world, spatial intelligence will become the world’s future operating system.


What is a Large Geospatial Model?

Large Language Models (LLMs) are having an undeniable impact on our everyday lives and across multiple industries. Trained on internet-scale collections of text, LLMs can understand and generate written language in a way that challenges our understanding of “intelligence”.

Large Geospatial Models will help computers perceive, comprehend, and navigate the physical world in a way that will seem equally advanced. Analogous to LLMs, geospatial models are built using vast amounts of raw data: billions of images of the world, all anchored to precise locations on the globe, are distilled into a large model that enables a location-based understanding of space, structures, and physical interactions.

The shift from text-based models to those based on 3D data mirrors the broader trajectory of AI’s growth in recent years: from understanding and generating language, to interpreting and creating static and moving images (2D vision models), and, with current research efforts increasing, towards modeling the 3D appearance of objects (3D vision models).

Geospatial models are a step beyond even 3D vision models in that they capture 3D entities that are rooted in specific geographic locations and have a metric quality to them. Unlike typical 3D generative models, which produce unscaled assets, a Large Geospatial Model is bound to metric space, ensuring precise estimates in scale-metric units. These entities therefore represent next-generation maps, rather than arbitrary 3D assets. While a 3D vision model may be able to create and understand a 3D scene, a geospatial model understands how that scene relates to millions of other scenes, geographically, around the world. A geospatial model implements a form of geospatial intelligence, where the model learns from its previous observations and is able to transfer knowledge to new locations, even if those are observed only partially.

While AR glasses with 3D graphics are still several years away from the mass market, there are opportunities for geospatial models to be integrated with audio-only or 2D display glasses. These models could guide users through the world, answer questions, provide personalized recommendations, help with navigation, and enhance real-world interactions. Large language models could be integrated so understanding and space come together, giving people the opportunity to be more informed and engaged with their surroundings and neighborhoods. Geospatial intelligence, as emerging from a large geospatial model, could also enable generation, completion or manipulation of 3D representations of the world to help build the next generation of AR experiences. Beyond gaming, Large Geospatial Models will have widespread applications, ranging from spatial planning and design to logistics, audience engagement, and remote collaboration.

Our work so far

Over the past five years, Niantic has focused on building our Visual Positioning System (VPS), which uses a single image from a phone to determine its position and orientation using a 3D map built from people scanning interesting locations in our games and Scaniverse.

With VPS, users can position themselves in the world with centimeter-level accuracy. That means they can see digital content placed against the physical environment precisely and realistically. This content is persistent in that it stays in a location after you’ve left, and it’s then shareable with others. For example, we recently started rolling out an experimental feature in Pokémon GO, called Pokémon Playgrounds, where the user can place Pokémon at a specific location, and they will remain there for others to see and interact with.

Niantic’s VPS is built from user scans, taken from different perspectives, at various times of day and throughout the years, and with positioning information attached, creating a highly detailed understanding of the world. This data is unique because it is taken from a pedestrian perspective and includes places inaccessible to cars.


Today we have 10 million scanned locations around the world, and over 1 million of those are activated and available for use with our VPS service. We receive about 1 million fresh scans each week, each containing hundreds of discrete images.

As part of the VPS, we build classical 3D vision maps using structure-from-motion techniques - but also a new type of neural map for each place. These neural models, based on our research papers ACE (2023) and ACE Zero (2024), no longer represent locations using classical 3D data structures, but encode them implicitly in the learnable parameters of a neural network. These networks can swiftly compress thousands of mapping images into a lean neural representation. Given a new query image, they offer precise positioning for that location with centimeter-level accuracy.

Niantic has trained more than 50 million neural nets to date, where multiple networks can contribute to a single location. All these networks combined comprise over 150 trillion parameters optimized using machine learning.
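To make the "neural map" idea more concrete, here is a heavily simplified sketch of scene coordinate regression, the general technique the ACE papers describe: a small per-scene network maps image patch descriptors to 3D scene coordinates, and the camera pose is then recovered with PnP + RANSAC. Every name, shape, and layer size below is an illustrative assumption, not Niantic's actual code.

```python
# Heavily simplified sketch of an ACE-style "neural map" via scene coordinate
# regression: a small per-scene network maps image patch descriptors to 3D
# scene coordinates, and the camera pose is recovered with PnP + RANSAC.
# All names, shapes, and layer sizes are illustrative assumptions -- this is
# not Niantic's code, just the general technique their papers describe.
import numpy as np
import torch
import torch.nn as nn
import cv2

FEATURE_DIM = 512  # assumed size of a generic patch descriptor

class SceneCoordinateHead(nn.Module):
    """Per-scene MLP: patch descriptor -> (x, y, z) in metric scene coordinates."""
    def __init__(self, feature_dim: int = FEATURE_DIM):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 3),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.mlp(features)

def localize(features: torch.Tensor, pixels: np.ndarray, K: np.ndarray,
             head: SceneCoordinateHead):
    """Estimate a camera pose from one query image.

    features: (N, FEATURE_DIM) descriptors; pixels: (N, 2) their 2D image
    locations; K: 3x3 camera intrinsics. Returns rotation/translation vectors.
    """
    with torch.no_grad():
        points_3d = head(features).cpu().numpy().astype(np.float64)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d, pixels.astype(np.float64), K, None,
        reprojectionError=8.0)
    if not ok:
        raise RuntimeError("pose estimation failed")
    return rvec, tvec
```

For scale: the 50 million networks and 150 trillion parameters quoted above work out to roughly three million parameters per local network on average, i.e. a vast collection of small per-place models rather than one monolith.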

From Local Systems to Shared Understanding

Our current neural map is a viable geospatial model, active and usable right now as part of Niantic’s VPS. It is also most certainly “large”. However, our vision of a “Large Geospatial Model” goes beyond the current system of independent local maps.

Entirely local models might lack complete coverage of their respective locations. No matter how much data we have available on a global scale, locally it will often be sparse. The main failure mode of a local model is its inability to extrapolate beyond what it has already seen and from where the model has seen it. Therefore, local models can only position camera views similar to the views they have been trained on already.

Imagine yourself standing behind a church. Let us assume the closest local model has seen only the front entrance of that church, and thus, it will not be able to tell you where you are. The model has never seen the back of that building. But on a global scale, we have seen a lot of churches, thousands of them, all captured by their respective local models at other places worldwide. No church is the same, but many share common characteristics. An LGM is a way to access that distributed knowledge.

An LGM distills common information in a global large-scale model that enables communication and data sharing across local models. An LGM would be able to internalize the concept of a church, and, furthermore, how these buildings are commonly structured. Even if, for a specific location, we have only mapped the entrance of a church, an LGM would be able to make an intelligent guess about what the back of the building looks like, based on thousands of churches it has seen before. Therefore, the LGM allows for unprecedented robustness in positioning, even from viewpoints and angles that the VPS has never seen.

The global model implements a centralized understanding of the world, entirely derived from geospatial and visual data. The LGM extrapolates locally by interpolating globally.

Human-Like Understanding

The process described above is similar to how humans perceive and imagine the world. As humans, we naturally recognize something we’ve seen before, even from a different angle. For example, it takes us relatively little effort to back-track our way through the winding streets of a European old town. We identify all the right junctions although we had only seen them once and from the opposing direction. This takes a level of understanding of the physical world, and cultural spaces, that is natural to us, but extremely difficult to achieve with classical machine vision technology. It requires knowledge of some basic laws of nature: the world is composed of objects which consist of solid matter and therefore have a front and a back. Appearance changes based on time of day and season. It also requires a considerable amount of cultural knowledge: the shape of many man-made objects follow specific rules of symmetry or other generic types of layouts – often dependent on the geographic region.

While early computer vision research tried to decipher some of these rules in order to hard-code them into hand-crafted systems, it is now the consensus that the high degree of understanding we aspire to can realistically only be achieved via large-scale machine learning. This is what we aim for with our LGM. We have seen a first glimpse of impressive camera positioning capabilities emerging from our data in our recent research paper MicKey (2024). MicKey is a neural network able to position two camera views relative to each other, even under drastic viewpoint changes.

MicKey can handle even opposing shots that would take a human some effort to figure out. MicKey was trained on a tiny fraction of our data – data that we released to the academic community to encourage this type of research. MicKey is limited to two-view inputs and was trained on comparatively little data, but it still represents a proof of concept regarding the potential of an LGM. Evidently, to accomplish geospatial intelligence as outlined in this text, an immense influx of geospatial data is needed – a kind of data not many organizations have access to. Therefore, Niantic is in a unique position to lead the way in making a Large Geospatial Model a reality, supported by more than a million user-contributed scans of real-world places we receive per week.
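MicKey itself is a learned network, but the two-view problem it solves can be illustrated with a classical pipeline: match keypoints between two images, then recover the relative rotation and translation from the essential matrix. The sketch below uses standard OpenCV calls and is not MicKey's method; notably it recovers translation only up to an unknown scale, whereas MicKey's distinguishing feature is metric-scale output.

```python
# Classical illustration of the two-view problem MicKey addresses: match
# keypoints between two images, then recover the relative rotation and an
# up-to-scale translation from the essential matrix. This is standard OpenCV,
# not MicKey's method -- MicKey replaces the matching with a learned network
# and, crucially, produces metric-scale output.
import numpy as np
import cv2

def relative_pose(img1: np.ndarray, img2: np.ndarray, K: np.ndarray):
    """Relative camera pose between two grayscale images; t is up to scale."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching suits ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float64([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float64([kp2[m.trainIdx].pt for m in matches])

    # RANSAC on the essential matrix rejects bad matches; recoverPose picks
    # the physically valid (R, t) decomposition via the cheirality check.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```

As I understand the paper, MicKey escapes the scale ambiguity inherent in the essential matrix by predicting metric 3D keypoint positions directly, which is what makes metric pose output possible.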

Towards Complementary Foundation Models

An LGM will be useful for more than mere positioning. In order to solve positioning well, the LGM has to encode rich geometrical, appearance and cultural information into scene-level features. These features will enable new ways of scene representation, manipulation and creation. Versatile large AI models like the LGM, which are useful for a multitude of downstream applications, are commonly referred to as “foundation models”.

Different types of foundation models will complement each other. LLMs will interact with multimodal models, which will, in turn, communicate with LGMs. These systems, working together, will make sense of the world in ways that no single model can achieve on its own. This interconnection is the future of spatial computing – intelligent systems that perceive, understand, and act upon the physical world.

As we move toward more scalable models, Niantic’s goal remains to lead in the development of a large geospatial model that operates wherever we can deliver novel, fun, enriching experiences to our users. And, as noted, beyond gaming Large Geospatial Models will have widespread applications, including spatial planning and design, logistics, audience engagement, and remote collaboration.

The path from LLMs to LGMs is another step in AI’s evolution. As wearable devices like AR glasses become more prevalent, the world’s future operating system will depend on the blending of physical and digital realities to create a system for spatial computing that will put people at the center.

Once AR glasses and similar devices (such as VR goggles with outward-facing cameras) become popular, they will have even more data to crunch. It can probably also be reverse-engineered for general spying (i.e. photos from phones being added to the data set).

A very quick take from this - it would seem that the CIA (and whoever else) has the equivalent of a Google Street View (one that can take weather and lighting into account) and can position itself probably anywhere within those scanned regions of the world (including inside people's houses), with a potentially near-perfect representation of what it will see there.

In short - AI 'remote viewing'.

Going further (speculation based on the data) - it can probably tell where you are and what you're doing, if you're part of that data set. And maybe even if you aren't.
 
