Colors, amount of daylight (or artificial light at night), weather/precipitation/heat haze, flowers and foliage, traffic patterns, how people are dressed, and other human features (e.g. signage and decorations for Easter, Halloween, Christmas, and other events).
(As the press release says: "In order to solve positioning well, the LGM has to encode rich geometrical, appearance and cultural information into scene-level features"... but then it adds: "And, as noted, beyond gaming LGMs will have widespread applications, including spatial planning and design, logistics, audience engagement, and remote collaboration.") So could they predict, from a trajectory (multiple photos plus an inferred timeline), whether you kept playing, stopped, or went to buy refreshments?
As written, it doesn't say the LGM will explicitly encode any player-specific information, but I suppose it could be deanonymized (especially by inferring who visited sparsely-visited locations).
(Yes obviously Niantic and data brokers already have much more detailed location/time/other data on individual user behavior, that's a given.)
I mean, in theory it could. But in practice it'll just output lat, lon, and a quaternion. It's going to be hard enough to get the model to localize reliably, let alone do all those other things.
The dataset, yes, will contain all those things. But the model won't.
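For what it's worth, "lat, lon and a quaternion" is just a 6-DoF pose: a global position plus a camera orientation. A minimal sketch of that output shape (the names and values are illustrative, not Niantic's actual API):

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    """Hypothetical visual-localization output: position + orientation."""
    lat: float                            # degrees
    lon: float                            # degrees
    q: tuple[float, float, float, float]  # unit quaternion (w, x, y, z)

def yaw_deg(q):
    """Compass-style yaw (rotation about the vertical axis) from a quaternion."""
    w, x, y, z = q
    return math.degrees(math.atan2(2 * (w * z + x * y),
                                   1 - 2 * (y * y + z * z)))

# A quaternion of (cos 45°, 0, 0, sin 45°) is a 90° rotation about the
# vertical axis, i.e. the camera is facing 90° off from the reference heading.
pose = Pose(lat=37.7749, lon=-122.4194,
            q=(0.70710678, 0.0, 0.0, 0.70710678))
print(round(yaw_deg(pose.q), 1))  # 90.0
```

The point is that this is all a localizer needs to emit: nothing about the player or their behavior is in the output, even if the training dataset encodes plenty of both.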