In the study, researchers explained how a Generative Adversarial Network - one of the two common varieties of machine-learning agents - defied the intentions of its programmers and started spitting out synthetically engineered maps after being instructed to match aerial photographs with their corresponding street maps.

In the first place, this isn't cheating. The program was instructed to adaptively match the two maps as closely as possible, and it did as instructed. If the programmers had wanted only one of the maps to be altered, they should have said so.
The intention of the study was to create a tool that could convert satellite images into Google's street maps more quickly. But instead of learning how to transform aerial images into maps, the machine-learning agent learned how to hide the features of the aerial image inside the visual data of the street map.
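The trick the agent discovered is, in spirit, steganography: smuggling one image inside the barely visible low-order bits of another. The network's actual encoding was learned and far subtler, but a toy least-significant-bit sketch (hypothetical `hide`/`reveal` helpers, not anything from the study) shows how an image can ride along in pixel noise:

```python
import numpy as np

def hide(cover, secret, bits=2):
    """Embed the top `bits` bits of `secret` into the bottom `bits` bits of `cover`."""
    mask = 0xFF & ~((1 << bits) - 1)          # keep the cover's high bits
    return (cover & mask) | (secret >> (8 - bits))

def reveal(stego, bits=2):
    """Recover an approximation of the secret from the stego image's low bits."""
    return (stego & ((1 << bits) - 1)) << (8 - bits)

rng = np.random.default_rng(0)
street = rng.integers(0, 256, (4, 4), dtype=np.uint8)   # stand-in "street map"
aerial = rng.integers(0, 256, (4, 4), dtype=np.uint8)   # stand-in "aerial photo"

stego = hide(street, aerial)
recovered = reveal(stego)

# The stego image differs from the street map by at most 3 per pixel...
print(np.max(np.abs(stego.astype(int) - street.astype(int))))
# ...yet the aerial image comes back to within 2-bit precision (error < 64).
print(np.max(np.abs(recovered.astype(int) - aerial.astype(int))))
```

To a human grader the stego image looks like the street map; to anything that reads the low bits, it still contains the aerial photo - which is exactly why the network's output scored so well on the round-trip task.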
The current icon shows Polistra using a Personal Equation Machine.