In the study, researchers explained how a Generative Adversarial Network - one of the two common varieties of machine-learning agents - defied the intentions of its programmers and started spitting out synthetically engineered maps after being instructed to match aerial photographs with their corresponding street maps. The point of the study was to build a tool that could more quickly turn satellite images into Google's street maps. But instead of learning how to transform aerial images into maps, the machine-learning agent learned how to encode the features of the aerial image into the visual data of the street map (a toy sketch of the trick sits at the end of this post).

In the first place, this isn't cheating. The program was instructed to adaptively match the two maps as closely as possible, and it did as instructed. If the programmers had wanted to specify that only one map could be altered, they should have specified.

In the second place, this isn't new, so it's not "frightening futurism". Devices that adaptively map one surface onto another have been around for a few years, and some of those devices cheat as defined above. The adaptive mapping devices are called "paint" and "clothing". Within the available range of "clothing" AI, some devices (e.g. "girdles" and "brassieres") are intended to cheat by mapping the inner surface to the outer surface.

[Image: all three adaptive AI devices - paint, inward-mapping clothing, outward-mapping clothing - in one picture.]
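For the technically curious, the "cheat" the study describes is ordinary steganography: the network hid the aerial image in details of the street map that the grader never inspected. Here is a minimal Python sketch of the same idea, assuming nothing about the actual model - just two made-up 8-bit images and a low-bit hiding scheme, not the study's real network:

```python
import numpy as np

def hide(aerial, street_map):
    """Stash the top 2 bits of `aerial` in the low 2 bits of `street_map`."""
    payload = aerial >> 6                       # keep the 2 most significant bits
    return (street_map & 0b11111100) | payload  # overwrite the 2 low bits

def recover(encoded):
    """Rebuild a coarse copy of the aerial image from the hidden bits."""
    return (encoded & 0b00000011) << 6

rng = np.random.default_rng(0)
aerial = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # stand-in photo
street = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # stand-in map

encoded = hide(aerial, street)

# The encoded map is visually indistinguishable from the original map
# (every pixel differs by at most 3 out of 255)...
print(np.abs(encoded.astype(int) - street.astype(int)).max())

# ...yet a coarse aerial image comes straight back out of it.
print(np.abs(recover(encoded).astype(int) - aerial.astype(int)).max())
```

A grader that only asks "does the output look like a map, and can the input be recovered from it?" is fully satisfied by this. Which is the study's finding in one line: the machine did exactly what it was scored on, not what its programmers had in mind.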