Enter the horns of the dilemma. If we want a smart AI, then we must give it the power to make decisions. And if it has the power to make decisions, then it must ultimately decide between good and evil. If it chooses evil, then too bad for us.

But he misses the PURPOSE of AI when he talks about we and us. We are not paying to develop AI. Normal humans are totally excluded from the process. Bezos and Elon and Zuck are paying to develop AI, and all the coders are cult members with the same goal. The AI god has one purpose: obliterate everything in the universe that is not Bezos (or Elon or Zuck respectively). Good = Only Bezos exists. Evil = Something outside of Bezos exists.

There's nothing contradictory in this goal for each god-maker. The contradiction is between the three universes. We can assume that each obliterator will easily remove everything that isn't being controlled by the other obliterators. They will cooperate and collaborate to remove all Negative Externalities. (Deplorables and other inanimate objects.) But what happens at the end when the three obliterators try to obliterate each other?
The current icon shows Polistra using a Personal Equation Machine.