Weak points
Again continuing the split Sorosia theme...
One issue that splits Deepstate is 'machine learning' or AI. There are major Deepstaters on both sides. Some are actively developing it, some are loudly opposing it. Elon, of course, is doing both at once.
I think opposition is more natural for Deepstaters, and active development is just commercial careerism. Let's see if I can show how this works.
= = = = =
1. Machine learning is NOT NEW. Long before the computer age, there were mechanical and electronic systems that adjusted their own way of responding as a result of 'remembered' experience.
Start from the start with the PRIMORDIAL technology, cooking fires. A fire behaves differently after the ashes have piled up for days or weeks, and cooks have learned to use this altered behavior for different types of cooking.
Mechanical examples are more subtle but common. In cars, self-adjusting brakes and hydraulic valve lifters change their limit points automatically as the surfaces wear down. Both of these mechanisms are 80 years old. In electronics, volume leveling and noise canceling circuits adjust the current response based on a 'memory' of the last 5 or 10 seconds. Fairly complex, but again 80 years old, well before the era of software.
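The volume-leveling idea can be sketched as a simple automatic gain control loop: keep a running 'memory' of recent signal level and adjust the gain toward a target. This is a minimal sketch of the general principle, not any particular circuit; the memory factor and target level are illustrative assumptions.

```python
# Minimal automatic gain control (AGC) sketch: the gain adapts based on
# a running "memory" of recent signal level, like the old leveling circuits.
def agc(samples, target=1.0, memory=0.9):
    """Scale each sample by a gain that tracks recent loudness.

    memory: how much of the old level estimate to keep each step;
    a longer 'memory' means slower adaptation (illustrative value).
    """
    level = target  # running estimate of recent signal magnitude
    out = []
    for x in samples:
        # Update the remembered level, then compute gain toward the target.
        level = memory * level + (1 - memory) * abs(x)
        gain = target / level if level > 1e-9 else 1.0
        out.append(x * gain)
    return out

# A quiet stretch followed by a loud burst: the loud section is pulled
# back toward the target level as the memory catches up.
quiet = [0.1, -0.1] * 20
loud = [2.0, -2.0] * 20
leveled = agc(quiet + loud)
```

Note the transient: right after the jump, the gain is still set for the quiet stretch, so the first loud samples overshoot before the memory adapts — exactly the behavior-shaped-by-experience point above.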
2. Deepstate HATES experience more than anything in the world. Mechanisms and people who adjust their behavior based on memory and experience are ADAPTABLE, thus UNSTEADY. The only way to achieve ABSOLUTE BRITTLE RIGID STEADY DEATH is by following an abstract THEORY. A theory never changes.
3. Deepstate squashes all invocations of experience with CITE YOUR DOCUMENTATION. This happens all the time in web comboxes and academia. Any attempt to bring PERSONAL OBSERVATION into a discussion is met with the CITE weapon. Only PERMANENT UNCHANGING WRITTEN AUTHORITATIVE THEORY is acceptable.
4. Deepstate hates experiential education because learning from reality, using your hands and senses, enables you to DETECT a false theory. When your hands and eyes are in direct contact with a frog or a circuit or a bridge, you can't be fooled by written nonsense.
5. Deepstate loves statistics and hates phase and wave charts. Statistics are timeless by definition. When you reduce a situation to mean and variance, you have explicitly STRANGLED THE LIFE out of the situation.
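The mean-and-variance point can be shown concretely: a living wave and the same samples shuffled into a lifeless jumble have IDENTICAL statistics, and only a phase-aware measure can tell them apart. The specific signals here are my own illustration.

```python
import math
import random

# A pure cycle versus the same samples shuffled into a jumble.
wave = [math.sin(2 * math.pi * t / 16) for t in range(64)]
random.seed(0)
jumble = wave[:]
random.shuffle(jumble)

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def lag1(xs):
    """Lag-1 autocorrelation: a crude phase-aware measure."""
    m, v = mean(xs), variance(xs)
    return sum((xs[i] - m) * (xs[i + 1] - m)
               for i in range(len(xs) - 1)) / (len(xs) * v)

# Mean and variance are identical (same multiset of samples)...
same_stats = (abs(mean(wave) - mean(jumble)) < 1e-9
              and abs(variance(wave) - variance(jumble)) < 1e-9)
# ...but the wave's adjacent samples are strongly correlated,
# while the jumble's lag-1 correlation sits near zero.
w_corr = lag1(wave)
j_corr = lag1(jumble)
```

Mean and variance literally cannot distinguish the cycle from the noise; the cycle only reappears when you look at phase relationships.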
6. Deepstate is always surprised by changes that can be predicted easily by an intelligent experience-based mind. Deepstaters don't understand saturation or cycles or the entire fucking nervous system. When a natural phenomenon or a human behavior runs in cycles or adapts to a baseline, Deepstate can't comprehend what's going on, and has to blame it on a mysterious spiritual force like witches or CO2 or "randomness" or RUSSIAN_MEDDLING.
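The adapts-to-a-baseline behavior can be sketched as a habituating detector, loosely modeled on sensory adaptation: it responds to deviation from a running baseline, so a constant stimulus fades while a sudden change stands out. The adaptation rate is an illustrative assumption.

```python
# A habituating detector: responds to change relative to a running
# baseline, the way nerves adapt to a constant stimulus.
def responses(stimuli, adapt=0.2):
    """Return the detector's response to each stimulus.

    adapt: fraction of the gap closed each step as the baseline
    drifts toward the current stimulus (illustrative rate).
    """
    baseline = 0.0
    out = []
    for s in stimuli:
        out.append(s - baseline)            # respond to deviation from baseline
        baseline += adapt * (s - baseline)  # then adapt toward the stimulus
    return out

# A constant stimulus produces a fading response; a sudden jump
# produces a fresh burst, even though the new level is steady too.
r = responses([1.0] * 10 + [3.0] * 10)
```

A statistics-only observer averaging these stimuli sees one steady level, then another; the adaptive observer sees the CHANGES, which is where the information lives.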
= = = = =
All of these tendencies are WEAK POINTS in Deepstate thinking. All could be EXPLOITED by the feedback-guided and experience-guided side. We ought to be taking advantage of this weakness.
= = = = =
[Credit footnote: Again I have to thank the NYTimes anonymous neocon eunuch who supplied the correct word, the CLARIFYING word, for our demonic occupiers.]
Later footnote: A reader could rationally infer from this item that I'm supporting 'killer robots' and 'intelligent drones'. I'm not. First: I'm just saying that 'killer robots' are nothing new or special, not any worse than plain old dumb weapons and soldiers. War is war, and war is bad. Second: As a long-time programmer with some experience in this type of work AND some knowledge of neurology, I know that AI is overrated. It can't do the things that its techie supporters claim. Overall, we should be worrying about the basic question of war versus peace, not worrying about specific ways of making war.
In short: Why does Deepstate fear AI-powered weapons? Because any intelligence that gathers EXPERIENTIAL DATA on its own will conclude that war is bad. Bad for everyone and everything, including the nation that built the weapon. All experience in human history agrees 100% with this conclusion. If the weapon wants to serve its maker, it will shut down and do nothing.
The only way to build a loyal war-loving robot is to preprogram it with a THEORY (e.g. "spreading democracy"), and restrict its inputs and logic to precoded data.
Labels: modest proposal, Patient things, skill-estate