"Current machine text-generation models can write an article that may be convincing to many humans, but they're basically mimicking what they have seen in the training phase," said Lin. "Our goal in this paper is to study the problem of whether current state-of-the-art text-generation models can write sentences to describe natural scenarios in our everyday lives."

Specifically, Ren and Lin tested the models' ability to reason and showed there is a large gap between current text-generation models and human performance. Given a set of common nouns and verbs, state-of-the-art NLP models were tasked with creating believable sentences describing an everyday scenario. While the models generated grammatically correct sentences, they were often logically incoherent. For instance, here is one sentence generated by a state-of-the-art model from the words "dog, frisbee, throw, catch": "Two dogs are throwing frisbees at each other."

The test is based on the assumption that coherent ideas (in this case, "a person throws a frisbee and a dog catches it") can't be generated without a deeper awareness of common-sense concepts. In other words, common sense is more than just the correct understanding of language -- it means you don't have to explain everything in a conversation. This is a fundamental challenge in the goal of developing generalizable AI -- but beyond academia, it's relevant for consumers, too.

This also applies to the deification of theory among "human" scientists. As every branch of science departs from PHYSICAL experience, theories become weirder and crazier and more murderous. 100 years ago, theorists like Lodge and Faraday and Ayrton worked constantly with REAL PHYSICAL EQUIPMENT, and depended on close teamwork with mechanics who could build and maintain the REAL PHYSICAL EQUIPMENT. When every idea is applied directly to Nature, Nature will TELL you which ideas are sensible. ...
but you need to LISTEN as well, and you need to break out of Parkinson. You need to use negative feedback, not positive feedback. With negative feedback, constant failure tells you to stop and try something different. With positive feedback, constant failure tells you to try the same shit EVEN HARDER AND BIGGER. The omnicidal "physicists" at LHC are using positive feedback. Their attempts to obliterate the universe just go on and on and on. They never find the mythical unicorns they're supposedly looking for. Every failure becomes an urgent requirement for more speed, more energy, more staff, more funding. Common sense and physical reality should tell you when to stop. If the unicorn is a real part of Nature, you should be able to find it at ordinary scales and ordinary energies. If you have to use energies trillions of times beyond anything in Nature, you're not going to find a unicorn that occurs in Nature. NEGATIVE FEEDBACK IS LIFE. POSITIVE FEEDBACK IS DEATH.
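Back to the frisbee test for a moment. The concept-to-sentence task quoted at the top can be sketched as a simple word-coverage check. This is only an illustrative sketch, not the researchers' actual evaluation code, and the sentences are the article's examples plus one made-up one; the point it makes is that a surface check can pass a sentence that common sense would reject.

```python
# Illustrative sketch of the concept-to-sentence test described above.
# A real evaluation would score outputs from an actual generation model.

def covers_concepts(sentence: str, concepts: set[str]) -> bool:
    """Check that every required concept word appears in the sentence.

    This only tests surface coverage -- it cannot detect logical
    incoherence, which is exactly the gap the study highlights.
    """
    words = sentence.lower().replace(".", "").split()
    return all(any(w.startswith(c) for w in words) for c in concepts)

concepts = {"dog", "frisbee", "throw", "catch"}

# The coherent reading passes the coverage check:
print(covers_concepts("A person throws a frisbee and a dog catches it.", concepts))  # True

# A made-up sentence that uses every word but is nonsense ALSO passes:
print(covers_concepts("A frisbee throws and catches two dogs.", concepts))  # True

# The model's actual output even fails the surface check (no "catch"):
print(covers_concepts("Two dogs are throwing frisbees at each other.", concepts))  # False
```

A machine can satisfy the checklist while missing the point; only common sense tells you which of the first two sentences describes the real world.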
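The negative-vs-positive feedback contrast above can be sketched numerically. The gain values and step count here are arbitrary illustrative choices, but the behavior is the general mathematical fact: feed the error back against itself and it dies out; feed it back with itself and it compounds.

```python
# Illustrative sketch: the same error signal under negative vs positive
# feedback. Gain and step count are arbitrary choices for illustration.

def run_loop(gain: float, steps: int = 20, error: float = 1.0) -> float:
    """Feed the error back into itself with the given gain each step.

    gain in (-1, 0): negative feedback -- the error shrinks toward zero,
    i.e. constant failure pushes you to stop and change course.
    gain > 0: positive feedback -- the error compounds without limit,
    i.e. every failure demands doing the same thing HARDER AND BIGGER.
    """
    for _ in range(steps):
        error = error + gain * error
    return error

print(run_loop(-0.5))  # decays toward 0
print(run_loop(0.5))   # blows up (about 3325 after 20 steps)
```

Same loop, same starting error; only the sign of the feedback decides whether the system settles or explodes.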
Labels: Not AI point-missing
The current icon shows Polistra using a Personal Equation Machine.