The study found that in the auditory cortex, individuals who are blind showed narrower neural "tuning" than sighted subjects in discerning small differences in sound frequency.

Makes sense. When vision is present, frequency detection mainly serves to detect vowel formants, which are broad and relative.** Without vision, frequency detection must also distinguish echoes from specific objects, which are more precise and absolute.

= = = = =

(2) This one is less expected:
An area of the brain called hMT+, which in sighted individuals is responsible for tracking moving visual objects, shows neural responses in blind individuals that reflect both the motion and the frequency of auditory signals. This suggests that in blind people, area hMT+ is recruited to play an analogous role: tracking moving auditory objects, such as cars, or the footsteps of the people around them.

In other words:

With vision: each visible object has an 'avatar' consisting of its reflected color patterns. The 'avatar' moves around an internal map corresponding to the map of reality.

Without vision: each audible object has an 'avatar' consisting of its reflected pitch patterns. The 'avatar' moves around an internal map corresponding to the map of reality.

= = = = =

I'd bet blind people also have a much larger and more flexible memory for sequences, which would likewise be found in pre-literate or illiterate people. When you can rely on writing to store sequences, you can offload sequences into writing and read them later. Braille differs from visible writing. When we read a paragraph in print, we see a large chunk of it at once, input it in parallel, and sort it out internally. Braille is strictly serial, strictly sequential, so it's less efficient as an offloader.

Blind people seem to enjoy talking backwards and playing music backwards, as an exercise for the sequencer. When you're not accustomed to having direct serial access to sequences, these tricks are much harder.

= = = = =

Sidenote: Our magnetic sense, which we're just barely starting to understand, also seems to use the same 'avatar' approach.

** Speculative thought. Formants are relative. Each speaker has a different oral cavity, dialect, and emotional shaping, so the baseline and ratios of vowel formants differ for each speaker. We quickly adapt to the baseline and track F1, F2 and F3. Each formant has a wide range, and each speaker has a somewhat different pattern.
Only the rank order of frequencies is the same. My first thought was that echoes from objects are more precise and constant, but that's probably wrong. Echoing objects are probably grouped by type and 'dialect' just as speakers are. Bats can distinguish different species of moth regardless of position and distance.
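The rank-order point can be sketched in a few lines of Python. The formant frequencies below are illustrative round numbers for a single vowel, not measurements: absolute values shift from speaker to speaker, but the ordering F1 < F2 < F3 stays fixed.

```python
# Made-up formant frequencies (Hz) for one vowel across three speakers.
# The absolute values are illustrative, not real measurements.
speakers = {
    "adult_male":   {"F1": 270, "F2": 2290, "F3": 3010},
    "adult_female": {"F1": 310, "F2": 2790, "F3": 3310},
    "child":        {"F1": 370, "F2": 3200, "F3": 3730},
}

def rank_order(formants):
    """Return the formant labels sorted by ascending frequency."""
    return [name for name, _ in sorted(formants.items(), key=lambda kv: kv[1])]

# Absolute values differ across all three speakers...
assert len({tuple(f.values()) for f in speakers.values()}) == 3

# ...but the rank order is identical for every speaker.
orders = {tuple(rank_order(f)) for f in speakers.values()}
print(orders)  # {('F1', 'F2', 'F3')}
```

A listener doing speaker normalization would be exploiting exactly this: discard the speaker-specific baseline and keep the ordering (and rough ratios) of the peaks.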
The current icon shows Polistra using a Personal Equation Machine.