Nothin' to See Here

The contemporary (Fodor-style) Language of Thought (LOT) hypothesis (not to be confused with Sellars’s reasonable hypothesis that some neural processes are somewhat analogous to linguistic episodes) is that many cognitive capacities widespread within the animal kingdom, such as perception, navigation, or caching, are explained by the processing of language-like representations, akin to those in digital computers, via the execution of computer-like programs.
Language-like representations don’t magically process themselves. They require specific computing machinery. LOT requires not only a digital code for the language-like data and instructions but also digital processors that respond appropriately to the instructions, digital memory registers separate from the processors to store the data and programs, and specialized digital devices within the processors (“program counters”) to keep track of which instruction is to be executed at any given time, on which data, and where the relevant data and instructions are to be found in the memory registers.
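To make these requirements concrete, here is a minimal toy sketch, in Python, of a stored-program machine: a digital code for instructions and data, memory separate from the processor, and a program counter that tracks the next instruction. The opcodes and memory layout are invented purely for illustration; nothing of the sort is being attributed to the brain.

```python
# Toy stored-program machine (illustrative only).
# It exhibits the ingredients listed above: a digital code for
# instructions and data, memory separate from the processor, and a
# program counter that tracks which instruction to execute next.

memory = {
    # program: address -> (opcode, operand address)
    0: ("LOAD", 10),    # load the value stored at address 10
    1: ("ADD", 11),     # add the value stored at address 11
    2: ("STORE", 12),   # write the result to address 12
    3: ("HALT", None),
    # data
    10: 2,
    11: 3,
    12: 0,
}

def run(memory):
    pc = 0              # program counter: address of the next instruction
    accumulator = 0     # processor-internal scratch value
    while True:
        opcode, operand = memory[pc]   # fetch
        pc += 1                        # advance the program counter
        if opcode == "LOAD":           # decode and execute
            accumulator = memory[operand]
        elif opcode == "ADD":
            accumulator += memory[operand]
        elif opcode == "STORE":
            memory[operand] = accumulator
        elif opcode == "HALT":
            return memory

print(run(memory)[12])  # prints 5
```

Even this trivial adder presupposes every component just listed; the point of the sketch is how much machinery “executing a program” quietly takes for granted.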
Needless to say, the required machinery goes way beyond the simplistic digital interpretation of neural networks that McCulloch and Pitts (1943) proposed, which was a gross simplification and idealization of real neural networks and which no self-respecting neuroscientist considers at all relevant to understanding them. Fun fact: even McCulloch and Pitts soon realized that their model does not apply to the brain.
There’s plenty we don’t know about how brains cognize, but we’ve made a lot of progress since McCulloch and Pitts, and virtually everything we have learned reinforces McCulloch and Pitts’s eventual conclusion that neural computation is not digital and that, a fortiori, the brain is not a digital computer. Setting aside the specialized neural representations that are possibly involved in explaining human linguistic and mathematical cognition (which might approximate some aspects of discreteness at a coarse level of granularity), there is no evidence of a genuinely digital code in the brain, or of a computer-like programming language being executed within the brain, let alone of digital processors together with the special components that processors need in order to work, such as program counters, or of digital memory registers separate from the processors.
As far as I know, since McCulloch and Pitts (1943), the closest anyone has come to speculating about where a language of thought and the required computing machinery could be found is Gallistel’s hypothesis that RNA within neurons might store the required digital data (and instructions?). Gallistel is an accomplished scientist who deserves credit for at least trying. He is right that RNA has digital structure, and RNA is surely involved in cognition for the simple reason that gene expression is surely involved in cognition and gene expression requires RNA.
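As a purely combinatorial aside (not a description of Gallistel’s actual proposal, nor of any neural mechanism), the sense in which RNA has digital structure is simply that a four-letter alphabet can carry two bits per symbol. The toy encoder/decoder below illustrates nothing more than that arithmetic fact; the bit-to-base mapping is invented for the example.

```python
# Illustrative only: a four-letter alphabet (A, C, G, U) can in principle
# carry two bits per symbol. This says nothing about how, or whether,
# neurons actually write information into RNA.

BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "U": "11"}
BITS_TO_BASE = {bits: base for base, bits in BASE_TO_BITS.items()}

def bits_to_rna(bits: str) -> str:
    """Encode an even-length bit string as a string of RNA bases."""
    assert len(bits) % 2 == 0
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def rna_to_bits(rna: str) -> str:
    """Decode a string of RNA bases back into bits."""
    return "".join(BASE_TO_BITS[base] for base in rna)

assert rna_to_bits(bits_to_rna("0110111000")) == "0110111000"
print(bits_to_rna("0110111000"))  # prints CGUGA
```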
But LOT needs a lot more! How are ordinary spike trains, which encode sensations and are not digital, to be transduced into digital RNA strings that faithfully represent the right information? How are RNA strings to be transduced back into spike trains that drive muscle contractions? Where are the digital processors, and how do they process RNA strings? How do the digital processors access the right RNA strings just when they need them? Many more questions like these need to be answered before such a proposal is worth discussing. Gallistel’s proposal is so far from a plausible implementation of LOT that it deserves at most an incredulous stare.
LOT theorists have the huge burden of finding the machinery for LOT in the brain, and they have not even begun to discharge it. Anyone who somehow still thinks (Fodor-style) LOT is worth considering should stay awake at night wondering how in the world LOT could explain cognition while virtually all the neurocomputational evidence points in the opposite direction.
