The received view in the philosophy of cognitive science is that cognition is (largely explained by) a kind of digital computation, and that computation requires representation. Therefore, if neurocognitive systems are computational, they manipulate digital representations of some sort. According to this received view, representations are unobservable entities posited by successful psychological theories of cognitive capacities. That representations exist and explain cognition is a conclusion reached by inference to the best explanation. Some philosophers of an autonomist bent have gone so far as to conclude that representations and computations are proprietary to psychological explanation and do not belong in neuroscientific explanation. While I agree that neurocognitive processes are computations that manipulate representations, I argue against the other tenets of this received view.
First, I argue that physical computation does not require representation: computation can be defined, and can occur, whether or not representation is involved.
Second, as Daniel Kraemer first pointed out to me, the notion of representation was first introduced as an explanatory posit by neuroscientists in the 19th century, not by cognitive scientists in the 1950s as many believe.
Third, neuroscientists did initially posit representations as unobserved entities that explain cognition, at a time when neuroscience was in its relative infancy. Since the 19th century, however, neuroscientists have developed myriad techniques to confirm that neural representations exist, to observe and measure them, and to manipulate them in the lab. Therefore, neural representations are observable and manipulable entities, and a fortiori they are real. (My argument for this conclusion was originally developed in collaboration with experimental neuroscientist Eric Thomson.)
Fourth, neural representations are structural representations; that is, they are systems of internal states that covary with external targets, have causal connections with their targets, can be tokened in the absence of their targets, and can guide behavior.
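To make these four conditions concrete, here is a minimal toy sketch in Python. The class and its behavior are my own illustrative inventions, intended only to show how one system could satisfy all four conditions at once, not a model drawn from neuroscience.

```python
# Toy sketch, not a model from the literature: a minimal system that
# satisfies the four conditions on structural representation above.
# All names (ToyAgent, perceive, act) are illustrative inventions.

class ToyAgent:
    def __init__(self):
        self.internal_state = 0.0  # the candidate representation

    def perceive(self, target_position):
        # Causal connection + covariation: the internal state is caused
        # by, and tracks, the external target.
        self.internal_state = target_position

    def act(self):
        # Guiding behavior: the action is read off the internal state.
        return f"move toward {self.internal_state:.1f}"

agent = ToyAgent()
agent.perceive(3.5)   # target present: state covaries with the target
# ... the target disappears ...
print(agent.act())    # offline tokening: the state still guides behavior
```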
Fifth, nothing in the definition of structural representations requires them to be digital, so neither structural representations nor the computations defined over them need be digital.
Sixth, as a matter of empirical fact, neural representations and the computations defined over them are not digital, at least in the general case. The main reason is that neural computations are defined either over the frequency or over the (relatively exact) timing of neuronal spikes, neither of which is a feature of digital signals. (There may be cases in which neural representations and computations are digital or approximately digital; whether there are such cases is an interesting empirical question.)
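As a rough illustration of the contrast (with invented spike times, not real data), the following sketch reads out the same spike train in the two ways just mentioned: as a firing rate and as a pattern of inter-spike intervals.

```python
import numpy as np

# Illustrative sketch with invented spike times (seconds), not real data.
spike_times = np.array([0.012, 0.041, 0.078, 0.102, 0.159, 0.183])
window = 0.2  # length of the observation window, in seconds

# Rate code: only the number of spikes per unit time carries the signal.
firing_rate = len(spike_times) / window  # spikes per second

# Timing code: the (relatively exact) intervals between spikes carry the
# signal; two trains with the same rate can differ in this respect.
inter_spike_intervals = np.diff(spike_times)  # seconds between spikes

print(f"rate readout: {firing_rate:.0f} Hz")
print(f"timing readout (ms): {np.round(inter_spike_intervals * 1000, 1)}")
```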
Seventh, in an important technical sense, neural representations and the computations defined over them are not analog either, because, unlike analog signals, neural signals consist primarily of (mostly) all-or-none spikes.
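A toy thresholding sketch (all values invented) illustrates the all-or-none point: the membrane potential is a continuously graded quantity, but the output signal registers only whether a threshold is crossed, not the graded value itself.

```python
import numpy as np

# Toy illustration with invented values: the membrane potential varies
# continuously, like an analog signal, but the transmitted signal is a
# train of all-or-none events, produced whenever a threshold is crossed.
t = np.linspace(0.0, 1.0, 1000)                    # 1 s sampled at 1 kHz
potential = -65 + 20 * np.sin(2 * np.pi * 3 * t)   # mV, continuously graded
threshold = -50.0                                  # mV, hypothetical

# All-or-none readout: a spike either occurs or it does not; the graded
# value of the potential above threshold is not itself transmitted.
upward = (potential[1:] >= threshold) & (potential[:-1] < threshold)
spike_count = int(np.count_nonzero(upward))
print(f"{spike_count} all-or-none spikes (threshold crossings) in 1 s")
```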
(Side note: Corey Maley has an interesting research program, independent of our joint work, where he argues that neurocognitive systems are analog computing systems. He and I don’t agree on every detail of the story but, given the way he defines analog computation, I agree with his main conclusion.)
Therefore, since it is neither digital nor analog, neural computation is sui generis. (My argument for this conclusion was originally developed in collaboration with biophysicist Sonya Bahar.)
An important consequence of this argument is that cognitive scientists cannot just posit whatever computations and representations seem most explanatory without worrying about neurocognitive mechanisms. They need to take into account what is known about neural computation and representation. Psychology and neuroscience mutually constrain one another and must be integrated to provide multilevel mechanistic explanations that involve neural computations over neural representations.