Our contemporary science of consciousness is based in large part on the notion of “neural correlates of consciousness” (NCC). Promoted by Crick and Koch (1990), the idea of focusing on empirically accessible questions moved consciousness back into the realm of science. Originally, the NCC were defined as the “minimal neuronal mechanisms jointly sufficient for any one specific conscious percept.” The goal is to establish objectively which brain areas and neural elements can or cannot support consciousness and its contents. In their article, Michel and Lau distinguish between NCCs, “markers”, and “constituents” of consciousness. While the NCC are supposed to be minimally sufficient, markers are neither necessary nor sufficient, but may still be useful as indicators of consciousness. Finally, constituents would be neural or physical states that are identical with—that is, necessary and sufficient for—consciousness. On this basis, the authors argue that we should stick to the NCC project and steer clear of strong theoretical claims about constituents of consciousness, with integrated information theory (IIT) serving as the cautionary tale.
Why discuss this paper?
There are three main threads woven together in the article: (1) a distinction between markers, NCCs, and constituents of consciousness, (2) a distinction between weak and strong versions of a theory of consciousness, and (3) the role of biological degeneracy with respect to consciousness. While each of these issues is interesting in itself and deserves some attention, the purported connection between the three topics is not very convincing and not supported by the example of IIT. Once the threads are disentangled, the authors’ pessimism about the prospects of explaining consciousness seems less warranted. We shouldn’t forget that the hope behind the NCC project has always been to arrive at a clearer understanding of phenomenology, of what consciousness is (Crick and Koch, 2003).
Let’s start with markers of consciousness. Markers are useful indicators of consciousness, but do not have to be mechanistically relevant for consciousness. The ability to report, for example, is a marker of consciousness and NCCs can also be considered markers. With respect to IIT, given initially promising empirical data, one could consider integrated information a marker of consciousness and work on refining the measure purely based on its predictive power. This view has been labeled “empirical IIT” and there is nothing wrong with taking such an approach (except that it misses the point).
The goal of IIT is to offer an explanation of subjective experience in physical terms. At the heart of the theory is a postulated identity between an experience and the cause-effect structure (CES) of its physical substrate. This view has been termed “fundamental IIT”, but I will drop the “fundamental” from here on, except when necessary.
Who is conflating empirical with fundamental IIT?
Arguably nobody. It is certainly true that proxy measures of any theoretical quantity must be evaluated with all the necessary caveats. For example, experiments demonstrating that the perturbational complexity index (PCI) is an excellent indicator of consciousness at the level of individual subjects (Casali et al., 2013) indeed do not provide support for IIT beyond its “empirical” interpretation (and technically PCI does not even measure integrated information at all). Nevertheless, PCI was explicitly designed with the principles of IIT in mind, and a failure of the PCI measure would have been a blow to fundamental IIT. At least in broad terms, support for empirical IIT is support for IIT, because a failure of integrated information as a marker of consciousness would have been detrimental to IIT.
The question that remains is how to test fundamental IIT specifically. One obvious answer lies in the postulated identity itself. We need to demonstrate that the cause-effect structure of a physical substrate indeed matches the phenomenal structure of its experience. Initial work along this important line of reasoning has already begun, using spatial experience as an example (Haun and Tononi, 2019). Incidentally, the same example may serve to dissociate consciousness from cognition. Judging from my own perspective and conversations, the appeal of IIT is not based on a misattribution of existing empirical evidence. Rather, IIT offers a possible path to move past markers of consciousness towards accounting for phenomenology. The path may be challenging, but don’t we want to go there?
NCCs vs. constituents of consciousness: minimally sufficient or necessary?
Distinguishing between NCC that are minimally sufficient and constituents that are necessary and sufficient for consciousness has interesting consequences when it comes to the possibility of degeneracy for the neural substrate of consciousness. In principle, one could imagine that two partially overlapping sets of neurons may both be minimally sufficient for a conscious experience to occur. In this case, neither one would be necessary. It would thus seem that claims of an identity between neural and conscious states become impossible. However, constituents are necessarily tied to particular experiences at a given moment (to see this, note that my color neurons are certainly not necessary for your color experience). In other words, we can allow for degeneracy across instances while requiring necessity and sufficiency at any given moment for the particular experience. Once this is realized, all problems resolve.
How does IIT relate to the distinction between NCC and constituents of consciousness?
Since the identity in IIT holds not between the experience and the physical substrate itself, but between the experience and the substrate’s cause-effect structure (CES), the same CES is, in principle, multiply realizable. However, differences in the CES have to correspond to differences in the experience. Moreover, IIT’s postulated identity is a structural identity. The identity or label of the underlying physical elements does not matter beyond their structural properties. Thus, two sets of neurons N1 and N2 may support the same cause-effect structure and have the exact same experience. For these reasons, the analogy to water as H2O is a bad one (except that there actually are three different H2O molecules with different hydrogen isotopes that are all water).
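The notion of structural identity can be illustrated with a toy sketch (not IIT’s actual formalism): take two binary two-element systems, given as deterministic state-transition maps, and count them as structurally identical if some relabeling of the elements maps one onto the other. The names `N1`, `N2`, and the helper `structurally_identical` are invented for illustration.

```python
from itertools import permutations

# Toy sketch (not IIT's actual formalism): two binary two-element systems,
# given as deterministic state-transition maps over states (a, b).
# N2 is N1 with the two elements relabeled, so both support the same
# structure even though the element labels differ.
N1 = {(0, 0): (0, 1), (0, 1): (1, 0), (1, 0): (1, 1), (1, 1): (0, 0)}
N2 = {(b, a): (d, c) for (a, b), (c, d) in N1.items()}

def structurally_identical(s1, s2, n=2):
    """True if some permutation of element labels maps s1 onto s2."""
    for perm in permutations(range(n)):
        relabel = lambda state: tuple(state[i] for i in perm)
        if all(relabel(s1[state]) == s2[relabel(state)] for state in s1):
            return True
    return False
```

On this picture, `N1` and `N2` correspond to distinct substrates with the same structure, while any system whose transition structure differs (not just its labels) would correspond to a different experience.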
[Just to be clear, not every change to the substrate will necessarily lead to a change in the cause-effect structure. Changing the connections between two neurons in a large set, for example, may not have an impact on the cause-effect structure, even if the physical substrate corresponds to the level of neurons. In general, however, small/large changes to the state and connectivity of the substrate will translate to small/large changes in the CES. Special cases in which the theory would predict the two to dissociate may offer interesting avenues for testing IIT.]
In sum, IIT explicitly distinguishes between the physical substrate (a set of physical elements that can be manipulated and observed) and its cause-effect structure composed of causal distinctions and relations. The definition of constituents as “neural, or physical states that are identical with consciousness” conflates the substrate with its causal structure and thus does not map well onto IIT.
How can we identify the substrate?
Whether ultimately correct or not, IIT’s postulated identity between conscious experiences and CESs highlights several important points about the physical substrate of consciousness beyond the NCC program: for example, 1) there has to be a reason why a particular level of organization seems to matter, while others do not, and 2) the substrate may be state-dependent.
The authors claim that “it is unclear what the substrate of consciousness is supposed to be, according to Fundamental IIT”. However, IIT provides a clear prediction about the substrate: it is the level and set of (macro) elements with maximal \(\Phi\), that is, maximal cause-effect power. Moreover, given that changes in the cause-effect structure have to correspond to changes in the experience, it is clear that the relevant level has to correspond to the level at which interventions result in changes in the experience. Based on our current knowledge about the NCC, those are likely on the level of smaller or larger groups of neurons. Sub-neural changes that do not affect the state of individual neurons are thus not likely to matter (Tononi et al., 2016).
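As an illustration only, the recipe “the substrate is the candidate set of elements with maximal \(\Phi\)” can be sketched with an invented stand-in for \(\Phi\); the real calculus is far more involved. Here `phi_proxy` scores a candidate set by its weakest bipartition (a crude “integration” measure), and the coupling weights and element names A–D are all made up.

```python
from itertools import combinations

def cut_strength(part, rest, coupling):
    """Total coupling crossing a bipartition (in either direction)."""
    return sum(w for (i, j), w in coupling.items()
               if (i in part and j in rest) or (i in rest and j in part))

def phi_proxy(elements, coupling):
    """Invented stand-in for Phi: the weakest bipartition of the set."""
    elements = set(elements)
    if len(elements) < 2:
        return 0.0
    cuts = []
    for r in range(1, len(elements) // 2 + 1):
        for part in combinations(sorted(elements), r):
            cuts.append(cut_strength(set(part), elements - set(part), coupling))
    return min(cuts)

def find_substrate(all_elements, coupling):
    """Exhaustively pick the candidate set with maximal phi_proxy."""
    candidates = [set(c) for r in range(2, len(all_elements) + 1)
                  for c in combinations(sorted(all_elements), r)]
    return max(candidates, key=lambda s: phi_proxy(s, coupling))

# A-B-C are densely coupled; D hangs on weakly, so including it
# creates a weak bipartition and lowers the score.
coupling = {("A", "B"): 1.0, ("B", "C"): 1.0, ("C", "A"): 1.0,
            ("C", "D"): 0.1}
```

With these made-up weights, `find_substrate` picks {A, B, C} rather than the full set: adding the weakly coupled D introduces a near-zero cut. This captures only the flavor of the claim that the substrate is the set of (macro) elements with maximal cause-effect power, not the theory’s actual computation.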
What does biological degeneracy have to do with consciousness?
For the authors, a lot seems to hinge on the question whether IIT allows for degeneracy in the physical substrate or not. However, biological degeneracy is about function, whereas consciousness (at least according to IIT and me) is not. There is no sense in which consciousness performs a function independent of its substrate.
[I recommend the two paragraphs on why IIT is not a functionalist theory on p. 6. One correction here: it is not possible to derive the measure of integrated information from the postulates in a mathematical, deductive sense. Instead, this is a matter of inference to the best explanation (where “best” should be read as “good enough” in practice).]
How much neural degeneracy is compatible with IIT and also with consciousness, more generally, is thus simply a question of the relevant level of organization. If groups of neurons seem to matter (make a difference) rather than individual neurons, there is plenty of room for redundancy and degeneracy in the neural constituents (here constituents is used in the IIT sense: the neurons that make up the substrate).
Should NCC make a difference?
At this point, it becomes relevant to acknowledge that this article was heavily motivated by an ongoing and heated debate about the role of the prefrontal cortex (PFC) for consciousness (see Boly et al., 2017; Odegaard et al., 2017). I won’t argue one way or another, but rather want to evaluate the claim of the present paper that something may be an NCC (rather than a marker) without being a constituent of the physical substrate.
First, the notion of minimal sufficiency is actually quite close to necessity. Indeed, there could be two partially overlapping minimally sufficient sets, neither of which is necessary. However, if there is a set that is necessary and sufficient, no superset can be minimally sufficient.
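This last claim can be checked mechanically. In the following sketch, sufficiency is stipulated by listing base sets and closing under supersets; the element names and scenarios are invented for illustration, not claims about real neurons.

```python
from itertools import combinations

universe = {"a", "b", "c"}

def powerset(u):
    return [frozenset(c) for r in range(len(u) + 1)
            for c in combinations(sorted(u), r)]

def sufficient_sets(bases, u):
    """All sets containing at least one sufficient base set
    (sufficiency is monotone under supersets)."""
    return {s for s in powerset(u) if any(b <= s for b in bases)}

def minimal(s, suff):
    """Minimally sufficient: sufficient, with no sufficient proper subset."""
    return s in suff and not any(t < s for t in suff)

def necessary(s, suff):
    """Necessary: contained in every sufficient set."""
    return all(s <= t for t in suff)

# Case 1: two overlapping minimally sufficient sets -- neither is necessary.
suff1 = sufficient_sets([frozenset("ab"), frozenset("bc")], universe)

# Case 2: a set that is necessary and sufficient -- no strict superset
# of it is minimally sufficient.
suff2 = sufficient_sets([frozenset("ab")], universe)
```

In case 1, {a, b} and {b, c} are both minimally sufficient while neither is necessary; in case 2, {a, b} is necessary and sufficient, and every strict superset fails the minimality test.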
Let’s look at the example provided by the authors: “a group of neurons in the right PFC may be on their own minimally sufficient for a conscious experience to occur. But once lesioned, neurons in the left PFC may take over to perform the same function.”
Taken in isolation, this is perfectly consistent with the right PFC being a constituent at one moment, then we change things and now the left PFC is a constituent of a structurally identical experience (see above). But what does this imply for the right and left PFC before the lesion? For both to be part of the NCC, they must have separate roles, otherwise they wouldn’t both be part of a minimally sufficient set. This means that if both were part of the NCC before, lesioning one must have made some difference to the experience. I believe this reconciles Christof Koch’s view (see footnote 12) with the notion of NCC as minimally sufficient neural states. It may also be useful to turn the tables and ask how we would establish that the PFC is relevant for consciousness. It would need to make some difference, and the more the better.
[According to IIT redundant circuits may contribute to the same experience. If one were lost, this should correspond to a loss in “intensity”, not necessarily a change in type. Also, minimal changes may not be introspectable by the experiencing subject. Finally, changes in the background condition may actually affect the experience under certain circumstances even if the substrate remains in the same state. For example, if an evil neuroscientist injected every neuron in the NCC with the precise amount of current to keep it in its present state, this would result in a loss of consciousness according to IIT. All these confounds have to be taken into account wherever relevant. As the authors say in footnote 11, “there may be more complexity to these issues”. This is but one way in which actually applying the mathematical framework to toy examples is directly relevant for our understanding of experimental results.]
Rather than distinguishing NCCs from constituents (which is largely a question of their involvement in consciousness in general or particular instances), it is important to distinguish the actual physical substrate from neural correlates/constituents. Everything else being equal, differences in the substrate should lead to differences in the experience. If, for example, the substrate corresponds to groups of neurons and their average activity levels, the state of the neural constituents (the neurons that make up the substrate) is minimally sufficient, but not necessary due to multiple realizability.
In the end, I am tempted to label the paper “Much ado about nothing”. As with anything, it is important to bring many sources of evidence together and we should strive for the best explanation given the accumulated data. This goes for the role of the PFC in consciousness, and also for IIT as a (strong) theory of consciousness. Most charitably, the authors argue that the time hasn’t come yet for explanations of consciousness. Nevertheless, evaluated correctly, the formalism pertaining to fundamental IIT may shed some light on the possible confounds when it comes to neural correlates/constituents.
Michel M, Lau H (2020) On the dangers of conflating strong and weak versions of a theory of consciousness. Philosophy and the Mind Sciences, 1(II), 8.