Conscious(ness) Realist

Publication Reviews and Commentaries
by Larissa Albantakis

Being You - Part II


Commentary (continued)

This is Part II of my review of “Being You” by Anil Seth. The first part is here.

II: Content

Our perception is generated by the brain. This is the take-home message of chapter II of “Being You” and I urge everyone to embrace it. That meaning comes from within might be the most important insight there is about consciousness and its contents, and Anil Seth makes a great case: “Imagine, for a moment, that you are the brain.” (p. 79) “When trying to form perceptions, all the brain has to go on is a constant barrage of electrical signals which are only indirectly related to things out there in the world, whatever they may be.” (p. 80) Yes!

In developing IIT, our number-one guideline is that we have to “take the intrinsic perspective” of the system itself. As a scientist, it is very easy to lose sight of this principle, because the whole idea of science is to take the role of an outside observer. But consciousness is intrinsic and observer independent. Perception, in this picture, becomes a “waking dream”, “a dream guided by reality”, or, as Seth prefers, a “controlled hallucination”. Personally, I prefer to highlight the dream-like nature of our conscious contents, because it emphasizes that dreams are conscious experiences too.

The brain is highly interconnected. Even neurons in the primary visual cortex, for example, receive most of their inputs from other, higher-level cortical regions, not from the eyes (Muckli & Petro, 2013). Perception is different from imagining or dreaming only in that the resulting experience is triggered in part by some outside stimulus. The neural activity underlying the experience is quite similar in all cases (Horikawa et al., 2013; Siclari et al., 2017).

Here, Anil Seth contrasts the “bottom-up” picture of perception as feature detection with a “top-down” perception-as-inference view. Clearly, our perception is shaped by the structure of our brain’s internal neural network. Perception is context dependent, and the context includes my brain and its current state. This is what happens in a highly recurrent system with certain dynamics. In this sense it should be completely uncontroversial to say that “When I look at a red chair, the redness I experience depends both on properties of the chair and on properties of my brain.” (p. 91). However, Anil Seth’s “controlled hallucination” view also entails that my experience of a red chair “corresponds to the content of a set of perceptual predictions” (p. 91) and this more specific claim should be dissociated from the critical insight that all of our experiences come from within.

There are two parts to the controlled hallucination picture (nicely summarized on p. 111): One is predictive processing, which is a hypothesis about brain function based on the idea that the brain engages in a “continual process of prediction error minimization.” (p. 110) Mechanistically, this means that at every stage in the processing hierarchy, the brain tries to cancel incoming sensory signals through top-down activity. An interpretation is that “the brain is constantly making predictions about the causes of its sensory signals, predictions which cascade in a top-down direction through the brain’s perceptual hierarchies” (p. 87) and only prediction errors make their way to the next level of processing. Whether the brain actually works this way is a subject of ongoing research. To my knowledge, there is not much evidence in favor of predictive processing outside of a few specific domains (e.g., dopamine reward signaling), but that may change.
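To make the mechanistic claim a bit more concrete, here is a toy sketch of the error-passing scheme described above. This is my own illustration, not anything from the book: the two-level linear hierarchy, the simple update rule, and the learning rate are all simplifying assumptions. Each level sends its expectation downward, and only the mismatch between signal and expectation is passed upward.

```python
import numpy as np

def predictive_coding_step(sensory_input, expectations, learning_rate=0.1):
    """One bottom-up/top-down sweep through a toy linear hierarchy.

    sensory_input : 1-D array of "electrical signals" arriving at the lowest level
    expectations  : list of 1-D arrays, one per level (the top-down predictions)
    Returns the per-level prediction errors and the updated expectations.
    """
    errors = []
    signal = sensory_input
    for level, prediction in enumerate(expectations):
        error = signal - prediction          # only the mismatch is registered
        errors.append(error)
        # top-down update: nudge the expectation so it cancels more of the signal
        expectations[level] = prediction + learning_rate * error
        signal = error                       # the next level only "sees" the error
    return errors, expectations

# Toy usage: a constant stimulus gets "explained away" over repeated sweeps.
expectations = [np.zeros(3), np.zeros(3)]
stimulus = np.array([1.0, 0.5, -0.2])
for _ in range(200):
    errors, expectations = predictive_coding_step(stimulus, expectations)
print([np.round(err, 3) for err in errors])   # errors have shrunk toward zero
```

After enough sweeps the expectations “explain away” the constant stimulus and the errors shrink toward zero; that is the sense in which top-down activity is supposed to cancel the incoming signals in this picture.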

The second part of the controlled hallucination account is the “claim that perceptual experience […] is determined by the content of the (top-down) predictions, and not by the (bottom-up) sensory signals” (p. 88). Here is my biggest problem with that claim: it does not make sense from the intrinsic perspective. What are the contents of the top-down predictions? How is there suddenly meaning in the message of the top-down signals, while the bottom-up signals cannot do the job? I thought we had established that all there is is a “barrage of electrical signals” (p. 80). Top-down or bottom-up, both are unidirectional. Why should one be sufficient but not the other? All there actually is, is a bunch of interacting neurons. And that is what the perceptual experience has to arise from.

Let’s back up a step. In defense of the controlled hallucination view, Anil Seth first provides a nice overview of Bayesian inference and suggests that the brain “is approximating Bayes’ rule.” (p. 111). The part I did not follow is how “this connection licenses the idea that perceptual content is a top-down controlled hallucination” (p. 112) if this means that “conscious contents are not merely shaped by perceptual predictions—they are these predictions.” (p. 110)
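For reference, the rule the brain is said to approximate is just the familiar identity below (my own rendering, not notation from the book): the top-down “prediction” plays the role of the prior, and the sensory signal enters through the likelihood.

```latex
% Bayes' rule as it figures in the perception-as-inference story:
% the posterior "best guess" about a hidden cause combines the top-down
% prior (the prediction) with the likelihood of the incoming sensory data.
\[
  \underbrace{P(\mathrm{cause} \mid \mathrm{data})}_{\text{posterior: the best guess}}
  \;=\;
  \frac{\overbrace{P(\mathrm{data} \mid \mathrm{cause})}^{\text{likelihood}}\,
        \overbrace{P(\mathrm{cause})}^{\text{prior: the top-down prediction}}}
       {P(\mathrm{data})}
\]
```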

[As best I can tell, our divergent priors about neural dynamics are important here. Seth suggests that “the brain settles and resettles on its evolving best guess about the causes of its sensory environment, and a vivid perceptual world—a controlled hallucination—is brought into being.” (p. 120). But it is not clear to me that the brain ever really settles. For example, I can flash PowerPoint slides with unexpected images or forms at a relatively high rate and still see the images. Faces and other objects can be recognized within very short time intervals (<50 ms) (Gur, 2018). Do I really only see them once everything has percolated through the entire processing chain and back, and prediction errors are again minimized?]

If conscious contents are predictions, what are these predictions in mechanistic terms? And whose predictions are they anyway? I have yet to receive a satisfying answer to these questions from supporters of predictive processing or the free energy principle. At this point, the notion of a “generative model” is usually invoked. Unfortunately, all that is said in the book is that “[g]enerative models determine the repertoire of perceivable things” (p. 112) and that they “are able to generate the sensory signals corresponding to a particular perceptual hypothesis.” (p. 174) But the picture that somewhere in the brain we have this almost homunculus-like generative model that predicts our perceptions and actions is almost certainly wrong.

Maybe the brain’s neural activity can be described as a generative model because we can interpret some of its activity as conditional information about perceptual stimuli (aka “predictions,” which, by the way, are not necessarily about the future (p. 112)). In fact, the free energy principle (FEP) guarantees as much (see next section). But this is an extrinsic account. Later in the book (p. 191), a footnote mentions the difference between “being” a model and “having” a model. This is critical, and I would say that from the intrinsic perspective there is no such distinction. What is important for consciousness is what the system is, not how we can describe it. Without a mechanistic account of what is actually going on, the term “generative model” is just a metaphor, no better than the metaphor of the brain as a computer. By itself, the claim that our conscious contents are predictions is just as mechanistically empty as a hand-waving reference to software.

That said, the chapter on content ends with several fascinating experimental studies emphasizing how our expectations shape our perception. For example, “people were faster and more accurate at seeing houses when houses were what they were expecting” (p. 124). Yet people only see the house once it is presented, not while they are predicting it ahead of the stimulus. In any case, Anil’s findings about our perception of objects, time, and change have many important implications. To cite just one crucial insight on the topic of change-blindness: “perception of change is not the same as change of perception.” (p. 138)

III: Self

I very much enjoyed the rest of “Being You”. Anil Seth provides a gripping account of our sense of self, the paradoxes surrounding it, and “the many ways in which the self falls apart following disease or damage” (p. 156). Seth makes a convincing case that our notion of self is ultimately a perception, not “the ‘thing’ that does the perceiving.” (p. 153) Moreover, the self is not one thing, but has many different aspects (in contrast to consciousness itself, see Part I). There is the “feeling of being alive” (p. 157), the perspectival self, the volitional self (p. 158), the narrative self, and the social self (p. 159).

While the “experience of being me” (p. 160) is almost by definition a content of consciousness, there is a more general sense in which I do exist as the experiencer, the subject that has subjective experiences right here and right now. And this is an insight that we do owe at least in part to Descartes, even if he was on the wrong side of the debate around the connection between life and mind, as Anil Seth convincingly argues.

In IIT, there is an emphasis on consciousness being right here, right now. Consciousness is not a process across time. Yet we “generally experience ourselves as being continuous and unified across time.” (p. 175) The book provides a nice explanation for our “false intuition that the self is an immutable entity, rather than a bundle of perceptions” (p. 176): change blindness. While “[o]ur perceptions of self are continually changing—you are a slightly different person now than when you started reading this chapter—[…] this does not mean that we perceive these changes.” (p. 176) But while we may go as far as to call the sense of a persistent self an illusion, my existence as an experiencer right here and now is indubitable.

The stability of our experiences over time is due to our self-maintaining physical substrate. Self-maintenance is an essential property (and maybe the defining property) of living beings. In this context, Anil provides a brief overview of the free energy principle (FEP), acknowledging the struggle it takes to make sense of the formalism. There is indeed a lot of confusion about the FEP and its presumed implications. The way I came to understand the FEP (after many struggles of my own) is somewhat different from how it is portrayed in the book. Instead of a principle “even more fundamental” than the “drive to stay alive” (p. 212), in my understanding, the FEP (merely) provides an after-the-fact explanation for why it seems like systems actively minimize surprise. Instead of “I predict myself, therefore I am” (p. 210), at best we can say “I am, therefore I predict myself”. The FEP applies whenever there is a gradient of some kind. But the gradient comes first, and the FEP does not provide an explanation for the gradient itself.

Take, for example, the objection that a living system could minimize its free energy by “retreating into a dark and silent room and staying there, staring at the wall” (p. 210). The reply typically alludes to longer time scales. Instead, I think, the reason this is not what living beings do is that the need to stay alive comes first and the description in terms of the FEP comes after. Of course it wouldn’t stay in the room, because then it would die. As an analogy: whenever something is rolling down a hill, it has angular momentum. But that doesn’t mean that there is a principle of angular momentum that makes things roll down hills. I might be wrong on this, so take it with a grain of salt. As Anil also points out, “it is not necessary to comprehend or accept the FEP in order to follow the story of controlled hallucinations and beast machines” (p. 212).
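For readers who do want a foothold on the formalism, here is the standard identity behind “surprise minimization” as I understand it. This is my own gloss, not notation from the book: o stands for the sensory states, s for the hidden causes, q for the internal “recognition” density, and p for the generative density.

```latex
% Variational free energy F as an upper bound on surprise (negative log evidence).
% Because the KL term is non-negative, F can never drop below the surprise itself.
\[
  F(q, o)
  \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(s, o)\right]
  \;=\; \underbrace{-\ln p(o)}_{\text{surprise}}
        \;+\; \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right]}_{\geq\, 0}
  \;\geq\; -\ln p(o)
\]
```

Since the KL term is non-negative, minimizing F can at best track the surprise from above, which is at least compatible with reading the FEP as a redescription of what a self-maintaining system is already doing rather than as the force that drives it.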

IV: Other

At this point my review is already much longer than I had anticipated, so I’ll keep the rest short. As I said initially, I very much agree with Anil Seth’s perspective on other minds and his concerns around machine consciousness. While it may seem trivial, it needs emphasizing that report is not the way to go when judging whether another being is conscious or not. Even though “[i]ntelligence is not irrelevant to consciousness”, “[c]onsciousness and intelligence are not the same thing. Using the latter as a litmus test for the former commits a number of errors” (p. 238) (as I have also argued before).

“Not only can consciousness exist without all that much intelligence—you don’t have to be smart to suffer—but intelligence can exist without consciousness too.” (p. 254) Yet, “[in] the nearish future, it is entirely plausible that developments in AI and robotics will deliver new technologies that give the appearance of being conscious, even if there are no conclusive reasons to believe that they actually are conscious.” (p. 264)

For extrapolation we should rely on shared mechanisms, not superficial similarity. Whether consciousness corresponds to the mechanisms associated with top-down predictions, or the integrated information of neurons that interact in specific ways, the case is strong that consciousness is indeed “more closely connected with being alive than with being intelligent.” (p. 239)

Conclusion

Go read the book. Then, if you like, come back to compare your notes with mine.


Seth, A. (2021). Being You. Dutton.