Conscious(ness) Realist

Publication Reviews and Commentaries
by Larissa Albantakis


My conscious(ness) biases


As much as I love discussing consciousness, sometimes the conversation seems doomed. Over the years I have learned first to ask what the other person means by “consciousness” and also not to worry too much about convincing anyone of anything right there and then. Consciousness is something we all have, and therefore everyone has their own opinions and intuitions (and often also their own theory). There are also biases at play: neuroscientists tend to be reductionists with a functional perspective on consciousness; physicists are often holists and either imagine consciousness as a property of our universe (emergent or fundamental) or do not really understand what the whole fuss is about anyway (or both); computer scientists think consciousness is computational (duh); philosophers are concerned about dualism or how to escape it… Whether these biases about consciousness are acquired or just covary with choice of occupation is an interesting question in itself.

Rather than overgeneralizing other people’s opinions about consciousness, I thought it might be illuminating to investigate my own biases with respect to consciousness. Below is thus a list of “facts” about consciousness that seem obvious to me but may completely elude other (very smart) people, which, admittedly, sometimes has me quite concerned. While I am confident that I can argue in favor of many of the points below, I am also aware that these are my natural inclinations and I didn’t get there by argument alone, if at all. As always, feel free to comment below.

It seems obvious to me that …

(1) There is something it feels like to be me. Curiously, this definition of (phenomenal) consciousness given by Nagel (1974) just doesn’t work for a considerable number of people. Likewise, referring to the “redness of red” or the feeling of pain in contrast to mechanical nociception often leads to blank stares. At this point, I sometimes start to wonder if approximately half of all people are in fact zombies. Jokes aside, I think that the ability to recognize phenomenal quality as something other than “what is out there” requires the ability to take a step back from the immediacy of the world. To me this has always seemed quite natural; I cannot remember a concrete point in my life where I realized that there is a world within that is dissociable from the world outside. However, getting across what is meant by “qualia” seems to require a real shift in perspective for many people. On this issue, I enjoy listening to Sam Harris’ meditations and I have been struck by how much effort he puts into getting people to grasp the notion of phenomenal consciousness using many different approaches. So if you don’t see what I have been going on about here, I recommend listening to some of these meditations.

(2) Functions can be dissociated from quality. Detecting objects is not the same as seeing them. Nociception is not pain. A lot of what is going on in my brain and body happens outside of my experience. There is no inherent reason why certain functions should come with experiences. In other words, functionalism cannot simply be postulated (and for many reasons, some mentioned below, I think it is false). Conversely, my first-hand experience in itself often just doesn’t seem to have anything to do with performing functions (e.g., I can stare out the window with an otherwise empty mind and just be for a moment or two, without doing anything).

(3) Consciousness comes first. Everything I experience is in my experience. Idealism and even solipsism cannot be proven wrong. Still, I should infer that there is a world out there because of the regularities in my experience, and it is my experience that I should take seriously as the starting point for all that is and can be known. Saying that consciousness doesn’t exist thus doesn’t make sense. It also means that consciousness cannot be “explained away”, even if it can ultimately be explained in physical terms.

(4) There is something physical that corresponds to phenomenal consciousness. It should be possible to map consciousness, with all its properties, onto the physical world. The purpose of a science of consciousness is to identify how we can account for consciousness in physical terms. If at some point in the future we can account for all properties of consciousness, then we will have identified what consciousness is in physical terms. In this view, we cannot ascribe a causal role to consciousness independent of what it is physically. Likewise, consciousness is not epiphenomenal, because it is one and the same as its physical expression (but not reducible to the physical, because consciousness still comes first). It is possible, in principle, that consciousness might lead to quantum collapse (Chalmers and McQueen, 2021), or that there are strongly emergent higher-level causal powers (Rosas et al., 2020). However, if these effects are observable, they can be included in our description of the physical world and are thus not “purely mental”.

(5) There is something to the hard problem. If we started with the physical, without taking consciousness itself into account, we would never infer that any given system has phenomenal experiences. Let’s say, for example, that consciousness indeed collapses the wave function. If we didn’t “know” about consciousness, we would just postulate some additional physical principle that leads to the collapse. No phenomenology required. The same holds for any emergent physical property. As it is, though, we do experience. This means that it is a property of our universe that some systems have phenomenal experiences in certain states. We also happen to be able to reflect on our experiences and to assess their properties (note that this is the case even if our experiences are not perfectly veridical reflections of the world outside). In other words, we can start from our experiences to identify their physical correspondents, which means we can at least try to “fit our subjective experience into our objective description of the world” (this and more on this topic can soon be found in Francesco Ellia’s dissertation). If successful, the question that remains is whether the identified correspondence between consciousness and the physical will make sense to us or not. In principle, even an experimentally supported identity between consciousness and the physical may remain unintelligible (a point Philip Goff makes in the 2nd Mind Chat episode). Here, I am hopeful that the structural identity postulated by IIT between the experience of a system in its current state and its cause-effect structure, if correct, may indeed be explanatory.

(6) Calling phenomenology an illusion doesn’t buy you anything. First of all, it doesn’t make sense (see point (3)), but it also doesn’t resolve any of the problems above and arguably just adds the additional problem of why an illusion (or beliefs, or information, or models of the world, etc.) would ever feel like something. While I’m at it: of course Mary has a revelation when she first experiences red herself. Consciousness is not about knowledge or self-knowledge.

(7) I am conscious when I am dreaming. It follows that consciousness does not require perception, action, or embodiment in the moment. The brain in the vat can be conscious given appropriate background conditions.

(8) Evolution only explains the physical. Evolution may have gotten our brain to be the way it is, but beyond that there is not much of a role for evolution in explaining consciousness.

(9) Meaning must come from within. Correlations between something in the world and something in the brain cannot possibly account for why something feels the way it does, or what it means, for the simple reason that the brain cannot be aware of such correlations.

(10) No higher order functions are required to experience. Again, this is informed partially by my own experiences, which can be blissfully empty of self-reflection at times, but also by a principle of inference to the best explanation: it just seems convoluted to me to require a functional monitoring system in addition to whatever physical substrate can account for all the properties of a given perceptual experience, for example.

(11) There is no “small network” problem. What is the simplest conscious system? Any given theory of consciousness that deems itself complete and is worked out enough to be applied to neural networks has to specify a list of sufficient criteria under which a given neural network should be considered conscious. Most likely, the simplest such system will indeed be very simple compared to a human brain (for example, a network of 10 interacting neurons, or 302 as in C. elegans). This seems to make a lot of people uneasy (see Doerig et al. (2020)). I honestly do not understand this. It is true, as Doerig et al. (2020) point out, that we cannot directly test whether the simple system (or any arbitrary system, for that matter) is indeed conscious. But that’s OK: not every implication of a theory needs to be testable, as long as we can sufficiently constrain the theory under consideration through other empirical (and formal) means. Obviously, “fixing” a theory by imposing additional criteria that push the bar higher, without any empirical reason to do so, is not OK. It is also not necessary to prove (in a deductive or empirical sense) that the criteria provided by the theory of consciousness are in fact sufficient. Instead, sufficiency is established by demonstrating that the theory can indeed account for consciousness and all its properties. [This last point on sufficiency is also missed, among other issues, in the recent criticism of IIT by Merker et al. (2021).]
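To make “applying a theory to a small network” a bit more concrete, here is a minimal sketch using the PyPhi package for integrated information theory. Everything in it is illustrative only: the three-node toy system (an OR, an AND, and an XOR gate, evaluated in state (1, 0, 0)) is a version of the small example network used in the PyPhi documentation, not a claim about any real nervous system, and the calls assume a recent PyPhi release (roughly 1.x).

```python
# Illustrative only: a tiny deterministic network evaluated with PyPhi
# (assumes PyPhi ~1.x; install with `pip install pyphi`).
import numpy as np
import pyphi

# State-by-node transition probability matrix for three binary nodes
# A, B, C with update rules A = OR(B, C), B = AND(A, C), C = XOR(A, B).
# Rows index the past state in little-endian order (node A varies fastest).
tpm = np.array([
    [0, 0, 0],
    [0, 0, 1],
    [1, 0, 1],
    [1, 0, 0],
    [1, 0, 0],
    [1, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])

# Connectivity matrix: cm[i][j] = 1 if node i sends a connection to node j.
cm = np.array([
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])

network = pyphi.Network(tpm, cm=cm)

# Current state of (A, B, C).
state = (1, 0, 0)

# Candidate substrate: here, the whole three-node system.
subsystem = pyphi.Subsystem(network, state, (0, 1, 2))

# System integrated information ("big Phi") for this substrate in this state.
print("Phi =", pyphi.compute.phi(subsystem))
```

Nothing about such a computation proves that the little network is conscious, of course; the point is only that a theory worked out at this level of detail makes its criteria explicit and applicable, whether the system has 3 nodes or 302.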

(12) A capacity for consciousness is necessary but not sufficient for a system to require ethical consideration. Without experience, one might as well not exist. Obviously, that does not mean that it’s OK to murder people in their dreamless sleep or during anesthesia. However, not every conscious system necessarily has the capacity to experience loss, love, or pain, or has plans for the future. These are contents of consciousness, and the contents that a given system can experience should be considered collectively to determine its ethical status.

Enough for today, maybe more to follow.