What's the German for phenomenology?


Neuroscience is in the business of studying how neural states cause feelings, thoughts, and emotions. It sounds rather straightforward. We can measure brain activity when people feel a variety of emotions, and we can induce fear responses in a mouse by beaming a laser into the right brain area at the right time. You would think that a science as technologically advanced as neuroscience would be successful in finding biomarkers for mental distress. However, a recent large review of the evidence for common biological markers (e.g., brain morphology, activation patterns) for a wide array of mental health difficulties has suggested there are no convincing biomarkers for diagnosing mental health disorders. The authors point to most studies being underpowered as the reason we still do not have biomarkers for mental disorders1. I find this a rather mundane explanation. Linking mental states to biological causes is a theoretical/epistemological issue more than a methodological problem.

There are at least two main issues that make solving the problem of how neural states cause subjective states difficult, if not intractable.

  1. The language problem
  2. The subjectivity problem

I do not mean for these problems to be construed as some obscurantist bullshit. I am a neuroscientist, and if you ask me whether my brain is responsible for my current conscious state, or for the writing of this pseudophilosophical ramble, I would answer with a resounding yes. It is clear to anyone with a brain that this is true. But it has also become a truism that the brain causes behaviour, emotion, and subjective experience. To ask whether the brain causes these subjective states is a pedestrian question. The important question to ask is how.

I also don’t entertain dualistic nonsense, proclamations that there is an ethereal mind and a physical body. What I do assert is that I have subjective states that you do not have access to, which makes it difficult to link biological states with my internal states. The only way for me to communicate anything about these states to you is through language. But to relate these descriptions back to the brain, our words would have to directly reflect brain states.

The language problem.

Schadenfreude: the pleasure we feel at someone else’s misfortune. There’s no word for this feeling in the English language, yet most of us have felt it at some point. For English speakers, it is a feeling that is as good as ineffable without this German loan word. Not having a word for something makes it difficult to describe and refer to a concept. Schadenfreude shows us the constraints of a particular language, in this case English. Concepts, categories, and consequently ideas are constrained by the words available within a language.

A fundamental proposition in neuroscience is that brain states give rise to feelings. The language we use to describe the feelings, the qualia, we are trying to biologically explain determines what we end up finding. So, let’s imagine you want to correlate feelings of sadness with brain activation in an fMRI scanner and you put me in your scanner. You rely on my subjective report of feeling sad, my own internal understanding of what it means to feel sad. However, the idea that the word sad directly corresponds to a neural state of sadness is an unprovable axiom.
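To make the hidden assumption concrete, here is a minimal, purely illustrative sketch of what such an analysis typically reduces to. The numbers are fabricated and the variable names are hypothetical; the point is only to show where the word-to-brain-state premise enters.

```python
import numpy as np

# Hypothetical data: 40 trials in the scanner.
# self_report[i]    = my rating of how sad I felt on trial i (1-7 scale)
# roi_activation[i] = mean signal change in some region of interest on trial i
rng = np.random.default_rng(0)
self_report = rng.integers(1, 8, size=40).astype(float)
roi_activation = 0.1 * self_report + rng.normal(0, 0.5, size=40)  # fabricated for illustration

# The whole analysis reduces to a correlation between a word-mediated
# rating ("how sad do you feel?") and a biological signal.
r = np.corrcoef(self_report, roi_activation)[0, 1]
print(f"r = {r:.2f}")

# Hidden premise: that a rating anchored to the word "sad" is a faithful
# readout of the neural state being imaged. The correlation can be
# computed either way; it cannot tell us whether that premise is true.
```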

Using words to describe our subjective states is extremely useful in serving a social purpose: we can talk to each other. But it creates a potentially intractable problem for neuroscience, if the goal of neuroscience is to find the biological basis of conscious states. The categorical definition of an internal state needs to be absolutely precise and well-defined, but the term itself also needs to directly refer to and capture an internal subjective state. There’s nothing special about the English language that gives its terms direct correspondence to brain states. Why would it? How would the users of a language have direct knowledge about brain states such that words describing both brain and subjective states would come into use? In this regard, our words only describe (for communication purposes) internal subjective states, not subjective and brain states concurrently.

Most scientists believe that an external reality exists outside of their own consciousness. This external reality can be directly referred to with words. Atomic facts2 are the simplest statements of truth that can be made about physical relations, and they allow us to construct more complex facts and truths about the world. Quarks and antiquarks give rise to subatomic particles, which in turn give rise to an atom. A statement about quarks and antiquarks making up matter would thus be an atomic fact. Physical statements about the world can be broken down until they reach the most elemental unit of physical fact. A car is an object made of atoms, which are made of subatomic particles, which are made of quarks and antiquarks. (Although the argument has been made that language does not correspond to physical phenomena; see, e.g., Wittgenstein’s Philosophical Investigations.)

Are there elemental facts that we can state about sadness? Can sadness be broken down into smaller and smaller constituents until we reach the elemental units that make up subjective states? I am not so sure this is possible. Do smaller constituent parts of a subjective state even exist? And how would this work if we are not referring to a physical state in the world, but to a subjective one? I am not sure what empirical facts can be brought to bear that would determine how to break down a feeling, such as sadness, into its smaller constituent parts. It becomes very difficult to find a way to talk about subjective states without defaulting to the phenomenology of the state and conceding that there are no parts to summate and all we know is the whole, the feeling of the state.

We end up converging on what seems like a difficult, if not intractable, problem. Words, in their current usage, cannot directly correspond to both subjective and brain states concurrently.

When speaking about DSM-diagnosable disorders (on which many searching for biomarkers rely), the issue of language becomes even trickier. We are dealing with a collection of subjective and physical states that make up a diagnosis. The original spirit of the modern DSM (DSM-III in particular) was to describe clusters of symptoms, different subjective feelings and physical symptoms that occur consistently across people. For example, the feelings of sadness, despair, hopelessness, and fatigue that typically occur among people with depression. If a word like sadness does not have direct correspondence to a biological state in the brain, it seems a completely preposterous position to claim that a word describing a cluster of subjective states would have any correspondence to brain states.

There has been a push to pretend that we get further by treating these syndromes as belonging on a spectrum. But once you are using words, you are using categories. Shifting the basis of diagnosis to a spectrum, as opposed to a category, is a language game attempting to confer validity.

The problem may extend beyond correspondence between words and subjective states to an epistemological issue. We try to act as reductionists when our biologist hats are on, but have no way to act as reductionists when the psychologist hat is on. In this way, there is a mismatch in epistemology: we can study the brain in its smaller units, but have no commensurate way to study subjective states in their smaller, more fundamental units.

The subjectivity problem.

What does it mean if I say I feel sad? I know what I feel like when I am sad, but I cannot ever feel what you feel if you say you are sad. Because of this, I clearly cannot feel or truly understand your feeling of sadness. By their nature, conscious states are subjective. I have only ever felt my own qualia, and never yours.

Early in my Ph.D. I read Nagel’s What is it like to be a bat? I love this paper. Nagel wrote it in 1974 and it is still relevant now. I’ll let Nagel state the problem and do the talking for me:

“Our own experience provides the basic material for our imagination, whose range is therefore limited. It will not help to try to imagine that one has webbing on one’s arms, which enables one to fly around at dusk and dawn catching insects in one’s mouth; that one has very poor vision, and perceives the surrounding world by a system of reflected high-frequency sound signals; and that one spends the day hanging upside down by one’s feet in an attic. In so far as I can imagine this (which is not very far), it tells me only what it would be like for me to behave as a bat behaves. But that is not the question. I want to know what it is like for a bat to be a bat. Yet if I try to imagine this, I am restricted to the resources of my own mind, and those resources are inadequate to the task.”

Nagel’s idea here is that we can only imagine what it feels like to be a bat from our own point of reference, never the bat’s. What Nagel points out is that if this is true for a bat, it is true for people and all other animals too. Animal research is the bread and butter of neuroscience. We design a lot of tasks to induce fear, pleasure, or some other subjective state. But how do we measure this? A rat freezes upon being shocked and must thus be experiencing fear; a rat drinks more of a sucrose solution and must therefore be experiencing pleasure. Nagel’s work would suggest that we design tasks that would induce fear or pleasure from the point of view of a human, but not necessarily a rat or a mouse. I don’t think it’s a failure to take a step back and say we are measuring freezing behaviour rather than fear, or sucrose drinking rather than pleasure. But most of us go beyond that and claim that we are somehow measuring the subjective states of fear or pleasure being experienced by the rat.
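To put the operational point in concrete terms, here is a toy sketch of what the measurement itself actually consists of. The trace and the threshold are hypothetical, invented only for illustration.

```python
import numpy as np

# Hypothetical behavioural trace: frame-by-frame movement of a rat after a
# shock, sampled at 30 Hz for 60 seconds (values fabricated for illustration).
rng = np.random.default_rng(1)
movement = rng.exponential(scale=1.0, size=30 * 60)

# Operational definition: the animal is "freezing" on any frame where
# movement falls below an arbitrary threshold.
FREEZE_THRESHOLD = 0.2
freezing_fraction = np.mean(movement < FREEZE_THRESHOLD)

print(f"Time spent freezing: {freezing_fraction:.1%}")

# What this number measures is freezing behaviour, full stop. Calling it
# "fear" imports an inference about the rat's subjective state that the
# measurement itself cannot license.
```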

This leaves me in a difficult spot, because as much as I want to infer the subjective state of the rat, I cannot. And I wonder whether I have to revert to a kind of black-box behaviourism that disregards all the interesting phenomena happening inside the black box.

I do need to caveat the above with the following. There are many tasks and experimental designs that appear (from a human’s viewpoint) to be ecologically relevant to an animal and have an intuitive face validity. Exposing an animal to the sound or scent of a predator seems like a reasonable way to induce something that resembles fear. But I do believe the overall point, that we cannot escape our human viewpoint when explaining the subjective states of animals, holds regardless of this caveat.

Adding complexity does not solve the problem.

When Nagel wrote this, modern neuroscience was in its infancy. The “reductionist euphoria” about describing mental states with neuroscience, as Nagel puts it, had just about figured out the basic properties of synaptic transmission. We’ve learned far more since then, and the pace is increasing every year. Our conception of how the brain works is far more complex and detailed than Nagel knew, and more than Ramón y Cajal could ever have imagined. The spotlight on single neurons has now expanded to large circuits of neurons that are meant to explain subjective states.

However, moving from neuron to neural circuit does not add explanatory power in transforming a brain state into a subjective state. It only moves the goalposts, because it leaves us with the same problem: how does any kind of neural activity (a single neuron firing or the activation of a circuit) translate to a phenomenological state? Without adequate theory, we are just stamp collectors of brain facts, accumulating correlations between biological changes and subjective states that may or may not correspond to the brain.

Neuroscience becomes a lot more meaningful if we have a way to connect the brain to feeling. If we can directly attribute brain states to subjective feelings (however they may be defined), we stand in a far better position to link feelings like chronic sadness, and their constituent parts, to neural activity.

It’s also very possible that these are not intractable problems and that I am just extremely cynical. But as things stand, we do not have a cohesive theory that links the firing of a neuron or the activity of a circuit to a subjective state. The best we can do is say that it is an emergent property of brain activity, invoking some spooky middleman that translates neural signals into phenomenology.

I am sceptical that experiments alone will solve these issues. The time invested in experiments might also be a distraction from creating more meaningful theories about how the brain relates to subjective states. We need a better and more precise language to define the phenomenological states we want to explain: definitions adopted across the discipline, so we can at least all measure the same constructs.

Maybe it is time to stop pretending that the linguistic constructs we use are synonymous with neural and subjective events, until we have better reason to believe they are.


  1. I do not like to refer to these phenomena as mental disorders. I disagree with the terminology because I find it dehumanising to suggest that people’s experiences, thoughts, and emotions are “disordered”. That is absolutely not to say that they do not cause significant distress to people carrying these burdens. Mental health difficulties by definition cause distress to those who suffer with them. Thus, I will refer to them as mental distress or mental health difficulties. ↩︎

  2. I’m somewhat wary of using this terminology because I am way out of my depth and not well-versed in analytic philosophy. But I chose this term because it is a useful way to talk about how language might directly correspond to a physical entity. ↩︎
