
Philosophical Limits of Transhumanism

"Transhumanism" refers to a wide array of inter-related commitments, but here I am concerned with the assumption that, in the future, we will transcend our biology by "uploading" our consciousness onto computer hardware. While this is a beautiful idea, as I shall argue, it won't happen.

----

Lieutenant Commander Data from "Star Trek: The Next Generation" seems to represent a future stage of human evolution. His brain is positronic - not biological. As such, he is free of the kinds of biological constraints we face, including degradation and eventual brain death. As technology progresses exponentially, we might hope to become more like him. Indeed, some think that this is inevitable and will happen within our lifetimes.

If we are to be like Data, it must be possible for non-biological hardware to run our mental software. Given the advances in Artificial Intelligence, we might think it is only a matter of time before this happens. To some degree, this might be the case. That is, it seems hard to deny that there will be relatively intelligent software, steps ahead of IBM's Watson. The important issue, however, is whether or not consciousness will accompany this intelligence. It is one thing for a machine to recognize that there is a red apple on the table, for instance. It is an entirely different thing, however, for that machine to have the experience of redness along with the qualitative feeling of knowing what something is.

It is natural to attribute consciousness to things. But this natural tendency doesn't amount to much as an epistemic basis for belief. For example, we might attribute wants and desires to genes when talking about natural selection (e.g., we might say that a gene wants to replicate throughout the population). Apart from this tendency, there is a theory of what it means for something to count as a mental state known as "functionalism" - the dominant theory of the mind since the late 1960s, when the mind-as-software, brain-as-hardware analogy took hold - that might seem to carry more weight. For the functionalist, just as an object is classified as a table by how it functions (and not by what it is made of, like wood), a mental state is defined by what it does, not by what it is made of. So, the idea is that we share the same kinds of mental states with Data, not because we share the same hardware, but because these states are functionally identical. For instance, the belief that there is an apple on the table plays the same role in Data's internal psychology as it does in ours.
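Since functionalism here rides on a software analogy, a code sketch may help make the picture vivid. The following TypeScript is purely my own illustration (the interface, class names, and methods are invented for this example, not drawn from the philosophical literature): two "brains" with entirely different constitutions count as being in the same mental state because they are interchangeable with respect to the role they play.

```typescript
// The functional role that counts as "believing there is an apple on the
// table": whatever state plays this role qualifies, regardless of substrate.
interface BeliefState {
  // Given sensory input, report whether the belief is triggered.
  perceives(input: string): boolean;
  // Produce the behavior the belief disposes its bearer toward.
  act(): string;
}

// A carbon-based realization of the role.
class BiologicalBrain implements BeliefState {
  perceives(input: string): boolean {
    return input.includes("apple");
  }
  act(): string {
    return "reach for the apple";
  }
}

// A positronic realization of the very same role.
class PositronicBrain implements BeliefState {
  perceives(input: string): boolean {
    return input.includes("apple");
  }
  act(): string {
    return "reach for the apple";
  }
}

// For the functionalist, these are the same kind of mental state, because
// the two realizations are interchangeable with respect to the role played.
const agents: BeliefState[] = [new BiologicalBrain(), new PositronicBrain()];
for (const agent of agents) {
  if (agent.perceives("a red apple on the table")) {
    console.log(agent.act()); // both print "reach for the apple"
  }
}
```

Notice that nothing in the interface mentions what a realizer is made of; that omission is the functionalist point.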

There is a plethora of objections to functionalism. The most popular is known as the "inverted spectrum" objection. The idea behind this objection is that it is possible, for instance, that some people live their lives experiencing the color spectrum in a way that is inverted from the way most others see it. For instance, what we experience as redness might be experienced as greenness by an invert. If this is possible, the inversion doesn't seem to amount to any functional difference in behavior or internal psychology, so these experiences cannot be defined by how they function. While this objection has intuitive pull, there are some issues with it that I won't go into here.
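Continuing the earlier sketch, the thought experiment can be caricatured in code - with the loud caveat that this is my own toy illustration and that a string label is, of course, not a quale. Two observers carry inverted inner states, yet every input-output mapping comes out identical, so no functional test can tell them apart.

```typescript
// An inner "quale" label - standing in, crudely, for how a color feels.
type Quale = "REDDISH" | "GREENISH";

class NormalObserver {
  private experience(wavelengthNm: number): Quale {
    return wavelengthNm > 600 ? "REDDISH" : "GREENISH";
  }
  nameColor(wavelengthNm: number): string {
    return this.experience(wavelengthNm) === "REDDISH" ? "red" : "green";
  }
}

class InvertedObserver {
  private experience(wavelengthNm: number): Quale {
    // The inner state is inverted...
    return wavelengthNm > 600 ? "GREENISH" : "REDDISH";
  }
  nameColor(wavelengthNm: number): string {
    // ...but so is the verbal mapping, so behavior is indistinguishable.
    return this.experience(wavelengthNm) === "GREENISH" ? "red" : "green";
  }
}

// Both observers call a 650nm stimulus "red", though their inner states differ.
console.log(new NormalObserver().nameColor(650));   // "red"
console.log(new InvertedObserver().nameColor(650)); // "red"
```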

The objection to functionalism that I find most interesting comes from the philosopher Ned Block (see his "Functional Reduction"). For him, the problem has to do with the fact that functionalism, at its core, is an ontology of the mental that is incompatible with a physicalist ontology. That is, when we ask what something is, fundamentally, we are asking an ontological question. So, when we ask what a mental state is, we want an ontology of the mental (e.g., "in virtue of what is a given state a mental state?"). For the functionalist, a mental state just is a functional state. This ontology is at odds with a physicalist account of what something is. That is, in an ontological mode, when we ask what an electron is, we are not asking what it does, but what it is fundamentally made up of. So, if we are both functionalists and physicalists, we are committed to a dualist ontology: mental states are fundamentally a matter of what they do, while everything else physical is fundamentally a matter of what it is made of. This is just as problematic as it was for Descartes when he posited that the mind was a different kind of stuff from the body.

In order to get around this problem of dualism, we might reject physicalism in favor of a functionalist ontology, as the philosopher Daniel Dennett does. I won't go into why this is a crazy view here, but to be committed to it is to say that, at the end of the day, there is only doing, not stuff. The only other viable route that I see is to reject functionalism. But if we do that, we have no reason to think that functional equivalence implies mental equivalence. So, Data might have certain kinds of states in his positronic brain which function similarly to our mental states, but there is no reason to think that these are the same kinds of mental states - at least as far as conscious mental states are concerned (one might plausibly work out a weak functionalist account of unconscious mental states). This is the price we have to pay, however. And it sucks, because I was looking forward to uploading my consciousness to the cloud.


Comments

  1. This is simply an amazing write-up, Nick. It was refreshing and easy to digest; you really followed a compelling line of thought, and I agree with your findings. Might we ask how you feel about consciousness - what school (functionalist, positivist, Cartesian dualist, etc.), if any, would you commit to? Could you also elaborate on what you mean by "mental software"? Are you arguing that our mental software is made up of, at least in part, epiphenomena?

  2. Here's my super delayed reply :)

    I'm a hardcore physicalist. In particular, I'm an identity theorist about consciousness. So, I think particular kinds of qualitative mental properties just are particular kinds of physical properties.

