The Functioning Theory of Consciousness

 

by Bret Alan Hughes

 

 

Consciousness is purported to be one of the most difficult problems in philosophy. However, I believe that a square look at the scientific facts and an unbiased application of common sense resolve the root of the quandary. As described in this paper, the functioning theory extends materialism and functionalism by:

 

1. Answering the “what am I” question for each of us humans,

2. Identifying what consciousness physically is (solving the mind-body problem), and

3. Providing solutions not only to some “easy problems” of consciousness—the objective capacities for understanding, interpretation, basic awareness, sensation, and qualia—but also to the “hard problem”: phenomenal consciousness.


A key approach, I believe, to understanding and explaining consciousness is to recognize and accept what generally accepted science says about the human body and its consciousness. For example, science tells us that each of our bodies is a collection of individual cells that naturally function and interact through biology—that is, through physical and chemical processes. Science also indicates that our consciousness is destroyed at death.

Functionalism describes the mind as a function of the patterns of neurological activity in the brain. However, it is unclear whether a function of something can be destroyed. Is it at all clear, for example, that a function of a computer can be destroyed? Hence, functionalism’s description of consciousness is, at minimum, vague.

The functioning theory, on the other hand, identifies consciousness as a specific functioning of the brain. This distinction fixes the above problem with functionalism by making clear that, just as a functioning of a computer can be destroyed by permanently mangling, breaking, or stopping the physical processes of the system’s matter, so can the functioning, and thereby the consciousness, of the brain be destroyed.

A shortcoming of many other treatments of consciousness is that they do not identify what each of us most essentially is. Without such identification, the quest to solve the mind-body problem becomes nebulous, and its goal becomes exceedingly slippery. This shortcoming reveals itself when there is no identification of what consciousness physically is: there is no firm boundary line between what is and what is not consciousness.

However, what snaps this dividing line into place is a simple recognition: you are, most essentially, your consciousness. In combination with the understanding that your consciousness is a functioning of some, but not all, of your body (for example, the functioning of your left big toe is obviously not fundamentally relevant to your consciousness), we arrive at the foundation of the functioning theory:

 

You are, most essentially, your consciousness: a specific functioning of the cognizant and cognizant-related parts of your brain (primarily the thalamocortical system).

 

This foundation might sound too general or too high-level, but many things do not have a simple, low-level essence. For example, a desk cannot be broken down to a single molecule, nor can a rainbow or a volcano. Things are usually best described at a scale appropriate for their definitions. Consciousness is no exception.

This foundation is the basic solution to the mind-body problem—a solution that has been gaining more and more scientific and general acceptance. The roots of this solution have been around for well over a hundred years. In 1890, William James published The Principles of Psychology, in which he wrote that “consciousness … ‘corresponds’ to the entire activity of the brain.” By the year 2000, Gerald Edelman, Nobel Prize winner and Director of The Neurosciences Institute, not only wrote that “with consciousness we are what we describe” but also described “consciousness as a physical process”.[1]

This solution to the mind-body problem, however, needs to be supplemented by explanations of how a functioning, a physical process, can have the objective capacities for understanding, interpretation, basic awareness, sensation, and qualia as well as have phenomenal consciousness. Only thus can we feel confident about the validity of this solution.

One way to provide such validation is to show how a computer’s functioning could have these capacities. One reason this explanation is convincing is that generally accepted science has said that the brain acts as an extremely complex biochemical machine, very much analogous to an extremely sophisticated computer. In fact, at a fundamental level, both the brain and the computer operate using essentially binary mechanisms: the neuron and the bit. Although the computer and the brain each have their own specialties in which they operate most efficiently, each can duplicate the other’s functionality.

Now, before I continue, I will dispel a common roadblock in understanding consciousness. This roadblock occurs when someone tries to make inseparable, in one way or another, the various aspects of consciousness—such as trying to mix up the meanings of various words with some flavor of phenomenal consciousness. This creates a nearly impossible requirement of an all-or-nothing, all-at-the-same-time proof for consciousness.

To break that roadblock, I make clear the separation between the objective capacities for understanding, interpretation, basic awareness, sensation, and qualia on the one hand and phenomenal consciousness, the “hard problem,” on the other. While it is true that all these components are combined in our consciousness, it is unproductive and unreasonable to reject the isolation of these components. Identifying and isolating components in order to better understand and explain a complex process or object is a standard scientific method.

Now, with that roadblock out of our way, let’s first examine whether it is possible for a computer to have the objective capacity to understand. To determine this, I here provide an objective definition of understand, with any phenomenal aspects removed.

 

Understand: to take, retain, and recall meaningful information associated with an end.[2]

 

Of course, (the functioning of) the human brain can understand. But consider what science tells us the human brain is: a collection of association neurons—that is, things that associate. In other words, besides inputs from senses and outputs to glands and muscles, the brain is in the information business. Awareness of this fact clarifies the ring of truth in the above abridgement of understand.

Consider a computerized system that analyzes light by means of an attached instrument. Let’s say this system is programmed to vocalize “yellow” if the analyzed light is in the appropriate frequency range for yellow. This system, the functioning of a physical system, could then be said—by the above abridgement—to objectively understand something about yellow: such as how to objectively identify yellow light.
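
To make this concrete, here is a minimal Python sketch of such a system. The frequency range for yellow and the vocalize routine are my illustrative assumptions, not details from the description above.

    # A hypothetical light analyzer: vocalize "yellow" when the measured
    # light frequency falls within an (approximate) yellow range.
    YELLOW_RANGE_THZ = (505.0, 530.0)  # assumed frequency range for yellow, in terahertz

    def vocalize(word):
        print(word)  # stand-in for a speech output device

    def analyze_light(frequency_thz):
        low, high = YELLOW_RANGE_THZ
        if low <= frequency_thz <= high:
            vocalize("yellow")

    analyze_light(517.0)  # prints: yellow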

Note that the above abridgement of understand does not require some sort of full, perfect, or complicated understanding. These are not at all needed to have an understanding of something.

Truly, do you know everything there is to know about yellow—such as the exact frequency range of its light, the possible frequency combinations humans interpret as yellow, and all the things that are yellow? Of course you don’t. In a similar vein, you can be fooled about what is yellow. An object’s yellow light could be masked to you by a much higher intensity of another color of light or by a combination of other light frequencies. However, you do have some understandings of yellow, as is in accord with the above abridgement of understand.

Both the computer and (as noted before) the brain are in the information business. More specifically, these physical systems operate through abstractions and associations. In a computer, an abstraction might be implemented by the setting of a program variable—that is, a physical memory setting. In the brain, an abstraction might be implemented by the potential activation pattern of one or more specific neurons. An abstraction can be an input, such as perceptual; an output, such as muscular or glandular; or an internal representation, such as of an attribute, an entity, a property, a classification, a grouping, or a symbol.

These representations can be implemented in many ways—both for a computer and for a brain. A computer can hold and use a representation of an image or sound, a verbal or abstract description of an object or a relationship, etc. This is how, to illustrate, the computerized system was programmed to objectively identify yellow light: by means of a functional representation of yellow light. Similarly, a brain can hold and use many types of representations, in sophisticated functional ways.

Associations, on the other hand, are implemented in both a brain and a computer through programmed circuitry: through a program. For the above computerized system, humans programmed the (direct) associations between two of the computer’s abstractions—that of the identification of yellow light and that of the vocalization of “yellow”—and the computer’s abstraction of yellow. For you, evolution and experience programmed the various associations between your brain’s abstraction of yellow and many of your brain’s other abstractions.[3] In short, it is through programmed associations between abstractions that both you and a computerized system have the objective capacity to understand yellow.
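
As a toy illustration of this point, the sketch below encodes abstractions as named nodes and programmed associations as links between them; the node names and the simple two-level activation are my own simplifications, not a claim about how any real system is built.

    # Programmed associations between abstractions (all names are illustrative).
    associations = {
        "yellow-light-detected": ["yellow"],             # perceptual input -> internal abstraction
        "yellow": ["vocalize-yellow", "sun", "banana"],  # internal abstraction -> outputs and concepts
    }

    def activate(abstraction, depth=2):
        # Follow programmed associations outward from an activated abstraction.
        if depth == 0:
            return
        for linked in associations.get(abstraction, []):
            print(abstraction, "->", linked)
            activate(linked, depth - 1)

    activate("yellow-light-detected")
    # Traces: yellow-light-detected -> yellow, then yellow -> vocalize-yellow, sun, banana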

As indicated by the third dictionary definition of understand given in footnote 2, understanding is very closely related to interpretation. The above computerized system could be said to interpret light in a certain frequency range as yellow. However, if the system were programmed to vocalize “blue” instead of “yellow”, then this system would be said to interpret yellow light as blue.

Consider a similar situation for yourself. Let’s say your experiences with the words blue and yellow had somehow always been swapped for you. In this case, you would also interpret yellow light as blue. This interpretation is based on your associations with the physical manifestations of yellow and the word you use to describe them, blue.

An interpretation is not inherent in an abstraction by itself; an interpretation comes from the activation of a functional association of abstractions. An interpretation can be arbitrarily complex, activating intricate webs of stronger and weaker excitatory and inhibitory associations between numerous abstractions. The act of interpreting can accumulate as short-term effects within neurons and long-term effects in the associations between neurons. Both of these effects can, in turn, modify future interpretations and future activations of abstractions. Accordingly, the resulting interpretations and neural effects are particular to how the activations of abstractions and the lower-level interpretations combine and interact over time: not only the specific activations are important but so are the order and timing of those activations.
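
As a crude sketch of such webs (ignoring order and timing for brevity), the toy code below treats positive weights as excitatory associations and negative weights as inhibitory ones; every name and number is an illustrative assumption.

    # Weighted associations: positive = excitatory, negative = inhibitory.
    weights = {
        ("sees-stripes", "tiger"): 0.5,
        ("sees-stripes", "zebra"): 0.5,
        ("hears-roar", "tiger"): 0.75,
        ("hears-roar", "zebra"): -0.25,  # a roar inhibits the zebra interpretation
    }

    def interpret(active_inputs):
        # Accumulate weighted evidence from the active input abstractions.
        totals = {}
        for inp in active_inputs:
            for (src, dst), w in weights.items():
                if src == inp:
                    totals[dst] = totals.get(dst, 0.0) + w
        return totals

    print(interpret(["sees-stripes", "hears-roar"]))
    # {'tiger': 1.25, 'zebra': 0.25} -- the "tiger" interpretation wins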

To illustrate, consider some simplified examples of how you interpret language. When you read words, hear words, or verbally think, high-level abstractions of particular sounds get activated in your brain. The temporal sequence of such activations gets interpreted together as words, higher-level interpretations. At this point, an abstraction of blue, dog, bravery, or 7 might get activated and interpreted.[4] The temporal sequence of words (and pauses) further gets interpreted as phrases, sentences, paragraphs, etc.—even higher levels of interpretation.

Your brain is programmed to make all sorts of interpretations. In fact, pretty much all your received perceptual information is interpreted. Your brain interprets binary inputs from your sensory neurons as specific colors, tastes, smells, etc. For example, a high frequency of binary impulses from sensory neurons in your eyes is usually interpreted as bright light.

At a high level, the receiving of perceptual information can be interpreted as perception. This interpretation might be implemented in the brain using associations between perceptual input abstractions and the brain’s abstraction of perception. For a computer, perceptual information could be from any input, like a camera or a keyboard. Receiving this input could also be interpreted as perception by associating the receiving of the input with an abstraction of perception (a perception identifier) in the computer.

Your brain is programmed to be able to interpret a combination of discrete information as a synthesized whole. For example, you may interpret the light from the individual pixels of a monitor or TV as a singular image or even as a moving picture. You may interpret the movement of a laser’s light on a wall as a shape or as a moving shape. By just blindly touching an object, you can mentally construct a three-dimensional representation of the object.

Likewise, your brain is programmed to synthesize a representation of self. This is essentially a set of abstractions associated with your abstraction of self. Just as receiving perceptual information is (appropriately) interpreted as perception, so are various processes of your brain (appropriately) interpreted as analyzing, planning, reasoning, dreaming, calculating, etc. Although your representation of self is more of a program state, a snapshot of a running program, it can be thought of as a folder labeled “self” that contains information about your current state: what your perceptions are, what you are doing, what you are thinking about, etc.

Interestingly, one of the processes associated with your abstraction of self is the updating of your representation of self. This updating gives your consciousness the objective capacity for basic awareness. To put it succinctly, your basic awareness is from the updating of your representation of self. This process can be thought of, simply, as the updating of the information in your “self” folder.
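
The sketch below renders that folder as a simple mutable record, with an update routine standing in for basic awareness; the field names, and the tagging of inputs as perceptions, are my illustrative assumptions.

    # A toy "self" folder: the current state of the organism or system.
    self_model = {
        "perceptions": [],  # what I am currently perceiving
        "activity": None,   # what I am currently doing
        "thoughts": [],     # what I am currently thinking about
    }

    def update_self(perception=None, activity=None, thought=None):
        # Updating the representation of self -- the source of basic awareness.
        if perception is not None:
            self_model["perceptions"].append(perception)
        if activity is not None:
            self_model["activity"] = activity
        if thought is not None:
            self_model["thoughts"].append(thought)

    update_self(perception="warmth-on-skin", activity="walking")
    print(self_model)  # the updated "self" folder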

Let’s say, to illustrate, you get kicked in the shins. Because of your objective capacity for basic awareness, your representation of self will include the perceptual information from your shins about their pain. More specifically, there is an activated association between your abstraction of self, the abstractions from this sensory input, and an abstraction of pain. Hence, your consciousness interprets that you feel pain—because that pain-associated sensory information is included in your representation of self, within that folder labeled “self”.

The reason the objective capacity for basic awareness exists, of course, is to allow an organism to internally represent its state, so that the organism has the capacity (even when perception is disrupted or incomplete) to behave so as to benefit itself as a whole—that is, strategically, as an individual multicellular organism. For example, based upon the objective capacity for basic awareness, an organism can avoid using a hurt leg in general but analyze whether it can use the leg to run away from a predator. The capacity for such strategic processing has, of course, been very evolutionarily important.

You might be wondering how the brain can take various factors into account and decide what to do. The answer is similar to a cost-benefit analysis. Just as a computer can functionally value something—associate it with a number in a certain context—so can your brain, with an intensity abstraction in a certain context. Various abstractions can combine, each weighted by its positive or negative intensity value, to evaluate the net cost-benefit of associated abstractions. These results can then be used to decide or choose between alternative situations or scenarios. Such decisions can be arbitrarily complex and can be used at many different levels.

This type of functional valuation is why pain abstractions within your representation of self are important to you. Evolution and experience have programmed your brain to functionally value things, like pain; to functionally value certain things more than others; and to functionally value things flexibly, according to situation and/or analysis.

To illustrate, imagine some creature living millions of years ago. Let’s say this creature found itself in a position to either travel through a thorn bush or around it (without any trouble). By functionally valuing pain negatively, this creature would have been more apt to travel around the thorn bush. Since traveling through the thorn bush could have caused significant damage to the creature, such as from infection, the creature would have been more likely to survive and perpetuate its genetic (and mental) characteristics through such functional valuation and analysis.
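
A toy sketch of that thorn-bush decision, using the signed intensity values described above (the specific abstractions and numbers are my illustrative assumptions):

    # Signed intensity values for abstractions (illustrative).
    intensity = {"pain": -5.0, "detour-effort": -1.0, "reach-destination": 3.0}

    # Each option is associated with a set of abstractions.
    options = {
        "through-thorn-bush": ["pain", "reach-destination"],
        "around-thorn-bush": ["detour-effort", "reach-destination"],
    }

    def net_value(option):
        # Net cost-benefit: the sum of the intensity values of the
        # option's associated abstractions.
        return sum(intensity[a] for a in options[option])

    best = max(options, key=net_value)
    print(best)  # around-thorn-bush (net value 2.0 versus -2.0)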

Similarly, let’s say this same creature later found itself face-to-face with its perfect mate while grazing in a field. If our creature did not functionally value mating, it might not reproduce and perpetuate its genetic characteristics. Obviously, things like pain, mating, and eating are evolutionarily very important to many creatures, so one would expect evolution to have programmed the functional valuation of these things, perhaps inherently, into the brains of such creatures.

As has been shown thus far, a computer can objectively interpret and value as well as make decisions based on cost-benefit analysis. By just associating direct perceptual information with an abstraction of the computer’s self, the computer would have a representation of self. While updating this representation of self, the functioning of the computer would have an objective basic awareness. (Further, the computer’s functioning could then even interpret this updating as basic awareness.)

But what about sensation and qualia? Could a functioning, especially that of a computer, have the objective capacity for these things? Let’s tackle them one at a time.

First, to tackle the objective capacity for sensation, we need to identify sensation’s underlying source. So, what precisely is sensation from? As you can recognize for yourself, sensation is from the direct comparison of over-time perceptual information. For example, comparing your current perceptual information with your previous perceptual information (by means of memory and analysis) could give you the objective capacity to interpret that you are falling, being pinched, being caressed, or being warmed by the Sun.

Comparing and interpreting over-time perceptual information allows an organism to internally represent how its state is changing over time. Thus, instead of only being able to react to its current state, an organism can be more sophisticated in its analysis of its situation and thereby benefit its survival, replication, and perpetuation. Obviously, these benefits are very evolutionarily important.

A computer could, of course, store and compare over-time perceptual information and interpret the comparison’s results. Thereby, a computer’s functioning could be programmed to have the objective capacity for sensation.
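
For instance, here is a minimal sketch of sensation as a comparison of over-time perceptual information; the skin-temperature readings and thresholds are purely illustrative assumptions.

    from collections import deque

    history = deque(maxlen=2)  # memory of recent perceptual information

    def sense_temperature(reading_c):
        # Interpret the change between successive readings as a sensation.
        history.append(reading_c)
        if len(history) < 2:
            return "no sensation yet"
        delta = history[-1] - history[-2]
        if delta > 0.5:
            return "being warmed"
        if delta < -0.5:
            return "being cooled"
        return "steady"

    print(sense_temperature(31.0))  # no sensation yet
    print(sense_temperature(32.0))  # being warmed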

Next let’s determine whether the functioning of a physical system could have the objective capacity for qualia. Like sensations, qualia are inherently tied to memory. However, qualia are more inclusive and can involve much more complex analysis. Qualia provide a sense of what it is like to be the organism itself: they provide a set of current states relative to previously experienced states—a set of statuses. For you, to illustrate, qualia involve perceptions of yourself: what you are doing, where you are, what you are perceiving, how you are feeling (emotional states), etc.

To identify their underlying source concisely: qualia come from comparisons of your over-time representation of self. Hence, qualia provide you with not only a memory of what it is like to be yourself but also an ability to construct the continuity, or storyline, of your life. By programming a computer to create and compare its over-time representation of self as well as to interpret the comparison’s results, a computer’s functioning could thereby have the objective capacity for qualia.

Consider a robot that travels around on some distant planet. If this robot interprets its location from perceptual input and keeps in memory how much it travels each day, then the robot can determine how much it usually travels in a day (by taking an average) as well as analyze (and respond) when it has traveled much more or much less than it usually does. In fact, such comparisons could be made at any time, possibly by means of interpolated or complicated analysis. Hence, the robot would have an objective capacity for qualia in regard to this aspect of itself. The more aspects taken into account, the more qualia the robot would have the objective capacity for.
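
A minimal sketch of that robot’s comparison, assuming recorded daily distances and simple thresholds (all numbers are illustrative): comparing today’s state of self against remembered states yields a status.

    daily_distances = []  # the over-time record of one aspect of the robot's self

    def record_day(distance_m):
        # Compare today's travel against the running average of past days.
        if not daily_distances:
            status = "no history yet"
        else:
            usual = sum(daily_distances) / len(daily_distances)
            if distance_m > 1.5 * usual:
                status = "traveled much more than usual"
            elif distance_m < 0.5 * usual:
                status = "traveled much less than usual"
            else:
                status = "traveled about the usual amount"
        daily_distances.append(distance_m)
        return status

    for d in [100.0, 120.0, 40.0]:
        print(record_day(d))
    # no history yet / traveled about the usual amount / traveled much less than usual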

So far, this paper has shown that a computerized system can have the objective capacities for understanding, interpretation, basic awareness, sensation, and qualia. Such objective capacities are the so-called “easy problems” of consciousness. With this groundwork laid, we are now ready to turn to the “hard problem” of consciousness: phenomenal consciousness.

Clearly, the capability of organisms to deal with complexity is evolutionarily important. But how can an organism deal with arbitrarily complex situations? If an organism identifies a predator, for example, should the organism run or hide? Could the predator see or hear the organism if it tries to run or hide? Maybe the organism should just freeze or be unconcerned. The answers to such questions depend on a potentially very large and complex array of direct and indirect factors. So, how can an organism bring all this information together and sensibly deal with arbitrarily complex situations? Evolution’s answer is consciousness.

Over time, consciousness—a specific functioning of the brain—receives a coherent representation of the organism’s comprehensive situation by means of activated abstractions in certain parts of the brain, takes into account related associations between various abstractions to interpret relativistic meaning, and correspondingly produces high-level associations and activations. Thereby, consciousness (this specific functioning of the brain) can take in (get affected by) great complexity and, based upon evolved and learned information (settings and processes), make appropriate high-level decisions and actions for the organism.

Using objective capacities, evolution and experience have designed how the different aspects of your situation and self are conveyed to (interpreted by) your consciousness—as perceptions, as sensations, as qualia, as emotions, as feelings, as preferences, etc. Amazingly, your phenomenal consciousness comes from the particular way this information affects your consciousness: how these settings and processes affect the specific functioning of your brain that is your consciousness, the ultimate you. This information comes together with such extreme richness and fullness, and at such an extreme rate, that it leaves you (your consciousness, this specific functioning of your brain), based upon elaborate echelons of interconnecting and interinfluencing interpretations, utterly convinced that you are an individual, complex, living, and whole organism—just as evolution has, appropriately, designed your brain to do.

 

Now, let’s quickly touch upon four important aspects of your consciousness—attention, intentionality, experience, and subjectivity—to see how the functioning theory can explain them.

Attention is primarily a functioning within consciousness that can allocate conscious processing power: for example, to a visual object, to an abstract problem, to hearing in general, or to a particular sound. Besides being able to affect bodily activities and actions, attention might also allow conscious access to lower-level perceptual abstractions for a more thorough analysis or a fresh take on sensory information, instead of just depending on higher-level interpretations that are unconsciously generated.

Intentionality can take on different meanings. In a common sense, your intentionality comes from what you, a functioning of your brain, decide to do or focus on. In a philosophical sense, intentionality comes from the meaning or interpretation of a certain abstraction or set of abstractions, based on the associations made over time between abstractions.

Experience produces changes in the brain—both short-term and long-term. For example, take the simple experience of merely consciously monitoring the sensory (subconscious) interpretation of the color blue. The interpretation of the color may be simple, but its associations may be extremely complex.

That color experience may be associated with memories (abstractions and sets of abstractions) from the many times you looked at the sky, as well as their corresponding associations. The color may be associated with not only childhood memories but also abstract concepts, such as clarity and openness. All these associations may be activated to various degrees and strengths, producing changes within and between neurons that make neurons and connections more or less sensitive to further activations within the brain. The result is an influence on your thinking that you can sometimes consciously perceive.

Concerning subjectivity, the brain is only indirectly connected to the external world; our senses provide a good but not a perfect representation of reality. Hence, a person’s consciousness creates and uses internal representations that can be more or less accurate (if not arbitrary) in various ways. Each of us can experience the same event in a different way, form our own opinions about it, etc. Each observer experiences our common reality but has his or her own subjective experiences, opinions, thoughts, etc.

 

Based on generally accepted scientific information, this paper shows how you could be your very consciousness itself: a functioning of a biochemical machine—a machine that uses the objective capacities for perception, representation, interpretation, basic awareness, sensation, and qualia (or, in short, input, processing, and memory) to generate your phenomenal consciousness. (Conversely, it is interesting to note that a functioning of something can be a phenomenal consciousness.) Thus, your brain generates your very consciousness itself: the ultimate you.

For more information on how consciousness is implemented, a solid source of information is A Universe of Consciousness, by Gerald M. Edelman. In that book, consciousness corresponds to the functioning of the “dynamic core”. For more information on the functions of consciousness and how it interacts with the rest of the brain, a good source of information is A Cognitive Theory of Consciousness, by Bernard J. Baars.[5] In that book, consciousness roughly corresponds to the functioning of the “global workspace” and the “self-system”.



[1] I use the word functioning instead of process to emphasize that consciousness is much more of a running state (like the running of a computer operating system) than a system of events that leads to some end result (such as a chemical reaction).

[2] If you would like to see how I came to this objective abridgement, here is my breakdown of understanding’s dictionary definitions that I used to unearth its fundamental, objective meaning.

 

1. To grasp the meaning of
   - grasp: to take
   - meaning: something meant
     - meant: to have in the mind as a purpose
       - mind: memory: the process of recalling what has been learned and retained especially through associative mechanisms
       - purpose: something set up as an end to be attained

2. To accept as a fact or truth
   - accept: to recognize as true
     - recognize: to take notice of in some definite way
       - notice: to take information of

3. To interpret in one of a number of possible ways
   - interpret: to explain or tell the meaning of: present in understandable terms

 

And, to resolve learned in the first definition above, I have:

 

learned: acquired by learning
- learning: knowledge or skill acquired by instruction or study
  - knowledge: the fact or condition of knowing something with familiarity gained through experience or association
    - knowing: having or reflecting knowledge, information, or intelligence

 

Similarly, here I provide an objective abridgement of intelligent, from its dictionary definitions:

 

1. Revealing or reflecting good judgment or sound thought

2. Guided or directed by intellect
   - intellect: the power of knowing

3. Guided or controlled by a computer

 

Intelligent: Guided by having information that reflects good judgment.

 

Of course, one could take issue with phrases like “good judgment” and (from the definition of understand) “meaningful information” and “an end to be attained”, but the interpretation of these phrases can be provided by the setup, or programming, of the system. An example of such programming is that your brain is programmed by evolution to coordinate the cells of your body for survival as a multicellular organism. The topics of programming and interpretation are discussed later in more detail.

[3] At a fundamental level, your brain is built—that is, programmed—by your DNA and its interaction with the environment. Your DNA is a result of billions of years of evolution. Your many years of experience further (physically) modify your brain and its programming.

[4] Just like interpretation, both meaning and (philosophical) intentionality are enabled by programmed association. For example, the meaning of a word is mainly a top-down activation of associated abstractions based on the word’s top-level abstraction. Although intentionality is often mixed up with phenomenal consciousness and certain high-level functions, intentionality is also just based on functional associations between abstractions. I will discuss intentionality in more detail later.

[5] Of course, I don’t agree with everything written in the books I recommend here. For example, I agree more with Dr. Baars’s description of memory as representational than with Dr. Edelman’s, since memory is functionally representational, regardless of its implementation.