This is an interesting discussion. I personally don’t think that our understanding of consciousness requires quantum mechanics (invoked just to fill the gap in our knowledge). But before I go any further, I don’t know if we are all talking about consciousness in the same sense (unless I am misunderstanding the posts). Consciousness means different things in different contexts - intuitively I get what you’re trying to say, but it is in that ambiguity that explanations can dodge closer scrutiny. I can’t remember where I read it, but the various views of consciousness were explained roughly as follows:
- not knocked-out, as in “he regained consciousness in the ICU” - I am sure we are not talking about this aspect of the word, but I had to mention it.
- the internal dialogue of “conscious thoughts” - the thoughts that run through your head, even the thoughts you experience as you read this, are experienced as language
- the sense of me being “conscious of the world” - the controller in the control room interacting with the world
- the non-automatic thoughts where I “consciously contemplate the way to solve the problem” - we actively think about what we must buy from the shop on the way home but we do not actively think about the muscle manipulation required to unlock a door or to climb a flight of stairs
You may say that the last three are all piecemeal descriptions of the same thing, the same consciousness we are talking about, but I think they are separate phenomena that have been conflated by our familiarity with the single word we use for all four different experiences.
But why do I mention this? Just to confuse the discussion? No. I think that our understanding of consciousness will come from the various breakthroughs that psychologists have made in describing the necessary components of each of the four aspects above, taken individually.
For example: Cognitive Linguistics is a field in psychology (one I take great interest in) that describes the cognitive processes involved in all aspects of language - grammar, the mental lexicon, linguistic categorisation and so on. One view within Cognitive Linguistics holds that once a being acquires a language, mental thoughts in that language naturally follow. The sense of self that we naturally talk to becomes more “complete” as more language is acquired.
As for the sense of me acting as a controller in a control room, monitoring the world and deciding on the next move - that is largely an illusion anyway, more a case of us watching a constant instant replay than experiencing the world and making decisions in the moment. How do we construct the whole, complete image? Perhaps we get a better answer by asking the question the other way around: how could any organism survive without such a coordinated sensory perception? If we could only hear or see at any one time, without being able to do both, we would be at a disadvantage. A coordinated view is a survival trait, and a more accurate coordinated view is an even better one. Our view of the world is not perfect and can easily be fooled (optical illusions, etcetera), so using it as part of a description of the consciousness we are discussing introduces new problems.
What I’m really trying to get at is whether consciousness (as a unification of all the pieces of consciousness) is real at all, or a naturally emergent property of the delusions our minds create.
So where do I think the pieces of consciousness ultimately come from? How can the firing of neurons make me think I’m an individual? To me it’s an example of Langton’s Ant at work. Yes, I know, you can explain almost anything with the Ant if you think about it hard enough, but in this case I really do think the analogy helps our understanding.
What is Langton’s Ant?
Langton’s Ant is a demonstration that simple algorithms can have unpredictable results. Imagine a world which consists of nothing but white squares: a single white square, with more white squares extending towards infinity on all sides. A really boring, really massive chess board. On that square at the source (there is no middle for an infinitely large chessboard, which is why I started from a single white square) sits a programmed ant. The ant has a simple list of instructions:
- Move forward to the square in front of you
- If you arrive at a white square, turn right
- If you arrive at a black square, turn left
- Change the colour of the square you are in from black to white or vice versa
- Go back to the first instruction and repeat.
The instructions are extremely basic. Looking at them, we might imagine that the ant will turn a circle, arrive back at the source, turn a circle in the opposite direction, head back towards the source, and continue to make symmetrical patterns of this sort. But when we run the simulation, that is not what happens. After about 470 steps it stops making symmetrical patterns and starts making random-looking ones, and most bizarrely, after about ten thousand steps it builds a “highway”: a long, wide, repeating pattern which (it is presumed) continues on to infinity. But why? Why does it do these weird things when we did not program the ant to do anything like that? Even more bizarrely, you can’t stop the ant from making a highway. You can put three, eight or even fifty ants at random locations, and even though they run into each other’s work while following their instructions (an ant comes to a black square that “should” have been white), they will eventually build the highway - it just takes more steps to get there. (Only a very few configurations end in stalemate, where two ants build a pattern, meet each other coming the other way, retrace and delete the entire pattern, reach the start, build the same pattern again, delete it again, and so on.) Even if you “pollute” the universe by placing hundreds of randomly positioned black squares, the ant cannot be stopped; it will still build a highway, just in a different place.
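If you want to watch this happen, the ant takes only a few lines of code. Below is a minimal sketch in Python that follows the instruction order listed above; the function names, the step count of 11,000 and the little ASCII viewer are my own choices for illustration, not from any particular implementation.

```python
# Langton's Ant on an unbounded grid, using the rule order from the list above:
# move forward, turn according to the colour of the square you arrive at,
# then flip that square's colour, and repeat.

def run_ant(steps=11000):
    """Simulate one ant; return the set of black squares."""
    black = set()        # coordinates of black squares; every other square is white
    x, y = 0, 0          # the ant starts on the single "source" square
    dx, dy = 0, 1        # facing "up" (positive y)
    for _ in range(steps):
        x, y = x + dx, y + dy          # move forward to the square in front
        if (x, y) in black:
            dx, dy = -dy, dx           # black square: turn left ...
            black.remove((x, y))       # ... and flip it back to white
        else:
            dx, dy = dy, -dx           # white square: turn right ...
            black.add((x, y))          # ... and flip it to black
    return black


def render(black, lo=-40, hi=40):
    """Print a window of the grid as ASCII art ('#' = black, '.' = white)."""
    for y in range(hi, lo, -1):
        print("".join("#" if (x, y) in black else "." for x in range(lo, hi)))


if __name__ == "__main__":
    black = run_ant(steps=11000)
    print(f"black squares after 11000 steps: {len(black)}")
    render(black)   # the chaotic blob, with the start of the highway heading off one corner
```

Seeding the black set with a few hundred random coordinates before the loop gives you the “polluted universe” experiment, and letting several ants take turns on the same set gives you the multi-ant one; as described above, a highway still (eventually) appears.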
If neurons are given simple, reactive instructions to follow, how can they build a consciousness? In the same way the ant builds its highway: as an emergent result that nothing in the individual instructions would lead you to predict.
But beware: this is not an “I’ll throw my hands up and say ‘that’s a good enough explanation’” mentality that I’m promoting; rather, the point is that emergent properties are not always predictable from the lowest level. I would love to learn more about consciousness - I just don’t think replacing one unknown with another unknown is an adequate solution.
James