On action and consciousness
Excerpt from "Solve the brain before the end of the world" (neuromorphics lab 2012 contest) – early submission (subject to revision)
The brain operates under two simple rules:
1) To control action, the brain accumulates evidence for different hypotheses, then passes on the mismatch between sensory reality and the competing internal expectations.
Example: I’m looking at a hand-written digit, and (from the first several eye movements) I’m not sure whether it is a 0, 5, or 6. With each subsequent eye movement around the digit, I accumulate evidence for each of these categories (0, 5, 6) by comparing the current view with my learned prototypical representations of 0s, 5s, and 6s (my expectations). Where these spatial evidence maps disagree/mismatch the most is where I will act, in the form of a saccade (i.e., an eye movement). For example, if the competition is between the 0 and the 6, I will make the action (the eye movement) to the upper right, where the 0 and 6 mismatch the most. If the competition is between the 5 and the 6, the action would again be to move the eye to the area of greatest mismatch between the 5 and the 6 (the lower left).
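The mismatch rule above can be sketched in a few lines of code. This is a toy illustration, not the model itself: the 2x2 "prototype maps" and their values are invented here so that the 0-vs-6 mismatch happens to land in the upper right and the 5-vs-6 mismatch in the lower left, mirroring the example.

```python
import numpy as np

# Hypothetical 2x2 "prototype maps" for three digit hypotheses.
# The values stand in for learned stroke intensities; a real system
# would use learned templates over retinal space.
prototypes = {
    "0": np.array([[1.0, 1.0],
                   [1.0, 1.0]]),
    "5": np.array([[1.0, 0.2],
                   [0.2, 1.0]]),
    "6": np.array([[1.0, 0.2],
                   [1.0, 1.0]]),
}

def next_saccade(hyp_a, hyp_b):
    """Return the (row, col) where the two competing prototypes
    mismatch the most -- the proposed target of the next eye movement."""
    mismatch = np.abs(prototypes[hyp_a] - prototypes[hyp_b])
    return tuple(int(i) for i in
                 np.unravel_index(np.argmax(mismatch), mismatch.shape))

# Competition between 0 and 6: look where the two prototypes differ most.
print(next_saccade("0", "6"))  # -> (0, 1), i.e., upper right
print(next_saccade("5", "6"))  # -> (1, 0), i.e., lower left
```

The key design point is that the saccade target is read off the pairwise mismatch map, not from either hypothesis alone.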
2) To control thought/recognition/consciousness, the brain accumulates evidence and passes on the match between sensory reality and the competing internal expectations.
Example: in standard adaptive resonance theory, when the bottom-up sensory stimuli match the top-down, internally generated expectation, the "match" causes a resonance that amplifies neural activity and is perceived as conscious experience.
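The match test at the heart of this rule can be illustrated with a minimal sketch. The vectors and the vigilance threshold here are hypothetical; the match measure (overlap between input and expectation, normalized by the input) follows the convention used in fuzzy ART.

```python
import numpy as np

def resonates(bottom_up, expectation, vigilance=0.8):
    """ART-style match test: resonance occurs when the overlap between
    the sensory input and the top-down expectation, relative to the
    input's total activity, exceeds the vigilance threshold."""
    overlap = np.minimum(bottom_up, expectation).sum()
    match = overlap / bottom_up.sum()
    return bool(match >= vigilance)

stimulus         = np.array([0.9, 0.1, 0.8, 0.0])
good_expectation = np.array([1.0, 0.0, 0.9, 0.1])
bad_expectation  = np.array([0.0, 1.0, 0.1, 0.9])

print(resonates(stimulus, good_expectation))  # close match -> True (resonance)
print(resonates(stimulus, bad_expectation))   # mismatch    -> False (reset)
```

When the match fails, ART models trigger a reset and search for a better-matching category, which is the "pass the mismatch" side of the story.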
A critical difference between the first class of things the brain does (action, by passing the mismatch) and the second (thought/recognition/consciousness, by passing the match) is that the first is dichotomous, while the second is a continuum. Action condenses a "continuous" system into binary outcomes, while consciousness and the like do quite the opposite. With consciousness as a continuum, the relevant question to ask of a being/machine that performs identical actions to humans is not whether it is conscious, but rather how conscious.
As for achieving "human-grade" consciousness: I think any system that can take discrete building units (electrons, quarks, etc.) and turn them into a continuous/dynamic system in the time interval between successive discontinuities (i.e., between actions) is in some respects conscious. Here by "action" I mean an interaction of a system with anything that is not part of that system, e.g., interaction with the external environment, with other agents, etc. Sticking with the example of moving an eye: every fixational eye movement, microsaccade, etc. – when the eyes are open, of course – is an interaction with the environment and therefore an action.
In the end, the brain is a chemical system (i.e., made of discrete parts), and the experience of consciousness is its ability to simulate a continuum that bridges discontinuities/actions. The more a system is able to exhibit these smooth transient dynamics in the transitions between actions (which, for human-grade consciousness, essentially means at the scale of a millisecond), the more conscious I would say it is. The ability of a discrete system to approximate a true field with a high degree of temporal precision is the degree of consciousness it embodies.
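One way to make the idea of "discrete parts approximating a continuum" concrete, purely as an illustration and not as the mechanism proposed here, is numerical integration: discrete Euler steps simulating a continuous process track the true continuous solution more closely as the number of steps packed between two "actions" grows.

```python
import math

def euler_decay(rate, duration, n_steps):
    """Discretely simulate the continuous process dx/dt = -rate * x
    using n_steps Euler updates over the given duration."""
    x, dt = 1.0, duration / n_steps
    for _ in range(n_steps):
        x += dt * (-rate * x)
    return x

exact = math.exp(-1.0)  # continuous solution at t = 1 for rate = 1
for n in (4, 40, 400):
    approx = euler_decay(1.0, 1.0, n)
    # The error shrinks as the discrete system's temporal resolution grows.
    print(n, abs(approx - exact))
```

The finer the discretization between the two endpoints, the better the discrete system mimics the continuous field, which is the gradient this section proposes measuring.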
If consciousness – in this loose definition – is an approximation of a field with discrete parts, some very fundamental problems in physics will need to be addressed before we can create anything at the same level of consciousness as a human. For one, nobody to date knows how to approximate the fields of the Standard Model of particle physics with discrete parts to anywhere near the precision required between sequential actions that occur a millisecond apart. A very basic problem with approximating fields with discrete parts is that when you place an electron or quark (a discrete part) on a lattice (a step closer to a continuous field) and rotate it 360 degrees, its quantum state is not the same: spin-1/2 particles acquire a sign flip and require a full 720-degree rotation to return to their original state. This resistance of fundamental particles to being placed on a lattice hampers our ability to artificially approximate fields with discrete parts. Machines, in this interpretation, are therefore far from being conscious, despite their ability to perform human-like actions.