Archive for the ‘consciousness’ Category

physical symbol systems & AI

March 30, 2010

from Wikipedia: Physical symbol system:

A physical symbol system (also called a formal system) takes physical patterns (symbols), combining them into structures (expressions) and manipulating them (using processes) to produce new expressions.

“A physical symbol system has the necessary and sufficient means for general intelligent action.”
– Allen Newell and Herbert Simon

from Wikipedia: Allen Newell:

Newell came to believe that information processing is the central activity in organizations.

from “The Role of Symbol Systems“:

This is a brief summary of Allen Newell’s discussion of the role of symbol systems in cognition. This material is taken from Unified Theories of Cognition, section 2.5.

It is a physical law of nature that any processing must necessarily be done locally. Furthermore, it is a basic proposition of information theory that any given technology has a particular limit to the amount of encoding that can occupy a given region of physical space. Thus, if a computational system is to have sufficient complexity, there must be some method for utilizing information that is not confined within a limited region of space. A symbol, represented concretely by a symbol token, provides a means for representing distal knowledge. Since the symbol token is generally a more compact abstraction of the knowledge itself, it can be manipulated in a more restricted region of processing space.

The assumption is that the symbol token obeys the representational law: encoding knowledge X into symbol X’, encoding transformation T into transformation T’, applying T’ to X’ to produce Y’, and subsequently decoding Y’ into Y (in the format of the original knowledge) is exactly equivalent to applying T to X to produce Y. If an elaboration of the symbol in terms of the original knowledge is actually needed, the symbol also provides the means for accessing the distal knowledge it represents, i.e. an address.
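To make the representational law concrete, here is a minimal sketch in Python. The domain, the encoding, and every name in it are invented for illustration (nothing here comes from Newell); the only point is that transforming the encoded symbol and then decoding gives the same result as transforming the original knowledge directly.

```python
# Toy illustration of the representational law: encode, transform, decode
# should equal transforming the original knowledge directly.

# "Knowledge" domain: lists of words. "Symbol" domain: a compact joined string.
def encode(knowledge):            # X -> X'
    return " ".join(knowledge)

def decode(symbol):               # Y' -> Y
    return symbol.split(" ")

def T(knowledge):                 # a transformation on the knowledge itself
    return [w.upper() for w in knowledge]

def T_prime(symbol):              # the same transformation, on the encoding
    return symbol.upper()

X = ["symbol", "systems"]
Y_direct  = T(X)                        # apply T to X
Y_via_sym = decode(T_prime(encode(X)))  # encode, apply T', decode
assert Y_direct == Y_via_sym            # the representational law holds
```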

Symbols are not useful in and of themselves but rather are components of symbol systems, which have the following characteristics (a toy code sketch follows the list):

Memory
– Contains structures that contain symbol tokens
– Independently modifiable at some grain size

Symbols
– Patterns that provide access to distal structures
– A symbol token is the occurrence of a pattern in a structure

Operations
– Processes that take symbol structures as input and produce symbol structures as output

Interpretation
– Processes that take symbol structures as input and execute operations

Capacities
– Sufficient memory and symbols
– Complete composability
– Complete interpretability
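As a rough illustration of how these pieces fit together, here is a toy symbol system in Python. The representation choices (a dictionary as memory, string tokens, list structures, a one-operation repertoire) are assumptions of the sketch, not anything Newell specifies.

```python
# A minimal toy symbol system with Newell's four ingredients.

memory = {}  # Memory: independently modifiable structures, held under tokens

def store(token, structure):
    memory[token] = structure

def access(token):               # Symbols: a token gives distal access
    return memory[token]

def concat(a, b):                # Operation: symbol structures in, structure out
    return access(a) + access(b)

def interpret(program):          # Interpretation: a structure executed as an operation
    op, *args = program
    return {"concat": concat}[op](*args)

store("greeting", ["hello"])
store("target", ["world"])
store("prog", ["concat", "greeting", "target"])
print(interpret(access("prog")))   # ['hello', 'world']
```

Note that the program itself is just another structure in memory, which is what makes complete composability and interpretability possible.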

Newell argues that these characteristics are sufficient for a symbol system to produce all computable functions, that is, it is universal. If a large variety of knowledge and goals are to be represented, distal access and universality are necessary features of the ensuing knowledge system. Thus, a symbol system can realize a knowledge-level system, albeit imperfectly. It therefore follows that a cognitive architecture designed to approximate a knowledge system should have a symbol system as its basis.
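Newell does not spell the universality claim out in code, but its flavor can be shown with a standard device from computability theory that he does not himself use: the SK combinator calculus, a symbol system with two symbols and two rewrite rules that is nonetheless known to be Turing-complete. A sketch:

```python
# SK combinator reduction: pure symbol manipulation that is Turing-complete.
# Terms are the symbols 'S' and 'K' or nested (function, argument) pairs.

def reduce_once(t):
    """Apply one rewrite rule if possible, else return None."""
    if isinstance(t, tuple):
        f, x = t
        # Rule 1: K a b -> a, i.e. (('K', a), b) rewrites to a
        if isinstance(f, tuple) and f[0] == 'K':
            return f[1]
        # Rule 2: S a b c -> (a c)(b c)
        if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
            a, b, c = f[0][1], f[1], x
            return ((a, c), (b, c))
        # Otherwise rewrite inside a subterm
        for i, sub in enumerate((f, x)):
            r = reduce_once(sub)
            if r is not None:
                return (r, x) if i == 0 else (f, r)
    return None

def normalize(t):
    """Rewrite until no rule applies (may not halt, like any universal machine)."""
    while (r := reduce_once(t)) is not None:
        t = r
    return t

# S K K behaves as the identity function: (S K K) a reduces to a.
identity = (('S', 'K'), 'K')
print(normalize((identity, 'a')))   # prints: a
```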

Newell argues that the human cognitive architecture is itself realized by a symbol system. His argument rests on his assertion that – given enough time and external representational ability – humans can approximate a universal machine. Though this hypothesis is impossible to verify experimentally, he takes our efflorescence of adaptation as empirical evidence for it. Humans are able to produce such a wide variety of response functions in such a wide variety of situations that it appears that humans are universal machines. It does not seem reasonable that humans would have every single response function built into their cognitive architecture, so it would follow that we are actually composing response functions from a much smaller set. Furthermore, the human distributed memory system requires some sort of distal access. Since a symbol system can approximate a universal machine and provides the capability for distal access and function composition, it appears to be a natural choice as a foundation for the human cognitive architecture. Any cognitive architecture modelled after the human cognitive architecture, such as Soar, should then be built to support a symbol system.

Categories: consciousness

Hofstadter’s Strange Loops and Distributed Consciousness

March 1, 2010

The abstract to Douglas Hofstadter’s contribution to the 2006 Science of Consciousness Conference in Tucson:

Strange Loops, Downward Causation, and Distributed Consciousness by Douglas Hofstadter

As everyone knows from hearing microphones screeching in auditoriums, feedback loops give rise to a highly stable type of locking-in phenomenon. A related phenomenon arises in other types of feedback loops — in particular, in video feedback. The patterns that result from such feedback loops exhibit stability and robustness, and therefore take on a seeming reality at their own level.

The brain’s mirroring of the world is far more complex than that of a television camera, since its purpose is to “make sense” of the world, which means the selective activation of small sets of symbolic structures, or as I call them, “symbols”, which reside on a level far higher than that of neurons. The interplay of symbols in the brain constitutes thought, and thought results in behavior, whose consequences are then perceived anew by the selfsame brain. Such a feedback loop exists in any system that has internal symbols, but when the symbolic repertoire is unlimitedly extensible (through the mechanism of chunking) and when it additionally gives rise not only to permanent records of past episodes but also to the possibility of imagining future and counterfactual scenarios (which is the case for human brains but not for, say, dog brains), then the system’s representation of itself becomes an extremely stable, robust, locked-in, epiphenomenal pattern (which I dub a “strange loop”), and the system thus fabricates for itself an “I”, whose reality (to the system itself) seems beyond doubt.

The “I” seems to act on the world purely through high-level phenomena such as desires, hopes, beliefs, and so on — and this lends it an apparent quality of “downward causation” (i.e., thoughts and other emergent phenomena “pushing around” particles, rather than the reverse). To the extent that the “I” is real, so is downward causation and also conversely: to the extent that downward causation is real, so is the “I”.

Each human being, by virtue of being acquainted with (and thus internally mirroring) many other human beings, houses not only one strange loop or “I”, but many such, at extremely different levels of fidelity — metaphorically speaking, mosaics at wildly different grain sizes. Thus each human brain is the locus of not just one consciousness (or “soul”) but of many such, having different levels of intensity or presence. Conversely, a given individual, although it inhabits primarily a particular brain, does not inhabit that brain exclusively, and as a consequence each human “soul” and each human identity is a somewhat distributed entity.

The near-alignment of one brain and one soul is thus misleading: it gives rise to the illusion that consciousness is not distributed, and it is that illusion that is the source of much confusion about what we human beings really are.
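The locking-in Hofstadter describes in the abstract’s opening paragraph is easy to see in a toy simulation of the screeching microphone. The gain and clipping values below are arbitrary assumptions of the sketch; the point is that a vanishingly small seed of noise is driven to the same stable, saturated level every time.

```python
# Toy model of audio feedback lock-in: amplify each pass (gain > 1), but the
# speaker can only emit so much (clipping), so the loop settles at a stable
# level regardless of the tiny noise that seeded it.

GAIN = 1.5          # amplification per trip around the mic -> speaker loop
LIMIT = 1.0         # physical ceiling on the speaker's output

def loop_once(signal):
    return max(-LIMIT, min(LIMIT, GAIN * signal))   # amplify, then clip

signal = 1e-9       # a barely-there seed of ambient noise
for _ in range(60):
    signal = loop_once(signal)

print(signal)       # 1.0 -- the locked-in "screech", independent of the seed
```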

Debugging Trace

August 4, 2009

from “Queen Victoria’s Personal Spook, Psychic Legbreakers, Snakes and Catfood: An Interview with William Gibson and Tom Maddox”

TM: […] You know what Marvin Minsky says about consciousness? It’s a debugging trace. It’s like a little piece of froth on the top of this larger thing. I think Bill believes that. Consciousness is just part of the act (laughs). All this other shit that goes on is equally important.

From Stewart Brand’s The Media Lab, snagged from here

“Has any of this cleared up what consciousness is or is for?” I asked. “People have such a small number of memory registers,” Minsky said, “that we can’t think of much. Everything has to be on automatic. Consciousness is not a window. It’s more like a debugging trace you use for reprogramming around problems. Humans are really amazing, considering.”
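Minsky’s metaphor borrows from a real programming tool, and it may help to see one. Below is a minimal sketch using Python’s standard sys.settrace hook: the trace function never changes what the program computes, it only reports on the computation from outside, which is roughly the role Minsky is assigning to consciousness.

```python
# A literal debugging trace: report each function call without altering it.
import sys

def tracer(frame, event, arg):
    if event == "call":
        print("calling", frame.f_code.co_name)
    return None          # no per-line tracing needed

def add(a, b):
    return a + b

sys.settrace(tracer)
add(2, 3)                # prints: calling add
sys.settrace(None)       # trace off; add() itself was never changed
```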