Beyond Earth (ATWG) - Chapter 24 - The Intelligence Nexus in Space Exploration: Interfaces Among Terrestrial, Artifactual and Extra-Terrestrial Intelligence by Joel D. Isaacson

From The Space Library


Chapter 24

The Intelligence Nexus in Space Exploration: Interfaces Among Terrestrial, Artifactual and Extra-Terrestrial Intelligence

By Joel D. Isaacson

Introduction

Space exploration and habitation will surely tax our natural intelligence, stretch our ability to invent intelligent artifacts, and may eventually put us in contact with extraterrestrial forms of intelligence. Hence, many different forms of intelligence will, at some fundamental or generic level, be at the core of all space-related endeavors. My journey, my long journey, has been in quest of the roots of "intelligence;" I have arrived at an ultimate simplicity that explains a great deal about intelligence, and this I wish to share with you.

This chapter speculates about generic aspects of "intelligence" as would be shared by all intelligent entities, and suggests modes of interface among all three types of intelligence, i.e., terrestrial, artifactual, and extraterrestrial. It also suggests that intelligence permeates Nature; and that it has no central focus because it is distributive, dissipative, cooperative, and emergent. Questions relating to Intelligent Design are raised, and some future research directions are proposed.

Elements of Perception

Consider an array of sensors that detect local differences in certain attributes of signal patterns. Such distinctions are recorded and form new signal patterns in their own right. Since a fundamental operation in perception is distinction-making in patterns, another iteration of distinctions-of-distinctions is then performed, and this kind of repeating process is applied recursively. To summarize, there are two fundamental elements in perception: (a) distinction-making and (b) the indefinite recursion of distinction-making.

The importance of distinction-making is not new. For example, G. Spencer-Brown focused on distinction-making in his Laws of Form (1), resulting in a very rich formal system. However, these distinctions require a full-blown human observer and the use of writing modes to record them on a piece of paper. What would be a logic of distinctions at the level of single receptors that could not write?

Imagine a linear array of receptors that processes signals reaching it, whose basic function is to determine local differences among adjacent signals. The recording of differences would require at most four types of tokens. A "token" is a kind of sign or symbol. Here it stands for the relations of distinct-from/not distinct-from that hold between a signal and its two adjacent signals. There are exactly four possibilities when a signal is compared with its two neighbors — see Ref (2). The next pass of distinction-making re-encodes a prior 4-token array with a newly-formed 4-token linear array, and this repeats recursively.
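The two-neighbor comparison can be sketched in code. The following is a minimal illustration, not the encoding of Ref (2): the token names "A" through "D" are hypothetical labels for the four possible relations, and a missing neighbor at either end of the array is assumed to count as "not distinct."

```python
def distinctions(signals, tokens="ABCD"):
    """One pass of local distinction-making over a linear array.

    Each position is re-encoded by one of four tokens, according to
    whether the signal is distinct from its left and/or right neighbor.
    Token names A-D are hypothetical; a missing neighbor (at either
    end of the array) is treated here as 'not distinct'.
    """
    n = len(signals)
    out = []
    for i in range(n):
        left = i > 0 and signals[i] != signals[i - 1]
        right = i < n - 1 and signals[i] != signals[i + 1]
        if left and right:          # distinct from both neighbors
            out.append(tokens[0])
        elif left:                  # distinct from left neighbor only
            out.append(tokens[1])
        elif right:                 # distinct from right neighbor only
            out.append(tokens[2])
        else:                       # distinct from neither neighbor
            out.append(tokens[3])
    return "".join(out)

def trace(signals, steps):
    """The recursion of distinction-making: re-encode the previous
    4-token string, over and over, collecting the successive strings."""
    rows = [distinctions(signals)]
    for _ in range(steps - 1):
        rows.append(distinctions(rows[-1]))
    return rows
```

Under these assumptions, `distinctions("aab")` yields `"DCB"`, and iterating quickly settles into a repeating string, the kind of self-organizing trace behavior (limit cycles) listed among the characteristics below.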

Traces of such processing show an indefinite succession of 4-token strings.


The structure of such traces has been studied in great detail in (2). The main characteristics are:

  • autonomic mode of processing (self-organization)
  • autonomic error-correction
  • autonomic mode of 3-level memory (See Ref (2), where the 3 levels are "Subsurface Memory," "Intermediate Memory," and "Deep Memory." These levels develop spontaneously, of their own accord, in the course of BIP-processing. The levels correspond roughly to "short-term memory" (STM) and "long-term memory" (LTM) as commonly understood in cognitive science.)
  • dialectical patterns
  • autonomic syllogistic inferences
  • limit cycles or attractors (Hegelian cycles)
  • autonomic generation of palindromes
  • complementarity of 4-letter strings

Some of these features are related to Douglas Hofstadter's "strange loops," as presented in his discussion of the natural intelligences of Gödel, Escher, and Bach in (3).

One subtle consequence of this model is that the actual raw patterns of signals are not required for perception. Rather, it is their patterns of internal/local distinctions that matter. Thus, ordinary sensory detectors may be bypassed, and the first stage of processing applied directly to patterns of distinctions, if those can be captured by some means. To get quickly to the main point: for cognitive subjects (entities capable of cognition) of diverse sensory modalities, common ground lies in being able to process "streaks" of patterns rather than just raw sensory patterns.
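The claim that only local distinctions matter can be made concrete with a small sketch. The function name `streak` and the binary difference-encoding are illustrative assumptions; the point is that two entirely different raw signal patterns that agree and differ in the same places carry one and the same difference pattern.

```python
def streak(signals):
    """Local-difference pattern of a linear signal array: 1 wherever
    two adjacent signals differ, 0 wherever they agree. The streak is
    a derivative of the raw pattern, not the raw pattern itself."""
    return tuple(int(a != b) for a, b in zip(signals, signals[1:]))

# Two raw patterns over completely different "sensory" alphabets...
raw_numeric = [3, 3, 7, 7, 7, 2]
raw_symbolic = "xxyyyz"

# ...nevertheless carry one and the same streak, so a subject that
# processes streaks treats the two as informationally equivalent.
assert streak(raw_numeric) == streak(raw_symbolic) == (0, 1, 0, 0, 1)
```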

Patterns are not inherently "information"; they become information only when they enter into a certain relationship with a cognitive subject. It is therefore meaningless to discuss information without regard to a cognitive subject interacting with it. A given chunk of information, when considered with respect to a given cognitive subject, need not be physical with respect to that cognitive subject! It may be non-physical (at least in relation to the physical world in which the cognitive subject is immersed). How could this be? In hard science and engineering we normally think of "patterns of signals" as the physical carriers of information. However, the physical signals, per se, are not always what is important to the cognitive subject. It is rather the local signal-differences, a derivative property of raw physical signals, that matter. This allows the unexpected introduction of information carriers that are non-physical signals, referred to here as "fantomarks."

What are "fantomarks"? Well, let's first examine what "marks" are. In physical-symbol systems (including conventional digital computers), "mark" is a generic name for physical signals of all kinds, and for physical symbols or their representations in terms of physical bit-signal patterns. In short, "marks" are the physical carriers of the information being manipulated. "Fantomark" stands for phantom-mark. Fantomarks are information carriers that cannot be sensed, detected, or recorded by human beings, by other living things or systems, or by instruments, devices, or systems made by human beings. Patterns of fantomarks, while not physically perceivable by a cognitive subject, also have local differences, just as patterns of physical marks do. These local differences constitute patterns in their own right (dubbed "streaks") that correlate with the underlying fantomark patterns, but are not coincident with them. Streaks are derivative entities, and are physical. If a subject has access to a given streak, it can process it as a source of information, based on a basic principle which states that:

WITH RESPECT TO ANY SPATIAL OR SPATIO-TEMPORAL SPACE OF ANY DIMENSIONALITY THAT EMITS PATTERNS OF SIGNALS, ABSOLUTELY NO COGNITIVE FUNCTION IS POSSIBLE UNLESS A COGNITIVE SUBJECT HAS THE CAPACITY TO LOCALLY MAKE AND RECORD SIGNAL-DISCRIMINATIONS. IF THE SUBJECT DOES POSSESS THIS CAPACITY, THEN SUCH CAPACITY IS THE ULTIMATE GENESIS OF ANY COGNITION, OR LEVEL OF COGNITION, THAT MAY DEVELOP IN THE SUBJECT. (Isaacson, 1987, see (4))

Namely, the subject recognizes a streak as a physical signal-pattern in its own right, and attempts to determine and record its local differences. The resulting pattern is the streak pattern of the previous streak. This activity may proceed recursively, which sets in motion a pattern-processing cellular automaton.

Cellular automata are special kinds of computational models that are used in dynamical systems and share some properties with fractal and chaotic systems. One important aspect of cellular automata is that global patterns resulting from their computations are emergent (essentially non-algorithmic) from simple local interactions. It turns out that the emergent behavior of the cellular-automaton in question is dialectical, more or less in the classical Hegelian sense.

Why introduce "dialectical" elements into the discussion? The simple answer is that we have no choice in the matter. Dialectical behavior and features are emergent from the underlying cellular automata, which, in turn, are governed by the basic principle stated above. Further, cellular automata can be represented in terms of certain neural networks that are functionally equivalent to them. This provides us, then, with working models of neural mechanisms that give rise to Hegelian-type dialectics inside the brain. In effect, the model, of its own accord, offers a neural correlate of Hegelian dialectics. This was documented in a technical report titled "Dialectical Machine Vision," prepared for the Strategic Defense Initiative Organization and the Office of Naval Research in 1987. Following is a summary of some relevant points.


The report explores four different neural structures that are necessary for the existence and functioning of "vision":

  • Sensory-level,
  • Sensation-level,
  • Perception-level, and
  • Recognition-level.

These levels are essential to a cognitive function, as dictated by the Basic Principle and the recursive process engendered by it, and they map readily to anatomical structures in the visual pathways of the neurological system. The sensory-level is in the retina; the sensation-level corresponds to the lateral geniculate nucleus (LGN), the standard name for this anatomical structure in the visual pathway; and the perception/recognition levels are mapped to the V1 area (and beyond) in the visual cortex of the brain, through bidirectional interaction with the LGN.

Visual awareness is postulated by virtue of a "sensation-resonance" process, whereby incoming imagery is held in short-term memory in the LGN, and resonates with selected reverberatory circuitry from a massive neural-network imagery-bank held in the visual cortex. Because the communication between the LGN and V1 is bidirectional, visual memory turns out to be not so much storage of information patterns as storage of sensations associated with previously experienced patterns of visual information. Perception and recognition are accomplished via "sensation-resonance," namely, resonance between current and previous sensations.

Among other things, this permits us to work out the binding problem for two-dimensional shapes, and also offers a scenario for face-recognition in infants that is based on the theory developed in the report. That theory contains fairly detailed sketches for possible solutions to both problems, with actual demonstrations. The "binding problem" relates to mechanisms whereby individual pixels (which fire/not-fire) are bound together into unified objects that are subject later to perception and recognition.

At the core of "dialectical image processing," some cellular automata (CA) perform recursive edge-detection on imagery. If we translate those CA into a functionally equivalent neural network, we can fill in much detail that is so far missing in the art, detail relevant to a stage we call "very-low-level vision" (VLLV), which lies even below David Marr's "early vision" (5).

The cellular automata in question are called "Dialectical Image Processors" (DIP). A DIP finds boundaries of objects in a recursive fashion. The first step, comparable to the retina proper, finds the boundary of a "raw" external image. The output of this step, comprised of the boundary, is an image in its own right. The next step finds the boundary of the current boundary. (Note: This kind of image processing, i.e., finding the boundary-of-a-boundary, is not intuitively obvious to most people and constitutes one of the innovations in this approach.) If we have an image that is a boundary (of some object), the only way to perceive that boundary-image is to determine its own boundaries.

The subsequent step defines the boundary-of-the-boundary-of-the-boundary, and so on, as an indefinite number of steps recursively follow. Each step results in a new image comprised of the boundary of the previous boundary. So, in effect, each step is like having a new ("virtual") retina operating on imagery received from a previous retina. This sequence of "retinas" can be thought of as belonging to a train of "virtual homunculi," where each homunculus attempts to interpret what the previous homunculus' retina has just processed.
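A minimal sketch of this recursive boundary-finding, on a binary image with 4-connected neighbors; the boundary rule used here is an assumption for illustration, since the actual DIP rules in the report are more elaborate.

```python
def boundary(img):
    """One 'virtual retina' pass: a pixel belongs to the boundary if it
    is set and at least one of its 4-neighbors is unset (pixels outside
    the image count as unset background)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if not (0 <= ny < h and 0 <= nx < w) or not img[ny][nx]:
                    out[y][x] = 1
                    break
    return out

# A filled 5x5 square: the first pass extracts its outline, and the
# second pass -- the boundary of the boundary -- returns the outline
# unchanged, because a one-pixel-thick ring is its own boundary.
square = [[1] * 5 for _ in range(5)]
ring = boundary(square)
assert boundary(ring) == ring
```

That the ring reproduces itself under a further pass is a toy instance of the convergence on a limit cycle discussed next.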


However, old theories based on homunculi are notoriously naive, as they cannot escape the obvious problem of "infinite regress." Why bother, then? Well, DIP is able to escape the curse of infinite regress because it is (mathematically) guaranteed to converge on a limit cycle. "Limit cycle" is a term from dynamical systems theory, including chaos and cellular automata. A process that converges on a closed/fixed loop is said to have reached a limit cycle. Once in the limit cycle, at the LGN stage of processing, the image is afforded a (dynamic) short-term memory. In addition, the cycling activity causes periodic firing of neurons in the LGN over the entire area of the mapped image, giving rise to some primitive sensation in the cognitive subject. That "sensation" is the basis for a theory of visual awareness that follows in the report. That "sensation" element also builds a bridge toward ideas about the role of sensation in consciousness proposed by Nicholas Humphrey in the early 1990s.
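The guarantee of convergence rests on a simple counting argument: a fixed-length string over a finite token alphabet has only finitely many states, so repeated re-encoding must eventually revisit a state and close a loop. A self-contained sketch follows; the 2-bit neighbor-difference encoding is an assumed stand-in for the actual DIP step.

```python
def step(s):
    """One re-encoding pass: each cell records, as a digit 0-3, whether
    it differs from its left and/or right neighbor (2*left + right).
    A hypothetical stand-in for one DIP step, not the report's rule."""
    n = len(s)
    return "".join(
        str(2 * (1 if i > 0 and s[i] != s[i - 1] else 0)
            + (1 if i < n - 1 and s[i] != s[i + 1] else 0))
        for i in range(n)
    )

def cycle_length(s, max_steps=10_000):
    """Iterate until some state recurs. A string of length n over four
    digits has at most 4**n states, so the orbit must close on a limit
    cycle -- which is what rescues the train of virtual homunculi from
    infinite regress."""
    seen = {}
    for t in range(max_steps):
        if s in seen:
            return t - seen[s]
        seen[s] = t
        s = step(s)
    return None
```

For any starting string of length 6, for example, a repeat is forced within at most 4**6 + 1 iterations, so `cycle_length` always returns a finite cycle length.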

Ties to particle physics

When this recursive process is applied to a single signal (against a backdrop of the "void") and linear patterns are allowed to expand in both directions from step to step (that is, along a one-dimensional line where things propagate on both sides of active segments; see figures in Ref (6)), the entire pattern that emerges is a self-similar fractal. It can be shown that these self-similar configurations subsume the structure of elementary particles, at the quark-level of description, known as the "baryon octet" (6). This indicates that, hidden deep in elementary acts of perception, are structures that mimic the organization of the elementary particles of matter.
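As an analogy only (not the rule of Ref (6)), the growth of a self-similar pattern from a single seed can be seen in the classic elementary cellular automaton known as Rule 90, which expands in both directions from one active cell and traces out the Sierpinski triangle:

```python
def rule90(width, steps):
    """Rule 90: each new cell is the XOR of its two neighbors in the
    previous row. Seeded with a single active cell against a 'void'
    of zeros, the history grows a self-similar (Sierpinski) fractal."""
    row = [0] * width
    row[width // 2] = 1          # a single signal against the void
    history = [row]
    for _ in range(steps):
        row = [(row[i - 1] if i > 0 else 0) ^ (row[i + 1] if i < width - 1 else 0)
               for i in range(width)]
        history.append(row)
    return history
```

The counts of active cells per row follow Gould's sequence (1, 2, 2, 4, ...), one signature of the self-similar structure that emerges from this purely local rule.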

In other words, the observable universe, down to the level of quarks, is consistent with the structure of elementary perception. Or, put another way, the physicists who discovered the structure of the baryon octet may have been reporting on their own fundamental perceptual apparatus as much as on the fundamental particles of matter. Is it possible that the structure of perception and the structure of matter are coincident?

I believe that this is the case, that perception (observer) and matter (observed) are inseparable, and that both are processes involving recursive distinction-making.

Intelligent Design

The picture emerging here is that intelligence is distributed at all scales across vast, interlocking networks throughout the universe, and that there is no particular explanatory advantage to be gained in localizing intelligence in a single agency, such as a "designer" or a deity.

Following the logic of Occam's razor, which modern science takes as "given two equally predictive theories, choose the simpler," supposing that the universe has a single designer introduces a very complex (and unknown) entity, and this only complicates the scientific explanation of intelligent patterning throughout the universe. It follows that, as a matter of science, the notion of a designer-less universe provides a simpler and more consistent explanation of "reality." This is, however, a matter of economy and simplicity; as a matter of deduction it cannot be settled by decree one way or the other. Hence, people who believe in deities are entitled to their non-scientific beliefs, even though those beliefs would not meet Occam's criteria.

In this preliminary report, analogous to the tip of an iceberg, I postulate that recursive distinction-making is a generic process that occurs across all types of perception and intelligence, and is independent of the particular sensory modalities deployed. Hence, I conclude that raw sensory signals may be bypassed, and fantomark patterns (or their streaks) may turn out to be the preferred modes of intelligent communication among species. The relevance to our venture into outer space, of course, is that such a possibility may be crucial for our ultimate success in communicating with extra-terrestrial intelligences.

This gives us, of course, a rather different view of interspecies communication than we might get from the cartoons wherein aliens land and tell some innocent to "Take me to your leader." Nor are we talking about Star Wars or Star Trek, where humanoids from different planets, systems, or galaxies converse over a glass of space beer via the convenience of Gene Roddenberry's marvelous concept of a "universal translator." What is described here is a much more plausible scenario, involving the sort of encoded messages we can expect to be sent, received, and understood across the vastness of space by intelligent forms of organic or non-organic life. As we venture farther beyond Earth, these are the types of messages that we can anticipate and that we must be prepared to deal with.

Future research directions


The concept of Panspermia relates to the hypothesis that the seeds of life are prevalent throughout the universe, and that life on our planet was initiated when such seeds landed from outer space and began propagating themselves.


Francis Crick (with Leslie Orgel) suggested in 1973 a theory of directed panspermia, in which seeds of life (such as DNA fragments) may have been purposely spread by an advanced extraterrestrial civilization (7).


Critics, however, argued that this was implausible because space travel is damaging to life, owing to radiation exposure, cosmic rays, and stellar winds.


However, the principles of intelligence described here permit us to introduce now the notion of tele-panspermia, which postulates panspermia guided by means of coded fantomark patterns (or their streaks). According to this concept, diffusion of life does not necessarily require the physical transport of actual "seeds" via meteors, comets, and the like.


Telepanspermia may be guided by means akin to pilot waves in Bohmian quantum mechanics. So, work on defining such guiding mechanisms in telepanspermia may converge with non-local hidden variable theories in fundamental physics.


Development of an information theory that is extended to fantomark-coded messages and streaks would facilitate the invention of superior intelligent artifacts. It could also hold a key to communication with extraterrestrial modes of intelligence, and eventually help us understand our cosmic ancestry and the relationship between the implicate and explicate orders as outlined by David Bohm (8).

References

  • (1) Spencer-Brown, G. Laws of Form. London: Allen & Unwin. 1969
  • (2) Isaacson, J. D. Autonomic String-Manipulation System, U. S. Patent No. 4,286,330, Aug. 25, 1981; accessible via http://www.isss.org/2001meet/2001paper/4286330.pdf
  • (3) Hofstadter, D. Gödel, Escher, Bach: an Eternal Golden Braid. Basic Books, 1979
  • (4) Isaacson, J. D. Dialectical Machine Vision: Applications of dialectical signal-processing to multiple sensor technologies, Tech. Report No. IMI-FR-N00014-86-C-0805 to SDIO, Office of Naval Research, 31 July 1987 (limited distribution).
  • (5) Marr, D. "Early processing of visual information," Philosophical Transactions of the Royal Society (London), Series B, 275:483-524, 1976
  • (6) Isaacson, J. D. "Steganogramic Representation of the Baryon Octet in Cellular Automata." Archived in 45th ISSS Annual Meeting and Conference: International Society for the System Sciences, Proceedings, 2001. Online version: http://www.isss.org/2001meet/2001paper/stegano.pdf
  • (7) Crick, F. H. C., and Orgel, L. E. "Directed Panspermia," Icarus, 19, 341, 1973
  • (8) Bohm, D. Wholeness and the Implicate Order. London: Routledge. 1980

About the Author

Extracted from the book Beyond Earth - The Future of Humans in Space edited by Bob Krone ©2006 Apogee Books ISBN 978-1-894959-41-4