Dynamic Models of the Mind







George Kampis


Fujitsu Chair of Complex Systems,
Japan Advanced Institute of Science and Technology

Chairman,
Dept. of History and Philosophy of Science
Eötvös University, Budapest















Philosophy of Science/Cognitive Science



These fields deal with foundational questions:

Philosophy of Science          Cognitive Science

explanation                    representation
justification                  intentionality
induction                      meaning/mental content
causality                      functionalism and mechanisms
entities                       symbols and subsymbols
realism                        language and mental logic




What can philosophy of science teach science?

perhaps nothing :-)
… because all good phil. sci. comes from science
but feedback is often possible
causality is an example here:
our understanding of it comes to a good extent from science
but science itself largely forgets about it (as we will see)














Advance Summary



Thesis #0 (Background)

Phil Sci = science in a more abstract, extended form



Thesis #1

The mind is a causal system; this has consequences for brain theory.





Thesis #2

Causal systems are complex (or ‘deep’); causality offers a different
framework from the one usually followed in cog.sci./brain science.


















Structure of the Talk


          Mind and Brain
          Dynamical Hypothesis
          Representations
          Ontology
          Causality
          Mental Models
          & What the Brain is Good for                     (a provocative outlook for brain sci.)















Mind and Brain

Statement #1. We study the brain because of the mind. (Easy, safe).
    3 examples:
     learning
        role of various cognitive levels
        categorization
        vs. Hebbian
     memory/recall
        narratives (time, space, actors, animation, coherence etc.)
        “Stalinist memory” (constructive memory)
        vs. 'storage'
     perception
        “seeing as” (e.g. the duck-rabbit)
        top-down (e.g. object recognition, “Gestalt”)
        vs. switchboard theory



Statement #2. Brain theory directly depends on mind theory.
(A stronger statement; ostensively true [cf. behaviorism vs. cognitive neuroscience]; the list above is psychological.)


Learning, representation and X (sic)… are a priori conceptions for brain modeling.
These formulate (mild) requirements for how the mind is to be accommodated.
They determine the style of research, etc.

Statement #3.
We may need to understand the mind first in order to approach the brain.

(This is the working hypothesis here.)






Dynamical Hypothesis

We are heading towards representations and ‘ontology’; here is how.

Dynamical Hypothesis: a recent development of considerable interest

        T. van Gelder & R. Port (eds.)   1995: Mind as Motion, MIT Press
        T. van Gelder                    1998: The Dynamical Hypothesis in Cognitive Science, Behav. Brain Sci. 21, 1-14
        E. Thelen et al.                 2000: The Dynamics of Embodiment, BBS
        R. Port                          2002: The Dynamical Systems Hypothesis in Cognitive Science, Macmillan Encyclopedia of Cognitive Science



DH is a generalization of neural networks and connectionism
    TvG was a student of J. Pollack; both were dissatisfied with connectionism
    criticized its architectural and formal constraints: only certain kinds of terms may occur
    dynamical systems: a broader and more general class
        (or at least a more natural one, cf. Thelen and Port)



note that NN (i.e. ANN) is ‘arbitrary’
        – it is only a functional model
        – there is no (detailed) structural equivalence with brain/neurodynamic models


“if we cannot overcome the arbitrariness, let us at least overcome the limitations/constraints”


DH: the mind is a (general) dynamical system
        features – central processing in the style of the peripheries
                    – importance of time, not just attractors
                    – cognition without (explicit) representations (the centrifugal governor; see the sketch below)
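A side note for concreteness: van Gelder's stock illustration of cognition without explicit representations is the Watt centrifugal governor. Below is only a rough numerical sketch of that loop; the arm equation is the standard governor equation, but the parameter values and the simple throttle law are assumptions made here for illustration, not anything taken from the talk.

```python
import math

# Watt centrifugal governor: the arm angle theta is driven by engine speed omega,
# while the throttle (and hence omega) is in turn driven by theta. The coupled loop
# regulates speed without any explicit, tokenized representation of a "speed error".
# All numerical values are assumed for illustration only.
g, l, r, n = 9.81, 0.3, 2.0, 6.0     # gravity, arm length, friction, gearing
k, target = 0.2, 0.5                 # throttle gain, arm angle at which throttle is neutral
dt, steps = 0.001, 60000             # Euler step and number of steps (60 s)

theta, dtheta = 0.3, 0.0             # arm angle (rad) and its angular velocity
omega = 2.0                          # engine speed (rad/s)

for _ in range(steps):
    # arm dynamics: centrifugal term vs. gravity vs. friction
    ddtheta = (n * omega) ** 2 * math.cos(theta) * math.sin(theta) \
              - (g / l) * math.sin(theta) - r * dtheta
    # engine speed follows the throttle, which the arm opens or closes
    domega = k * (target - theta)
    theta += dt * dtheta
    dtheta += dt * ddtheta
    omega += dt * domega

print(f"arm angle ~ {theta:.2f} rad, engine speed ~ {omega:.2f} rad/s")
```

With these (assumed) values the loop settles with the arm near the target angle; the point is that regulation is carried by the coupled dynamics itself, not by an inner symbol standing for the speed.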





Representations


DH and NN: cognition without representations?
            an old problem with a standard solution:
          there are representations, but they are not ‘explicit’ (or tokenized)



 "Neural networks (which are purely fictional and arbitrary) and neurodynamic models (which are committed to electrophysiology)
alike strive for models of learning. Learning, in turn, is commonsensically understood as the process that brings forth a learned state.
A learned state contains information about enviromental regularities. In other words, it contains a representation.

What are representations good for? A representation is important only insofar as it has a causal effect, or in other words, if it is active.
Representations in neural networks and neurodynamic models – or to take a more general virewpoint: representations in dynamical systems –
are anything but active. They mostly act as filters of perception but lack own causal power.
"



representation def.
    something that stands for something else

passive representation
    e.g. text; typical in AI/cogsci: Simon & Newell, Fodor, etc.
            the linguistic mind/brain – Pinker


representation in NN
    ‘stored’ in the parameter space of the dynamical system (i.e. in slow variables; see the sketch below)
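What 'stored in slow variables' amounts to can be made concrete with a minimal Hopfield-style sketch (illustrative only, not a model from the talk): a Hebbian weight matrix is changed once, on the slow timescale, and afterwards merely filters the fast activation dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32

# Slow variables: a pattern is stored in the weights W by a one-shot Hebbian
# (outer-product) rule. W is the 'representation' living in parameter space.
pattern = rng.choice([-1.0, 1.0], size=n)
W = np.outer(pattern, pattern) / n
np.fill_diagonal(W, 0.0)

# Fast variables: a corrupted cue relaxes under the *fixed* W (passive recall).
x = np.where(rng.random(n) < 0.2, -pattern, pattern)   # ~20% of the cue flipped
for _ in range(10):
    x = np.sign(W @ x)

print("overlap with the stored pattern:", float(np.mean(x == pattern)))
```

The stored pattern acts only as a filter on incoming activity; nothing in the sketch lets the representation feed back on, or rewrite, the learning dynamics itself, which is the deficit discussed below.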

 







a desired, important property of representations:
    a representation should ‘work’ in the sense that it should be able to determine
    what future representations are possible (usually this is what we mean by
    ‘knowledge’)        [cf Chomsky against behaviorism]



in philosophical jargon:
    new representations should be consequences (effects) of the semantic properties of existing ones    
    [well-known, cf. Fodor inferential role semantics etc]


Are ‘active’ representations possible?
    dynamic models (i.e. NN, neurodynamics, etc.) have a deficit here:
    anything is learnable in them ('associations'),
    with no feedback to the learning process or to the dynamics itself























Ontology


the philosophical meaning motivates this technical expression
        a re-usable representation
        e.g. a database of context-independent (or cross-contextual) concepts, objects etc.



e.g. machine translation
        impossible beyond a certain level using grammar and lexicon alone
        famous example: metaphor, non-literal meaning

        recent theories (G. Lakoff, M. Johnson, D. Draaisma, E. Thelen etc.) assume
                that most meanings in the mind are metaphoric


to ‘understand’ metaphors etc., we need built-in knowledge of the real world
        of its objects and properties
        in a detailed, text-independent form
        = an ontology













Properties of Ontologies

ontologies have combinatorial properties in the mind,
     just as real-world objects do:
             two apples are a pair (a specific set)
             an apple and a pear are two fruits
             a set of stones arranged in an arch is a bridge, etc.



    e.g. translating ‘on the top’ involves the fact that a bridge is self-supporting etc.;
    people ‘see’ or ‘imagine’ this (more about the mind's eye later)

Therefore, an ontology must be rich in combinatorial properties
    (leading to explosion, open-endedness)
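A toy sketch of this combinatorial character (purely illustrative, nothing here is taken from the talk): even pairwise and higher-order combinations of a handful of basic objects already give an exploding space of candidate 'derived' entities.

```python
from itertools import combinations

# Basic ontology objects (hypothetical examples); every combination of them is
# itself a potential object ('a pair', 'two fruits', 'a bridge', ...).
basic = ["apple", "pear", "stone", "plank", "rope"]

derived = [frozenset(c)
           for k in range(2, len(basic) + 1)
           for c in combinations(basic, k)]

print(len(basic), "basic objects ->", len(derived), "derived combinations")
# 5 basic objects already give 26 combinations; 20 would give about a million.
```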


Can we build such ontologies in a dynamical system?

An analogy from chemistry helps.
     a changing set of molecules
    a changing set of dynamical equations
    not one single dynamical system, but many different ones
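A toy sketch of the chemistry analogy (again illustrative only): the state consists of whatever species currently exist, and a 'reaction' can create a new species, i.e. a new variable with its own equation, so no single fixed dynamical system can be written down in advance.

```python
import random

random.seed(1)

# Toy 'constructive' chemistry: concentrations of the species that currently exist.
state = {"A": 1.0, "B": 0.8}
dt = 0.01

def step(state):
    new = dict(state)
    # ordinary kinetics for the existing species (first-order decay)
    for s, c in state.items():
        new[s] = max(0.0, c - dt * 0.1 * c)
    # occasionally two species combine into a brand-new one: the set of
    # variables, and with it the set of equations, changes
    if len(state) >= 2 and random.random() < 0.05:
        a, b = random.sample(sorted(state), 2)
        new[a + b] = new.get(a + b, 0.0) + dt * state[a] * state[b]
    return new

for _ in range(500):
    state = step(state)

print("species now present:", sorted(state))
```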



I will say that an ontology requires very much indeed: it requires a full causal system.
In the rest of the talk, I will suggest that causal systems can indeed accommodate
active representations and combinatorial ontologies.






Causality


the problem of natural causation

causality is one of the most important concepts in science

    I. Hacking, N. Cartwright, J. Pearl
    not equations etc.! (these are empty without a natural, i.e. causal, background)
    fundamental yet notoriously difficult to characterize



a ‘theory of causality’ (almost an oxymoron, because causality is natural, not theoretical…)
mainstream: the counterfactual dependence theory (D. Lewis and others)


    A causes B    means         “if not A then not B”


it has quite obvious problems!
still, dynamical-systems ‘causality’, for example, is of this counterfactual type


    if not x(t) = x_t    then not x(t′) = x_{t′}
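Spelled out (a sketch; the flow-map notation below is introduced here and is not from the talk): in a deterministic dynamical system the later state depends counterfactually on the earlier one through the flow map.

```latex
% \Phi_{t'-t} is the flow map carrying the state at time t to the state at time t'.
\[
  x(t') = \Phi_{t'-t}\bigl(x(t)\bigr), \qquad x_{t'} = \Phi_{t'-t}(x_t)
\]
% If the flow is injective (as for well-posed deterministic systems):
\[
  x(t) \neq x_t \;\Longrightarrow\; x(t') = \Phi_{t'-t}\bigl(x(t)\bigr) \neq \Phi_{t'-t}(x_t) = x_{t'}
\]
```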


the theory comes in varieties
    (e.g. Reichenbachian common cause, Salmon's causal explanation, etc.)

but is it universal?









I will supply reasons to disbelieve this.

Example:

time-evolved symmetries exist which are not event-caused

(they can be consequences of rules, which can themselves be emergent, i.e.
consequences of other rules – or of events defined in terms of
further variables, etc.)




Somewhat paradoxically: causality is not a relationship
between a cause and an effect,

but something more fundamental, which is not grasped in an account of event causation.




A different notion
Causal depth: every causal process is a unity of several
simultaneous causal relations (over events).



Causal Depth Thesis (robustness thesis)
Natural causation is always deep causation;
in science nothing is considered causal unless accompanied by causal depth.






Causal depth, its explanation
think of levels (cell, molecule, ion/charged particle, etc.)
splitting up the level concept:
    levels are subsets of variables from a lump set
    autonomous levels are the result of decoupling
    the stratified structure is due to specific conditions
    the fundamental concept is that of the underlying set of variables
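One way to make this precise (a sketch with notation assumed here, not taken from the talk): let the underlying 'lump' set of variables be V; a level is then a subset of V whose dynamics is approximately closed in its own variables.

```latex
\[
  V = \{x_1, \dots, x_N\}, \qquad L \subseteq V \quad \text{(a level)}
\]
\[
  \dot{x}_i \approx f_i(x_L) \quad \text{for every } x_i \in L
  \qquad \text{(decoupling of } L \text{ from } V \setminus L\text{)}
\]
```

On this reading, a deep causal system is one that requires several such levels over a large underlying V, and which subsets decouple can itself change with conditions.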



How large is the underlying set that spans the depth of the system?
    it can be small (‘freezing out’ – “almost non-causal systems”)
    it can be astronomical (e.g. macromolecules and relational properties)


“Degrees of freedom” metaphor
    (not exact, but it gives the right impression)
    many coupled degrees of freedom; their number is not fixed, as in an open system
    i.e. a causal system is in many respects like an open system:
        it carries “active” and “inactive” (or “potential”) degrees of freedom



Depth as modality
    not a formal concept
    a name that indicates a property, a pointer
    marks the existence of a non-arbitrary natural unity
    a modal property, like indexicals (space, time)



How to grasp causation in models?
    Causality is not a modeling concept as such
    approximate with very high dimensional systems
    dimension changes
    express essential structural features of depth

     (but: causality is prima facie an experimental concept)





Mental Models
an approach that integrates representation, ontology and causality


“Mental model” - origin of the notion
         K. Craik 1943, N. Goodman 1978, P.N. Johnson-Laird 1983 etc.


A mental model is a token "small-scale universe" in the mind
         e.g. Johnson-Laird: his approach to natural deduction and problem solving
        is based on representations of sets by their members (i.e. tokens)
        the didactic notion is ‘pebbles of the mind’
        “All A are B” implies several copies of A's and B's in the mind
        constituting a mock-up world (an analogy is a toy train)

 

Mental processes such as reasoning involve using parts
of mental models in various arrangements.

(A success of the J-L theory is its explanation of ‘failed’ deductions in these terms.)
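A toy sketch of this token-based ('pebbles of the mind') style of reasoning, purely illustrative and not Johnson-Laird's own formulation: premises annotate a small set of token individuals, and a conclusion is accepted only if it holds of the tokens built so far.

```python
# Toy mental-model reasoning: premises are represented by token individuals
# (sets of properties), and conclusions are read off the tokens rather than
# derived by formal rules of inference.

def all_are(model, a, b):
    """Assert 'All a are b': every token carrying a also gets b."""
    for token in model:
        if a in token:
            token.add(b)

def some_are(model, a, b):
    """Check 'Some a are b' against the current token model."""
    return any(a in token and b in token for token in model)

# a handful of token individuals in the mind's mock-up world
model = [{"artist"}, {"artist"}, {"clerk"}]

all_are(model, "artist", "beekeeper")    # premise: All artists are beekeepers
all_are(model, "beekeeper", "chemist")   # premise: All beekeepers are chemists

print(some_are(model, "artist", "chemist"))  # True: follows in this model
print(some_are(model, "clerk", "chemist"))   # False: not supported by the tokens
```

On this picture, 'failed' deductions arise when reasoners stop at a single model and never construct the alternative models that would refute a tempting conclusion.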


How do mental models work?
(e.g. how they work in the J-L theory)
    subconsciously
    subject to formal operations
    require separate processing
    based on mental objects taken as abstract entities









Here is a suggestion:
We are one step short of what is required. Causality can supply the rest.
(Depth is the price to be paid.)



In other words:
    taking mental models seriously means that
    mental models are not abstract objects but real objects:
    they have causal power as
    ‘theoretical entities’ (in the spirit of W. Sellars)



Characterization of causal mental models
    they can supply ontologies (as in the real world)
    they automate semantic properties (e.g. the mental model of ‘bridge’ is a bridge)
    [they can support conscious experience as a by-product]
    they are active (representational and transformational
        aspects are interchangeable)




















OK, but where are Mental Models in the Brain?
And What is the Brain Good for?



The burden-of-proof question... half-seriously: who has to tell?
    prevailing one-level mind/brain theories
    are not causal (only counterfactual, etc.)
    but causality is important
    it implies ‘multi-level’ structure, or depth
    which solves problems



The rest of the task is to search for a substrate that can support this.

Formulation of requirements for a brain theory
to accommodate causal mental models:

Single-level systems are not rich enough
‘Depth’ is naturally supported by entities, e.g. molecules
It is difficult, but maybe not impossible, to grasp depth in dynamics:
    exotic dynamics
    infinitely long-lived transients
    non-attractive states
    changing dynamical systems
    chaotic itinerancy, etc.











Summary, and End of the Talk



Causal Brain Hypothesis
The causal mind implies a causal brain. The causal brain must be multi-level, deep.
A priori, from causality, there is no reason to believe that electrochemical activity
would be an adequate level of information processing in the brain. If there is a functional
role for it, that role is different from the processing itself (e.g. the switching of domains, etc.).
Otherwise it is perhaps just a symptom or detector of the causal (‘multi-level’) brain activity.



Conclusion

The style of brain research is conditioned by concepts about the mind.
The causal mind poses a challenge to the current style.