Causal Intentionality

George Kampis

Japan Advanced Institute of Science and Technology
&
Eotvos University Budapest




What is Causal Intentionality?
I will say that I believe intentionality is inseparable from causality.
There is an easy way to misunderstand this, so let me say immediately that I understand causal intentionality as a
property of natural mental entities,
and not of consciousness, volition, or anything of that kind.
In particular, I will be talking about intent, and not about intending.


I will suggest, specifically, that intentionality, in the sense of mental content, is closely related to agency, as a
form of causality. I will further suggest that agency can be a perfectly natural form of causality.
This brings us to the topic of causality in general; that will be the starting and closing topic of the lecture.




Two Theses on Mind and Science
As science (scientific naturalism as a philosophical standpoint) is central to my presentation,
let me help myself to the following two theses.

Together they express the spirit of the investigations anticipated above.

(1) (a) Science is the proper vehicle for the study of the mind.
(b) The existence and nature of intentionality is ultimately a scientific question.

(2) (a) Science is predominantly the domain of causality.
(b) Causality is a mark by which scientific knowledge claims can be distinguished from all other knowledge claims.

For a slow start, let me discuss these theses very briefly.

Ad (1)(a) As predictable, this is motivated by Quine (and perhaps Neurath). Quine famously pointed out that it is both unnecessary and impossible
to do philosophy without science, which puts the two on an equal footing. I follow Dennett and other scientific naturalists on this.

Ad (1)(b) In particular, I maintain that as science develops, philosophy cannot avoid depending increasingly on it.
Especially the existential and “nature-of” [kind] claims require science’s [present and future] contribution. This gives sense to the word “ultimately”.

Ad (2) This is a somewhat larger topic. Since this topic is not, in its full-fledged form, a topic of the current lecture, I will confine myself to but a few remarks.

Ad (2)(a) I. Hacking [repeatedly] and many others, including J. Pearl [recently] have argued that causality as an irreducible factor is essential for science.
Ad ad (2)(a) Although we are within a remark, I stop to make another.
    Hacking himself suggests a realism about [theoretical] entities.
    However, realism [of any kind] is not my concern here.
    All we need is causality as an a priori form of a dependence relation between action -> event and event -> event.

Ad (2)(b) Pure text, “theory” [in the US legal sense], speculation, superstition, much of philosophy, vs. Science: the distinctive mark is whether a given statement supports causal statements. A causal statement is not just a statement of some kind, it is [in the sense of (2)(a)] a unity of natural and propositional factors. To support a causal claim = support a manipulation scheme, that is [as I argued in Kampis 1998], to support a mechanism, which (as I argued at the same place) is akin to Design, in Dennett’s sense. For details, see there.



Mind and Intentionality

 “If it does nothing, it is nothing”.

It follows from my theses, that mental objects as posits suffer from the same disease
as do mental representations in the inherent sense.

Let me discuss this in two steps: (i) inherent, (ii) derived.

[Since we are now discussing very general questions, for the present part of the lecture I will not distinguish intentionality,
mental content, mental entity, etc, for they all behave in the same way under the present treatment, or so I believe.]



(i) Representations by themselves are not causal [hence they do not exist]

(a) it so “feels”. Representations are passive. Representations are the result or product of some activity; proverbially manipulable, transferable, etc.; their essential property is containment; famous parallels exist to text and symbols, from Lullus to Hobbes and Descartes. I say it so “feels” because it might be difficult to rigorously prove that representation [~al realism] implies a propositional mind, which implies acausal entities. No need to worry, since we have:
(b) Fodor’s difficulties. His classical RR assumes causal (biological) parallelism. Meaning is not cause, but mental cause-and-effect patterns must match the transformations of meaning, in some preestablished harmony in the mind.
(c) AI – machine functionalist difficulties. View mental x as symbol of formal system; state of finite state machine; member of a list (cf. LISP-style AI). That is, representational mental entities need separate processing – do not process “themselves”. An infamous problem, not solved by giving it a name (as does Haugeland when speaking of automated formal systems etc.).
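The machine-functionalist picture in (c) can be made concrete with a toy sketch (a hypothetical illustration of my own, not anyone's actual model): the mental item is an inert symbol, and all causal work is done by a separate interpreter applying a transition table — the token never processes itself.

```python
# Hypothetical toy example: "mental" items as inert symbols of a formal
# system. The transition table and the step() interpreter carry all the
# causal work; the state token itself does nothing.

TRANSITIONS = {  # transition table of a toy finite state machine
    ("idle", "see_food"): "desire_food",
    ("desire_food", "reach"): "eating",
    ("eating", "done"): "idle",
}

def step(state, event):
    """External processor: the symbol 'state' does not process itself."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["see_food", "reach", "done"]:
    state = step(state, event)
print(state)  # -> "idle": the tokens were moved; they never moved themselves
```

The philosophical point sits in the division of labor: remove `step()` and the table, and the symbols are causally inert — which is exactly the "separate processing" problem noted above.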


(ii) [easy] Posits by themselves are not causal [hence they do not exist]

Am I guilty of some trick here?
I consider posits from the point of view of science as in theses (1) and (2).
Now posits in general [and the derived notion of intentionality, as a notion that rests on a posit, in particular] assume that existence questions are “meaningless”, so it cannot be a big discovery that posits are nothing but posits, i.e. that they do not exist in some stricter, e.g. causal, sense.
But this is certainly not how the scientist thinks.
Theoretical entities etc must support causal schemes or will be discarded from science.
Then, if intentionality is understood in the sense of a posit, this implies not only a “weak existence” (as the existence of a posit) but a strong nonexistence, in the sense of falling victim to speech improvement in science.
[N.b. how speech improvement, so dear to Carnap and Churchland, is a positive force in science leading to a demarcation criterion.]




Suggested Remedies

(1) Active representations [in NN and elsewhere]
This is wooden iron [a contradiction in terms]. I fail to see how meaning [which is perhaps the narrowest, minimalist sense of intentionality]
could cause any transitions

(i) unless personal understanding, consciousness and will are also invoked, which makes things worse
(ii) or unless “causality” is understood in an a-causal way, as mere regularity [excluded by thesis (2)],
(iii) or, again, unless meaning is understood in some weird form. I stop discussing this.


(2) The Dynamical Hypothesis
[Van Gelder, Port, Thelen etc.] Philosophical and methodological generalization of NN.
Time and causality in, representations out [much in the sense of our discussion of RR].
Just how far out is debated by Bechtel and others.

What happens to meaning and intentionality here? The answer, in one word, is social structure.
Meaning is exported to the social-externalist domain, with handwaving towards Ryle.

This notion is not easy to dismiss [possibly it can be made causal, although in a complicated way, with reference to social interactions
that turn mental states into intentional ones, and intentional states that cause social interactions – this is the more difficult part, of course].
Not easy to dismiss but I don’t believe in it.



Get some Help

* Meaning might be social but intentionality not [anti-individualism of content vs. “causal individualism”]

Thesis (3)
Intentionality must be a property of the individual causal mind.

Then, either we accept that both meaning and intentionality reside within the mind, or, alternatively,
we can consider the possibility that


** Meaning and intentionality are separate from each other.


In the second half of this talk I will outline an approach to intentionality in the sense of (3)
which is not as radical with respect to
separation as *, yet provides an example of **.



Plan for the Rest

Following some earlier works I will suggest that “the mark of intentionality” is not meaning (& co.) but (a form of) agency.
I discuss this in three steps:


(i)     Intentionality and Agency
(ii)    Intentionality as Agency
(iii)   Agency and Causality




Intentionality and Agency

The notion that intentionality is closely associated with agency is an old, but unduly neglected one.
Two historical remarks.


(a) Brentano and Freud
As old as the “modern” notion of intentionality: Brentano
speaks of [directed, oriented] activity of the mind, which is energetic and focussed on a target.

It is a known historical fact that Brentano’s lectures in Vienna were attended by S. Freud (as the only non-physiology
course he was taking at the time). There is a publicized speculation that Freud’s idea of  “psychic energy” and libido may
stem from Brentano. “Freud’s intentionality”: intentionality as directionality of energy and activity.



(b) Teleology of motion
Jennings vs. Loeb debate 1906-1912
Jennings emphasizes internal activity and structure in “tropisms”
Behavior is not just a response to external stimuli but autonomous action.
“If an amoeba were as big as a whale, we would describe it as having purposes and plans”, etc.
Pleh points out the importance of this view for Dennett’s program on intentionality, via Humphries.
Note how attribution [of beliefs, desires, mental states] is related to a kind of behavior, the
behavior of an agent [active, initiates motion, etc.], as a precondition.




Intentionality as Agency

I am taking the view that this association is not accidental:
Intentionality is not just [vaguely] ”related to” but [directly] based on agency.

(Remark) This automatically puts intentionality into the form of a scientific hypothesis,
since agency claims are causal claims.


(Speculation) Agency and intentionality might be synonymous ...
(Suggestion) Agency is necessary for intentionality: intentionality is a property of agents.

What is Agency?
Recent interest in software agents helps summarize agent properties in a list like: autonomy, reactivity, pro-activeness, social ability [cf. Wooldridge and Jennings].

Agency as a form of Causality
Suggestions from Reid and Chisholm (beware of Brentano!)
Agent-causation is a form of causation where causes are agents [substances], rather than events.
For most theorists the concept is akin to deliberation, volition, etc.
Agent-causation is a tool of the anti-naturalists.
Yet agent causation can possibly be accommodated in a scientific programme.



Agency and Causation

In two steps:
(i) developmental psychology (ii) causality in science, which completes our round


(i) Cognitive Developmental Psychology
Suggests that causality is the only thing we can trust
[as the ontogenetic and phylogenetic foundation of knowledge]
and further, that causality [in the sense of causal behavior] is essentially agency

Thelen et al.    
        Role of own motor activity and manipulation [both causal] in the origin of concepts

        “embodiments” as examples for [self-]agency

Watson & Gergely    
            early appearance of social competence (i.e. bootstrapping of “ToM”)

            interest [age 3 months] turns from contingent [repetitive, predictable] to
            semi-contingent [free, autonomous] environmental stimuli as a
            first step towards a teleological and intentional interpretation of environment.

The meaning is that contingency-detection helps launch an ‘agent-detector’ module which
individuates those entities to which goals and mental states will be attributed at a later stage of development.
Actions [causality in this limited sense], agency, and the [limited] contingency of behavior
are synonymous for the child.
Speculate: Agency just Causality? Causality just agency?



Causality in Science

I think agency and causality are very closely related.
Perhaps causality is usually greatly misunderstood.

I argued (Kampis 2002a,b) that there is this progression:
Epistemic causality     -> ontological causality     -> deep causality
(Hume, Lewis)             (causal realists)                 (natural causality)

Deep causality
Natural causality is deep causality.
The depth of a causal relation is a modal unity of several simultaneous causal processes.

One particularly simple example for causal depth is multi-level causation.
A less obvious example is object causation.


I argued that scientific causation is just deep causation.
For example, the very meaning of an experiment is that it has multiple aspects and multiple access points,
where one causal process provides indicator variables for the other(s), etc.
[This property gives the experimenter the freedom to repeat an experiment differently – a “no magic” criterion.]


I am now suggesting that deep causality is compatible with the view that intentionality is [based on] agent causation.

Deep causation can support agent causation:
by allowing a sudden switch in the “dominant” causal mode, i.e. from one causal process that we have
chosen to follow to another. Hidden vs. explicit causation. The suggestion is that such a conversion can
produce genuinely new causes which can be candidates for agent-causes.





Conclusion

For intentionality to make sense in science [and therefore in philosophy] at all, it must be causal.
It is likely to be based on agency, for which a proper understanding of natural causation is required.
I suggested that causal depth is probably a necessary condition for full-fledged causality that can support
agency and therefore causal intentionality.