In a reading group, we have recently been reading some definitions of “What is Cognition” (https://doi.org/10.1016/j.cub.2019.05.044).
1. There were criteria
- that were very vague (“complex, human-like”, “something more complex than associative learning”) or very “all-inclusive” in order to include animals;
- others that, IMHO, were too narrow: involving “sentence-like mental representations”, “the use of concepts”, “the ability to use a model”, “causal reasoning”, or “intentional states”, all of which McGilchrist would probably attribute to the narrow view of the left hemisphere.
Other criteria made sense to me: that cognition is
- not just automatic/scripted (“behaviors that escape characterization as a […] scripted program”, or “effortful [distinguished from] automatic”),
- and conscious (“typically available to conscious awareness”).
Now it struck me that many of the criteria that resonated with me involved invisibility:
- absence (“absence of direct stimulation” (Suddendorf), “freedom from immediacy” (Shadlen), “stimulus-independent” (Bayne), “exclude any behaviour to a goal stimulus that is actually present to the animal’s senses” (Webb), “processes that originate in the brain rather than solely with environmental stimuli” (Chittka)),
- abstraction (“certain abstract operations in between [peripheral senses and motor output]”, Intro),
- and imagination (Bayne, Mather).
Some others could at least be related to invisibility:
- predictions (Chittka, Suddendorf) — of an unknown (invisible) future,
- transfer (Chittka, Clayton) to new contexts — and to the invisible future,
and maybe even the following ones:
- flexibility (“behavioral flexibility” (Chittka), “flexible problem solving” (Clayton), “flexibility, as in predating routines” (Mather), “elemental features: flexibility […]” (Shadlen)),
- adapting (“handling information in an adaptive way” (Heyes), “adapting to environments that were unanticipated” (Shadlen)),
- adjusting (“adjust to changes in the world”, Webb),
- and even the intentional states mentioned above, which have to do with that unknown future!
2. So, what does this prominent role of invisibility mean for our relationship with artificial intelligence and tools?
First, I think, it emphasizes the importance of externalizing tools (as discussed in week 3): making invisibles more visible helps to ease our cognitive tasks. So @gsiemens was spot on when he pointed to external devices as early as 2005. (And I cannot resist hinting at my own tool here :-))
Second, it makes it even more difficult to define the relationship between human and artificial cognition: all machine ‘cognition’ is in some way invisible; it is all virtual. So dealing with the invisible, with the absent, from a starting point of reality and presence, as humans do, is strictly speaking impossible for a machine. Or is there an error in my reasoning?
Thanks to @downes for the interesting comment and further elaboration.