Since the stone age of IT, we have been using anthropomorphic language to communicate about what the computer “knows” (at a given stage of user input) or what it “thinks” (based on the programmers’ interpretation of those inputs). I think it is perfectly fine to simplify and explain things by metaphoric comparison with human attributes; science writers, whom I often admire for their difficult job of making things understandable, have done so for decades.
But sometimes such human terms can be very misleading. One such area is deep learning with neural networks. When journalists use anthropomorphic terms for the techniques and successes of AI’s approximations of human intelligence, they dangerously blur the border between reality and science fiction. For example, what does it mean that the machine learns, understands, or correctly “recognizes” a pattern in, say, a painting by Pissarro or Monet? After much learning from the training set, and after numerous iterations of ever more sophisticated algorithms, what the human terms suggest sounds plausible after all: that the machines indeed arrive at human-like “recognizing”. And we might forget that it is still a human who must declare a “correct recognition” correct, i.e., a human who knows what correct knowledge feels like. Knows it because s/he has acquired the skill of independent judgement over a long time, from early trust in the parental environment, via gradually improving fearless guesses, to ever more self-trust. (Not through graded assessments, which cultivate external judgement.)
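The point can be made concrete with a minimal, purely illustrative sketch: a toy nearest-neighbour classifier whose feature vectors and painter labels are entirely invented for this example. Notice that the machine’s “recognition” is nothing but agreement with labels that humans supplied in the first place, and its “accuracy” is defined only relative to those human labels.

```python
# Hypothetical training data: (feature vector, human-assigned label).
# The numbers are invented; real image features would be far richer.
training_set = [
    ((0.9, 0.1), "Pissarro"),
    ((0.8, 0.2), "Pissarro"),
    ((0.1, 0.9), "Monet"),
    ((0.2, 0.8), "Monet"),
]

def recognize(features):
    """1-nearest-neighbour 'recognition': return the human-given label
    of the closest training example. The machine never judges what
    'Pissarro' means; it only reproduces the human labelling."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_set, key=lambda ex: dist(ex[0], features))
    return label

def accuracy(test_set):
    """'Correctness' here is, by definition, agreement with human labels."""
    hits = sum(recognize(f) == human_label for f, human_label in test_set)
    return hits / len(test_set)

print(recognize((0.85, 0.15)))  # nearest neighbours carry the label "Pissarro"
```

Whatever the classifier outputs, it is a human who decided which answers count as correct; the program merely interpolates that decision.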
Another area where anthropomorphic terms are problematic is McGilchrist’s Master and Emissary as a metaphor for the two basic modes of cognitive processing, or as a personification of the two brain hemispheres. I have embraced the metaphor because it distracts a bit from the unfortunate “religious war” over whether the anatomical claims hold true. But gradually, I also see the downside of the comparison.
Regarding the “right hemisphere (RH)” mode (the Master), it is certainly useful to be reminded of its role as a sort of “other”, a “counterpart”, a “vis-à-vis”, because that is how this mode presents the outside world. And regarding the “left hemisphere (LH)” mode (the Emissary), it is certainly useful to think of an encapsulated entity that fulfils a task, a goal, or a subgoal, much like a subroutine in a computer, or an emissary.
But on the other hand, the “agency” of the two tacitly and misleadingly suggests that both of these agents could be instrumentalized, or used, like a tool. And nothing could be more wrong than thinking of the broad, vigilant attention of the RH mode as a tool. On the contrary, this mode is the one that you just “let happen”, while tool use is the essence of the other (LH) mode, which focuses and pursues intentions. This may be a major hindrance to understanding the RH mode.
Probably the two modes are too different to be represented by the same metaphor domain at all. And the domain of social beings is even more problematic, because it is loaded with so many misleading connotations. I would prefer to describe the two modes by patterns of neutral concepts, in particular as patterns of how the “many” relate to the “one”. In the LH mode, you focus on one item (like a grip or handle) that represents many collapsed items. Conversely, in the RH mode, you face a multitude of tesserae in a mosaic, which appear as one whole of statistical normality until a deviant, salient one stands out and becomes “recognised”.
At least this is how it works for me, and there is no danger of ascribing human traits to the two.