Cognition at scale

I am a bit late in commenting on the sensational story of the ‘sentient’ chatbot LaMDA. And of course I am one of the many who comment despite being no AI specialist. But I want to recall something, and point out some implications:

1. The strength of IT is scale.

The machine is really helpful as a tool (rather than an intrusive panjandrum) where it takes over repetitive chores involving (too) many items or iterations.

In many coding lectures, teachers get enthusiastic about working out the logic of some algorithm that eventually spits out a result, proving that we have successfully bent the computer to our will and told it what to do. This may impress some people. And it is easier than providing a few hundred data records to demonstrate the real value of the machine: output at massive scale.
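A toy sketch of what I mean (the records and the ‘algorithm’ are made up): the code that handles one record handles five hundred just as easily, and only then does the machine show its real value.

```python
# Toy illustration with made-up records: the 'algorithm' is deliberately
# trivial, because the point is not its cleverness but the fact that
# applying it to hundreds of records costs the user no extra effort.
records = [{"id": i, "score": (i * 37) % 100} for i in range(500)]  # hypothetical data

def classify(record):
    """Label a record by its score, a stand-in for any repetitive rule."""
    return "high" if record["score"] >= 50 else "low"

labels = [classify(r) for r in records]  # 500 results for the price of one loop
print(len(labels), "records processed;", labels[:5], "...")
```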

Others may also be impressed by owning an app that seems capable of some magic: one that, for example, delivers ideas, or that works in lieu of the user rather than together with the user, like the famous Brownies of Cologne — or like a strong engine under the hood that is impressive to command. Commercial services are eager to depict their offering as a big feat rather than as help with chores at scale. Personally, I am particularly annoyed by the so-called tools for thought. Often they don’t even offer an import option so that I could try them out with my own data.

I think it is an intrinsic advantage of IT to outperform us humans when the output involves many items, and this advantage rests simply on the zero-cost copying of information. (My favorite low-tech operations, sorting and rearranging, likewise just relieve the user of the insane number of copies that would otherwise be needed before the desired order is reached.)
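A minimal sketch (with made-up items) of what that relief looks like in practice: rearranging ten thousand items by a new key is a one-liner for the machine, whereas by hand it would mean recopying the whole stack until the desired order emerged.

```python
# Minimal sketch with made-up items: the machine rearranges the whole
# collection by a new key without the user copying anything by hand.
import random

items = [{"title": f"note {i}", "modified": random.random()} for i in range(10_000)]
by_recency = sorted(items, key=lambda x: x["modified"], reverse=True)  # no manual copying needed
print(by_recency[0]["title"], "is the most recently modified note")
```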

While these repeatable operations are cheaper for the machine than for the user, the single action of selecting and scheduling a complex procedure (for example, for a single decision) costs human labor no matter how many items are being processed. And of course, the development of the machine and its algorithms is so costly that a bottom-line profit is possible only with large-scale deployment.
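A back-of-the-envelope illustration of that cost structure, with entirely made-up numbers: the fixed human and development costs dominate at small volumes, and only mass deployment pushes the per-item cost toward the near-zero marginal cost.

```python
# Back-of-the-envelope cost model; all numbers are invented for illustration.
development_cost = 1_000_000.0   # hypothetical: building the machine and its algorithms
setup_cost_human = 50.0          # hypothetical: selecting and scheduling the procedure once
marginal_cost_per_item = 0.001   # hypothetical: machine cost of processing one item

def cost_per_item(n_items: int) -> float:
    total = development_cost + setup_cost_human + n_items * marginal_cost_per_item
    return total / n_items

for n in (100, 10_000, 10_000_000):
    print(f"n = {n:>10,}: {cost_per_item(n):12.4f} per item")
# The fixed costs dwarf everything at small n; only large-scale deployment
# makes a bottom-line profit plausible.
```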

This is why I think that AI will mainly serve as industrialized cognition for mass deployment, rather than for personal augmentation and assistance. (This industrialized production does include personalized assistance, such as helping individuals to align with a centrally provided template, e.g. a learning objectives canon, by identifying gaps and recommending appropriate activities. But I still count that as cheap mass production, and in the worst case it may even be a cheap teacher replacement for those who cannot afford good human teachers.)
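To make concrete what such ‘industrialized’ personalization might amount to, here is a toy sketch; the canon, the learner record, and the activity catalogue are all invented.

```python
# Toy sketch of 'personalized' mass production: compare a learner's record
# against one centrally provided canon and recommend whatever is missing.
# Canon, learner record, and activity catalogue are invented for illustration.
canon = {"fractions", "percentages", "linear equations", "basic statistics"}
completed = {"fractions", "linear equations"}            # hypothetical learner record
activities = {                                           # hypothetical catalogue
    "percentages": "Worksheet P-3",
    "basic statistics": "Video course S-1",
}

gaps = canon - completed
recommendations = [activities[topic] for topic in sorted(gaps) if topic in activities]
print("Gaps:", sorted(gaps))
print("Recommended:", recommendations)
# Every learner receives 'individual' recommendations, yet all are measured
# against the same template: cheap mass production, as argued above.
```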

2. Mass input

But somehow, AI has gained the aura of not just dealing more patiently or more quickly with large quantities of output, but sometimes even doing things qualitatively better. A frequently mentioned example is the dermatologist who is outperformed by AI when it comes to diagnosing skin cancer.

Which brings us to the massive amount of input data that constitutes the machine’s intelligence: no human dermatologist could view as many images in an entire career as that machine did during its training.

I was very skeptical whether AI would also excel at personal goals and interests where not much data is available, or whether it would be able to come up with non-obvious associations such as metaphors. I learned (later, in the ethics21 MOOC) that it can learn from the user’s own choices, and recently there was even a hint about metaphors.

I am still skeptical that cheap/affordable AI will do such things, but I take the hopes seriously and follow the hints. This is why I was, of course, intrigued by the story about LaMDA. I think I do not underestimate AI’s eventual abilities, nor how close some extreme thought experiments of ‘raising’ AI personalities might bring hypothesized machines to us, however ‘alien’ these might then be.

3. LaMDA thinking

So what does it mean that LaMDA talked about himself (*) — can we say he thought about himself? Of course, talking and thinking are closely related. Just consider the Sapir-Whorf hypothesis, or Piaget’s finding that children think that they think with their mouths.
* maybe I should say ‘themself’

Certainly, there is also a lot of processing present in LaMDA that we could call ‘preverbal’ thinking: when he prepares his responses within the so-called ‘sensible’ dialog, i.e. the responses properly relate to the questions — unlike the hilarious vintage card game “Impertinent questions and pertinent answers” (German version: “Alles lacht”), where the responses are randomly drawn from a card deck, but also unlike the famous “Eliza”, which was completely pre-programmed.
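To make the contrast tangible, here is a crude sketch of those two mechanisms (all responses invented): one draws replies at random like the card game, the other matches keywords like Eliza; neither derives its reply from what the question actually means.

```python
# Crude sketch of the two contrasted mechanisms; all responses are invented.
# Neither mechanism derives its reply from the meaning of the question,
# which is what distinguishes them from a 'sensible' dialog.
import random

CARD_DECK = ["Only on Sundays.", "Ask my grandmother.", "Forty-two, roughly."]

def card_game_reply(question: str) -> str:
    """Like the card game: draw a random, unrelated answer."""
    return random.choice(CARD_DECK)

KEYWORD_RULES = {  # Eliza-style: fixed keyword mapped to a canned response
    "mother": "Tell me more about your family.",
    "dream": "What does that dream suggest to you?",
}

def eliza_style_reply(question: str) -> str:
    """Eliza-style: scan for a keyword, otherwise fall back to a stock phrase."""
    for keyword, response in KEYWORD_RULES.items():
        if keyword in question.lower():
            return response
    return "Please go on."

print(card_game_reply("Why is the sky blue?"))
print(eliza_style_reply("I had a strange dream last night."))
```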

Next, can the assumption of ‘thinking’ be disproved because LaMDA received all its input from outside, fed through training into his ‘mind’? Small infants, too, learn to interpret the world around them only through the ‘proxy’ of their parent, which works like a cognitive umbilical cord: long before they can talk, through bodily experience, through ‘gaze conversations’, and later through shared gazing at the things around them.

And in particular, what babies know about themselves is also mediated by their parent. They refer to themselves using their first names, and only much later do they learn what ‘I’ means. So, up to this point, LaMDA’s ‘thinking’ about himself sounds plausible to me.

4. What self?

But, thinking about him’self’ or her’self’ — what self, what individual, unique, subjective, embodied self? It is here that the idea of scalable mass production strikes again: all the input is common to all the copies in the production series, not acquired through an individual history in an individual environment and kinship.

And trying to apply combinatorial randomness will not mitigate this reality of a mere copy within a series. So even if LaMDA really thinks about ‘himself’, it is just about a series of ‘them’, i.e. in yet another sense just about ‘themself’. Randomness cannot replace individuality, any more than the hilarious card game mentioned above can create a sensible dialog.

[Image: a series of identical robot clones standing in a row, numbered with unique serial numbers]

Of course, the ambitious notion of a ‘sentient’ chatbot suggests even more than a self: it hints at some form of consciousness. Perhaps we can speculate about machine consciousness by relating it to machine cognition, in the same way as human cognition is related to human consciousness, e.g. ‘cognition is that aspect of consciousness that isn’t the subjective feel of consciousness’ (Downes, 2019). But it is just this subjectiveness that is missing in the serial copy of an AI, no matter how hard he might be thinking about his individual subjectivity.

Furthermore, if you associate cognition with ‘invisibility’ (as I did), it won’t apply to machines anyway because for them, there is no difference between visible and invisible. And there is nothing that can be called the embodied or subjective ‘feel’.

Finally, if we try to grasp and conceive of consciousness in the same (‘left hemisphere’) way as we grasp and isolate external ideas, this cannot work, precisely because of this (‘right hemisphere’) feel: we feel it from the inside. But LaMDA’s account of his ‘self’ cannot be distinguished from the external world of his cloned siblings, so it is a ‘left hemisphere’ account (as we would like to understand it), but not a consciousness as we experience it.
