#ethics21 Week 6 thoughts

Today Stephen explicitly encouraged us to blog some thoughts on how care ties into … analytics.

From what he has said over the years, my first thought is about the parallels between care and AI. One parallel is how the ‘One-Caring’ (as Nel Noddings called them; see Jenny Mackness’ wonderful notes) knows what to do, without rules and without being able to explain it, by recognizing. This idea was detailed in the previous week, and I find it very plausible, except that I would not draw some of its conclusions (see below).

Another parallel is how the One-Caring learns their ethics: not via centrally provided templates, principles, and the like, or from central authorities, but rather decentrally, from examples, via ripple effects, and perhaps like ebb and flow. I have written about this before, but only recently did I realize how important this decentrality is.

Beyond these parallels, of course, the differences between humans and AI come to mind, and it is difficult not to resort to a wholesale rejection like “Robots will NEVER be able to do xxx”. So we need to guess: will robots be empathetic?

Empathetic behavior is probably not too difficult to emulate. But empathetic feelings?

As I said in week 2, “I think I don’t underestimate AI’s eventual abilities, and how far some extreme thought experiments of ‘raising’ AI personalities can liken hypothesized machines to us — which I tried to describe in my EL30 post about alien AI intelligences.” But even in these thought experiments, the alien AIs will ‘feel’ their empathy towards the alien conspecifics who raised them, not towards us earthlings, and their ‘feelings’ will be alien to us, not satisfying our “craving for the real, the genuine and the authentic, and this is, for me, the cue that AI’s role of personal coaching of unique individuals, will be limited” (as I wrote in Week 4, mentioning research by N. Meltzoff).

Take coaching first (to exclude the additional aspects of vulnerability and dependence of the Cared-For that Noddings so thoroughly described). A human coach knows whether the client/student can bear another challenge, and he signals that he believes the client might succeed. The client, in turn, needs to trust that the coach really believes this; otherwise the encouragement won’t work.

Now an AI is said to be able to determine whether the student’s performance warrants another challenge. (Let’s assume this is correct, even though it is doubtful that AI can work not only with big data but also with the sparse personal data of a single student. Maybe the AI has already known the student for a long time, through a long mutual relationship.)

But will the client trust the robot coach? I don’t think so, unless he is betrayed and gaslighted into believing that the robot is a human, which sounds like a nightmare scenario from a sci-fi novel in which the world is split and polarized into states that allow such inhumanities and other cultures that are shocked by the practice, much as they would be by eating fried cats or keeping slaves.

So I think a robot coach cannot help grow self-confidence. Even less will a robot carer gain the trust of a vulnerable cared-for. But for the empathetic feeling to develop in the One-Caring, this trust is crucial; it’s a mutual development. It’s a chicken-and-egg thing, as Sherida said in today’s live session.

So, how can we apply these parallels and differences between AI and human care or learning? One consequence is that understanding AI might help us understand how learning works, including how we learn ethics: not via rules, as in previous generations of AI (‘GOFAI’, good old-fashioned AI), but via patterns and recognition. Thus the old Computer Theory of Mind could be replaced by a more current metaphor, an AI theory of mind. I find this very plausible (not least because during CCK08 I thought we needed an ‘internet theory of mind’). But I don’t think it would be a popular idea, since people neither know nor trust modern AI.
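To make the rules-versus-recognition contrast concrete, here is a toy sketch of my own (not from the course, and deliberately simplistic): a GOFAI-style function decides by rules the programmer wrote down, while the pattern-style function decides by resemblance to stored examples and cannot state a rule that explains its answer.

```python
# GOFAI-style: explicit hand-written rules.
def rule_based_is_greeting(text: str) -> bool:
    # The programmer must anticipate and encode every rule in advance.
    return text.lower().startswith(("hello", "hi", "good morning"))

# Pattern-style: no explicit rules; the decision emerges from examples.
EXAMPLES = [
    ("hello there", True), ("hi, how are you", True),
    ("the invoice is attached", False), ("meeting at noon", False),
]

def similarity(a: str, b: str) -> float:
    # Crude word-overlap similarity, standing in for learned features.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def pattern_based_is_greeting(text: str) -> bool:
    # Answers by resembling the closest known example -- it 'recognizes'
    # without being able to explain why, much like the One-Caring.
    best = max(EXAMPLES, key=lambda ex: similarity(text, ex[0]))
    return best[1]
```

The point of the sketch is only the architectural difference: to change the first function’s behavior you rewrite its rules; to change the second’s, you give it different examples.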

The other possible consequence runs the other way around: instead of inferring from AI to humans, replace humans with AI that “[A]re better at reading the world than we are” (as Weinberger was quoted). This would probably be welcomed by people who think of old rule-based AI and hope that ethics could just be programmed into the machines. But I would not want my fate decided by machines that just ‘know’ and ‘recognize’ what is right without knowing why. For hard ethical questions, I do not just know what is right; I need to think hard about it, and perhaps arrive at a very unsettling ‘lesser evil’ solution.

