#EL30 Alien Intelligence AI

In EL30, one of the most interesting topics so far has been ‘Ethical AI’ and what makes humans different from AI.

Inevitably, the association pops up that AIs will be very smart and autonomous, and we hope that they will want to behave ethically. Within this idea, two tacit assumptions are hidden:

  • one about our own ethics: that it rests on generally valid principles which can be deduced from reasons, ideally in a scientific way, and formalized into rules for the AIs,
  • and one about the kind of intelligence AIs have: while we have no idea how closely they might one day approximate our own level, we are sure that AIs will only ever think in the formalized way of science and technology, never in a more human-like way…?

I think these are both misconceptions.

1. Of course, the traditional AI of expert systems and reasoning engines was fundamentally different from humans. And of course, it is difficult to imagine how an AI could be subjective, creative, or have passions or even empathy. Therefore, it might be useful to think of them as alien intelligences: as if they were our co-learners from the “Interplanetary File System” (IPFS, see week 5 🙂 ). These intelligences are equally difficult to imagine, but very probably exist nevertheless.

[Image: a robot with sad blue eyes and a rotated blue heart on the lower right]

Many bloggers have thought about what may be the uniquely human ‘beingness’. Creativity and kindness seem to be central. Jenny, for example, says that being “is one of those ideas that cannot be made explicit without losing its meaning”, and Laura mentions what has no answers and cannot be measured. I searched my own blog and think it cannot be objectivity, or finding optimized, rational, assessable, uniform solutions, because these are prone to cognitive automation. Rather, human minds grow (or, as Jenny’s commenter Gary emphasizes, develop; hence the pivotal role of machine learning) from a seed of trusted grounding into diverse individuals.

Thanks again to Jenny who, in a private conversation, pointed me to a passage from Iain McGilchrist’s book:

“Human imitation is not slavish. It is not a mechanical process, dead, perfect, finished, but one that introduces variety and uniqueness to the ‘copy’, which above all remains alive, since it becomes instantiated in the context of a different, unique human individual.” (p. 247)

That is, human imitation does not work the way programs are copied, but with an individuality and subjectivity that guarantees sufficient diversity for further evolution.

Once AIs grow in a similar way as humans do, and since they now share the network properties of our own neuronal networks, it is no longer absurd to suspect that some creative and passionate forms may one day be constructed. Not absurd, just very alien. And it cannot hurt to learn how to live tolerantly and respectfully together with alien coworkers.

So the idea that AIs will always amount to just a dominance of the left ‘hemisphere’ is dangerous, even more so because it is we humans who strive to become ever more that way.

2. On the ethical issue, then, I don’t think we should simply hope that we can use AIs as tools. “AI for the good”, or “for common welfare” (see the blog parade where I participated), might function as a distraction from impending dangers. Likewise, “Ethically Aligned Design” is IMHO misleading, since we software developers cannot effectively influence how technology will be used or abused. Such an overblown demand would only lead to more frustration and surrender.

However, we can alert the appropriate levels, politics and the voters, to the impending dangers, and point to critical flaws. IMHO, the problem is particularly acute when AI is used to administer, or even create, shortages such as access to the labor market, as I said in the blog parade, and when the algorithms are not traceable. Transparency is a key requirement. For a start, a minimum of transparency should be established by a mandatory labeling requirement for artificial communication partners. After all, telephone directory entries here used to carry a symbol if an answering machine was connected. If we acknowledge that trusted individuality is a key human feature, we need to be able to trust that our fellow humans are genuine.

Like Keith, I hope that AIs will be useful tools rather than patronizing deciders. As a rule of thumb, they could already provide a great benefit if they just did some sorting before the human decision-making. And there is some hope that the ‘passionate’ variety of AI will not come too soon if we first focus on the feature that Jenny mentioned: “they don’t get tired”. So they probably won’t be constructed with passions.


4 Responses to #EL30 Alien Intelligence AI

  1. jennymackness says:

    Thanks for this interesting post Matthias. For me it’s a question of thinking about what we stand to gain by becoming more dependent on ever-more human-like machines (AI) and more importantly, what we stand to lose. I suspect that it’s what we stand to lose that can’t be measured and that we are in danger of losing without even being aware of it.

    And before we rush headlong down the track of having an Alexa in every single room of a family home (BBC Radio 4 recently talked to a UK family about this), so integrated into family life that the children treat the machine in a similar way to a human being, we at least need to pause and consider what balance we want, and whether there is something uniquely human that we want to hold on to, before we all become ever more machine-like and machines become more than just tools.


  2. Pingback: Data, personal learning and learning analytics – Jenny Connected

  3. x28 says:

    Thank you Jenny for your thoughts and for showing a case (children) where the mandatory labelling requirement would not suffice.


  4. Pingback: E-Learning 3.0: The Human versus the Machine – Jenny Connected
