#ethics21 Faked teachers?

A challenging question this week came not from the course channels but from OLDaily: “Still don’t think they can be teachers?”, a comment about expressive, emotive humanoid robots.

Yes, I fear they can, and maybe poorer students will be fobbed off with faked emotions in countries with commercialized education systems.

But what kind of education will that be? Not learning to recognize subtle patterns (which would reveal the fake), but drilling facts and procedural skills that will be useless by the time the students graduate, since such jobs will have been automated by then anyway. And worse: getting used to fakes and dishonesty through false emotional expressivity, destroying the natural aptitude for trust.

Sorry to sound apocalyptic, but as I have said, especially in week 4 and week 6, faked AIs gaslighting the underprivileged would be a nightmare for me.


No fear of entangled links

My workflow includes something that might seem scary to the uninitiated observer: I connect icons on a map without bothering about how entangled they become.

Screenshot of a zoomed-out map showing 17 columns of densely connected items.

Older readers might be reminded of Burda sewing patterns, where one could trace a colored line among densely packed shapes.

Schematic example of Burda Sewing pattern lines
A genuine photo can be found at https://www.saturdaynightstitch.com/

If you don’t trust that they can often be nicely disentangled, you can now try a new function in my Thought Condensr tool: a puzzle game.

The trick is that rearranging too early might seduce you into premature grouping and pigeonholing, and thus into missing new relationships.

If you never thought of trying out my tool, maybe this is the opportunity: relax with the little puzzle game.


#ethics21 Week 7 Takeaways

It is wonderful that in the #ethics21 MOOC, some important thoughts were really carried through. This made the limitations of AI much clearer for me.

One such thought was what algorithms that provide care, integrity, or trust would look like, with inputs from a whole caring community. The other was whether, in a large scaled-up MOOC with many smaller groups, each group could have an “artificial Stephen” modeled after his many live inputs (see the Friday discussion at ~70 min).

Both felt somehow wrong immediately — I would not want to participate in such a MOOC, or to have a robot carer for solace — but why? To be fair, the scenarios don’t confirm the prejudices against AI as a mechanistic expert system, nor the misunderstanding of teaching as transmission. Still, there is an underlying pattern of Input – Output in the perceptron layers and the End-to-End Machine Learning Workflow, that doesn’t seem to leave room for something like reciprocity. Once the training stage is over, the AI won’t learn any more from the end users, even if it is continually tweaked by the ‘data processor’ stakeholders using new data from the ‘data subjects’ — if I understood this correctly.
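That one-way Input–Output pattern can be sketched concretely. Below is a minimal, purely illustrative perceptron in plain Python (my own sketch, not anyone's actual system): the only place the weights ever change is the training loop run by the 'data processor'; once deployed, end-user calls to predict() read the weights but never update them.

```python
# Minimal perceptron sketch: training is the ONLY stage where weights change.
class Perceptron:
    def __init__(self, n_inputs):
        self.w = [0.0] * n_inputs
        self.b = 0.0

    def predict(self, x):
        # Inference: reads the weights, never writes them.
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s > 0 else 0

    def train(self, data, epochs=10, lr=0.1):
        # Classic perceptron update rule, run once by the 'data processor'.
        for _ in range(epochs):
            for x, y in data:
                err = y - self.predict(x)
                self.w = [wi + lr * err * xi for wi, xi in zip(self.w, x)]
                self.b += lr * err

# Train once on collected data (logical AND as a toy task) ...
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
p = Perceptron(2)
p.train(data)

# ... afterwards 'end users' only ever call predict(); weights stay frozen.
w_before = list(p.w)
for x, _ in data:
    p.predict(x)
assert p.w == w_before  # no learning happens after deployment
```

Of course real deployments can be periodically retrained on new data, but that retraining is done by the operators, not by the end users' interactions themselves, which is exactly the one-way street described above.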

German street sign "Einbahnstraße", i.e. One Way Road

Maybe my idea of Higher Education is too idealistic, based on Humboldt’s ideal (see an old CCK08 post) that university teachers and students should learn together, in a community of curiosity and the unity of research and teaching; or on Howard Rheingold’s long-standing practice as a “co-learner” teacher. I do think that even the unique questions, misunderstandings, or surprising and outstanding elements for highlighting and annotation of each new student generation can influence the teacher towards new insight, even if just by a bit of questioning of old ‘matters of course’, or a bit of refocusing.

Similarly regarding care, I think that the careful active listening of the one-caring to the cared-for (see previous post) may indeed occasionally entail that the former learns from the latter, for instance gaining the valuable perspective of a very old person.

This intergenerational mutual learning may be more or less absent in the typical K-12 environment, where only centralized templates of content and skills are wanted; but in environments where fostering independence is important (infants, the elderly, HE students, or general critical literacy), it may be rare but still crucial.

And beyond the intergenerational interactivity, I think every piece of feedback has the (however small) chance to generate genuinely new insights. However, this will probably not pertain to the specialized domain that the robot teacher is trained for, but rather come from distant associations, and I cannot yet imagine how AI would implement and handle such extra-domain input.

Unfortunately, I don’t have an idea, either, of how group-working MOOCs could be scaled up without the abovementioned artificial Stephens. As for scaling down, I think this course has shown that too small a number is also difficult; at least the blog commenting suffers if there is no critical mass.


#ethics21 More Week 6 thoughts

After many hours of listening to the 9 videos of this module, the reward is a very satisfying, plausible and useful takeaway: AI in education should and could mitigate vulnerabilities and oppression. It is useful (sorry for the utilitarianism) because we can derive practical consequences for the design of the new systems.

1. When I think of vulnerabilities and dependence in education, I think foremost of grading and assessments. I have blogged about this and received constructive agreement and disagreement in the comments, here: Ungrading, my take, and here: #EL30 Week 6: Automated Assessments. From the former, I learned that “[students] want to know how they compare to others in their cohort and beyond”, while from the latter, I was nicely reassured that “for the final summative assessments deciding about the future life of a human, such [problematic] algorithms are not acceptable.”

So if AI restricted itself to formative assessments, could it avoid hurting the weaker students with fear while still letting them know how they compare to others? I think yes: the advantage of the digital device over the physical classroom is that it can preserve anonymity. If the pupil wants to know, the system can tell him or her how the others are doing without naming them.
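A minimal sketch of what such anonymity-preserving feedback could look like (all names and the function itself are hypothetical, just to make the idea concrete): the system stores scores with names, but only aggregate figures ever reach the pupil.

```python
def anonymous_standing(own_score, cohort_scores):
    """Tell a pupil how they compare to the cohort without naming anyone.

    Only aggregates leave the system: a percentile rank and the cohort median.
    """
    others = sorted(cohort_scores)
    below = sum(1 for s in others if s < own_score)
    percentile = round(100 * below / len(others))
    mid = len(others) // 2
    median = (others[mid] if len(others) % 2
              else (others[mid - 1] + others[mid]) / 2)
    return {"percentile": percentile, "cohort_median": median}

# Hypothetical example: a pupil scored 72; the cohort's scores are stored
# with names, but the names never appear in the feedback.
cohort = {"Ada": 65, "Ben": 80, "Cid": 72, "Dee": 90, "Eva": 55}
info = anonymous_standing(72, cohort.values())
print(info)  # {'percentile': 40, 'cohort_median': 72}
```

The design point is simply that the comparison API returns aggregates, never peers' identities, which a physical classroom cannot guarantee.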

Another source of fear and oppression is time pressure. I keep noticing in this course how much the synchronous and the asynchronous styles differ, and I am happy that the possibility of blogging relieves me from the rapid-fire pressure of the live session.

2. There is also another takeaway from the thorough coverage of the relationship between the one-caring and the cared-for, in particular from Noddings’s work: the one-caring should not act without the expressed wish signalled by the cared-for. So this seems to be once again a matter of pull vs. push, which is so important in many tech-related issues. However, in the context of vulnerable and dependent persons, this interplay of request and response poses yet another subtlety: the cared-for may be hesitant or embarrassed to explicitly express a need. The task of the one-caring is then even more difficult: to still recognize the wish by careful active listening, yet not override the other with preemptive patronization.

Maybe technical self-service can mitigate some of the embarrassing sentiments, too. When self-service supermarkets arrived to replace the mom-and-pop groceries, one of the success factors was that customers did not have to be embarrassed when they needed time to decide or did not know how to pronounce a product’s name. (I remember that Sunlight soap had a sun icon at the very spot that would have distinguished the English ‘g’ from the ‘c’ of the German equivalent, to mitigate the pronunciation problems.) So the reduction of human attendance had at least a tiny welcome flip side.

However, what the active listening of a one-caring can recognize as the unspoken wishes of a cared-for is probably not always recognizable by a machine, simply because the cared-for will approach the machine differently.

Icons of a nurse and a patient.

#ethics21 Personal or not

I am still surprised how much arguing seems to be needed against the possibility of a universally valid ethics. For me, ethics has always been a personal thing, and I wonder if the term just had different connotations in my youth and/or in my country. Although I already mentioned some aspects in week 1 on Oct 14 (privileged professions, GDPR, and discretionary decision-making), I think I should expand on this a bit more.

Perhaps our personal ideas of ethics did not seem valid for everyone because they sometimes even opposed the common authoritative views and were allowed only as an exception. For example, there were discussions about whether a school subject called ethics should become a replacement for those who refused to attend the religion lessons. And those who refused the military draft for ethical reasons were only allowed to do their replacement service after explaining their ‘conscience’ to a jury. (In the hearing, BTW, hypothetical situations comparable to the trolley problem were common questions, so we did have to think about such dilemmas a lot.)

By contrast, for our conduct within professional work in the public service, the term ‘ethics’ was not used; it was simply the duty. Of course there would have been plenty of opportunity to abuse some discretionary leeway and get away with unnoticed tricks. But the obligation was founded on an oath of allegiance (mentioned before; it was, BTW, not sworn theatrically upon a bible or the like, but by a handclasp with the principal director, yet it meant an equally binding relationship). The oath also invoked the ‘FDGO’, the free democratic basic order, and as students we often thought that we would probably not be admitted to the public service, because people who participated in critical protest marches were suspected of disloyalty (and surveillance at the marches was a matter of course).

Perhaps the public service in Germany has a special feature that needs to be mentioned, although it did not directly apply to me and although its importance has now largely decreased: the special employment relationship of a “Beamter”, an officer who has a mutual special trust relationship with the state. This means that his or her employment can almost never be terminated (except for felony), and in turn, he or she was entrusted with certain tasks, such as running reliable infrastructure like mail or rail, or sensitive jobs such as teacher or administration officer. (The history of this status is related to the Prussians’ clever fight against bribery: by relocating the officer frequently, it was less easy for him to network with the locals to gain personal benefits.) This tradition may explain why some of what might otherwise be called ethical is just seen as a professional obligation.

It has been amazing in this course how many terms don’t have a simple one-to-one translation. And particularly the word ‘care’ has so many different meanings. The image below shows one meaning that has been particularly important to my parents’ generation after the war.

CARE packet, see rich description at the linked source
CARE packet, CC BY-NC-SA by dc-r docu center ramstein


#ethics21 Week 6 thoughts

Today Stephen explicitly encouraged us to blog some thoughts on how care ties into … analytics.

From what he has said over the years, my first thought is about the parallels between care and AI. One parallel is how the ‘One-Caring’ (as Nel Noddings called them; see Jenny Mackness’ wonderful notes) knows what to do, without rules and without being able to explain it, by recognizing. This idea was detailed in the previous week, and I find it very plausible, except that I would not draw some of the conclusions; see below.

Another parallel is how the One-Caring learns their ethics: not via centrally provided templates, principles, etc. or from central authorities, but rather decentrally from examples, via ripple effects, and perhaps like ebb and flow. I have written about this before, but it was only recently that I realized how important this decentrality is.

Beyond these parallels, of course the differences between humans and AI come to mind, and it is difficult not to resort to a wholesale rejection like “Robots will NEVER be able to do xxx”. So we need to guess: will robots be empathetic?

Empathetic behavior is probably not too difficult to emulate. But empathetic feelings?

As I said in week 2, “I think I don’t underestimate AI’s eventual abilities, and how far some extreme thought experiments of ‘raising’ AI personalities can liken hypothesized machines to us — which I tried to describe in my EL30 post about alien AI intelligences.” But even in these thought experiments, the alien AIs will ‘feel’ their empathy towards their conspecific alien species fellows who raised them, not towards us earthlings, and the ‘feelings’ will be alien to us, not satisfying our “craving for the real, the genuine and the authentic, and this is, for me, the cue that AI’s role of personal coaching of unique individuals, will be limited” (as I wrote in Week 4 mentioning research by N. Meltzoff).

Take coaching first (to exclude the additional aspect of vulnerability and dependence of the cared-for that Noddings so thoroughly described). A human coach knows whether the client/student can bear another challenge, and he signals that he believes the client might succeed. The client, in turn, needs to trust that the coach really believes this, otherwise the encouragement won’t work.

Now an AI is said to be able to determine whether the student’s performance warrants another challenge. (Let’s assume this is correct, even though it can be doubted, since AI works with big data rather than with the few personal data of a single student. Maybe the AI has already known the student for a long time, through a long mutual relationship.)

But will the client trust the robot coach? I don’t think so, unless he is betrayed and gaslighted and told that the robot is a human — which sounds like a nightmare scenario from a sci-fi novel where the world is split up and polarized into states that allow such inhumanities and other cultures that are shocked by this practice, much as by eating fried cats or keeping slaves.

So I think a robot coach cannot help grow self-confidence. And even less will a robot carer gain the trust of a vulnerable cared-for. But for the empathetic feeling to develop in the one-caring, this trust is crucial — it’s a mutual development. It’s a chicken-and-egg thing, as Sherida said in today’s live session.

So, how can we apply the parallels and differences between AI and human care or learning? One thing is that understanding AI might help us understand how learning works, including how to learn ethics: not via rules as in previous generations of AI (‘GOFAI’, good old-fashioned AI), but via patterns and recognition. Thus the old Computer Theory of Mind can be replaced by a more current metaphor, an AI theory of mind. I find this very plausible (not least because during CCK08 I thought we needed an ‘internet theory of mind’). But I don’t think it would be a popular idea, because people don’t know and don’t trust modern AI.

The other possible consequence would be the other way around: instead of inferring from AI to humans, replace humans with AI that “[A]re better at reading the world than we are” (as Weinberger was quoted). This would probably be welcomed by people who think of old rule-based AI and hope that ethics could just be programmed into the machines. But I would not want my fate decided by machines that just ‘know’ and ‘recognize’ what is right without knowing why. For hard ethical questions, I do not just know what is right; I need to think hard about it, and perhaps arrive at a very unsettling ‘lesser evil’ solution.


#ethics21 Mapping

In Friday’s live session, several aspects came up that warrant a more detailed discussion of my tool and its provisional use for visualizing some connections.

1. First: no, it does not offer varying line widths, just a coloring called ‘pale’ for vague connections.

It is not intended as an expressive visualization of clearly defined structures.

2. Placeholders:

“the idea here is if you get enough input and enough of these graphs and you think of these words, these symbols just as placeholders, but actually not actually as representations of anything.”

The label of an item is almost irrelevant if the description is immediately available alongside, as in the right pane of my tool, which therefore satisfies the idea of a placeholder label that needs no agonizing over. You can even omit the label altogether, and color the icon as pale if you are still unsure.

3. Keyword mapping:

“simple keyword mapping isn’t really going to do the job” … “If you have multiple layers, multiple connected layers, you can really do some fine tuning”

Between the two outer columns, you can always place intermediate/temporary items. (See the picture below for how I did it before submitting my Week 4 task.)

So by fiddling and fumbling with the items, you can manually try out the kinds of operations you would later expect the AI to do, as discussed in the live session. My think tool is not optimized for finished structures, but rather for raw material for thinking.

Here is a quote from the book I just started to read:

“By treating information as a resource, as raw material rather than a finished product, we give ourselves permission to adapt, modify, and transform it into a shape that aids understanding and makes us better thinkers.”

Stephen P. Anderson, Karl Fast, Christina Wodtke: Figure It Out: Getting from Information to Understanding (Kindle locations 231–232). Rosenfeld Media.

But of course you are welcome to use it however you want.

Posted in Ethics21 | Tagged | 3 Comments

#ethics21 Week 5

Although the course has drifted quite a bit towards the oral, I will briefly write down my thoughts here in my blog.

Screenshot of a calendar, saying Week 5

I have become increasingly aware of how little I know about the heavy, complicated philosophical traditions that are tied to almost every one of the seemingly simple concepts. So I appreciate Stephen’s comprehensive lessons, and I am happy to invest the many, many hours of listening.

There is one thing I still hope to understand: why is the idea of consequentialism so despised and attacked by many? Is it that some early capitalist ideologists somehow ‘hijacked’ the idea to justify egoism?

For me, the earliest encounter with the idea was Max Weber’s contrasting Verantwortungsethik (ethics of responsibility) with Gesinnungsethik (ethics of attitude, see Wikipedia and a previous blog post), and I could not understand what is wrong with the responsibility to consider the consequences of one’s decision.

Later, I was surprised that it is equated with Utilitarianism, but still, what’s wrong with utility? Do I smell some elitism here, of those who don’t need to care about utility because they live in abundance, and are proud to focus on Das Schöne, Wahre, Gute (the beautiful, the true, the good)? But beauty does have a utility (see e.g. “Aesthetics as Pre-linguistic Knowledge” by Alan Whitfield), and truth does save costs in the information economy, and ‘the good’ has many connotations of the useful.

So if there is some elitism, and perhaps some dreaming of a meritocracy, I will be watchful for growing tendencies against democracy — which I see as my duty because it was part of my oath of allegiance, 40 years ago, at the start of my professional career in the public service.


#ethics21 Undistributed posts reference

Somehow, three of my course posts have not shown up in the daily newsletters, archives, or the RSS feed (at least I did not discover them), so here are their links:

#ethics21 Week 4: many concepts, Nov 7 (included later)

This week has brought plenty of ethical concepts, and I need to share how interesting I found it to put them all on a map and connect them.

#ethics21 Week 2 Curiosity, Oct 17

In Week 2, we will learn about Applications of Learning Analytics, and I am curious about a few things that have puzzled me for a while.

#ethics21 Introducing myself, Oct 09

Other course participants started introducing themselves and indicated why they joined the course, so I will do this, too.


#ethics21 Week 4: many concepts

This week has brought plenty of ethical concepts, and I need to share how interesting I found it to put them all on a map and connect them.

I did not make a proper “concept map” (with labeled relationship arrows) but just associated those that seemed most similar. And of course the list is by no means complete, nor every term equally relevant; I just copied most of the terms from Stephen’s slides, and some additional ones from Jenny Mackness’ latest post, which is incidentally about values, too.

I really recommend trying it out for yourself. Just click below to reveal all the words, then select them all from top to bottom, open http://demo.condensr.de in a new window, and drag and drop the entire selection onto the canvas. (If anything goes wrong, right-click on the graph, select ‘Wipe clean’, and try once more. The drag & drop can be watched in this video. Don’t hesitate to contact me directly.)

(Click here to toggle the word list)

character traits

Of course I used the full version, which is equally free to download here. My results, after some autolayouts and zooming, are shown below: first a tree with minimal connections, then a circle with added cross connections. The colors on the left represent the ‘betweenness centrality’.
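For readers unfamiliar with the term: betweenness centrality measures how often a node lies on the shortest paths between other pairs of nodes, so nodes that 'bridge' regions of the map score high. Here is a small brute-force sketch in plain Python (the tiny graph is made up for illustration; real tools use the much faster Brandes algorithm):

```python
from itertools import combinations
from collections import deque

def shortest_paths(graph, s, t):
    """All shortest paths from s to t: BFS that keeps predecessor lists."""
    dist, preds = {s: 0}, {s: []}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                preds[w] = [u]
                q.append(w)
            elif dist[w] == dist[u] + 1:
                preds[w].append(u)  # another equally short route
    if t not in dist:
        return []
    def unwind(v):
        if v == s:
            return [[s]]
        return [p + [v] for u in preds[v] for p in unwind(u)]
    return unwind(t)

def betweenness(graph):
    """For every pair (s, t), add to each other node the fraction of
    shortest s-t paths that pass through it."""
    bc = {v: 0.0 for v in graph}
    for s, t in combinations(graph, 2):
        paths = shortest_paths(graph, s, t)
        if not paths:
            continue
        for v in graph:
            if v not in (s, t):
                bc[v] += sum(1 for p in paths if v in p) / len(paths)
    return bc

# A tiny 'map': b sits on every shortest path between the outer nodes.
graph = {"a": ["b"], "c": ["b"], "b": ["a", "c", "d"], "d": ["b"]}
bc = betweenness(graph)
print(bc)  # b scores 3.0, all others 0.0
```

Coloring nodes by this score, as the tool does, makes such bridging items stand out at a glance.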
