#ethics21 More Week 6 thoughts

After many hours of listening to the 9 videos of this module, the reward is a very satisfying, plausible and useful takeaway: AI in education could and should mitigate vulnerabilities and oppression. It is useful (sorry for the utilitarianism) because we can derive practical consequences from it for the design of new systems.

1. When I think of vulnerabilities and dependence in education, I think foremost of grading and assessments. I have blogged about this before and received constructive agreement and disagreement in the comments, here: Ungrading, my take, and here: #EL30 Week 6: Automated Assessments. From the former, I learned that “[students] want to know how they compare to others in their cohort and beyond,” while from the latter, I was nicely reassured that “for the final summative assessments deciding about the future life of a human, such [problematic] algorithms are not acceptable.”

So if AI were restricted to formative assessments, could it avoid hurting the weaker students with fear while still letting them know how they compare to others? I think yes: the advantage of the digital device over the physical classroom is that it can preserve anonymity. If the pupil wants to know, the system can tell him or her how the others are doing without naming them.
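
A minimal sketch of what such anonymous feedback might look like, assuming the system stores only scores and reports nothing but aggregates (the function and the example numbers are my own invention, not any real product’s API):

```python
def anonymous_standing(my_score: float, cohort_scores: list[float]) -> str:
    """Tell a pupil where they stand without naming anyone.

    Only aggregate information leaves this function: a percentile
    and the cohort median, never an individual classmate's score.
    """
    below = sum(1 for s in cohort_scores if s < my_score)
    percentile = round(100 * below / len(cohort_scores))
    median = sorted(cohort_scores)[len(cohort_scores) // 2]
    return (f"You scored higher than about {percentile}% of your cohort "
            f"(cohort median: {median}).")

# Example: the pupil asks, the system answers without exposing classmates.
print(anonymous_standing(72, [55, 60, 65, 70, 72, 80, 90]))
```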

Another source of fear and oppression is time pressure. I keep noticing in this course how much the synchronous and the asynchronous styles differ, and I am happy that the possibility of blogging relieves me of the rapid-fire pressure of the live session.

2. There is also another takeaway from the thorough coverage of the relationship between the one-caring and the cared-for, in particular from Noddings’s work. The one-caring should not act without an expressed wish signalled by the cared-for. So this seems to be once again a matter of pull vs. push, which is so important in many tech-related issues. However, in the context of vulnerable and dependent persons, this interplay of request and response poses yet another subtlety: the cared-for may be hesitant or embarrassed to express a need explicitly, which makes the task of the one-caring even more difficult, namely to recognize the wish through careful active listening while still not overriding the other by preemptive patronization.

Maybe technical self-service can mitigate some of the embarrassment, too. When self-service supermarkets arrived to replace the mom-and-pop groceries, one of the success factors was that customers did not have to be embarrassed when they needed time to decide or did not know how to pronounce a product’s name. (I remember when Sunlight soap had a sun icon at the very spot that would have distinguished the English ‘g’ from the ‘c’ of the German equivalent, to mitigate the pronunciation problem.) So the reduction of human attendance had at least a tiny welcome flip side.

However, what the active listening of a one-caring can recognize as the unspoken wishes of a cared-for is probably not always recognizable by a machine, simply because the cared-for will approach the machine differently.

Icons of a nurse and a patient.

#ethics21 Personal or not

I am still surprised how much arguing seems to be needed against the possibility of a universally valid ethics. For me, ethics has always been a personal thing, and I wonder if the term just had different connotations in my youth and/or in my country. Although I have already mentioned some aspects in week 1 on Oct 14 (privileged professions, GDPR, and discretionary decision-making), I think I should expand on this a bit more.

Perhaps our personal ideas of ethics did not seem valid for everyone because sometimes they even opposed the common authoritative views and were allowed only as an exception. For example, there were discussions about whether a school subject called ethics should become a replacement for those who refused to attend the Religion lessons. And those who refused the military draft for ethical reasons were only allowed to do their alternative service after explaining their ‘conscience’ to a jury. (In the hearing, BTW, hypothetical situations comparable to the trolley problem were common questions, so we did have to think about such dilemmas a lot.)

By contrast, for our conduct within professional work in the public service, the term ‘ethics’ was not used; it was simply duty. Of course there would have been plenty of opportunity to abuse some discretionary leeway and get away with unnoticed tricks. But the obligation was founded on an oath of allegiance (mentioned before), which BTW was not done theatrically upon a bible or the like, but by a handclasp with the principal director; nevertheless it meant an equally binding relationship. The oath also contained the ‘FDGO’, the free democratic basic order, and as students, we often thought that we would probably not be admitted to the public service, because people who participated in critical protest marches were suspected of disloyalty (and surveillance at the marches was a matter of course).

Perhaps the public service in Germany has a speciality that needs to be mentioned, although it did not directly apply to me and although its importance has now largely decreased: there is the special employment relationship of a “Beamter“, an officer who has a mutual special trust relationship with the state. This means that his or her employment can almost never be terminated (except for felony), and in turn, he or she was entrusted with certain tasks, such as running reliable infrastructure like Mail or Rail, or sensitive jobs such as teacher or administration officer. (The history of this status is related to the Prussians’ clever fight against bribery: by relocating the officer frequently, it was harder for him to network with the locals to gain personal benefits.) This tradition may explain why some of what might otherwise be called ethical is just seen as a professional obligation.

It has been amazing in this course how many terms don’t have a simple one-to-one translation. And particularly the word ‘care’ has so many different meanings. The image below shows one meaning that has been particularly important to my parents’ generation after the war.

CARE packet, see rich description at the linked source
CARE packet, CC BY-NC-SA by dc-r docu center ramstein


#ethics21 Week 6 thoughts

Today Stephen explicitly encouraged us to blog some thoughts on how care ties into … analytics.

From what he has said over the years, my first thought is about the parallels between care and AI. One parallel is how the ‘One-Caring’ (as Nel Noddings called them, see Jenny Mackness’ wonderful notes) knows what to do, without rules and without being able to explain it, by recognizing. This idea was detailed in the previous week, and I find it very plausible, except that I would not draw some of the conclusions; see below.

Another parallel is how the One-Caring learns their ethics: not via centrally provided templates, principles, etc., or from central authorities, but rather decentrally, from examples, via ripple effects, and perhaps like ebb and flow. I have written about this before, but it was only recently that I realized how important this decentrality is.

Beyond these parallels, of course the differences between humans and AI come to mind, and it is difficult not to resort to a wholesale rejection like “Robots will NEVER be able to do xxx”. So we need to guess: will robots be empathetic?

Empathetic behavior is probably not too difficult to emulate. But empathetic feelings?

As I said in week 2, “I think I don’t underestimate AI’s eventual abilities, and how far some extreme thought experiments of ‘raising’ AI personalities can liken hypothesized machines to us — which I tried to describe in my EL30 post about alien AI intelligences.” But even in these thought experiments, the alien AIs will ‘feel’ their empathy towards the conspecific alien fellows who raised them, not towards us earthlings, and the ‘feelings’ will be alien to us, not satisfying our “craving for the real, the genuine and the authentic, and this is, for me, the cue that AI’s role of personal coaching of unique individuals, will be limited” (as I wrote in Week 4 mentioning research by N. Meltzoff).

Take coaching first (to exclude the additional aspects of vulnerability and dependence of the Cared-For that Noddings so thoroughly described). A human coach knows whether the client/student can bear another challenge, and he signals that he believes the client might succeed. The client, in turn, needs to trust that the coach really believes this, otherwise the encouragement won’t work.

Now an AI is said to be able to determine whether the student’s performance warrants another challenge. (Let’s assume this is correct, even though one may doubt that AI works not only with big data but also with the few personal data points of a single student. Maybe the AI has known the student for a long time, through a long mutual relationship.)
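
Just to make that assumption tangible, here is a toy sketch of the kind of heuristic such a system might apply; the window and threshold are invented for illustration, not taken from any real product:

```python
def warrants_challenge(recent_scores: list[float],
                       window: int = 5,
                       threshold: float = 0.8) -> bool:
    """Toy heuristic: offer a harder task only when the student's
    recent average success rate clears a threshold."""
    recent = recent_scores[-window:]
    return bool(recent) and sum(recent) / len(recent) >= threshold

# A student succeeding in most recent tasks would be offered a challenge.
print(warrants_challenge([0.9, 0.7, 0.85, 0.8, 0.95]))  # True
```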

But will the client trust the robot coach? I don’t think so, unless he is betrayed and gaslighted and told that the robot is a human — which sounds like a nightmare scenario from a sci-fi novel where the world is split and polarized into states that allow such inhumanities and other cultures that are shocked by this habit, much as by eating fried cats or keeping slaves.

So I think a robot coach cannot help grow self-confidence. And even less will a robot carer gain the trust of a vulnerable cared-for. But for the empathetic feeling to develop in the one-caring, this trust is crucial — it’s a mutual development. It’s a chicken-and-egg thing, as Sherida said in today’s live session.

So, how can we apply the parallels and differences between AI and human care or learning? One thing is that understanding AI might help us understand how learning works, including how to learn ethics: not via rules as in previous generations of AI (‘GOFAI’, good old-fashioned AI), but via patterns and recognition. Thus the old Computer Theory of Mind could be replaced by a more current metaphor, an AI theory of mind. I find this very plausible (not least because during CCK08 I thought we needed an ‘internet theory of mind’). But I don’t think it would be a popular idea, because people don’t know and don’t trust modern AI.

The other possible consequence would be the other way around: instead of inferring from AI to humans, replace humans with AI who “[A]re better at reading the world than we are” (as Weinberger was quoted). This would probably be welcomed by people who think of old rule-based AI and hope that ethics could just be programmed into the machines. But I would not want my fate decided by machines that just ‘know’ and ‘recognize’ what is right without knowing why. For hard ethical questions, I do not just know what is right; I need to think hard about it, and perhaps arrive at a very unsettling ‘lesser evil’ solution.


#ethics21 Mapping

In Friday’s live session, several aspects came up that warrant a more detailed discussion of my tool and its provisional use for visualizing some connections.

1. First: no, it does not offer varying line widths, just a coloring called ‘pale’ for vague connections.

It is not intended as an expressive visualization of clearly defined structures.

2. Placeholders:

“the idea here is if you get enough input and enough of these graphs and you think of these words, these symbols just as placeholders, but actually not actually as representations of anything.”

The label of an item is almost irrelevant if the description is immediately available alongside, as in the right pane of my tool, which therefore satisfies the idea of a placeholder label that needs no agonizing over. You can even omit the label altogether, and color the icon pale if you are still unsure.

3. Keyword mapping:

“simple keyword mapping isn’t really going to do the job” … “If you have multiple layers, multiple connected layers, you can really do some fine tuning”

Between the two outer columns, you can always place intermediate/temporary items. (See the picture below for how I did it before submitting my Week 4 task.)

So by fiddling and fumbling with the items, you can manually try out the kinds of operations you would later expect the AI to do, as was discussed in the live session. My think tool is not optimized for finished structures, but rather for raw material for thinking.

Here is a quote from the book I just started reading:

“By treating information as a resource, as raw material rather than a finished product, we give ourselves permission to adapt, modify, and transform it into a shape that aids understanding and makes us better thinkers.”

Stephen P. Anderson; Karl Fast; Christina Wodtke. Figure It Out: Getting from Information to Understanding (Kindle locations 231-232). Rosenfeld Media.

But of course you are welcome to use it however you want.


#ethics21 Week 5

Although the course has drifted quite a bit towards the oral, I will briefly write down my thoughts here in my blog.

Screenshot of a calendar, saying Week 5

I have become increasingly aware of how little I know about the weighty, complicated philosophical traditions tied to almost every one of the seemingly simple concepts. So I appreciate Stephen’s comprehensive lessons, and I am happy to invest the many, many hours of listening.

There is one thing I still hope to understand: why is the idea of consequentialism so despised and attacked by many? Is it that some early capitalist ideologists somehow ‘hijacked’ the idea to justify egoism?

For me, the earliest encounter with the idea was Max Weber’s contrast between Verantwortungsethik (ethics of responsibility) and Gesinnungsethik (ethics of attitude, see Wikipedia and a previous blog post), and I could not understand what is wrong with the responsibility to consider the consequences of one’s decisions.

Later, I was surprised that consequentialism is equated with utilitarianism, but still, what’s wrong with utility? Do I smell some elitism here, of those who don’t need to care about utility because they live in abundance and are proud to focus on Das Schöne, Wahre, Gute (the beautiful, the true, the good)? But beauty does have a utility (see e.g. “Aesthetics as Pre-linguistic Knowledge” by Allan Whitfield), truth does save costs in the information economy, and ‘the good’ has many connotations of the useful.

So if there is some elitism here, and perhaps some dreaming of a meritocracy, I will be watchful for growing tendencies against democracy as well — which I see as my duty, because it was part of my oath of allegiance, 40 years ago, at the start of my professional career in the public service.


#ethics21 Undistributed posts reference

Somehow, three of my course posts have not shown up in the daily newsletters, the archives, or the RSS feed (at least I did not discover them), so here are their links:

#ethics21 Week 4: many concepts, Nov 7 (included later)

This week has brought plenty of ethical concepts, and I need to share how interesting I found it to put them all on a map and connect them.

#ethics21 Week 2 Curiosity, Oct 17

In Week 2, we will learn about Applications of Learning Analytics, and I am curious about a few things that have puzzled me for a while.

#ethics21 Introducing myself, Oct 09

Other course participants started introducing themselves and indicated why they joined the course, so I will do this, too.


#ethics21 Week 4: many concepts

This week has brought plenty of ethical concepts, and I need to share how interesting I found it to put them all on a map and connect them.

I did not make a proper “concept map” (with labeled relationship arrows) but just associated those terms that seemed most similar. And of course the list is by no means complete or definitive; I just copied most of the terms from Stephen’s slides, and some additional ones from Jenny Mackness’ latest post, which is incidentally about values, too.

I really recommend trying it out for yourself. Select all the words listed below from top to bottom, then open http://demo.condensr.de in a new window, and drag and drop the entire selection onto the canvas. (If anything goes wrong, right-click on the graph, select ‘Wipe clean’, and try once more. The drag & drop can be watched in this video. Don’t hesitate to contact me directly.)



value
utility
benefit
worth
good
goodness
beneficence
gracious
generous
non-maleficence
harm
evil
judgement
calculating
measure

beauty
joy
pleasure
happy
wellbeing
eudaimonia
meaningful
harmony

truth
knowledge
wisdom
honesty
integrity

fairness
equity
equality
justice
non-discrimination
democracy

goal
end
purpose
pursuit
intention
intentionality
outcome
interest

autonomy
consent
privacy
confidentiality
anonymity

dignity
respect
compassion
acceptance

independence
dependent
vulnerable

consequence
responsibility
accountability
explicability
transparency
stewardship
duty
obligation
accuracy
objectivity
reliability
conscientiousness
trustworthy
trust
faithfulness

competence
authority
professionality

rules
principles
laws
commandments

virtue
character traits

Of course I used the full version, which is equally free to download here. My results, after some autolayouts and zooming, are shown below: first a tree with minimal connections, then a circle with added cross connections. The colors on the left represent the ‘betweenness centrality’.
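
In case the term is unfamiliar: betweenness centrality scores a node by the share of shortest paths that run through it, so it highlights ‘bridge’ concepts between clusters. A minimal sketch of the computation on an undirected association graph like mine; I use networkx here purely for illustration (it is not the tool’s actual engine), and the sample edges are just a few of my associations:

```python
# Betweenness centrality on a small word-association graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("value", "utility"), ("value", "worth"), ("utility", "benefit"),
    ("value", "good"), ("good", "goodness"), ("goodness", "beneficence"),
    ("good", "justice"), ("justice", "fairness"), ("fairness", "equity"),
    ("equity", "equality"),
])

# Fraction of all shortest paths passing through each node.
centrality = nx.betweenness_centrality(G)
for word, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{word:12s} {score:.3f}")
```

Nodes like ‘good’ and ‘justice’, which connect the value cluster to the fairness cluster, come out on top: exactly the kind of bridging concept the coloring draws attention to.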


#ethics21 Week 4, more

Having seen all the issues presented in Stephen’s videos, you might want to look for a master switch for all the AI in the world and switch it off, immediately and completely. But that’s not going to happen. So I go on thinking about what is most important for me.

Unlike the issue described in my previous post, the following concern of mine can be clearly subsumed under major categories, such as ‘Opacity and Transparency‘, and under codes of ‘Honesty’, but among these broad topics it might get buried. It is: mandatory labeling of artificial agents.

Initially, it might not seem like a big deal, since the first few instances of AI that we encounter often proudly declare how they use progressive technology, or they affect only trivial occasions. But very soon, artificial agents will be able to appear totally indistinguishable from genuine humans. Then trust in their owners’ honesty in labeling the AI will become an extremely critical ingredient of living together with such creatures, once the subjectivity and individuality of the human is the only distinguishing feature. And therefore, I think, we need to be careful even with small, subtle indications of fakes and inauthenticity, because they might blur the difference and make the distinction ever harder. (Examples below.)

Concealing the artificiality will not only be a matter of dishonesty, which some ‘intellectually’ privileged may use in their “war on stupid people” to gaslight and patronize them, and it will not only foster a cultural climate of ubiquitous suspicion of pervasive fakes.

In education, it simply won’t work, since human learning and imitation sometimes depend on genuine human partners. In a study by Andrew N. Meltzoff (whose office kindly sent it to me), titled “Understanding the Intentions of Others: Re-Enactment of Intended Acts by 18-Month-Old Children“, imitation of human vs. inanimate agents was tested, and it was found that “Children showed a completely different reaction to the mechanical device than to the person“. As I wrote in my predictions for this year, people are craving the real, the genuine and the authentic, and this is, for me, the cue that AI’s role in the personal coaching of unique individuals will be limited.

Examples

In many cases, we don’t really care whether we are dealing with a real, identifiable person who addresses us personally. If we get an email from a support center, it’s not too important whether the name in the From: field (the boss?) actually is the person who wrote the response, or whether he or she crafted the response specifically for me, composed it from reusable fragments, or used a stock template. But to impress the customer, vendors will tend to appear as personal as possible.

I am old enough to remember when advertising letters first used our name within the text rather than just on the envelope: wow! Once we got used to this sort of computer printout (in equidistant typewriter fonts), our names suddenly appeared in proportional fonts, like real printed advertising brochures! And soon after, the name was not only in the header but right within the body … or on pictures … or on a label on the keys of a car we might win in a lottery.

A few more examples that are still innocent but may not be in the future: The announcement in our suburban train starts with “Meine Damen und Herren” = My Ladies and Gentlemen. Without the “My”, the phrase would be incorrect and alien (probably similar to French Mesdames et Messieurs). And as long as the automatic recording was at least spoken by a human, it is still tolerable, as is the “Heartfelt welcome” and the closing phrase “We wish you a nice day”, since the plural ‘we’ might be interpreted as the AI together with its owner organisation. But the first person singular should not be abused for faked “feelings” of an AI — or am I too fussy here?

Speaker in a train, with symbolic sound waves emitted
Stock photo?

The more we get used to innocent fakes, the more difficult it will become to detect serious ones. For a friend’s birthday, platforms remind me of the day and advise me to “show them that you thought of them”. In the news, we often see that reports about casualties and crimes contain generic stock photos of ambulances, fire trucks, and police cars instead of actual photos of the event. (Maybe my tolerance is too low, and as a non-TV-viewer I am not yet sufficiently used to autumn scenes in spring, to family relatives who don’t resemble each other, and to windows that only ever show some artificial light behind them rather than visible life…) And even cats seem to love artificial ruffling.


#ethics21 Week 4, Codes

I have been skimming a lot of issues and codes by now. But there is a specific harm that I have not found mentioned anywhere, nor a more general principle where it might belong: Proliferating popups, alerts and ‘notifications’.

Certainly, AI can contribute to their abusive spawning. And when we oppose the tracking by advertising companies, isn’t it mainly the danger of additional popups that we really fear?

On one hand, they might be seen as merely a nuisance. But on the other hand, we are all aware today that attention is a scarce good in the ‘attention economy’, and obtrusive interruptions and distractions cause real harm to our cognitive performance. And in a learning context, inappropriate distractions can be a big factor in failure!

So, shouldn’t people’s attention budget be considered a value, and its protection an ethical principle? In the ethical codes, there are noble general principles such as “must not unreasonably curtail people’s real or perceived liberty”, or even values such as “human dignity”, that may seem violated by forced alerts and notifications. But the loose treatment is so pervasive here that the ‘offenders’ probably don’t feel addressed by the higher-level codes.

In particular, the idea of a ‘notification’ gets stretched in ever more abusive ways. Of course there are events about which I do want to be informed by ‘push’, by a ‘bring’ rather than a ‘fetch’: usually when another user has interacted with me or my resources. E.g. when someone comments on a blog (which Blogspot doesn’t seem to email me about any more), or when I get a “friend request” (which may still be spammy, but at least I can report such abuse). But when the platform itself makes yet another ‘friend suggestion‘, this is not a ‘notification’ any more; at best, it’s ‘news’ (see the sketch below).
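
To make that rule of thumb concrete, here is a minimal sketch of the distinction; the event fields and names are hypothetical, purely for illustration:

```python
from enum import Enum

class Kind(Enum):
    NOTIFICATION = "notification"  # push is justified
    NEWS = "news"                  # should wait until I come and fetch it

def classify(event: dict) -> Kind:
    """A genuine notification requires another user acting on me or my
    resources; platform-generated suggestions are merely news."""
    if event.get("actor_is_user") and event.get("targets_my_resources"):
        return Kind.NOTIFICATION
    return Kind.NEWS

# A comment on my blog post qualifies; a 'friend suggestion' does not.
print(classify({"actor_is_user": True, "targets_my_resources": True}))    # Kind.NOTIFICATION
print(classify({"actor_is_user": False, "targets_my_resources": False}))  # Kind.NEWS
```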

For fine-grained distinctions like these, the awareness of wrongs is apparently not yet sufficiently pronounced — perhaps because it is not explicitly addressed in the codes. I searched for terms like ‘intrusive’ in the codes, but they only referred to peeking at private information, not to pushing information.

But if the inflation of ignorable ‘alerts’ continues to grow, the danger grows that one day we will ignore a rare, crucial warning, with real harm to health and life.

Various notification symbols such as bells and vibrating mobiles, in different colors.

Windows 11 spook finished for me

Windows 11 broke my taskbar, and so I had to go back to Windows 10.

Almost in time for Halloween, my update to Windows 11 last night brought a nasty surprise: the taskbar options “Never combine buttons” and “with labels” were removed, and what was my central productivity dashboard was replaced by something like the stale Mac dock.

A ghost labeled Windows 11 has a Mac dock in its thought bubble, and the taskbar below, containing two labeled Firefox and two Notepad buttons, is broken into two pieces.

Furthermore, hovering over the task button of a minimized window wouldn’t restore and foreground that window anymore. (If you never used this feature, see this short video scene here at t=87.)

Of course, telemetry analytics might have shown that fewer users use such powerful options. Of course, as users are ever more patronized and stultified by the restricted operating controls of browsers and touch devices, fewer users even expect more powerful controls, and the feedback effects suggest that users want dumb controls. So they drifted away from powerful OS controls towards patronizing ‘sovereign posture’ apps and ended up in the miserable browser-tabs mess long ago.

(How do I avoid too many open tabs, you may ask? Well, if in doubt, I drag the page’s icon from the address bar and drop it onto the desktop, which is specifically designed for temporary stuff like this that may be filed or trashed soon after.)

Combined task buttons, by contrast, destroy the affordance of the direct manipulation principle, adding an annoying intermediate step of hierarchical selection among the grouped items.

For good measure, scroll bars became so narrow that hitting them is a pain, and they get broader only after you hit them. This behavior has been fashionable for a while on unusable websites.

I wonder if there is a pattern to be recognized in all these behaviors, one that might skew the ‘usability’ testing: scroll bars that react to hitting by getting broader, links that react by changing their color only on hovering, an autohiding taskbar that reappears only when you approach its hideout, or task buttons that react by offering their annoying alternatives — all this might appear obliging and responsive to consuming users and please their craving for fidgeting, unrest and fighting boredom, but for actually working with IT, it is just the intrusive app pushing itself in between the user and her workpiece.

So I ended the spook. I did not just run away; I filed a complaint in their feedback tool and tweeted about the problem. Perhaps you can do that, too, to show them that Windows users are not the sheep that other OSs expect.
