Diversity vs. divergent

Yesterday I read a Twitter thread that treated ‘divergent’ and ‘diversity’ as if these words belonged together, so I had to look up their etymology.

Ultimately, they do stem from the same Proto-Indo-European root (with descendants as diverse as wreath, worm, rhapsody, extroversion, warp, worth, and many more). But already in Latin, their ancestors were very different: vertere (‘to turn’) vs. vergere (‘to bend, turn, tend toward, incline’).

In any case, the relationship is an occasion to think about one’s own understanding of ‘diversity’. If it only applies to groups or people that are, in some sense, ‘divergent’ from some ‘normal’ reference point or center, it might be a misunderstanding.

Maybe one overlooks differences that are less obvious, such as preferring a synchronous over an asynchronous style, oral over written, guided over independent, mobile over desktop, neat outlines over scruffy maps, or other such inclinations, however vaguely demarcated?

If one is not aware of one’s own style, how can one then cater to genuine diversity?

Circle of 13 armchairs of very different styles.
By Nancy White

Science Denial

I have now finished “Science Denial” by Gale M. Sinatra and Barbara K. Hofer. Their answer to the wicked problem of “What to Do About It” focuses on educating people about how science works.

In particular, they emphasize that scientists are fallible and that “there is no single method that leads to some objective truth” (p. 5); rather, it is the collective effort that vets claims and produces the scientific consensus.

This is, IMHO, a very different picture from the one that makes science so attractive to some and misleads others: the certainty about true or false, right or wrong. That certainty can serve as a replacement for religion (for those who find religion too old-fashioned but still crave being a sheep following a shepherd), or as a banner to follow like a sports fan club (certain that their team will win and that they are therefore on the right side of history).

While this complacent, arrogant image might have put off the deniers, they too fell prey to the underlying binary thinking, just with the added thrill of being on the opposite side of, and feeling even smarter than, the mainstream. While every dumb database ‘knows’ that there are three possible values — true, false, and ‘NULL’ (= don’t know, yet) — they equate unproved with disproved (much like simple-minded ‘myth-busters’ do, BTW).
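
To make that explicit, here is a minimal three-valued logic in Python rather than SQL (my own illustration, not from the book), with None standing in for the database NULL:

```python
# A minimal sketch of three-valued logic, analogous to SQL's TRUE/FALSE/NULL.
# Python's None stands in for NULL ("don't know, yet").

def and3(a, b):
    """Three-valued AND: a definite False decides; an unknown stays unknown."""
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

claim_proved = None                 # unproved: no verdict yet
print(and3(claim_proved, True))     # -> None, not False:
                                    # unproved must not collapse into disproved
```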

Sinatra and Hofer give plenty of useful advice to science communicators, for example “‘Both sides’ is for opinions, not science” (p. 176). IMHO, these tips are more promising than expecting that individuals go about “adopting a scientific attitude” (p. 8), evaluating complex information, and following advice like “Monitor your own cognitive biases.” (p. 165) and “Know the role of your emotions.” (p. 167).

But what I think is very necessary is that experts themselves do not reinforce the impression of certainty and complacency. It is particularly dangerous if they do so in a neighboring discipline that the layperson cannot really tell apart from their own. I, for example, could not sufficiently keep apart the scopes of virology, immunology, and epidemiology when the pandemic started.

Book cover "Gale M. Sinatra and Barbara K. Hofer: Science Denial. Why it happens and what to do about it."

Summary of my course blogposts

After my 19 blog posts for the cMOOC “Ethics, Analytics, and the Duty of Care”, I need a summary — although this is almost impossible for such a massive course of approximately 790 slides, 43 hours of video, and 500+ pages of spoken text (net weight, i.e. without the 23 technical and ~20 discussion videos).

Screenshot of YouTube thumbnails of 30 of the course videos.

The most important positive insight for me was that AI in education should and could mitigate vulnerabilities and oppression (457), in particular by applications similar to formative (not summative) assessments, and by relieving time pressure.

There was a lot in the course that I could easily agree with, in particular the idea that ethics is not something that can be generalized, deduced from rules, or programmed. I now understand better how the term ‘ethics’ is used in a way that was alien to me (456), and why the consequentialist view is loaded with so much historical ballast (453).

The large module on Care Ethics was especially interesting, on the one hand because of the parallels with connectivism (455 and 461), and on the other because it inspired more thorough thinking about vulnerability and dependence (457), and about the delicate relationship between the one-caring and the cared-for (458).

It is here that my skepticism of AI starts, and it was good to engage with the topic and go beyond the mere unease.

One objection is that the relationship with a robot cannot be the same as with a human, even if it is a good fake, simply because we interact differently when we know it’s a faked human (455, 460). (The tempting solution would of course be to deceive the dependents about the robot’s true identity (450), and I think we must be very clear that this would violate any ethics that objects to dishonesty as an abuse of the privileged position of being the better cheater.)

Another objection concerns growing independent, as is expected of higher-ed students. This requires not only trust (see the objection above) but also learning to transfer one’s knowledge to different domains and to come up with associations beyond the narrow subject matter at hand. But realistically, AIs will be limited to one specialized domain each (445, 458). Furthermore, students who avoid independent work might indulge in the comfort of the machine. Of course that’s just my speculation.

Finally, there was plenty of opportunity to think about the political dimension: the tree vs. mesh structure of society (444), the power on the labor market (447), the power distribution between end users and those who pay for the development (462), and whether it will be the poorer students who are fobbed off with faked teachers (460). All of this suggests that we should be very wary.

The course interactivity was a bit disappointing for me because there was almost no blogging and commenting, which I would have preferred over the oral synchronous sessions.


#ethics21 Module 7, more

After 13 videos and more than 10 hours of watching, I realized that I may have misunderstood who does the training of an AI model.

I thought that training an AI by supervising and reinforcing its learning and creating a model is one thing, and that using it by interacting with it is another, later, thing. Now I learned that there is no such simple division of labor between developers and users, and that the end user’s specifications count as training as well: for example, giving Feedly’s Leo examples of posts that I like to read.
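
I have no idea how Leo is actually implemented, so the following is only a toy sketch of the general mechanism, namely that each like or dislike updates the model (all names and numbers here are hypothetical):

```python
# Toy sketch (not Feedly's actual code): every like or dislike from the
# end user nudges the model's weights, i.e. the user is training it.
from collections import defaultdict

weights = defaultdict(float)        # one weight per word ("feature")

def score(post_words):
    """Relevance score of a post under the current weights."""
    return sum(weights[w] for w in post_words)

def feedback(post_words, liked, lr=0.1):
    """Each piece of user feedback is a small training step."""
    target = 1.0 if liked else -1.0
    for w in post_words:
        weights[w] += lr * target

feedback(["connectivism", "mooc"], liked=True)
feedback(["celebrity", "gossip"], liked=False)
print(score(["mooc", "ethics"]))    # posts about MOOCs now score higher
```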

But now I am left wondering how far my influence on the model may extend. Surely there must be some limit somewhere? If I am being cared for by a care robot and tell it that plenty of sweets are best for me, will it believe this and bring me ever more sweets?
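
One plausible answer, and this is purely my own speculation, is that end-user feedback may only adjust preferences within fixed limits that the developers set and that the cared-for cannot train away:

```python
# Purely speculative sketch: the cared-for can 'train' preferences,
# but only within hard limits fixed by the developers.
MAX_SWEETS_PER_DAY = 2              # fixed constraint, not trainable

sweets_preference = 1               # learned from the user's requests

def request_sweets():
    global sweets_preference
    sweets_preference += 1          # the user keeps reinforcing 'more sweets'
    return min(sweets_preference, MAX_SWEETS_PER_DAY)

print(request_sweets())             # 2
print(request_sweets())             # still 2: the learned wish is clamped
```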

And I suppose that here lies the border between a personalized service and a fully personal one; here is also the response to my doubts in week 2 and, similarly, to my suspicion of a one-way relationship that I raised at the beginning of module 7.

What I particularly liked in the last video is that, again, a very extreme alternative thought was carried through: a scenario of a truly mutual relationship between human and AI:

“if we treat the AI as, you know, a person that feeds back into the training of the AI, the AI eventually begins to regard itself as a person and treat itself as a person in its own decision making. So, I don’t think this is such a hard philosophical conundrum as it might seem” (1:18:59)

and

“interaction with artificial intelligence and analytical engines is ongoing and dynamic and doesn’t end and our major role in these interactions is to train them. If we train them, well, they will become reliable responsible, ethical partners.” (1:20:59)

Here, the ‘we’ seems to include both the developers and the end users, but I am not sure about their distribution of influence and power. Unless we get some sort of ‘Indie AI’, the capital paying for the costly production will probably have more say.


#ethics21 Week 8

Now that the negative thoughts from the previous post are out of my way, I can turn to this week’s topic.

In Monday’s introduction, there was a lot of talk about “society as a whole”, in particular the ethics of the whole society. As ten years before with the knowledge of a whole society, I had difficulties getting my head around that. So I’ll first revisit how it became easier for me to understand back then, after Stephen’s comment.

I considered approaching the ‘knowledge’ of a profession or discipline xxx as a newcomer, namely learning how ‘they’ think and speak and what it may ‘feel like’ to be one of them. First I might encounter ‘them’ as an individual new colleague, a ‘you’ in the singular. Then gradually, the commonalities and patterns of their ‘being an xxx professional’ become ever more familiar, the borders between them begin to blur, and I see them as a ‘you’ in the plural. At the end of this process, the collection of xxxs ‘as a whole’ contains, strictly speaking, all of them except myself. Then it is only a small step from the ‘they’ or ‘you’ to the ‘we’. We all.

Now, ethics is learned similarly: from individuals in one’s close proximity, via ‘ripple’ effects or, as I expressed it in my first vague post, via contagion. Later I learned that this is compatible with connectivism (see ebb and flow). And it has a lot to do with decentralization, as opposed to central authorities and templates.

A decentralized network topology, with nodes in the proximity of one red node having icon colors and connector colors in tones of decreasing warmth.

Both with knowledge and with ethics, it seems that ideas ‘spread’ across, or more precisely grow at, the interface between human and human. That’s why it is so dangerous to poison the trust at this interface with fakes.
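
Just to illustrate the metaphor (a toy model of my own, not course material): if each person’s stance simply drifts toward the average of their direct contacts, an idea ripples outward without any central authority:

```python
# Toy model of 'ripple' spread: each person's stance drifts toward the
# average stance of their direct contacts; no central authority involved.
network = {                         # hypothetical contact graph
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}
stance = {"A": 1.0, "B": 0.0, "C": 0.0, "D": 0.0}   # A holds a new idea

for _ in range(10):                 # a few rounds of local interaction
    stance = {
        p: 0.5 * stance[p] + 0.5 * sum(stance[n] for n in nbrs) / len(nbrs)
        for p, nbrs in network.items()
    }

print({p: round(v, 2) for p, v in stance.items()})
# D is influenced by A's idea without ever meeting A directly.
```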


#ethics21 Faked teachers?

A challenging question this week came not from the course channels but from OLDaily: “Still don’t think they can be teachers?”, in a comment about expressive, emotive humanoid robots.

Yes, I fear they can, and maybe poorer students will be fobbed off with faked emotions in countries with a commercialized education system.

But what kind of education will that be? Not learning to recognize subtle patterns (which would reveal the fake), but drilling facts and procedural skills that will be useless by the time the students graduate, since such jobs will be automated by then anyway. And worse: getting used to fakes and dishonesty through false emotional expressivity, destroying the natural aptitude for trust.

Sorry to sound apocalyptic, but as I have said especially in week 4 and week 6, faked AIs for gaslighting the underprivileged would be a nightmare for me.


No fear of entangled links

My workflow includes something that might seem scary to the uninitiated observer: I connect icons on a map without bothering about how entangled they become.

Screenshot of a zoomed-out map showing 17 columns of densely connected items.

Older readers might be reminded of Burda sewing patterns, where one could trace a colored line among densely packed shapes.

Schematic example of Burda Sewing pattern lines
See a genuine photo at https://www.saturdaynightstitch.com/

If you don’t trust that they can often be nicely disentangled, you can now try a new function in my Thought Condensr tool: a puzzle game.

The catch is that rearranging too early might seduce one into premature grouping and pigeonholing, and thus into missing new relationships.

If you never thought of trying out my tool, maybe this is the opportunity: relax with the little puzzle game.


#ethics21 Week 7 Takeaways

It is wonderful that in the #ethics21 MOOC, some important thoughts were really carried through. This made the limitations of AI much clearer for me.

One such thought was what algorithms that provide care, integrity, or trust would look like, with inputs from a whole caring community. The other was whether, in a large scaled-up MOOC, many smaller groups could each have an “artificial Stephen” modeled after his many live inputs (see Friday’s discussion at ~70 min).

Both felt somehow wrong immediately — I would not want to participate in such a MOOC, or to have a robot carer for solace — but why? To be fair, the scenarios confirm neither the prejudice against AI as a mechanistic expert system nor the misunderstanding of teaching as transmission. Still, there is an underlying input–output pattern in the perceptron layers and the end-to-end machine learning workflow that doesn’t seem to leave room for something like reciprocity. Once the training stage is over, the AI won’t learn any more from the end users, even if it is continually tweaked by the ‘data processor’ stakeholders using new data from the ‘data subjects’ — if I understood this correctly.
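
A bare sketch of that separation as I understand it (my own assumptions, not the course’s description):

```python
# Bare sketch of the train/deploy separation as I understand it:
# the deployed model answers end users but no longer learns from them.

class Model:
    def __init__(self):
        self.memory = {}
        self.frozen = False

    def train(self, data):
        if self.frozen:
            raise RuntimeError("deployed model does not learn from users")
        for example, label in data:
            self.memory[example] = label    # stand-in for real fitting

    def predict(self, example):
        return self.memory.get(example)

model = Model()
model.train([("question", "answer")])   # done by the 'data processor'
model.frozen = True                     # deployment: training stage is over

print(model.predict("question"))        # end users only receive predictions;
                                        # their interactions feed, at most,
                                        # the next offline training round
```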

German street sign “Einbahnstraße”, i.e. one-way street

Maybe my idea of Higher Education is too idealistic, based on Humboldt’s ideal (see an old CCK08 post) that university teachers and students should learn together, in a community of curiosity and the unity of research and teaching, or on Howard Rheingold’s long-standing practice as a “co-learner” teacher. I do think that the unique questions, misunderstandings, or surprising elements that each new student generation highlights and annotates can nudge the teacher toward new insight, even if just by a bit of questioning of old ‘matters of course’, or just a bit of refocusing.

Similarly regarding Care, I think that the careful active listening by the one-caring to the cared-for (see previous post) may indeed occasionally entail that the former learns from the latter, for instance by gaining the valuable perspective of a very old person.

This intergenerational mutual learning may be more or less absent in the typical K-12 environment, where just centralized templates of content and skills are wanted; but in environments where fostering independence is important (infants, elderly, HE students, or general critical literacy), it may be rare but still crucial.

And beyond the intergenerational interactivity, I think every piece of feedback has the (however small) chance to generate genuine new insights. However, these will probably not pertain to the specialized domain the robot teacher is trained for, but rather come from distant associations, and I cannot yet imagine how AI would implement and handle such extra-domain input.

Unfortunately, I don’t have an idea either about how group-working MOOCs could be scaled up without the abovementioned artificial Stephens. As for scaling down, I think this course has shown that too small a number is also difficult; at least the blog commenting suffers if there is no critical mass.


#ethics21 More Week 6 thoughts

After many hours of listening to the 9 videos of this module, the reward is a very satisfying, plausible, and useful takeaway: AI in education should and could mitigate vulnerabilities and oppression. It is useful (sorry for the utilitarianism) because we can derive practical consequences from it for the design of the new systems.

1. When I think of vulnerabilities and dependence in education, I think foremost of grading and assessments. I have blogged about this and received constructive agreement and disagreement in the comments, here (Ungrading, my take) and here (#EL30 Week 6: Automated Assessments). From the former, I learned that “[students] want to know how they compare to others in their cohort and beyond”, while the latter nicely reassured me that “for the final summative assessments deciding about the future life of a human, such [problematic] algorithms are not acceptable.”

So if AI restricted itself to formative assessments, could it avoid frightening the weaker students while still letting them know how they compare to others? I think yes: the advantage of the digital device over the physical classroom is that it can preserve anonymity. If pupils want to know, the system can tell them how the others are doing without naming anyone.
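
A minimal sketch of such anonymous comparative feedback (my own illustration; all numbers are made up):

```python
# Sketch of anonymous comparative feedback for formative assessment:
# the pupil learns where they stand, but no classmate is ever named.
cohort_scores = [42, 55, 61, 68, 70, 74, 80, 88]    # made-up class results

def percentile(my_score, scores):
    """Share of the cohort scoring at or below my_score, in percent."""
    at_or_below = sum(1 for s in scores if s <= my_score)
    return 100 * at_or_below / len(scores)

print(f"You scored at or above {percentile(68, cohort_scores):.0f}% "
      f"of the class.")             # aggregate info only, shown on request
```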

Another source of fear and oppression is time pressure. I keep noticing in this course how much the synchronous and the asynchronous styles differ, and I am happy that the possibility of blogging relieves me of the rapid-fire pressure of the live sessions.

2. There is also another takeaway from the thorough coverage of the relationship between the one-caring and the cared-for, in particular from Noddings’s work: the one-caring should not act without the expressed wish signalled by the cared-for. So this seems to be once again a matter of pull vs. push, which is so important in many tech-related issues. In the context of vulnerable and dependent persons, however, this interplay of request and response poses yet another subtlety: the cared-for may be hesitant or embarrassed to express a need explicitly, so the task of the one-caring is even more difficult, namely to recognize the wish through careful active listening while still not overriding the other with preemptive patronization.

Maybe technical self-service can mitigate some of the embarrassment, too. When self-service supermarkets arrived to replace the mom-and-pop groceries, one of the success factors was that customers did not have to be embarrassed when they needed time to decide or did not know how to pronounce a product’s name. (I remember that Sunlight soap had a sun icon at the very spot that would have distinguished the English ‘g’ from the ‘c’ of the German equivalent, to mitigate the pronunciation problem.) So the reduction of human attendance had at least a tiny welcome flip side.

However, what the active listening of a one-caring can recognize as the unspoken wishes of a cared-for is probably not always recognizable by a machine, simply because the cared-for will approach the machine differently.

Icons of a nurse and a patient.

#ethics21 Personal or not

I am still surprised how much arguing seems to be needed against the possibility of a universally valid ethics. For me, ethics has always been a personal thing, and I wonder if the term just had different connotations in my youth and/or in my country. Although I have already mentioned some aspects in week 1 on Oct 14 (privileged professions, GDPR, and discretionary decision-making), I think I should expand on this a bit more.

Perhaps our personal ideas of ethics did not seem valid for everyone because they sometimes even opposed the common authoritative views and were allowed only as an exception. For example, there were discussions about whether a school subject called ethics should become a replacement for those who refused to attend the religion lessons. And those who refused the military draft for ethical reasons were only allowed to do their alternative service after explaining their ‘conscience’ to a jury. (In the hearing, BTW, hypothetical situations comparable to the trolley problem were common questions, so we did have to think about such dilemmas a lot.)

By contrast, for our conduct in professional work in the public service, the term ‘ethics’ was not used; it was simply the duty. Of course there would have been plenty of opportunity to abuse some discretionary leeway and get away with unnoticed tricks. But the obligation was founded on an oath of allegiance (mentioned before; BTW, it was not theatrically sworn upon the Bible or some such, but sealed by a handclasp with the principal director, yet it meant an equally binding relationship). The oath also invoked the ‘FDGO’, the free democratic basic order, and as students we often thought that we would probably not be admitted to the public service, because people who participated in critical protest marches were suspected of disloyalty (and surveillance at the marches was a matter of course).

Perhaps the public service in Germany has a speciality that needs to be mentioned, although it did not directly apply to me and although its importance has now largely decreased: the special employment relationship of a “Beamter”, an officer who has a mutual special trust relationship with the state. This means that his or her employment can almost never be terminated (except for felony), and in turn, he or she was entrusted with certain tasks, such as running reliable infrastructure like mail or rail, or sensitive jobs such as teacher or administration officer. (The history of this status is related to the Prussians’ clever fight against bribery: by relocating an officer frequently, it was harder for him to network with the locals for personal benefits.) This tradition may explain why some of what might otherwise be called ethical is just seen as a professional obligation.

It has been amazing in this course how many terms don’t have a simple one-to-one translation. And particularly the word ‘care’ has so many different meanings. The image below shows one meaning that was particularly important to my parents’ generation after the war.

CARE package, see rich description at the linked source
CARE package, CC BY-NC-SA by dc-r docu center ramstein
