#ethics21 (Technical:) Graph

I am fascinated by Stephen’s idea (mentioned in Friday’s discussion) to map the Applications of Analytics and the Ethical Issues, and then connect issues with applications to see what pattern emerges.

In a technical video he showed how to properly import the data into the database. Being impatient, I tried a quick and dirty solution with my own tool. (Caveat: it is not coupled to the database, so items added by participants do not appear, and connector lines are not synced back!)

Here is a video

and here is a screenshot:

Screenshot of a mapping tool, containing two tree-shaped graph structures facing each other

The resulting map where you can connect (ALT + drag) yourself is here: http://x28hd.de/demo/?applications-issues.xml

Posted in Ethics21 | Leave a comment

#ethics21 Week 2 Curiosity

In Week 2, we will learn about Applications of Learning Analytics, and I am curious about a few aspects of AI that have puzzled me for a while.

I think I don’t underestimate AI’s eventual abilities, and how far some extreme thought experiments of ‘raising’ AI personalities can liken hypothesized machines to us — which I tried to describe in my EL30 post about alien AI intelligences. So I am not surprised when Stephen even ponders the possibility of an “artificial me” or a copy-and-pasted “connectome“.

But when he is so optimistic about “personal AI” (e.g. for selecting feeds for one’s reading or selecting the audiences for one’s writing, or how “algorithms should be owned and run by individuals”), I have doubts:

Will this also work satisfactorily with the small amount of data from a single person’s interests?

Certainly, AI solutions based on big data may be able to align individuals with central templates such as common canons of memorized knowledge. But what about their personal goals, for which there is much less data available?

Five question marks in circles of varying color, size and aspect ratio.


How can AI arrive at new insights?

After all, AI learns from data of the past and ‘ruminates’ on them.

And as far as I understand it by now, the basis of much of machine learning is the sort of relationship that combines concepts that somehow belong together within a frame/ script/ schema, e.g. via ‘co-occurrences’ of, say, two words on the same page. But can it also come up with other types of links, such as metaphors or distant associations?
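To make the co-occurrence idea concrete, here is a minimal sketch of what such counting might look like. The toy ‘pages’ and the page-level window are my own illustrative assumptions, not anything from the course:

```python
from collections import Counter
from itertools import combinations

# Toy 'pages': each inner list holds the words appearing on one page.
pages = [
    ["ethics", "analytics", "data"],
    ["analytics", "data", "privacy"],
    ["ethics", "privacy", "consent"],
]

# Count how often each unordered word pair appears on the same page.
co_occurrences = Counter()
for page in pages:
    for pair in combinations(sorted(set(page)), 2):
        co_occurrences[pair] += 1

# Pairs that co-occur more often are treated as 'belonging together'.
print(co_occurrences[("analytics", "data")])  # appears together on 2 pages
```

Note that a metaphorical or distant association would never surface here, because the pair must literally share a page; that is exactly the limitation the question above points at.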

Also, novel data are probably available only in much smaller amounts — which combines both of my above doubts.

Posted in Ethics21 | Leave a comment

#ethics21 Week 1 Task

The first week’s task is already very challenging: “What Does Ethics Mean to You?” But I won’t be ducking out.

The most important aspect to me is probably: not to abuse my privileges, and maybe even to try to leverage them for something that is useful for others.

Unlike legal obligations, ethics is voluntary and goes beyond legal compliance. This also means that ethical goals, in contrast to laws, are not a consensus among the many of a democracy (at least not yet).

I understand, however, that the term ‘ethics’ is increasingly being used in a different way, very similar to legal rules and principles that are generally accepted and should be followed by anybody. My explanation for this trend is as follows:

a. In Professionalism (see Stephen’s post about FELT), it used to be a higher standard for a privileged community, voluntary and beyond the public’s obligations (as above). But with the growing competition for scarce academic jobs, the effects of ‘bad actors’ grew, too, and rules became more mandated and formalized. For example, instead of citations as credit for inspiration (which would also suggest attribution to blogs), the focus shifted to bibliometrics, copyrights and formal rules (sometimes even banning blog citations).

b. From my European angle, much of what is elsewhere checked by research ethics compliance procedures (at least in the social sciences) seems to be about data privacy — which has never been a voluntary matter here, but was legally enforced by the GDPR and its predecessors.

c. Automatic decision-making agents seem to be considered an extension of personal discretionary practice, i.e. ethical rather than legal. But this view ignores that for most decisions involving multiple citizens, the officers in charge are legally bound to comply with the equity principle.

So with this shift from personal patterns to public rules, it is understandable what Stephen bemoans in his presentation of this week: that “We spend so much more effort trying to prevent what’s bad and wrong”. After all, public interaction rules focus on the border of one’s personal liberties where the liberties of another citizen may be harmed.

Above, I too mentioned the negative (avoiding harm/ abuse) before the positive (the aspiration to try something useful). It is easier to estimate when my privileges would be unfairly abused than to determine how much of the ‘noble and good’ I should do. I won’t claim that I would ever compensate for the unfairness sufficiently.

Which inevitably brings us to the political, and to the structures of power and influence. In the live event at around 16:21 (slide 5), Stephen discusses network topologies and suggests that we are inevitably moving to a mesh structure. I would distinguish economic from political power. Economic power (of the few) is mostly derived from the very fact that the few control some scarce resources (at least this is what my understanding of game theory suggests, which may be outdated, since I acquired it while writing my graduate thesis on game theory in 1977-79). By contrast, political power is the power of the many, who could override and confine the former (at least in a democracy). This political power may be necessary to limit the job market effects — where the owners of AI machines are the few.

A power law distribution

An important special case of unethically abusing one’s privileges is dishonesty (leveraging a greater ability to deceive others). With mental abilities gaining ever more importance in the information economy, these ‘assets’, too, divide us into Haves and Have-nots, and it’s time to watch out for increased gaslighting and patronization online. My nightmare would be that artificial intelligences surreptitiously obtain the trust of simpler minds by pretending to be genuine humans.

More links: my own posts about similar topics include Blog parade “AI for Common Welfare?” (harm on the job market), #EL30 Week 6: Automated Assessments (decisions and scarcity), and #EL30 Alien Intelligence AI (programmed ethics).

Posted in Ethics21 | Leave a comment

#ethics21 Introducing myself

Other course participants started introducing themselves and indicated why they joined the course, so I will do this, too.

During my entire working life, I was employed in IT. And if IT, or the Internet, or AI, eventually became a curse rather than the blessing that we intended, I would feel somewhat guilty and complicit. This is why I try to closely follow how AI is going to be deployed, and to see what I can do to help warn if misuse is lurking.

I don’t know if this would be called my ‘duty’ or my ‘responsibility’ or ‘accountability’ or ‘obligation’ or whatever, since I don’t have sufficient knowledge about ethics. So that’s another motivation to participate.

A third expectation is to learn more about the connectivist and decentralized aspects of care ethics. I am wildly guessing that these aspects have something to do with how this type of ethics is learned: not from central authorities, but rather via ripple effects, and perhaps like ebb and flow.

A calendar week view, where the week number is changed to Week "-1"

To introduce myself briefly, I’ll just say that I have retired from the computer center and e-learning center of the University of Heidelberg, and that I now develop a free think tool (I hope occasional references to it will not annoy anyone as advertising; I assure you I do not have any commercial benefit from it whatsoever).

For more info about my interests and me, I would like to invite you to the rest of my blog, and maybe particularly to what I have tagged ‘AI‘ or ‘decentralized‘.

Posted in Ethics21 | Leave a comment

#ethics21 Hello

Today is the start of the new MOOC “Ethics, Analytics and the Duty of Care“.

For me, the most interesting part of a cMOOC (connectivist MOOC) is reading blog posts and comments. I am a slow thinker, so I prefer this asynchronous kind of activity to the synchronous video sessions — which do have their merit, at least as they provide the syncing or ‘clocking’ for the weekly topics and help avoid getting lost, in particular since these topics are not in a sequence of obvious dependence.

For me as the owner of a wordpress.com blog, it was easy to create a new category that will contain all my posts for this MOOC, and to submit its feed address with the form available from a green menu item in the lower right of the course pages, called “Submit Feed”. Submitting is easiest after you have published a first test post or introduction.

Partial screenshot of the course page menu, with the two items circled that are described in the text.

Since I prefer reading your blogs via my standalone RSS reader on my main laptop (a free tool available at https://quiterss.org/), I am happy that the MOOC provides the list of all the other submitted feeds in the right format. It’s called “OPML”, but you don’t need to know anything about it except that you can specify the file in the “Import” dialog of your RSS reader application. The file can be downloaded from the “Your Feeds” page, which is available via another green menu item in the lower right.
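For the curious, an OPML file is just a small XML document listing feed addresses. Here is a rough sketch of how a reader application might extract them; the file content and URLs are my own illustrative assumptions, and in practice the “Import” dialog does all of this for you:

```python
import xml.etree.ElementTree as ET

# Minimal illustrative OPML content, roughly as a course site might export it.
opml = """<?xml version="1.0"?>
<opml version="2.0">
  <body>
    <outline text="Blog A" type="rss" xmlUrl="https://example.org/a/feed/"/>
    <outline text="Blog B" type="rss" xmlUrl="https://example.org/b/feed/"/>
  </body>
</opml>"""

root = ET.fromstring(opml)
# Each feed is an <outline> element whose xmlUrl attribute holds the feed address.
feed_urls = [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]
print(feed_urls)
```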

Looking forward to the coming 8 weeks.

Posted in Ethics21 | 1 Comment

Reclaim filing

An article in today’s The Verge caused much excitement by saying that students no longer understand how files and folders work on a computer. Much of the discussion is about whether they should know how apps work ‘under the hood’. I think this is an unfortunate distraction. What is really a shame is that modern apps and operating systems keep them away from using files and folders — from using their own files and folders.

They rob modern users of so much useful functionality.

  • Most prominently, to browse through their own saved items, with the affordance of ‘I know it when I see it’, i.e., without the need to specify any search words;
  • hence, to use sloppy, short or cryptic file names, rather than agonizing over meaningful names, because the file names only need to be recognizable within the local context of the respective folder;
  • hence, to speed up capturing notes (see a video of my own practice);
  • as a side effect, to reactivate the neighboring context, with possible serendipitous findings (like Luhmann);
  • moreover, to keep related stuff together, independently of the apps that created it, such as URLs and references, rough sketches and finished diagrams, short drafted text snippets and long refined writings (see some pictures of my own practice);
  • hence, to include extremely simple apps such as Notepad on Windows, for very quick capturing of ideas, and as a distraction-free preliminary format;
  • and to include shortcuts to other folders (which are very important for me, see #4 habit, and which BTW work much better in Windows than the ‘symlinks’ or ‘aliases’ of the competitors do, because these obscure their special role and hence obfuscate a clear structure).

Bloated applications push their own organizing structure into the foreground as if there were no alternative; they are addictive prostheses rather than empowering tools for work, and this is no longer just patronizing, but gaslighting and stultification.

An icon of a folder containing icons of a plain file and a Firefox bookmark URL.
Posted in Tools | Leave a comment

Teaching Machines

I just finished @audreywatters’s new book “Teaching Machines”, and my reaction is: Wow!

Cover of "Teaching Machines: The history of personalized learning" by Audrey Watters. A smiling school-girl from ca. 1950s pushes a button.

1. It is an important book, because without such deep insight into the history of teacherless instruction, today’s new teaching machines are probably doomed to repeat some crucial errors.

It is enormously impressive to follow the naivety towards snake-oil promises through so many decades, and to see that it is almost exactly the same as today.

For me, chapters 1 through 10 often resonated with my experiences in the university computer center: how cumbersome it often was to negotiate with staff and convince them even to try things out, when they obviously just were not able to imagine the academic affordances of some new tools.

Also, the book provides a vivid portrayal of the educational mood and hopes of the time of my own elementary school, the 1960s. So now finally I know more about the historical background of my favorite children’s book :-).

But “the book is also about issues and events beyond the machines” (p. 16). From chapter 11 on, it gets even more interesting. In particular, I liked insights such as

“despite all the talk of teaching machines enabling the individualization of education, programmed instruction was more apt to strip away student agency and selfhood.” (p. 226)

and I liked the wide and diverse connections that are considered, such as the Jetsons, the encyclopaedia salesmen, Summerhill, Bruner, Freire, Papert…

Wandering through the wealth of interesting details, the reader is inspired to ask what it is that makes these machines now appear so alienating, even ridiculous and embarrassing.

2. My take is as follows.

Associated with the image of a machine are many discomforting and uncanny ideas:

  • that we are not in control but under remote control, surrendered to an unyielding (non-negotiating, merciless, stubborn) mechanism that compels us to predetermined outcomes,
  • monotonous, repetitive button-pushing, and small (and context-less) steps,
  • maybe a general aversion and mistrust against an optimized, business-like, utilitarian solution, perhaps with external suspicious beneficiaries,

while a human teacher — exerting the same pressure, towards the same predefined goals — seems a lot milder. (At least it seems so to the teachers themselves, while the pupil may perceive this power as similarly scary as some critics perceive the all-too-mighty programmer, in particular the ‘programmer’ of inaccessible AI.)

So, is the human teacher just a mitigated, inconsequential, ‘watered down’ version of the optimized machine teacher that has been prototyped during many decades? Or does our discomfort reveal something about the goal of the optimization itself — content memorization and retention — that is still largely unquestioned? To function optimally, the machines needed to focus on checkable, binary (true or false) facts for immediate reward feedback, and on atomized small steps within the isolated context of the predefined sequence. No ambiguity, no wider context, no ‘by-product’ of the content ‘McGuffin’. So our discomfort may also entail a subsurface intuition that something was wrong, was too narrow, and that an important part of education was missed.

For some, the discomforting association may also be the idea of working alone, without teacher and classmates, which is the inevitable tradeoff for the machine’s infinite patience and dedication to the pupil’s pace, and also the price for the perpetual opportunity of trying and asking, and in particular for an individual, reversed sequence of topics, guided by iterative curiosity and ‘navigating’ the connections, rather than a shared predefined path for the whole class.

A ‘shared gaze’ on the world’s topics is certainly the best option at the earliest stage of education, when the infant discovers things through their parent. But if learning theories claim to apply to the whole range from this earliest learning, to the undergraduate in higher education, to the lifelong informally learning professional, they must account for different needs and preferences — and asynchronous, silent, individual work is certainly a preference of many students, as the pandemic has shown. (I, for one, loved the silent work in our two-room rural school, and I think I did benefit from, and still welcome, the independent style.)

Textbooks (and homework, which was waived in the Roanoke Experiment, p. 177) for silent, individual, solitary work can be seen as a precursor or variety of teacherless instruction, and ‘interactive’ electronic textbooks may be seen as precursors of the modern teaching machine. They are said to be more engaging, not only through immersive audio-visual material, but already through merely responding to interspersed questions. Currently, in H5P open textbooks, the ‘branching’ content type seems to be the most advanced type of programmed instruction — Crowder’s “TutorText” (from 1958!) called it “intrinsic programming”, see p. 139.

But the bulk of ‘interactivity’ consists of dialogs that merely simulate the teacher, and even ‘intelligent’ textbooks are mostly limited to the paradigm of a single page (or popup) at a time, i.e., the sequential traversal of ‘programmed’ instruction. The unique affordances of independent student work, such as juggling concepts within rearrangeable contexts, or ‘talking to’ the text itself by annotating striking passages with questions to one’s ‘later self’, are typically still omitted.

So, the limitations of the teacher-mediated instruction are carried over into the asynchronous solitary world — which seems to me like combining the worst of two worlds.

Posted in eLearning | Tagged | 1 Comment

Decentralized, Part 2

(For part 1, click here)

1. In “A Unified Theory of Decentralization” (via OLDaily), a pseudonymous author enumerates 9 problems, all of which he or she wants solved by decentralized solutions — a very purist approach that I don’t find useful.

The first one is discovery, and its solution is “an ongoing research topic”. For me, a basic directory or registration service is a matter of a country’s infrastructure, and it is the task of a central but public operator. (A promising proposal within our current election campaign speaks of “öffentlich-rechtlich” (= governed by public law) for alternative platforms.)

Maybe it is a cultural issue why some feel uncomfortable with the state maintaining a registry entry for each of its citizens and expecting them to carry an ID card. When I worked in early X.500 projects, different attitudes were apparent, eventually prohibiting a profile of “residential person” in addition to “organisational person”, and what was left was the vacuous “internet person”.

For me, it is a given, and has been for the entire 50 years of my voter’s right, that the registry sends me, unsolicited, an Election Notification card that I can use (together with my ID card) at the polling station.

Colored dots and arrows randomly connected, and an unconnected big black dot in the center

2. For my latest summary (see part 1) I used the imprecise title “decentralized knowledge” because I did not know a better catch-all term for what had engaged me recently. In the meantime, I read Ben Werdmüller’s post. It combines ‘centralized’ with ‘templated’, and it struck me how well these notions fit together. The latter impedes inner autonomy in a similar way as the former impedes us from the outside. And together, they cover better what I meant.

Furthermore, thinking about centralistic ‘templates’ is also useful when imagining AI assistance for learning. Certainly, AI solutions based on big data may be able to align individuals with central templates such as common canons of memorized knowledge. But what about their personal goals, for which there is much less data available? I remain doubtful.

Posted in Social software | Tagged | Leave a comment


My imagination of [McGilchrist’s] left-hemisphere mode and its isolating and representing is also associated with wrapping up and collapsing. Wherever we subsume several connected real-world items into a single category or module, under a handy label, we are using a kind of ‘handle’ to better grasp these parts as a single thing; we represent them as one thing to isolate and focus on, and I think that is where the left hemisphere is at work. And this mechanism is so pervasive because it can be repeated: each representation can be combined with others and collapsed again into another thing on a higher level, until we get a deeply nested hierarchy.

But the wrapping-up principle already applies to much simpler things, such as a footnote reference or a hyperlink on the web, which harbour behind a single label, like a loophole, a whole cornucopia of further descriptions.

By considering the nested tree structure, one can also connect to Deleuze & Guattari’s concepts of arborescence vs. the rhizome. While today networked thinking with large zoomable graphs is fashionable, it is often forgotten that purely hierarchical maps, whether radially or linearly arranged, are still just about things rather than relationships, because every node on the map can be identified with the “road” used to reach it from its parent. So your [McGilchrist’s] point about “things” is also about trees vs. true, non-hierarchical networks and rhizomes.

I wrote the above in a personal letter in July 2020, but the whole idea of wrapped/ collapsed/ congealed may again seem like a rip-off:

In “The Master and his Emissary”, Iain McGilchrist wrote:

“Becoming is potential, and for Being to emerge from Becoming, it needs to be ‘collapsed’ into the present, as the wave function ‘collapses’ under observation” (p. 233)

I mentioned the notion of ‘collapsed’ in Wrapping and grasping

“much of our daily life consists of collapsing (“-“) nested logical containers, or expanding (“+”) them”

with reference to McGilchrist’s general idea of “one of the two fundamental ‘modes of operation’ of our brain”, but without explicit reference to his passage, which I had not read by then. It was only in the narration of the introduction to his forthcoming book “The Matter with Things” that I noticed the term ‘collapsed’ (19:06 and 21:05).

Also, in a reader’s comment on the film producer’s site, in 2017, I wrote

“For me, for example, it was particularly striking how the left hemisphere’s grasping and capturing can also be seen in our habits of packaging, wrapping or bundling, and nesting our ideas, which also help isolating, encapsulating and referencing. And then the hierarchy of nested containers, in turn, can be thought of as a ‘tree’ — much like a computer filesystem explorer with its handles for collapsing and expanding.”

again without specific reference.

Similarly, I mentioned the notion of ‘congealed’ in section #4 of Recognizing

“the vast majority of knowledge of the ordinary kind, which McGilchrist would call fixed. (Marx would perhaps call it coagulated or congealed; and …”

again with reference to McGilchrist’s general idea of “fixed results of experiences” but, of course, without explicit reference to his passage in the forthcoming book (13:59).

Since the idea has interested me for so long, I am curious if or how much it might be elaborated in the new book.

Posted in Knowledge | Leave a comment

Limit case

Finally, we are able to catch a glimpse of McGilchrist’s new book (he reads out the first 30 minutes of the introduction on YouTube). And if this were not so new, it might look like another case where my blog had stolen an idea from him: the limit case.

During CCK08, I described hierarchical structures (trees) as a border case of genuine networks (webs). And a bit later I compared the difference between connective knowledge and simple assertion knowledge to the difference between a parallelogram and a rectangle (the latter being a special/ border/ limit case of the former), and noted that its conceptual connection strength was exactly one, i.e. the limit case of the more general strengths varying between 0 and 1, in the central neuronal metaphor of Connectivism.
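In symbols (my own formalisation of the analogy, not taken from the book or the original posts): the rectangle is the parallelogram whose angle takes its extreme value, just as the simple assertion is the connection whose strength takes its extreme value:

```latex
% Parallelogram with side lengths a, b and angle \theta:
% the rectangle is the limit case \theta = 90^{\circ}.
A_{\text{parallelogram}} = a\,b\,\sin\theta, \qquad
A_{\text{rectangle}} = a\,b \quad (\theta = 90^{\circ})

% Connection strength w \in [0,1]:
% simple assertion knowledge is the limit case w = 1.
w \in [0,1], \qquad w_{\text{assertion}} = 1
```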

A rectangle and 3 parallelograms with width or height equal to the rectangle's, with different colors, and all aligned at the bottom.

Throughout the years that followed, I had this distinction in my mind’s eye when I read and wrote about the complicated vs. the complex, the linear vs. the nonlinear, or later about McGilchrist’s ‘left hemisphere’ mode of attention (fixed in time and isolated in conceptual space) vs. the more real-world ‘right hemisphere’ mode.

Now in his new book, McGilchrist applies the concept of the limit case to a large variety of relationships: isolation vs. interrelation, motion vs. inertia, thought vs. language, explicit vs. implicit, literal vs. metaphorical, order vs. randomness/ chaos, inanimacy vs. animacy, potential vs. actual, determinate vs. indeterminate, straight lines vs. curves, linearity vs. non-linearity, discontinuous vs. continuous, independence vs. interdependence, and most prominently, relationships vs. the things related (from minute 16:56). And it’s just fascinating!

Posted in Classification | Leave a comment