After my 19 blog posts for the cMOOC called “Ethics, Analytics, and the Duty of Care”, I need a summary — although this is almost impossible for such a massive course of approximately 790 slides, 43 hours of video, and 500+ pages of spoken text (net weight, i.e. excluding the 23 technical and ~20 discussion videos).
The most important positive insight for me was that AI in education could and should mitigate vulnerabilities and oppression (457), in particular through applications resembling formative (not summative) assessment, and by relieving time pressure.
There was a lot in the course that I could easily agree with, in particular the idea that ethics is not something that can be generalized, deduced from rules, or programmed. I now understand better how the term ‘ethics’ is used in this way, which was alien to me (456), and why the consequentialist view is loaded with so much historical ballast (453).
The large module on Care Ethics was especially interesting: on the one hand, because of the parallels with connectivism (455 and 461); on the other, because it inspired more thorough thinking about vulnerability and dependence (457), and about the delicate relationship between the one-caring and the cared-for (458).
This is where my skepticism about AI begins, and it was good to engage with the topic and go beyond mere unease.
One objection is that the relationship with a robot cannot be the same as with a human, even if the robot is a good fake, simply because we interact differently once we know we are dealing with a faked human (455, 460). (The tempting solution would of course be to deceive the dependents about the robot’s true identity (450), and I think we must be very clear that this would violate any ethics that objects to dishonesty as an abuse of the privileged position of being the better cheater.)
Another objection concerns growing independent, as is expected of higher-ed students. This requires not only trust (see the objection above), but also learning to transfer one’s knowledge to different domains and to come up with associations beyond the narrow subject matter at hand. Realistically, however, each AI will be limited to one specialized domain (445, 458). Furthermore, students who avoid independent work might indulge in the comfort of the machine. Of course, that is just my speculation.
Finally, there was plenty of opportunity to think about the political dimension: the tree vs. mesh structure of society (444), power on the labor market (447), the power distribution between end users and those who pay for the development (462), and whether it will be the poorer students who are fobbed off with faked teachers (460). All of this suggests that we should be very wary.
The course interactivity was a bit disappointing for me, because there was almost no blogging and commenting, which I would have preferred to the oral synchronous sessions.