After many hours of listening to the 9 videos of this module, the reward is a very satisfying, plausible and useful takeaway: AI in education could and should mitigate vulnerabilities and oppression. It is useful (sorry for the utilitarianism) because we can derive practical consequences for the design of new systems.
1. When I think of vulnerabilities and dependence in education, I think foremost of grading and assessments. I have blogged about this and received constructive agreement and disagreement in the comments, here: Ungrading, my take, and here: #EL30 Week 6: Automated Assessments. From the former, I learned that “[students] want to know how they compare to others in their cohort and beyond,” while from the latter I was nicely reassured that “for the final summative assessments deciding about the future life of a human, such [problematic] algorithms are not acceptable.”
So if AI restricted itself to formative assessments, could it avoid hurting the weaker students with fear while still letting them know how they compare to others? I think yes: the advantage of the digital device over the physical classroom is that it can preserve anonymity. If the pupil wants to know, the system can tell him or her how the others are doing without naming them.
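To make this concrete, here is a minimal sketch of what such anonymous comparative feedback could look like. It is my own illustration, not something from the module; the function name and the message wording are hypothetical. The point is only that the pupil's own score and a nameless list of cohort scores are all that enters the computation.

```python
# A minimal sketch (my illustration, not the module's) of anonymous
# comparative feedback in a formative assessment. Only aggregates are
# reported; no peer is ever named.

def comparative_feedback(own_score: float, cohort_scores: list[float]) -> str:
    """Report a pupil's standing as an aggregate, never naming a peer."""
    below = sum(1 for s in cohort_scores if s < own_score)
    percentile = round(100 * below / len(cohort_scores))
    return (f"You scored higher than about {percentile}% of your "
            f"{len(cohort_scores)} anonymous peers.")

# Example: the pupil learns where they stand, not who stands where.
print(comparative_feedback(72.0, [55, 60, 68, 72, 80, 91]))
# -> "You scored higher than about 50% of your 6 anonymous peers."
```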
Another source of fear and oppression is time pressure. I keep noticing in this course how much the synchronous and asynchronous styles differ, and I am happy that the possibility of blogging relieves me of the rapid-fire pressure of the live session.
2. There is also another takeaway from the thorough coverage of the relationship between the one-caring and the cared-for, in particular from Noddings’s work: the one-caring should not act without an expressed wish signalled by the cared-for. So this seems to be once again a matter of pull vs. push, which is so important in many tech-related issues. In the context of vulnerable and dependent persons, however, this interplay of request and response poses yet another subtlety: the cared-for may be hesitant or embarrassed to express a need explicitly, and so the task of the one-caring is even more difficult, namely to recognize the wish through careful active listening while still not overriding the other by preemptive patronization.
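For a system, this pull-over-push stance could be sketched roughly as follows. Again, this is my own illustration under my own assumptions, not anything proposed in the module; the Signal values and the respond() function are hypothetical. The design choice it encodes: speak fully when asked, offer gently (and dismissably) when hesitation is sensed, and otherwise stay silent rather than patronize preemptively.

```python
# A minimal sketch (my illustration) of a pull-over-push response policy
# for a caring system. All names here are hypothetical.

from enum import Enum, auto
from typing import Optional

class Signal(Enum):
    NONE = auto()              # no signal at all: stay silent (no push)
    HESITATION = auto()        # e.g. unusually long idle time: offer, don't impose
    EXPLICIT_REQUEST = auto()  # the learner asked: respond fully (pull)

def respond(signal: Signal) -> Optional[str]:
    if signal is Signal.EXPLICIT_REQUEST:
        return "Here is a worked example for this step."
    if signal is Signal.HESITATION:
        # A dismissable offer keeps the cared-for in control.
        return "Would a hint help? (Feel free to ignore this.)"
    return None  # never interrupt unprompted

print(respond(Signal.HESITATION))
# -> "Would a hint help? (Feel free to ignore this.)"
```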
Maybe technical self-service can mitigate some of the embarrassing sentiments, too. When self-service supermarkets arrived to replace the mom-and-pop groceries, one of the success factors was that customers did not have to be embarrassed when they needed time to decide or did not know how to pronounce a product’s name. (I remember when Sunlight soap had a sun icon at the very spot that would have distinguished the English ‘g’ from the ‘c’ of the German equivalent, to mitigate the pronunciation problem.) So the reduction of human attendance had at least a tiny welcome flip side.
However, what the active listening of a one-caring can recognize as the unspoken wishes of a cared-for is probably not always recognizable by a machine, simply because the cared-for will approach the machine differently.