It is wonderful that in the #ethics21 MOOC, some important thoughts were really carried through to their conclusions. This made the limitations of AI much clearer to me.
One such thought was what algorithms would look like that provide care, integrity, or trust, with inputs from a whole caring community. The other was whether each of the many smaller groups within a large, scaled-up MOOC could have its own “artificial Stephen”, modeled on his many live inputs (see Friday’s discussion at ~70 min).
Both felt somehow wrong to me immediately (I would not want to participate in such a MOOC, or to have a robot carer for solace), but why? To be fair, the scenarios confirm neither the prejudice against AI as a mechanistic expert system nor the misunderstanding of teaching as mere transmission. Still, there is an underlying input–output pattern in the perceptron layers and in the end-to-end machine learning workflow that doesn’t seem to leave room for something like reciprocity. Once the training stage is over, the AI no longer learns from the end users, even if it is continually tweaked by the ‘data processor’ stakeholders using new data from the ‘data subjects’, if I understood this correctly.
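To make that last point concrete for myself, here is a minimal sketch (plain Python with NumPy, my own toy illustration rather than anything from the course materials) of why the workflow feels one-directional: a tiny perceptron whose weights change only inside train(); at deployment, an end user’s input flows through predict_one() and produces an output, but nothing flows back into the weights unless the operator collects new data and retrains offline.

```python
import numpy as np

class Perceptron:
    """Toy single-layer perceptron: weights change only in train()."""

    def __init__(self, n_inputs):
        self.w = np.zeros(n_inputs)
        self.b = 0.0

    def train(self, X, y, epochs=10, lr=0.1):
        # Training stage: the operator ("data processor") fits the weights
        # on curated data, before any end user is involved.
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                err = yi - self.predict_one(xi)
                self.w += lr * err * xi
                self.b += lr * err

    def predict_one(self, x):
        # Inference stage: the end user's input flows through and an
        # output comes back, but nothing here updates self.w or self.b.
        return int(self.w @ x + self.b > 0)

# Offline training on a toy AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
model = Perceptron(2)
model.train(X, y)

# Deployment: end users only ever call predict_one().
print(model.predict_one(np.array([1.0, 1.0])))  # -> 1

# Any "learning from users" requires the operator to collect new data
# and re-run train() offline; the inference loop itself is one-way.
```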
Maybe my idea of Higher Education is too idealistic, based on Humboldt’s ideal (see an old CCK08 post) that university teachers and students should learn together, in a community of curiosity and the unity of research and teaching, or on Howard Rheingold’s long-standing practice as a “co-learner” teacher. Still, I do think that each new student generation’s unique questions, misunderstandings, and surprising choices of elements to highlight and annotate can nudge the teacher towards new insight, even if only by a little questioning of old ‘matters of course’, or a little refocussing.
Similarly regarding Care, I think that the careful, active listening by the one-caring to the cared-for (see previous post) may indeed occasionally mean that the former learns from the latter, for instance the valuable perspective of a very old person.
This intergenerational mutual learning may be more or less absent in the typical K-12 environment, where only centralized templates of content and skills are wanted; but in environments where fostering independence matters (infants, the elderly, HE students, or general critical literacy), it may be rare yet crucial.
And beyond intergenerational interactivity, I think every piece of feedback carries a (however small) chance of generating genuinely new insight. Such insight will probably not pertain to the specialized domain the robot teacher is trained for, but rather come from distant associations, and I cannot yet imagine how an AI would take in and handle such extra-domain input.
Unfortunately, I don’t have an idea, either, of how group-based MOOCs could be scaled up without the above-mentioned artificial Stephens. As for scaling down, I think this course has shown that too small a number of participants is also difficult: blog commenting, at least, suffers without a critical mass.