At the center of this week’s topic is the conjecture that the rich data tapestry of student learning records might yield a more accurate picture of whether a student’s abilities will meet the requirements of a certain job profile. The many data points might enable a sort of ‘recognition’ of connecting patterns that is more appropriate to mental competencies than a few quantitative ‘measures’ and scores. Because knowledge, too, is such a recognition.
In principle, I find this conjecture plausible, especially the corollary that links it to the distributed web:
“In the world of centralized platforms, such data collection would be risky and intrusive” (from this week’s synopsis).
But is the conjecture true for all types of assessments? Will it lead to more justice, and should we embrace machine decisions here?
For existing jobs it might work perfectly. But if the decision impacts 40 years of working life, I doubt that the criteria of future needs can already be sufficiently formalized. The training stage of the AI cannot be extended to 40 years. In particular, domain-specific aspects will not suffice; domain-general literacies are even more important, enabling one to abstract from today’s situation and to transfer one’s knowledge to unknown futures. (And it is not a good idea to simply increase the abstraction level of the subject matter to be learned.) So the criteria will remain rather vague here.
Will automatic assessments be more objective, and will they distribute the scarce, best-paid positions more fairly? If the higher salary is justified by the scarcity of the necessary skills, there will always be some unspoken, or maybe even unconscious, motivation to keep that skill scarce rather than foster its development. So designing vague criteria for this critical selection is not straightforward, particularly if the fitting judgement is not just a matter of ‘sufficient’ skill (like a professional ‘recognizing’ their new peer), but a ranking, often composed from scores that are totally irrelevant but happen to be available from several years of accumulated assessments.
Algorithmic decisions are tempting because they work even with imperfect criteria, simply by looking at previous decisions. But they might have no response when we ask them how they arrived at their decision, as Stephen and Viplav observed in their Wednesday discussion. This severely violates a demand that is emerging from the political discussions, for example from algorules (which I mentioned before): transparency.
I think that for the final summative assessments deciding about the future life of a human, such algorithms are not acceptable. By contrast, for the formative assessments throughout the course of study, they might be perfect. With human teachers, both types of assessment are equally costly, so we have too few of the latter and too many of the former. Hopefully this will change now. And that is why distributed storage is needed.