In Week 2, we will learn about Applications of Learning Analytics, and I am curious about a few aspects of AI that have puzzled me for a while.
I don’t think I underestimate AI’s eventual abilities, or how far some extreme thought experiments about ‘raising’ AI personalities can make hypothesized machines resemble us, which I tried to describe in my EL30 post about alien AI intelligences. So I am not surprised when Stephen even ponders the possibility of an “artificial me” or a copy-and-pasted “connectome”.
But when he is so optimistic about “personal AI” (e.g. for selecting the feeds one reads or the audiences one writes for, or the idea that “algorithms should be owned and run by individuals”), I have doubts:
Will this also work satisfactorily with the small amount of data that a single person’s interests provide?
Certainly, AI solutions based on big data may be able to align individuals with central templates such as common canons of memorized knowledge. But what about their personal goals, for which much less data is available?
Furthermore, how can AI arrive at new insights?
After all, AI learns from data of the past and ‘ruminates’ on them.
And as far as I understand it so far, much of machine learning rests on the sort of relationships that combine concepts that somehow belong together within a frame/ script/ schema, e.g. via ‘co-occurrences’ of, say, words on the same page. But can it also come up with other types of links, such as metaphors or distant associations?
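Just to make concrete what I mean by ‘co-occurrence’, here is a toy sketch of my own (with invented pages and words, not how any particular system actually works): counting which words appear together on the same page, and treating frequent pairs as ‘belonging together’.

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each "page" is just a list of words (invented for illustration).
pages = [
    ["learning", "analytics", "data", "student"],
    ["learning", "data", "feedback", "student"],
    ["metaphor", "poetry", "light", "knowledge"],
]

# Count how often two distinct words appear on the same page.
cooccurrence = Counter()
for page in pages:
    for a, b in combinations(sorted(set(page)), 2):
        cooccurrence[(a, b)] += 1

# Pairs that co-occur most often are treated as 'belonging together'.
for pair, count in cooccurrence.most_common(5):
    print(pair, count)
```

Such counting can only surface pairs that someone has already written down together; a link between, say, ‘light’ and ‘knowledge’ shows up only because it was already there on one of the pages, not because the machine made a metaphorical leap of its own.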
Also, novel data are probably available only in much smaller amounts, which combines both of the doubts above.