After 13 videos and more than 10 hours of watching, I realized that I may have misunderstood who actually does the training of an AI model.
I thought that training an AI, by supervising and reinforcing its learning and creating a model, was one thing, and that using it by interacting with it was another, later thing. Now I have learned that there is no such simple division of labor between developers and users, and that the end user's input counts as training as well: for example, giving Feedly's Leo examples of posts that I liked to read.
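To make this concrete for myself: the "liking a post trains the model" idea can be pictured as a tiny online learner that updates its weights every time the user gives feedback. This is only an illustrative sketch, not Feedly's actual implementation; the class name, the bag-of-words features, and the learning rate are all my own assumptions.

```python
# Illustrative sketch (NOT Feedly's real code): a minimal online classifier
# whose only training signal is the end user's like/dislike feedback.
from collections import defaultdict

class LikePredictor:
    def __init__(self, lr=0.5):
        self.weights = defaultdict(float)  # word -> learned weight
        self.lr = lr                       # learning rate (arbitrary choice)

    def score(self, words):
        # Positive score = "the user will probably like this post"
        return sum(self.weights[w] for w in words)

    def feedback(self, words, liked):
        # The user's feedback IS the training step:
        # nudge each word's weight toward the like/dislike signal.
        target = 1.0 if liked else -1.0
        for w in words:
            self.weights[w] += self.lr * target

model = LikePredictor()
model.feedback("robots ethics care".split(), liked=True)
model.feedback("sports scores".split(), liked=False)
print(model.score("robots care".split()) > 0)  # words the user liked
print(model.score("sports".split()) > 0)       # words the user disliked
```

The point of the sketch is that "developer" and "user" roles blur: the developer wrote the update rule, but the weights that decide what I see come entirely from my own clicks.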
But now I am left wondering how far my influence on the model may extend. Surely there must be some limit somewhere? If I am being cared for by a care robot and tell it that plenty of sweets are best for me, will it believe this and bring me ever more sweets?
And I suppose that here lies the border between a personalized service and a fully personal one. Here, too, lies the response to my doubts from week 2, and likewise to the suspicion of a one-way relationship that I raised at the beginning of module 7.
What I particularly liked in the last video is that, again, an extreme alternative thought was carried through to its conclusion: a scenario of a truly mutual relationship between human and AI:
“if we treat the AI as, you know, a person that feeds back into the training of the AI, the AI eventually begins to regard itself as a person and treat itself as a person in its own decision making. So, I don’t think this is such a hard philosophical conundrum as it might seem” (1:18:59)
“interaction with artificial intelligence and analytical engines is ongoing and dynamic and doesn’t end and our major role in these interactions is to train them. If we train them well, they will become reliable, responsible, ethical partners.” (1:20:59)
Here, the ‘we’ seems to include both the developers and the end users, but I am not sure about their distribution of influence and power. Unless we get some sort of ‘Indie AI’, the capital paying for the costly production will probably have more say.