#EL30 Week 7: Response to proposals

Thanks to Laura for a proposal that makes the consensus task challenging again; Roland’s good proposal might otherwise have been simply nodded through.

I think that a shared definition of such a complex term as ‘community’ is unnecessary, and would be impossible to settle on anyway:

In a previous blog post I depicted many possible meanings of the term (including some common German synonyms); see this image.

Much clearer than ‘community’ — at least in the context of people familiar with Downes’s writings — are the terms ‘group’ and ‘network’: a group has a shared goal, and a network is not delimited by borders. ‘Community’, IMHO, is a concept whose scope is rather fuzzily distributed between several such other terms (one might say its ‘betweenness’ among the neighboring concepts is key, or that its meaning lies literally in the connections between them).

In particular, like a network, it does not have a predefined, fixed boundary (except in the special word sense of a geographical municipality) but its ‘membership’ is voluntary, as the blonde boy in the middle of Kevin’s bottom cartoon emphasises. And so, unlike the shared goal of a group, a possible shared goal of a community depends on a consensus. That’s why consensus building is a fairly typical task for a community — while of course I agree with Jenny that community can be much more than consensus, let alone consensus about some truth, or even about a technical status.

In an informal community, ‘membership’ may be defined by each individual themselves, feeling like a member or not, or maybe just like a participant, for example in a MOOC with variable activity, with dropping in and out and perhaps lurking. If the community is not formally used for anything else, the consensus may even delimit the community: while it is desirable to achieve a broad consensus (of as many participants as possible), it needs to be constrained to a minimally necessary level, a least common denominator, to avoid that some participants stop feeling, and self-declaring, as members. And I was very reluctant to act as a community member, and my tolerance level is rather low.

I think Laura’s task is less ‘minimal’ than Roland’s if it aims at a definition of community in general. Maybe this particular community is easier to define, or the ad-hoc community whose purpose is just to complete the task.

Posted in EL30 | 12 Comments

#EL30 Week 6: Automated Assessments

At the center of this week’s topic is the conjecture that the rich data tapestry of student learning records might yield a more accurate picture of whether a student’s abilities will meet the requirements of a certain job profile. The many data points might enable a sort of ‘recognition’ of connecting patterns that is more appropriate to mental competencies than a few quantitative ‘measures’ and scores. Because knowledge, too, is such a recognition.

In principle, I find this conjecture plausible. And especially the corollary which links it to the distributed web:

“In the world of centralized platforms, such data collection would be risky and intrusive” (from this week’s synopsis).

But is the conjecture true for all types of assessments, and will it lead to more justice, and should we embrace machine decisions here?

Calipers

By Flickr user tudedude, CC-BY-NC-SA

For existing jobs it might work perfectly. But if the decision impacts 40 years of working life, I doubt that the criteria of future needs can already be sufficiently formalized; the training stage of the AI cannot be extended to 40 years. In particular, domain-specific aspects will not suffice, and domain-general literacies become even more important, to be able to abstract from today’s situation and to transfer one’s knowledge to unknown futures. (And it is not a good idea to just increase the abstraction level of the subject matter to be learned.) So the criteria will be rather vague here.

Will automatic assessments be more objective, and will they distribute the scarce, best-paid positions more fairly? If the higher salary is justified by the scarcity of the necessary skills, there will always be some unspoken, or maybe even unconscious, motivation to keep those skills scarce rather than foster their development. So designing vague criteria for this critical selection is not straightforward, in particular if the fitting judgement is not just a matter of ‘sufficient’ skill (as when a professional ‘recognizes’ a new peer) but of ranking, often composed from scores that are totally irrelevant but happen to be available from several years of accumulated assessments.

Algorithmic decisions are tempting because they also work with imperfect criteria, just looking at previous decisions. But they might not have a response when we ask them how they arrived at their decision, as Stephen and Viplav observed in their Wednesday discussion. This is a severe violation of a demand that is emerging from the political discussions, for example by algorules (which I mentioned before): transparency.

I think that for the final summative assessments deciding about the future life of a human, such algorithms are not acceptable. By contrast, for the formative assessments throughout one’s studies, they might be perfect. With human teachers, both types of assessments are equally costly, and therefore we have too few of the latter and too many of the former. This may hopefully change now. And that’s why distributed storage is needed.

Posted in EL30 | 3 Comments

#EL30 Two tasks

This week’s task is to install IPFS and to create a content-addressed resource.

A screenshot of a command prompt window, showing a nostalgic-style banner of IPFS.

Here it is:

https://ipfs.io/ipfs/QmcJC9phgw2kkNuEmtBYHgvByAAZrP3wnGDkppm2KjuRZA
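The idea behind such an address can be sketched in a few lines: the resource is addressed by a hash of its content, not by its location. (A rough analogy only — real IPFS CIDs use multihash and base58/base32 encodings of chunked data, not the plain hex digest below.)

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive the address solely from the content itself."""
    return hashlib.sha256(data).hexdigest()

store = {}  # a toy content-addressed store

def put(data: bytes) -> str:
    addr = content_address(data)
    store[addr] = data  # identical content always lands at the same address
    return addr

def get(addr: str) -> bytes:
    return store[addr]

addr = put(b"Hello, EL30!")
assert get(addr) == b"Hello, EL30!"
# Changing even one byte of the content changes the address completely:
assert content_address(b"Hello, EL30?") != addr
```

Because the address is derived from the bytes, anyone holding the content can verify it matches the address — which is what makes location-independent, distributed storage trustworthy.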

This week’s topic is resources, and this fits well with Jenny’s task for us about Jupyter.

There are so many discussions and efforts about the logistical and legal frameworks of educational resources and the technological changes of these frameworks. So one might be frustrated that there is so little about new technological affordances of the resources themselves. Dominant theory (for example, the “Split Attention Effect” from Cognitive Load Theory) still mainly draws upon the paper age, when interactive resources were unknown. So it is refreshing that this course covered an interactive resource called Jupyter notebooks.

And Jenny’s task wants us to

“Explain your understanding of the Jupyter Notebook for four different people, none of whom have heard of Jupyter Notebooks before:

  • A 10 year old child
  • A 15 year old secondary school pupil
  • An undergraduate trainee teacher, specialising in Art
  • A University Lecturer working in the Educational Research Department”

So here is my attempt. (Disclaimer: I am not an educator!)

10 year old: Suppose you have a new cooking robot. And you have alien ingredients that you have never tasted before, and you don’t know any recipe about them. The robot understands written orders and it will quickly execute each step for you. If the result is not tasty, you can start over and modify your orders; the robot will patiently execute each step again if you tell it to do so. A Jupyter notebook contains all these orders, and a “Run” button for each step. Just too sad that the result is only on the screen and not edible.

15 year old: Same as above, but with data for a diagram instead of ingredients for a meal. If the pupil’s basic IT training has already covered databases (? I hope so), the ingredients are ‘JOINed’ rather than just mixed, and the orders to be modified will particularly be about ‘parameters’ to be varied.

The arts teacher-to-be: Same as above, but with a sculpture instead of a meal. And with additional explaining: Why does the robot have a command-line interface rather than visual user controls where I immediately see the effects (‘what you see is what you get’)? Well, some people want and prefer this linear style, and it is important for you to also understand those of your pupils who might not have chosen art as an elective subject. Furthermore, graphical interfaces don’t yet lend themselves well to such ‘scripting’. And finally, machine learning works very similarly, via such parameters that you have to tweak.

The educational researcher: Same as above, but perhaps more help is needed to overcome their traditional understanding about tweaking and fiddling: While such practical, quantitative bricolage may seem much less noble than theory, facts, and abstractions, the latter may soon fall prey to cognitive automation, and a skill of the former is also desirable, to cope with unknown futures.
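The common thread of all four explanations — tweak a parameter, run the step again, inspect the result — is exactly the workflow of a notebook cell. A minimal sketch in plain Python (not notebook-specific; the function and parameter names are made up for illustration):

```python
def make_diagram_data(scale, points=10):
    """One 'cell': compute the data for a diagram from a parameter."""
    return [scale * x * x for x in range(points)]

# 'Run' the cell and look at the result ...
first_try = make_diagram_data(scale=1.0)
# ... not satisfying? Change the parameter and run the step again:
second_try = make_diagram_data(scale=0.5)

# Halving the scale halves every value in the diagram data:
assert second_try[9] == first_try[9] / 2
```

In a real notebook, each such function call would sit in its own cell with a “Run” button, and the output (typically a chart) would appear directly beneath it.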


Posted in EL30 | 3 Comments

#EL30 Week 4 Identity graph, 1st attempt

This week’s task is an Identity Graph. So I tried to guess what advertisers know about me.

1. The first source is the Twitter “Interests” which can be obtained via Settings > Your Twitter data > Interests and ads data. I depicted them in blue, and added guesses why they may have been added — click the yellow icons to read more on the right hand side. I arranged and linked the items according to wild associations — feel free to rearrange, it is just a copy in your browser.

Screenshot

Click for interactive version

Updated version here

2. The second source is my LibraryThing tags (red), because I think my clicking behaviour will roughly match these interests, in particular at Amazon, where I browse for books before I order them online from a physical bookstore. (I find this useful because the index Stephen mentioned is ‘delegated’ from the libraries and bookstores to this platform.)

I did not fully understand the stipulation of not containing a root node “me” (which I thought commercial personas are all about?), but I’ll learn this by trying. (Updated 22:17 h: After Stephen’s explanation in the Wednesday live session, I understood it and added connections between the interests, in green.) Please comment on what I missed. (If you want to arrange your own lists: I just dragged and dropped the marked text onto the canvas.)

Posted in EL30 | 3 Comments

#EL30 Week 4: The same

Digital identity is a big opportunity to confuse users — and to lure them into friendly services that ‘take care of’ all this impenetrable stuff. In particular, the term refers to two very different meanings.

1. Etymology offers a good inspiration for thinking about one of them: the same (Latin idem). If I am a user who has never logged in to some server, its ‘cookie’ just tells the server that I am the same one who visited the site before. Nobody knows my name at this point (not even that I am a dog, as Peter Steiner’s cartoon of July 5th, 1993, put it). And this is sufficient for many useful things.

2. Only when it comes to binding this digital handle to some real-world attributes — such as the name my father registered in the registrar’s office of my birth village — does it become complicated. A password ties a hash value on a server to some content stored only in my brain. And a ‘public key’ is tied to the ‘private key’ (a very long password) stored on a device that only I own.
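The password mechanism from point 2 can be sketched in a few lines: the server never stores the password itself, only a salted hash; the matching content stays in my brain. (A simplified sketch — production systems use deliberately slow hashes such as bcrypt or scrypt rather than one round of SHA-256.)

```python
import hashlib
import hmac
import os

def register(password: str):
    """Server side: store only a random salt and a hash, never the password."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

def login(password: str, salt: bytes, digest: bytes) -> bool:
    """Tie the stored hash value back to the content in the user's brain."""
    attempt = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(attempt, digest)  # constant-time comparison

salt, digest = register("correct horse battery staple")
assert login("correct horse battery staple", salt, digest)
assert not login("wrong guess", salt, digest)
```

The salt ensures that two users with the same password still get different stored hashes, so a stolen database cannot be attacked with one precomputed table.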

The handling of all this is still so confusing that friendly platforms and browsers invented many methods to ease and accelerate it for the users — and patronize us ever more.

The VCard icon.

So when we want to get rid of the central abusive platforms we must make sure to also get rid of the danger of confusion and new friendly patronizers, to not ‘jump from the frying pan into the fire’.

The technical W3C draft tells me that we are not there yet:

‘Zooko’s Triangle: “human-meaningful, decentralized, secure — pick any two”.’

Of course, they picked long incomprehensible strings, but

‘mapping human-friendly identifiers to DIDs (…) is out-of-scope for this specification.’

This potential source of new confusion seems yet unsettled.


Posted in EL30 | Leave a comment

#EL30 Technical: How I created my task

My task uses tables, views and a page on gRSShopper. If you want to try it yourself, here is my stuff: http://x28.privat.t-online.de/372/x28stuff.sql

  • go to cPanel > Databases > phpMyAdmin,
  • click your gRSShopper database
  • click Import, browse to the downloaded .sql file, scroll down and click OK. It should replace the tables x28term and x28week, and add the views x28term_html and x28week_html, and a new page. I prefixed everything with ‘x28’ so it doesn’t get mixed up with your own stuff. Of course I tested it, but still: I don’t speak SQL fluently and hope I won’t break anything. Proceed at your own risk!
  • go to the new page called “an x28map starter file” and publish it.
  • go to the new view “x28week_html” and change both occurrences of mmelcher.org to your own domain (because Javascript won’t load “cross origins”).

Unfortunately, it does not work with https (since the JavaScript is hosted on http). So if you have a forced redirect to https in your .htaccess (which happened to me recently!), feel free to copy both the JavaScript and the CSS file to your own site. (But then don’t forget to watch my GitHub for updates.)

Hammer, screwdriver and a cog

Posted in EL30 | Tagged | Leave a comment

#EL30 Graph task

Task 2 in the Oct 30 newsletter said we should create a task for other participants of this course.

My task is related to week 3, about Graphs. While Frank Polster observed that for some graphs it might be a stretch to ‘think of knowledge as a graph’, the following example should make this easier:

It involves some concepts from the synopsis texts, and you should connect and annotate them.

1. Click on a selected week from this list: -1 Getting Ready, 0 E-Learning 1 and 2, 1 Data, 2 Cloud, 3 Graph, 4 Identity, 5 Resources, 6 Recognition, 7 Community, 8 Experience, 9 Agency; you will then find a list of terms.

2. Copy and paste them into a .txt file, and import them into a concept map application of your choice.

  • for Cmaptools by cmap.ihmc.us, I made a video instruction on Youtube some time ago;
  • or, for my own tool http://condensr.de/download-page/ you may just drag and drop the text into the map window that opens when you start the application.
    • or, you may just copy the text and paste it (Edit menu > Paste),
    • or, you may drop the icon of the .txt file from your file explorer into the map window;

    If you want help, don’t hesitate to call me directly!

  • For limited functionality, you may also use the demo version which should open when you click on “Preview” above the terms list.

3. Connect and annotate the terms.

The terms have been extracted from Stephen’s synopsis texts by a corpus linguistics tool called AntConc, and were loaded on my gRSShopper instance. If you want to view a sample of a map that I quickly completed for week -1, click this link: http://x28hd.de/demo/?el30sample.xml

Screenshot of a concept map

Screenshot

I did some annotation by inserting extra (red) items. In the full version of my tool, you can put annotations into the right pane (which is the biggest benefit of this tool).

Posted in EL30 | 2 Comments

#EL30 Week 3: Plumbing?

Because I’m an IT professional, it bugs me when some peers urge all the rest of the world to adopt our way of thinking. Probably they are confusing two important things which may indeed appear quite similar:

  • To be able to talk to the other side (IT staff, devices, artificial co-workers, …),
  • or to empathize with them and learn how it feels to “be” one of them.

While the latter is certainly a rich experience that I would recommend everyone try out some time, it absolutely must not be required to accomplish one’s tasks. That is what good user interfaces are made for, and the division of labor between operator controls and the stuff ‘under the hood’. (If you think the reality of my own tool contradicts this aspiration, you might be right, but please do tell me when you find a flaw.) And it is not only ethically questionable to urge people into alien thinking; it is also tactically silly, because once the typical aversion and blockage is in place, the ‘getting into’ becomes even harder.

Perhaps a little comparison might illuminate for my peer nerds how difficult this ‘getting into’ mathematical thinking might be. Imagine you leave your keyboard and join some dancers or singers. The choir mistress starts the rehearsal warm-up with exercises for relaxing and breathing, and then perhaps nonsense syllables such as “bla ble bli blo blü”, which you repeat many times. (Already rolling your eyes?) Then she says: “The room is full of Ms flying around. Everyone catch one of them and hum it.” Are you able to let go of your reservations and engage with this foreign world? (Disclosure: I myself was not able to in the case of dancing, only with singing.)

So when the graph theory nerds insist on their swollen terminology such as “vertices” and “edges” for their simple items and lines, even when talking to other disciplines, this is IMHO unnecessarily scary and excluding, and it is not the only way to the future.
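To be fair to the other disciplines: behind the terminology, a graph really is nothing more than items and lines. The whole apparatus fits in a few lines of Python (the example names are my own):

```python
# 'Vertices' are just items; 'edges' are just lines between pairs of items.
vertices = {"group", "network", "community"}
edges = {("community", "group"), ("community", "network")}

def neighbours(v):
    """Everything connected to item v by a line, in either direction."""
    return {b for a, b in edges if a == v} | {a for a, b in edges if b == v}

assert neighbours("community") == {"group", "network"}
# The 'degree' of an item is just the number of lines touching it:
assert len(neighbours("community")) == 2
```

That is all the structure the terminology describes; everything else (paths, centrality, ‘betweenness’) is built from these two sets.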

Some angulate tubes

By Flickr user naughtomation, CC-BY

On one hand, I understand very well the future role of plumbing and fiddling and tinkering — e.g. changing parameters in Jupyter notebooks and trying again. But OTOH, this ‘plumbing’ should not literally mean that I have to repair the red and blue taps for hot and cold water, but rather that I use the water for cooking and then ‘change the parameters’ to experiment with new meals. Yes, plumbing where no algorithms exist yet will probably be the job left over for humans.

To co-exist with the algorithms, then, it is necessary to be able to talk to them and to their developers — talk across the divide, not trying to blur the division of labor. Most prominently, this means understanding why IT staff and devices are sometimes so annoyingly stubborn, and getting used to it; for example, when end users talk about their colorful, fuzzy subject matter, we insist on asking back until we can model their stuff into our rows and columns.

A simple example of how the co-existence between human and machine could have worked, but failed, is the unfortunate RSS from last week. It is written in XML, which is intended for the machine and should work behind the scenes while users are reading their HTML pages. But by trying to blur this clear division (e.g. by applying fancy style sheets), a total mess and confusion gradually emerged, which, of course, helped the platforms to suffocate this dangerous, democratic, decentralizing technology, such that even a politically aware historian finally gave up.
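To illustrate the division of labor that RSS intended: the XML stays behind the scenes for the machine, which hands the human only the readable parts. A minimal sketch with Python’s standard library (the feed content is made up):

```python
import xml.etree.ElementTree as ET

feed = """<rss version="2.0"><channel>
  <title>x28's New Blog</title>
  <item><title>#EL30 Week 3: Plumbing?</title>
        <link>http://example.org/plumbing</link></item>
</channel></rss>"""

# The machine parses the XML behind the scenes ...
root = ET.fromstring(feed)
# ... and presents the human only the headlines and links:
titles = [item.findtext("title") for item in root.iter("item")]
assert titles == ["#EL30 Week 3: Plumbing?"]
```

The user never needed to see the angle brackets; the confusion started when feeds were styled and displayed as if they were pages for humans.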

Posted in EL30 | 2 Comments

#EL30 Week 2 Clouds and Jupyter

For this week’s topic, the Cloud, I like Stephen’s comparison: “Computing and storage as commodities like water or electricity”, although I have not been using much personal cloud storage, and no cloud computing at all.

I first encountered the idea of a cloud when we drew pictures of our computer networks back when they were based on the public packet-switching telecom services. Unlike with our costly leased lines, we did not know exactly what trajectory the packets followed, and the telecoms emphasized that this “doesn’t matter”, either, because traffic would be rerouted if one part of the network was congested. So we just drew the whole network as a cloud, with short access lines entering and exiting the service.

Then, later, the cloud was for me the aggregate of all the services to which I had uploaded my stuff, and I enjoyed how easily I could retrieve it — not particularly because I could access it from different devices, but because it was published and therefore somewhat curated, and distributed among specialized services for bookmarks, blog posts, pictures, library items, etc. The idea that I could make my private raw files accessible from two devices was never too exciting for me, and the notion of ‘syncing’ seems to me mostly a confusing distraction: either I use it for the simple transfer from A to B, or for a crippled, dependent device C. That’s why my cloud storage is minimal.

Now the week’s synopsis talks about “new resources [that] allow us to redefine what we mean by concepts such as ‘textbooks’ and even ‘learning objects’”, and the presentation 481 discussed living resources such as Jupyter notebooks, which reminded me of the “Try it Yourself” tutorials of W3Schools, or in particular of a great demonstration by Bret Victor.

I have long been fascinated by this idea of interactive resources that don’t just show one page at a time, but (side by side) some control or context and some effect or details — which seems promising for learning. (Disclosure: My free tool does something like this.)

Anyhow, if I can try the Jupyter notebooks right on the web, I will probably soon consume more cloud computing power.

A planet with clouds as moons.

Posted in EL30 | Leave a comment

#EL30 Week 1 From documents to data

Documents to data — for me, this is not an easy farewell.

It is not that I conceive of ‘data’ as a setback in the hierarchy of data – information – knowledge – wisdom. That meaning of data is just one of two possible senses: the one that suggests that data is just the substance, the raw material that we ‘processed’ when IT was still called EDP. While the glimpse into Learning Analytics on Wednesday still sounded as if they see data like the tons of sand that clueless gold miners sift through to find some nuggets, Stephen’s last video featured a very different sense of data: ‘facts’ which are ‘linked’ in Berners-Lee’s semantic web, and which are shared via the various initiatives to open the results of publicly funded research for reuse and validation. This is an approach that must be applauded, of course, although there is a danger that unstructured (unlinkable) ‘data’ gets neglected.
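The ‘linked facts’ sense of data can be sketched as subject–predicate–object triples, the core idea of the semantic web. A toy example (not real RDF syntax; the facts and names are made up for illustration):

```python
# Facts as triples, in the spirit of Berners-Lee's linked data:
triples = [
    ("EL30", "taught_by", "Stephen Downes"),
    ("EL30", "covers", "Graphs"),
    ("Graphs", "consist_of", "vertices and edges"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all facts matching the given (partial) pattern."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

assert query(subject="EL30", predicate="covers") == [("EL30", "covers", "Graphs")]
# 'Linking': follow the object of one fact as the subject of the next.
assert query(subject="Graphs")[0][2] == "vertices and edges"
```

The point is that each fact is addressable and combinable on its own, unlike a sentence buried in a document; that is what makes reuse and validation possible.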

What worries me is that the addressability, manageability, and transparency of document files and webpages is lost — and with it a certain level of autonomy and maturity. In the document model, when you were about to click a link, you were able to see the address of an HTML file, composed from the gigantic world-wide hierarchy of top-level domain, subdomain(s), hostname, folder, subfolder(s), and page file name. In modern page views, it is totally obscure where the JavaScript loads its tons of ingredients from — which, of course, is intended by the platform owners to better patronize you in minority.

It reminds me of the time when library information or, say, chemical abstracts were retrieved from OPACs or other specialized database hosts through an X.25 session and a login window, rather than by clicking directly. OK, database rows aren’t typically addressed separately. But in letting go of files and folders, my own responsibility for backup and restore also becomes much riskier. While I can just right-click and copy my .accdb MS Access database file before I start a risky operation, I don’t know how I would repair the MySQL database on Reclaim Hosting if it got scrambled; even if I recovered the component files, I’m not sure how I would recreate the whole again (which is greater than the sum of its parts).

The typical cylindrical database icon, as a jigsaw puzzle, with some of the scrambled pieces reassembled and some not.

Scrambled database

OK, maybe I am more relying on my file system than a modern user would do (e.g. my notes are just tiny separate Notepad .txt files, and much of my ‘graph’ consists of simple folder shortcuts), but I find it regrettable that in modern apps, and especially on the mobile, the access to your single files is largely obscured — and I was already thinking that a ‘Reclaim your filesystem’ movement was due.

I understand the push from larger objects towards smaller entities. From a picture object to picture dots. From assignments to xAPI events. It might help to ‘see’ an emergent whole picture in the way human brains do when they let go of the typical isolating/ fixing habit. (I am not sure though, if Learning Analytics is not also used to isolate ‘trends’ and reduce students to measures.)

But the users’ handling of their data becomes more complicated, and I fear people are willing to delegate ever more of this hassle to the eager offers of ‘helpful’ platforms. The classic example for me was RSS, where the simple thing of a feed address was so obscured that standalone readers died and people were lured into the trap of Google Reader.

So, yes, decentralization becomes more important with the data-based model — becomes very important.

Posted in EL30 | 4 Comments