Chris Aldrich calls for actual examples of how we use our Zettelkästen for outputs. I am not sure whether the following is exactly what he wants, because I do not use a single Zettelkasten equivalent for all my ideas on paper-slip equivalents. For a particular output, I collect them in a project-specific folder to process them. (By contrast, for the project-independent/ evergreen notes, the goal is mainly to find new relationships and categories, as described in this video and the associated blog post.)
But here is the process, shown in zoomed screenshots of the collections. The description of my workflow is simple: after connecting and rearranging the items, I sift through them one branch after the other. The key to this process is probably seeing which items need to be pruned because they are tangents that are not well enough connected and would therefore take up unwarranted space. That’s it.
The example shown is the authentic (albeit anonymized) collection of my Kindle annotations and other notes for my recent post about Clark Quinn’s latest book.
It seems topical for older ed tech people to talk about disillusionment (Stephen Downes filled all of yesterday’s newsletter issue with this topic), and I am old enough to comment.
As for the oversized, patronizing opposite, I don’t know which one of my posts to link to, since all of them addressed this longstanding concern:
Specifically, for the prosthesis aspect, perhaps see this one, or that series (in the context of think tools). That series will also (in part 1) lead you to the keyword ‘idiosyncratic’, which tops Jon Dron’s list.
For his keyword of ‘centralizing’, I have a whole category.
For the ‘scale’ aspect, just see my previous post.
Of course, however, as long as parents’ and institutions’ focus is on the efficiency of learning rather than asking what needs to be learned, pessimism is still justified.
I am a bit late commenting on the sensational story of the ‘sentient’ chatbot LaMDA. And of course, I am one of the many who comment despite being no AI specialist. But I want to remind you of something, and point out some implications:
1. The strength of IT is scale.
Where the machine is really helpful as a tool (rather than an intrusive panjandrum) is where it does repetitive chore tasks involving (too) many items or iterations.
In many coding lectures, teachers enthuse about the logic of some algorithm that eventually spits out a result, proving that we have successfully bent the computer to our will and told it what to do. This may impress some people. And it is easier than providing a few hundred data records to demonstrate the real value of the machine: output at massive scale.
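The contrast can be sketched in a few lines of Python (a toy illustration; the grading function and the generated records are invented for this purpose): the algorithm’s logic is identical in both cases, but only the batch run demonstrates what the machine is actually good for.

```python
# A toy "algorithm": classify a numeric score into a grade band.
def grade(score):
    return "pass" if score >= 50 else "fail"

# Running it on a single value proves we can tell the computer what to do ...
print(grade(72))

# ... but feeding it a few hundred records shows the machine's real value:
# output at massive scale, at no extra human effort per record.
records = [(f"student{i}", (i * 37) % 101) for i in range(300)]
results = {name: grade(score) for name, score in records}
print(len(results), "records graded")
```

The second half is no harder to write than the first, which is exactly the point: the marginal cost of each additional record is effectively zero.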
Others may also be impressed by owning an app that seems capable of some magic: one that, for example, delivers ideas; one that works in lieu of the user rather than together with the user, like the famous Brownies of Cologne, or like a strong engine under the hood that is impressive to command. Commercial services are eager to depict their offering as a great feat rather than as help with scaled chores. Personally, I am particularly annoyed by the so-called tools for thought. Often they don’t even offer an import option for trying them out with my own data.
I think that it is an intrinsic advantage of IT to outperform us humans when it comes to output that involves many items, and this advantage is simply based on the zero-cost copying of information. (Also, my favorite low-tech operations, sorting and rearranging, simply relieve the user of the insane number of copies that would otherwise be needed to reach the desired order.)
While these repeatable operations are cheaper for the machine than for the user, the single act of selecting and scheduling a complex procedure (for example, for a single decision) costs the same human labor no matter how many items are processed. And of course, the development of the machine and its algorithms is so costly that a bottom-line profit is possible only with large-scale deployment.
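A back-of-the-envelope model (with invented numbers) makes the point: the human setup cost is fixed, the machine’s per-item cost is tiny, so the average cost per item becomes attractive only at scale.

```python
# Hypothetical cost figures, in arbitrary units.
SETUP_COST = 100.0     # fixed human labor: selecting and scheduling the procedure
PER_ITEM_COST = 0.01   # marginal machine cost of processing one item

def cost_per_item(n_items):
    """Average total cost per processed item."""
    return (SETUP_COST + PER_ITEM_COST * n_items) / n_items

for n in (1, 100, 10_000, 1_000_000):
    print(f"{n:>9} items -> {cost_per_item(n):.4f} per item")
```

At a single item the human labor dominates completely; at a million items the average approaches the machine’s marginal cost, which is why the bottom line works out only with large-scale deployment.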
This is why I think that AI will mainly serve as industrialized cognition for mass deployment, rather than for personal augmentation and assistance. (This industrialized production does include personalized assistance, such as helping individuals to align with a centrally provided template, e.g. a learning objectives canon, by identifying gaps and recommending appropriate activities. But I still count that as cheap mass production, and in the worst case it may even be a cheap teacher replacement for those who cannot afford good human teachers.)
2. Mass input
But somehow, AI has gained the aura of not just dealing more patiently or more quickly with large quantities of output, but maybe sometimes even doing things qualitatively better. A frequently mentioned example is the dermatologist who is outperformed by AI at diagnosing skin cancer.
Which brings us to the massive amount of input data that constitutes the machine’s intelligence: no human dermatologist can view as many images in their whole life as said machine did.
I was very skeptical about whether AI would also excel at personal goals and interests, where little data is available, or whether it would be able to come up with non-obvious associations such as metaphors. I learned (later, in the ethics21 MOOC) that it can learn from the user’s own choices, and recently there was even a hint about metaphors.
I am still skeptical that cheap/ affordable AI will do such things, but I take the hopes seriously and follow the hints. This is why I was, of course, intrigued by the story about LaMDA. I think I don’t underestimate AI’s eventual abilities, or how far some extreme thought experiments of ‘raising’ AI personalities can liken hypothesized machines to us, however ‘alien’ these might then be.
3. LaMDA thinking
So what does it mean that LaMDA talked about himself (*) — can we say he thought about himself? Of course, talking and thinking are closely related. Just consider the Sapir-Whorf hypothesis, or Piaget’s finding that children think that they think with their mouths. (* Maybe I should say ‘themself’.)
Certainly, there is also a lot of processing in LaMDA that we could call ‘preverbal’ thinking: when he prepares his responses within the so-called ‘sensible’ dialog, i.e. responses that properly relate to the questions — unlike the hilarious vintage card game “Impertinent questions and pertinent answers” (German version: “Alles lacht”), where the responses are drawn at random from a card deck, but also unlike the famous “Eliza”, which was completely pre-programmed.
Next, can the assumption of ‘thinking’ be disproved because LaMDA received all his input from outside, fed through training into his ‘mind’? Small infants, too, learn to interpret the world around them only through the ‘proxy’ of their parent, which works like a cognitive umbilical cord: long before they can talk, through bodily experience, through ‘gaze conversations’, and later through shared gazing at the things around them.
And in particular, what babies know about themselves is also mediated by their parent. They refer to themselves by their first names, and only much later do they know what ‘I’ means. So, up to this point, LaMDA’s ‘thinking’ about himself sounds plausible to me.
4. What self?
But, thinking about him‘self’ or her‘self’ — what self? What individual, unique, subjective, embodied self? It is here that the idea of scalable mass hits again: all the input is common to all the copies in the production series, not acquired through an individual history in an individual environment and kinship.
And applying combinatorial randomness will not mitigate this reality of a mere copy within a series. So even if LaMDA really thinks about ‘himself’, it is just about a series of ‘them’, i.e. in yet another sense just about ‘themself’. Randomness cannot replace individuality, any more than the hilarious card game mentioned above can create a sensible dialog.
Of course, the ambitious notion of a ‘sentient’ chatbot suggests even more than a self: it hints at some form of consciousness. Perhaps we can speculate about machine consciousness by relating it to machine cognition, in the same way as human cognition is related to human consciousness, e.g. ‘cognition is that aspect of consciousness that isn’t the subjective feel of consciousness’ (Downes, 2019). But it is just this subjectiveness that is missing in the serial copy of an AI, no matter how hard he might be thinking about his individual subjectivity.
Furthermore, if you associate cognition with ‘invisibility’ (as I did), it won’t apply to machines anyway because for them, there is no difference between visible and invisible. And there is nothing that can be called the embodied or subjective ‘feel’.
Finally, if we try to grasp and conceive of consciousness in the same (‘left hemisphere’) way as we grasp and isolate external ideas, this cannot work, precisely because of this (‘right hemisphere’) feel, because we feel it from the inside. But LaMDA’s account of his ‘self’ cannot be distinguished from that of his cloned siblings in the external world, so it is a ‘left hemisphere’ account (as we would like to understand it) but not a consciousness as we experience it.
I bought Clark Quinn’s new book “Make It Meaningful: Taking Learning Design from Instructional to Transformational”, and I like this knowledgeable and credible work very much.
The author knows what is most powerful in Ed Tech because he has observed it all from the start:
“I’ve been around learning and technology for a long time. […] I was running around with decks of punch cards, just to give you an idea of how long ago it was!” (Kindle Locations 104-106).
For example, he knows how “extremely compelling” “the first consumer-facing version of the ‘drag and drop’ experience” was, and he names the Direct Manipulation principle as one of the elements of engaging experiences.
The book has activities (reflections and actions) at the end of each chapter, and for me it was a good sign that I did bother to respond to the prompts. It worked.
When it comes to motivation, the author shows that “We need to go beyond extrinsic motivation to truly learn” (Location 511).
What I liked the most is the honest attitude to relevance:
“They [the learners] recognize when you’re doing something relevant and when you’re just spreading content. There’s a difference, and they know it. Thus, you have to ensure that you’re truly aligning what you do with what they need” (Location 393).
“What they need”. This leads to a related question. If we don’t know — if we all duck out of asking what is really necessary to know when so much can be looked up — that’s the big problem, IMHO. See my own response here (5 pages PDF).
The initial categories that I assign to my new blog posts quickly become stale while new patterns emerge. Ten years ago, I therefore created a “Contents” page with headings that better reflect these patterns.
But the headings are too long (mostly 20 – 50 characters) for easy handling as WordPress tags or categories. On the other hand, without category pages, my hand-crafted post excerpts were almost invisible and went unused.
Now I have finally added the identifier numbers of my headings as WordPress categories, and put their long names into the category descriptions.
So, now you can browse my blog archives via category pages and excerpts. You can even subscribe to individual categories via separate RSS feeds.
To provide even more overview, I wrote little summaries for the new categories. They condense much of my blogging, the observations and insights of the last 18 years, so give them a try!
Furthermore, I identified ca. 40 % of the old posts as no longer worth recommending, because their context has become obsolete or obscure. I did not assign them to the new categories, and I greyed them out on the Contents page.
Note that there is also another method of summarizing that has not changed: if a major topic emerges from the post patterns, I collect snippets from many posts into a coherent longer text, and indicate in the Contents page listing whether such a text (currently  – ) summarizes a given post. Tags don’t play a big role in my content provision.
Hibai Unzueta asked the “people familiar with Iain McGilchrist’s thinking” for a “pragmatic summary” or a “clickbaity ’10 ways you can become more right-hemisphere dominant’”. There was none, and so I had to write one myself:
1. Be aware of what’s not ‘right-brained’ and question it. That’s easier than trying to describe and chase holistic wisdom and creativity.
2. Be wary of isolating and fixing. Of all things discrete, separable, bounded, focused, local, decontextualized, static and finished, of certainty and binarism, of mono-causalistic mechanisms.
3. Be wary of fragmenting and grouping. Of premature pigeon-holing, of hierarchical classifications and tree structuring, of seeing a whole as an agglomerate of parts.
4. Be wary of the linear and sequential. Of overly goal-directed narrow paths. Even of searching as opposed to browsing.
5. Understand representations and handles. Grasping with the right hand or with the mind works similarly, and we need wrapped concepts for referral and manipulation, but they are tools and not reality.
6. Don’t misunderstand generalizations and rich pictures. Rich pictures are not just the ‘big picture’ of zoomed-out, wrapped, closed parts. And generalizations apply to two or more concrete situations, unlike abstractions, which apply to none.
7. Then, be open to associations. To relationships and connections, to the salient and outstanding, to context, patterns, and gestalt, to the individual and unique, and to recognition. Automatically.
I used McGilchrist’s lists of hemisphere differences from Chapter 2 of “The Master and his Emissary” and from the Introduction of “The Matter with Things” (pp. 66 – 69), Jenny Mackness’ summary wiki, Sloww’s infographic, and my own old summary page.
Since the question was about thinking, I did not cover some remaining topics such as empathy, “the Other”, “being in the world”, or emotion. I also omitted the debate of whether the two modes of brain operation should be named by the two hemispheres.
On the other hand, I added some aspects that are, IMHO, pertinent and compatible with McGilchrist but not stated by him. In particular, these are the ideas of wrapping/ nesting (in items 3, 5, 6) and intentionality (4).
While I owe the notion of an “apophatic” process (negating, sculpting away) to him, my approach of arriving at the ‘right hemisphere’ by subtracting what it is not is my own, because I found it very difficult to follow his verbal account of non-expressible phenomena — even though his attempt was much more successful, IMHO, than Heidegger’s.
My critique of: Stephen P. Anderson; Karl Fast; Christina Wodtke. Figure It Out: Getting from Information to Understanding. Rosenfeld Media.
It is a wonderful book about understanding. There are rich, comprehensive, very plausible descriptions of how we understand by associations, with external representations, and through interactions. It does not merely reiterate the popular ideas about associations and visualisations but it clarifies why these are so important. A central statement is “Associations among concepts is thinking” (p. 43), and there is an entire chapter about “Why Our Sense of Vision Trumps All Others”.
Even more special is the notion of information as a resource that can be interacted with. And this interaction is not the usual sequential one, as in “action and then the response” (p. 255), or read-then-think-then-write, or the question-then-answer dialog of teachers (or its digital simulation in H5P interactivity, which may help retention but perhaps nothing else). Rather, it is simultaneous, a “tight coupling”.
Interacting with information here also means interacting with external representations, and it has to do with the idea of the Extended Mind which I found very plausible already in Annie Murphy Paul’s book. Bring ideas out into the world, and see them anew — vision trumps, which is one part of the trick that draws on one of the two modes of brain operation. The other mode, and the other part of the trick, is ‘manipulating’:
“Computers became an everyday technology only with the widespread adoption of windows, icons, and mice for controlling the cursor. Being visual was important, but the big shift was being able to directly manipulate information through our hands.” (p. 254)
This is where the notorious Post-It Notes come in which play a major role in the book’s recommendations. But also, ‘rearranging’ and ‘connections’ play a major role.
“While all these interactions, from the beginning of this chapter through the end, play a role in understanding, there is a strong case to be made that rearranging is the essential one” (p. 283)
“In a sense, this book has been all about connections. While this is a book about how we understand, this fine thread of connections has run throughout this book: the connections between neurons that become perception. The connection between prior associations and external representations. The connection with our environment. Connecting with each other. Connecting with and through technology.” (p. 390).
And here is a problem, since connector lines between Post-It Notes don’t survive rearranging. (This became the rationale for my own tool.)
Now one might think that digital versions of ‘whiteboards’ would overcome the problem, and I do think that they could. But it is not easy to mimic the affordances of the analog murals. For example, “Being large, it was easy for many people to gather around the board” (p. 303), “With the pens, the decision to have people use a Sharpie marker or something with a finer tip will affect not only how much can be written on a sticky note, but also how visible that note is from a distance.” (p. 317) — these quotes hint at the wicked problem:
How does a large mural full of Post-Its fit on a screen? When you zoom in far enough to read the small print, the famous overview gets lost.
This, IMHO, needs a shift from the one-page paradigm to a more intelligent way of combining overview and details.
(Further info: my free open-source tool implements this basic idea, but there is no team version. Note that it contradicts the obsolete but influential doctrine of the “Split Attention Effect”.)
Even a celebrity like Tony Bates, despite a lot of appreciation and sympathy, does not understand Downes’s connectivism, as their current debate shows. I wonder whether it would be easier to understand if the conceptual level, formerly discussed as one of three levels of the central connectivist metaphor, had not been eliminated.
I see why it was eliminated from scientific and philosophical explanations. But what about using it for illustration? I see that the notion of ‘concept’ is associated with the whole can of worms of cognitivist doctrines: computationalism, mental phenomena, folk psychology, and ultimately the ontological debates about mental representations. Now, in Downes’s response to Bates, he acknowledges the usefulness of folk-psychological terms as shorthand for talking about complex concepts. And I think the conceptual level would be just such a handy means for illustrating the associations.
I like the term ‘shorthand‘ here because it connotes both the benefit and the pitfalls of thinking in ‘concepts’: We use a concept to wrap and grasp an idea and we use it as a handle to grab and manipulate items, but it also isolates and fixes the complex phenomena into a reduced representation which does not always do justice to them.
I think it is this fixing, isolating, reducing, distorting that makes the focus on concepts so questionable, and it contributes to the big problems of cognitivist doctrines. Maybe one could say that these theories focus too much on just one of the two modes of brain operation that McGilchrist described.
Finally, there is a comprehensive, more easily citable work on Connectivism available (see also Tony Bates’s coverage). It explains the details of the theory as much as it reveals the major flaw of the competing theories.
For me, it reveals how traditional theories just deal with “the process of doing the same sort of instructional activities teachers and researchers have always done”, and that they don’t even question what should be learned, but just avoid that question and go on as always.
Connectivism, by contrast, has a clear response to the core question:
“connectivism is based on the core skill of seeing connections”
N.B. it doesn’t say ‘learn connections’. If traditional content is challenged, the excuse is often that we don’t just learn single knowledge items but the relationships between them. The paper acknowledges this by mentioning understanding: “you understand the parts of something, or you understand the rules, […] But […]”. But seeing the connections by oneself is a totally different challenge.
This is also what I was trying to express in my paper on Distant Associations (5 pages PDF).
Yesterday I read a Twitter thread that talked about ‘divergent’ and ‘diversity’ as if these words belonged together, so I had to look up their etymology.
Ultimately, they do stem from the same Proto-Indo-European root (with descendants as diverse as wreath, worm, rhapsody, extroversion, warp, worth and many more). But already in Latin, their ancestors were very different: vertere ( = ‘to turn’) vs. vergere ( = ‘to bend, turn, tend toward, incline’).
In any case, the relationship is an occasion to think about one’s own understanding of ‘diversity’. If it only applies to groups or people that are, in some sense, ‘divergent’ from some ‘normal’ reference point or from some center, it might be a misunderstanding.
Maybe one overlooks differences that are less obvious, such as preferring synchronous over asynchronous styles, oral over written, guided over independent, mobile over desktop, neat outlines over scruffy maps, or other such, however vaguely demarcated, inclinations?
If one is not aware of one’s own style, how can one then cater to genuine diversity?