The Possible

The visual thinker Dave Gray (whom I have often mentioned) is currently founding a School of the Possible. Its courses will interest me a lot, because the idea of ‘The Possible’ is a fascinating one.

So here are some first thoughts about what possibilities are most important to me.

1. Technical

Too often in my long career with the Digital, I saw how the shallow, impressive, low-hanging fruit of the Profitable pushed itself forward and obscured the valuable affordances. In the ‘Contents’ of this blog, ‘affordances’ have always played a primary role.

Especially now, after the big bang of AI, it is important to distinguish between technology that helps us to cope with nature, complementing it to overcome some deficiencies, and technology whose ever-improving effectiveness tends to quickly take on a life of its own and strives to master and dominate nature and ourselves.

In particular, technology that relieves us from memorizing stuff poses big questions and opportunities for the purpose (and philosophy) of education, and cognitive automation raises questions like “What’s left for humans?” (Siemens), some of which I tried to answer here and here, respectively.

2. Skills

Precisely in the context of challenges from technology, there is great potential to develop some skills further. One skill, related to my above-mentioned answer, is to notice things and see similarities.

An important skill, IMHO, is to notice, acknowledge and embrace the diverse styles of other people. Too often, diversity is seen as something that does not apply to ourselves because we are the normal, the privileged, and certainly those who are right and know the unique correct solution. (Teachers may be especially prone to this, since their day job demands marking answers as right or wrong.)

Another important skill is to know whom to trust. I think this does not work by independently evaluating the binary truths of others’ statements, but through ripple effects like contagion across networks of trustworthy relationships. The trust, in turn, may lead to better cooperation which plays a big role in Howard Rheingold’s work.

3. Political

Democracy offers an immense possibility that seems often forgotten, or fading into the background behind noble abstract concepts. The political power of the Many could tame the economic power of the Few, which is constituted simply by the scarcity of something, as game theory easily explains.

Close-up of a bud against a solid blue background.
“About to Bud” by Stephen Downes CC-BY-NC

More background.

There are some older episodes that do not seem directly necessary for the new context, but I found them interesting — although I do not share the cultural background of Star Wars. (I am so much older that my fascination with Science Fiction came from radio plays with no visualization.)

In numerous paragraphs, Dave recommends to “notice and wonder”. This resonates not only with my own emphasis on distant associations (linked above) and with my longstanding interest in annotation; wondering can also be seen as the start and core of all philosophizing.

And another recurring theme is that we need to try out the possibilities ourselves, with our “unique position in the world”. This resonates with my understanding that the individual’s subjective experience and walk of life is what makes humans different from AI, and that this diversity is necessary for evolution and that “We can let the world choose”.

Finally, plenty of minor points in the various episodes made me nod, such as thoughts about the industrial vs. digital economy, the value of platforms, bureaucratic unfeeling predictability, architecture and infrastructure, Bismarck’s pension program, or consciousness as something that we cannot measure from the outside but only experience from the inside.

Posted in Intuition

Edge’s Split Screen

Slowly, very slowly, popular applications are starting to overcome the limitations of the One Page paradigm. Here is a small sign.

With Microsoft Edge v114, it is possible to right-click a link and select the option “Open link in split screen”. The result might look like this screenshot, where a little bit of context on the left-hand side is visible simultaneously with some detail on the right-hand side.

Screenshot of two papers shown in the new split screen of the Edge browser. On the left is a table with one cell containing a hyperlink, on the right is the referenced paper. The texts are not relevant here but are about education, by Homanova et al. and Horwitz et al., respectively.

The advertising slogan is rather cunning: “Split your screen, not your attention”. It alludes to the archaic doctrine of the “Split Attention Effect”, so the Cognitive Load theorists can still say that they have always said that it is important to avoid split attention — even though the new feature does not at all do what they wanted: while they demanded that an annotation must always be close to its referent, it now appears in a separate pane. See this blog post for more.

Progress is slow. Currently, you need to enable the option via edge://flags/#edge-split-screen and restart. The author of the HTML cannot yet specify a behavior such as target=”_split-screen”, nor can JavaScript.
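Just to illustrate what such author control might one day look like — a purely hypothetical sketch, since neither this target value nor any JavaScript equivalent exists today:

```javascript
// Hypothetical markup an author might want to write:
// <a href="reference.html" target="_split-screen">See the reference</a>

// Equally hypothetical JavaScript: window.open() accepts "_self",
// "_blank", "_parent" and "_top", but no split-screen target (yet).
document.querySelector('a').addEventListener('click', (event) => {
  event.preventDefault();
  window.open(event.currentTarget.href, '_split-screen'); // imaginary name
});
```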

So far, it has also been quite difficult for me to find an authentic example where a hyperlink sits within some dense two-dimensional context, i.e., a context that is not merely a linear list for a navigation bar. Current content is firmly bound to the One Page paradigm. (And to be honest, the link shown did not allow the described option because it was in a PDF. And if you clicked it, it just teleported you to the references at the bottom of the page, as is expected in such a paper. Our tools shape us.) For possible uses, see my own examples.

Posted in About Links

Who educates Emile the Bot

Increasingly, the continued discussion about the superiority of human intelligence is becoming a distraction from the real power issues.

Of course, people who have slumbered away the whole difference between GOFAI and machine learning must hear an explanation about reasoning with concepts and schemas, or recognition and contexts. But the danger is that the focus shifts to theoretical questions of what could be (in principle!) possible to achieve … if only AI were trained optimally — instead of asking what will likely happen when the big invested money wants a return.

It is a tempting idea that we just need to give AI an optimal ‘education’, by giving it access to sufficiently many resources (which it currently does not have, as Stephen Downes pointed out yesterday on Mastodon) and exposing it to many experiences, and then the AI will successfully emulate what humans do.

A robot with his name written on his chest: Èmile.

So what would be the ideal education for an AI bot named, say, Emile? (I thought of ‘Emile’, whose name means ‘eager rival or imitator’ and has the same word root as ’emulate’. There are more parallels: Rousseau’s pupil Emile was the son of a rich family, which can be said about AI too. And he was not taught according to some centralistic template or syllabus.)

Since at least #ethics21, I have had much fondness for decentralized learning from examples, via ripple effects, and perhaps like ebb and flow, which I see as part of connectivism. And I think this could work (in principle!) also for educating an AI, who would then become an individual whose weightings were grounded in his contexts of virtual neighbors and friends.

Contrasting this with the real chat bots, the main difference is that their sources are a tsunami of anonymous mass data, and if we tried to recognize something like an archetype from whom they have learned, it would be at best the absolutely non-individual average derived through statistical brute-force processing of the information deluge. OK, it is not a central authority from whom they learn, but a universal average can also be seen as a very centralistic template, IMHO. (Which might be perfect in many cases, such as foreign language learning, where we try to emulate an average native speaker.)

So through this mass-oriented tuning, the bots successfully emulate those whom they are built to replace: the large number of cognitive laborers — while the privileged few will still get individual human treatment.

Posted in Tools

Polarizing topic AI

It seems to me that the attitudes towards AI are hardening and becoming polarized, and that is not a good thing.

Since the thunderclap of the sensational LLMs, I have read many reactions that enthusiastically embrace them, and equally many that vehemently mock them or warn about them. Now, after a few months, some reactions are getting deeper and more detailed, but the endorsements and rejections are only getting more determined. It is a question of ‘all or nothing’ and ‘either/or’, leaving no room for a ‘both/and’ that asks which parts may be beneficial and which others should indeed be resisted from the start.

Two happy faces and two angry faces, and arrows between each pair with a green head and a red head.

I think this is a pity, although I respect that many people may be surprised by the power of deep learning, which they may have ignored together with the old symbolic AI, and although I have much sympathy for those who warn, especially about possible capitalist abuse.

Trying to understand why the reactions are so far apart, I realized how cunning it was that AI’s big successes were first revealed with a conversational application:

With its language, an LLM can impress us in two different ways at once: (1) conveying knowledge about the world, and (2) appearing to be thoughtful and responsive to persons, because language is a proxy for both. So everyone can pick their favorite feature to be enthusiastic about, or spot a glaring deficiency to despise.

Most obviously, the two sides are mingled in the unfortunate discussion about student essays. While some use them to test ‘transmitted’ knowledge (1) and cry about cheating, others use them to practice and improve how well students can express their own ideas (2) and strengthen their linguistic ‘muscles’ — such that it makes no sense to replace the ‘stairs’ with an automated AI ‘elevator’.

1. There are two kinds of knowledge ‘about the world’ that an AI can output. One (1a) is the traditional kind of factual knowledge that would fit into an expert system, or into a large knowledge graph with explicit statements in the form of RDF triples (subject – predicate – object) from an ontology that represents a huge knowledge tree with countless ramifications. AI ‘gains’ this knowledge from ‘listening’ to and recognizing all sorts of verbalized situations, and does not really distinguish it from the other kind.
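To make kind (1a) concrete, here is a minimal sketch of such explicit statements, represented as plain JavaScript data (the facts are invented for illustration):

```javascript
// Explicit subject–predicate–object triples, as an expert system or
// knowledge graph would store them.
const triples = [
  ['Earth', 'orbits', 'Sun'],
  ['Sun',   'isA',    'Star'],
  ['Moon',  'orbits', 'Earth'],
];

// Factual queries become exact lookups, not statistical guesses:
const objectsOf = (subject, predicate) =>
  triples.filter(([s, p]) => s === subject && p === predicate)
         .map(([, , o]) => o);

console.log(objectsOf('Earth', 'orbits')); // ['Sun']
```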

The other kind (1b) is a type that has been underestimated and disrespected for too long. It is derived from traces and outputs of tacit and implicit knowledge. Although this also includes speaking the language of the experts of a domain, it is more akin to practical craft and apprentice skills that require quantitative eyeballing, experience with bricolage, trial and error, and a big internal ‘database’ of holistic pattern images — and so it has become less esteemed.

Now the former kind (1a) of exact knowledge is not as optimally extracted and processed by modern ‘statistical parrot’ methods as it would be by ‘good old-fashioned AI’ (GOFAI) and the symbolic methods of expert systems. So the LLMs make big embarrassing mistakes which give their enemies plenty of occasions to dissect them, even though they are doing a pretty good job according to the Pareto rule of 80:20. And the appeasing argument that they will improve over time and approach 100% will, IMHO, never mean that they eventually reach 100% accuracy.

But the pendulum has swung away from GOFAI, and nobody seems to care about it any more. I think it could now be revitalized and combined with modern AI. One possible division of labor could be that some of the riddles of the ‘black box’ — how the AI arrived at its results — could be solved by explainable-AI methods such as counterfactual analysis, and the answers could then be fed back into the expert systems. E.g., what are the crucial salient features in a picture of skin cancer that led the AI to a better result than the human expert?
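A minimal sketch of what such a counterfactual probe could look like, assuming a toy black-box classifier (model, features and threshold are all invented): toggle one input feature at a time and see which change flips the verdict — those are the salient features that could be handed over to an expert system.

```javascript
// Toy stand-in for a black box; a real model would be a trained network.
const model = {
  predict: (f) =>
    f.asymmetry + f.borderIrregularity + f.colorVariation > 2 ? 'malignant' : 'benign',
};

// Counterfactual probe: zero out each feature and check whether the
// prediction flips. Flipping features are the salient ones.
function salientFeatures(model, features) {
  const original = model.predict(features);
  return Object.keys(features).filter((name) => {
    const counterfactual = { ...features, [name]: 0 };
    return model.predict(counterfactual) !== original;
  });
}

console.log(salientFeatures(model, { asymmetry: 1, borderIrregularity: 1, colorVariation: 1 }));
// → ['asymmetry', 'borderIrregularity', 'colorVariation']
```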

2. The other way that LLMs can impress us is personal assistance with our cognitive tasks. Talking is a proxy for thinking, and the LLMs impress our “linguistically-oriented minds” such that we expect a lot from them; as Helen Beetham (via OLDaily) said:

“LLMs produce new strings of data that mimic human language uncannily well, and because we are a linguistic species, we take them as meaningful”.

Even the conversation with a search bot appears much more promising and personal than a mere one-way specification of search terms (which I always struggled to come up with). While only very recently I opposed the idea that an AI should speak of itself as “I”, it now felt natural to refer to my copilot as “him”, because he represented to me his big organisation and a specific style, and appeared as their ‘voice’. And imagine how great it would be if “he” were so familiar with my unspoken needs that I could even regard him as my ‘butler’!

But I am still waiting for a good example of how the personal tailoring to my needs would work. And I am very skeptical — maybe because I have not yet found an explanation of how the second training (with my personal data) works.

More doubts: The tiresome routine work that makes up the bulk of my own daily work does not easily equate to the mass production that AIs are best tuned for, from massive data out there. My own need is not a selected subset of the typical common mass problems; it is often a deviation from them, on my erratic journey of solving my own problem. Once, when I had thoroughly misunderstood the typical use of a (coding) concept, I was not even able to convey my question to the search bot, even after several rounds of back and forth, and my copilot just ruminated on my clueless attempts. I would probably be a difficult client also when my ‘butler’ tried to read my notes, since I make typos, use sloppy cryptic wording, and abbreviate so much that my notes almost contain a ‘private’ language (which I know doesn’t exist, according to Wittgenstein). And even if the assistant had finally discovered a pattern in the readings that I often click, I would still be unhappy with his recommendations, because I would fear that he was leading me into my own echo chamber.

So the hopes and doubts and dangers are much more complicated than a simple ‘either/or’ of enthusiasm or contempt.

Posted in Tools

My test flight with GitHub Copilot

For the past several weeks I have been busy rewriting parts of my application from Java to JavaScript, and I used the free trial of GitHub Copilot for assistance. So here is my report about this adventure.

First I need to admit that, even after my 50 years in IT, I was really astonished. Sometimes I almost had the impression: wow, he is reading my mind. At some point, it seemed like he came up with an analogy that he had detected all by himself. As Clarke’s 3rd law says: “Any sufficiently advanced technology is indistinguishable from magic”.

The user interface is great and simple: I type a comment, he writes one or more lines of suggestions in grey font, and I accept it with the tab key (turning the grey into black), or reject it with cursor down. There is even a limited sort of conversational mode, when he responds with a comment line and I can alter it before resubmitting.

Let me distinguish between cases when I knew what I would do, and cases when I did not. Since I am much less familiar with JavaScript than with Java, there were many instances of the latter type. But just when I was very focussed on the difficult new things, I made lots of silly mistakes or ‘slips of the pen’ of the former type as well.

When I know what to do

The copilot saves me a lot of laborious typing work.

  • He is good at verbose comments.
  • Using ‘console.log’ becomes effortlessly detailed.
  • He adds similar variations: whenever there is a pattern like ‘x’ and ‘y’, ‘save’ and ‘load’, or ‘left’ and ‘right’, he is quick to suggest the second one.
  • He makes additions, but he did not make the consistent, systematic alterations that are so often necessary when adapting old code. For example, when I repeatedly had to change ‘translation.x’ into ‘translation[0]’, he would not help me with that — only with the other half, ‘.y’ to ‘[1]’, as an added new line. OK, the interface is designed for additions, and at those it is great. (See the reconstructed example after this list.)
  • He did not help me against the silly slips mentioned above, nor check for correctness and consistency.
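To illustrate the asymmetry (a reconstructed, simplified example, not my actual code): after I typed the first changed line myself, the copilot readily appended the symmetric second one, but the stale old lines stayed untouched.

```javascript
const node = { translation: { x: 0, y: 0 } };  // legacy shape
const dx = 1, dy = 2;

// Old code, to be adapted everywhere from property access to indexing:
node.translation.x = dx;
node.translation.y = dy;

// After switching to arrays and typing the first changed line myself ...
node.translation = [0, 0];
node.translation[0] = dx;
// ... the copilot's grey suggestion followed immediately:
node.translation[1] = dy;
// But deleting the obsolete '.x'/'.y' lines above remained my job.
```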

Of course I have been pampered by my previous environment: Java with Eclipse instead of JavaScript with VS Code. In that environment, little typos, omissions and inconsistencies could be immediately marked. The long chains of references and pointers could be checked, and if missing prerequisites were the cause, a one-click fix could be offered. And the ‘intellisense’ drop-down choice (after typing a dot after an object’s name) could be shorter and more pertinent.

Of course that older language was even more laborious to type. But when a slip goes unnoticed, it may cost much more time to debug it. And the copilot does not always care for syntactically correct code; the code is just similar to frequently used correct samples.

While the burden of typing is relieved, there is another, new type of strain: constant attention to the suggestions. There is a shift from an active mode to a re-active one, whose consequences are difficult to judge. Maybe it will unbalance the two fundamental modes of brain operation — at least I personally hate it when I constantly need to surveil something instead of doing something, and the fatigue leads to mistakes.

But maybe it suits modern people, who may be more comfortable with re-acting to, and distrusting, the busy input streams. Maybe it will strengthen our debugging capabilities, maybe even the collaborative capabilities and the coping with others’ code. (But as for the copilot’s verbose superficial comments, I doubt that they will contribute to the understanding of such code rather than just pleasing reviewers or even leading them astray.)

The problem with the small slips and silly mistakes is that they are hard to spot in my own code and even harder to notice in the copilot’s suggestions, just because they look so similar to the correct code. And so it happened several times that I fell for some faulty code and then spent many hours debugging it.

When I do not know what to do

The other case is when I do not know what the correct syntax should look like, or when I do not even understand some of the concepts, which happened quite often with the less familiar programming language. Then, of course, I am grateful for any hint, and sometimes the copilot’s sample code contained at least a variation of what I needed, so I could continue trying and searching.

Here I need to grumble a bit (“everything used to be better” :-)) about why this is not my favorite style of learning. I loved the combination of User’s Guide and User’s Reference. The former was a top-down overview, and the latter was a concise description of the individual building blocks, bottom-up, to be looked up just in time.

By contrast, an explanation by a runnable code snippet is not always the best way to convey a difficult, cryptic construct with anonymous tons of brackets and braces, or with lots of surrounding extra code just to make it run in the browser. At some point we realized that formerly we knew why something did not work, and now we do not know why it does work. With the pressure towards frameworks, this trend is accelerating. Now it is much faster to just try something out rather than first think it through. I admit I often find myself doing this, too, in particular when it involves just toggling a binary option. But I found that sometimes it costs much more time and effort to eventually understand such wicked constructs.
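A made-up example of the kind of construct I mean: the dense version runs, but hides its intent behind anonymous brackets; the spelled-out version says what it does.

```javascript
const items = [
  { type: 'node', id: 1 },
  { type: 'edge', id: 2 },
  { type: 'node', id: 3 },
];

// Dense: correct, but the intent hides behind the brackets.
const grouped = items.reduce(
  (acc, { type, ...rest }) => ({ ...acc, [type]: [...(acc[type] ?? []), rest] }),
  {},
);

// Spelled out: the same grouping, step by step.
const groupedVerbose = {};
for (const { type, ...rest } of items) {
  if (!groupedVerbose[type]) groupedVerbose[type] = [];
  groupedVerbose[type].push(rest);
}

console.log(grouped);        // { node: [{id:1},{id:3}], edge: [{id:2}] }
console.log(groupedVerbose); // identical
```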

Copilot gave me many of these cumbersome experiences. Often I was not able to communicate to him what I wanted. (I also tried the new Bing Chat, where the request could at least be refined, but its code snippets were rarely more useful.) The most frustrating thing was not even when the copilot started to confabulate and wrote line after line of, e.g., exotic options for a Redo manager. The most frustrating was when he just imitated and ruminated on my own clueless attempts. The only advantage over a Google search was that he brought his code snippet examples into my own context.

Personal context?

The interesting question is how much he learned to adapt to my personal context, versus how many standard common patterns he used. But this is difficult to tell. He often reused my recently entered lines. My program contains ‘nodes’ and ‘edges’, which in the older parts were called ‘topics’ and ‘assocs’. I was surprised that he instantly complemented my ‘topics’ with ‘associations’. But then, other people call their stuff similarly. The trial is tied to my GitHub account, and I don’t know if he knows my other program versions up there in the repository which are not in my local VS Code. Indeed, I do not know what he knows.

Similarly, how much did he do on my own computer, and how much did he do ‘at home’ on the giant machines? Below is a screenshot of my network statistics during average work with him, without explicitly prompting him. It seems to be quite a lot.

Network adapter statistics of 60 seconds. Approx. 12 peaks, 2 of them 100%

Conclusion

For me, one big benefit of the copilot is that he compensates for some shortcomings of other elements of our trade. For example, the difficulties of languages like JavaScript, where he approximates the correct snippets by common, frequently used patterns. Or the lack of good atomic reference information, which he replaces with example snippets brought right into the context at hand. The other big benefit is saving typing time for the small share of very frequent patterns.

This is, IMHO, not worth the massive energy consumption. But I do see potential in some use cases that could probably run on the user’s machine: for one, searching and adapting the user’s own similar precedent snippets; and second, tracing and following all the references and pointer chains to check whether a code statement will meet its prerequisites, or prepare them otherwise. But this would probably not need much similarity-based machine learning. Mere similarity, IMHO, is at odds with correct code.
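A minimal sketch of the first idea — purely local, with no machine learning, and all names invented: score the user’s own earlier snippets against the current line by shared identifiers, and propose the best match for adaptation.

```javascript
// Identifier-like tokens of a code line.
const tokens = (line) => new Set(line.match(/[A-Za-z_$][\w$]*/g) ?? []);

// Jaccard similarity of two token sets.
function similarity(a, b) {
  const shared = [...a].filter((t) => b.has(t)).length;
  return shared / (a.size + b.size - shared || 1);
}

// Propose the user's own most similar precedent snippet.
function bestPrecedent(currentLine, ownSnippets) {
  const current = tokens(currentLine);
  return ownSnippets
    .map((snippet) => ({ snippet, score: similarity(current, tokens(snippet)) }))
    .sort((x, y) => y.score - x.score)[0];
}

const mySnippets = ['node.translation[0] = dx;', 'edge.label = newLabel;'];
console.log(bestPrecedent('node.translation[1] = dy;', mySnippets));
// → { snippet: 'node.translation[0] = dx;', score: 0.5 }
```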

And the goal of the massive investments is probably not the support of individual needs but the replacement of costly humans by machines, which won’t strike.

Posted in Tools

Imagination and understanding

Once we consider how much chatbots understand what they say, we might ask once again how we ourselves do our understanding. Incidentally, there was also a recent discussion about creating imagination (more specifically, mental imagery), and it occurred to me that there is a connection I had not previously noticed:

Translated into German, ‘imagine’ is the causative of ‘understand’.

  • ‘Understand’ is verstehen, which comes from standing right in front of something. (English ‘understanding’ even comes from inter, i.e. standing right among, in between.)
  • Now ‘imagine’ is vorstellen, and stellen (= ‘to put’) is the causative of stehen (= ‘to stand’) — I put something somewhere such that it will then stand there.

So the active attempt described by Høeg — to “create”, “summon up”, “construct” a mental image, vorstellen — is linked to an ideal state of verstehen where the immediate presence of, or immersion into, a phenomenon yields a plausible, deep kind of understanding.

Note that unlike ‘imagination’, the German equivalent is not tied to ‘image’ and the visual sense. But of course, with the visual impression of something right in front of you, it is probably much easier to ‘see’ the connections between multiple items that appear all at once in a spatial view than with sequential speech or text — at least for many people, including me, and I think this is why it is often said that “Our Sense of Vision Trumps All Others”.

A man standing in front of a landscape with rainbow; next to his head is a thought bubble picturing his hand grasping the rainbow.
Imagination

Everybody has their own idea about understanding and the various meanings of the word, from mere acoustic and superficial senses, to mechanical ‘snap in’ or ‘fall in place’ senses, to an empathic sense and other deeper forms of understanding, including some kind of personal relationship through the “standing in front of”.

In the above-linked video about chatbots, the criterion of understanding is whether they can apply it to similar examples. I think this is still a superficial sense, although it certainly fits into the educational context of proving and assessing one’s internal state of understanding, which is impossible to tap more directly.

(But, just as in the unfortunate discussion about cheating in essay writing, it is a pity that the focus is so much on competitive assessing and suspicion, instead of a bit of trust that immersing a student for years in a simulation of what they want to become will in most cases grow sufficient understanding for responsible work, and that total failures may be detected much more easily and earlier, ideally by the students themselves.)

Caveats: 1. It is always problematic to draw conclusions from etymology, and I may have even misrepresented the actual case here. And if it’s correct here, someone else has certainly written about it whom I failed to read. 2. As often, the post draws on ideas from Stephen Downes, in particular re superficial and deeper, which I may not have sufficiently understood, either.

Image is a Remix of Reaching for Rainbow and Centered Under the Arches by Alan Levine (public domain), plus MS Office stock photo

Posted in Visualization

Distinctively human, now

Everyone needs to find their response to the question of what humans can do better than an AI. Here is mine.

Until recently, comparing human intelligence with AI typically involved pointing to some bigger feat that machines could not perform: some ‘higher order’ thinking such as coming up with an idea, understanding and solving a problem, or generating a creation. But after the shock of the famous chat bot, the complacency has been shattered, and bemusement and cluelessness are barely hidden behind badmouthing and belittling, or quickly jumping on the bandwagon.

I think the fixation on the qualitative degree of a feat was distracting and misleading. Now that the principle and the building blocks of the human archetype of intellectual activity are being understood and copied, no degree, amount, or extent of human performance can remain unchallenged.

Instead, the distinguishing feature is the subjectivity and individuality, grounded and developed on a unique personal background, that makes human thought so valuable to others.

Pictograms of a fingerprint and a QR code, alternating. Annotation is "Human" and "Artificial".

This difference largely determines what I will expect from AI.

There are cases where people do not need or want a personal counterpart to engage with.

  • I might be perfectly happy doing self-service with an anonymous agent whose agency is optimized only to get me into some target state, such as having some predefined knowledge or skill. Note that I may select a personal subset that interests me, or my personal gaps may be individually detected and filled by a personalized service; but the target is modelled after a centralized template, there is no mutual influence possible or necessary, and no emphasis, weighting, or bias of what is particularly relevant from either point of view is desired. Acquiring ‘objective’ knowledge would be an example of this one-sided case of personal activity.
  • Another case of one-sided activity is the magic of the Brownies of Cologne, where the AI does all the thinking for me. Some hyped prostheses called ‘tools’ for thought are expected to deliver that magic.

In most other cases, however, an anonymous, impersonal automaton is just frustrating, even though this might not be noticeable from the start.

  • Generative creative art delivers impressive results that may even be unique, by applying randomness and combinatorics. Like human art, it may tickle the sense of surprise and the desire for novelty, and help appease boredom, for example by juxtaposing unexpected elements. Drawing from a vast anonymous mass of sources, however, it creates anonymous mass products, comparable with the ‘Belling stag’ or the ‘Mediterranean with jar’ from the department store. And from the multiple random variations one cannot recognize a pattern that could reveal something about the artist.
  • Other kinds of creativity, like finding solutions for problems, will also often involve a new juxtaposition of concepts from different domains, a distant association. And it may be tempting to apply AI and randomness to produce such new combinations (see the toy sketch after this list). But without an individual sense of relevance, the raw mass of combinatorial links is just futile to sift through. And no, it is not possible to communicate one’s personal weightings to the AI via verbal prompts, which are limited to explicit ideas and exclude tacit knowledge.
  • There are many cases that only work with a genuine human. Learning by early imitation, shared gazing and trust, relationships of care and coaching and fostering independence — all of these rely on the other being a genuine human as well. And genuineness is typically recognized from an individual personality, as opposed to a templated, automated mass instance (unless betrayed, which is why mandatory labeling is the most important part of AI transparency).
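Here is the toy sketch mentioned above (the concept lists are invented): a random-juxtaposition generator is trivial to write, which is exactly the problem — without a personal sense of relevance, the combinations are just a heap.

```javascript
// Two invented concept domains.
const biology  = ['swarm', 'symbiosis', 'mutation', 'camouflage'];
const software = ['cache', 'scheduler', 'protocol', 'sandbox'];

// Every concept paired with every other: mechanically 'creative'.
const juxtapositions = biology.flatMap((b) => software.map((s) => `${b} ${s}`));

console.log(juxtapositions.length); // 16 — with n concepts per side, n²
console.log(juxtapositions[0]);     // 'swarm cache'
// Nothing here can tell that, say, 'swarm scheduler' resonates while
// 'camouflage cache' does not; that weighting is tacit and personal.
```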

Finally, the question might remain: why can’t robots be subjective and individual? I think, theoretically and in some extreme thought experiments, they could indeed be. Of course it would not suffice to apply some random generator to many single features of their ‘personality’, because this would just create confusion and not a consistent whole like a human. Rather, they would have to be ‘raised’, and grow from a trusted grounding into diverse individuals. Of course, it is difficult to imagine how an AI could be subjective and — without embodiment in meat — be sentient and have passions or even empathy. Therefore, it might be useful to think of them as alien intelligences: as if they were from a different planet. These intelligences are equally difficult to imagine, but very probably do exist nevertheless. Supposing that AIs grow similarly to humans, and since they share the network properties of our own neuronal networks, it is then no longer absurd to suspect that some subjective and passionate forms might be constructed. Not absurd, just very alien.

But probably it would be much too costly for the investors to produce such devices, and they would certainly not fit ideally into the economic model of mass production.

Posted in Uncategorized

Visualizing Complexity

Howard Rheingold asks about “Tools for thinking about knowledge?”, and this is a great prompt to clarify my own imagination.

I responded on Mastodon: I can best visualize thinking/ ToolsForThought as a large table while knowledge/ PKM is a big filing cabinet. Put snippets from the drawers on the table and rearrange them.

And therefore, musing about thinking can be facilitated by a modern mapping application. When I played with a puzzle like the famous planarity.net, I could wonderfully philosophize about how a dauntingly complex network of associations could eventually be disentangled, and I continued with my own puzzle.

Such a map depicts what is, IMHO, the most important ingredient of thinking: associations. And it provides a palpable experience (think Murphy Paul) of the distance of the associations. Distant associations are the core of innovative thinking, and the gradual untangling of a complex map involves just reducing the very distances of related things, to see new connections.

(A simple map is, of course, not such a lucrative product to sell as an all-in-one TfT app, and my tool is not for sale. It doesn’t even require any installation or registration; it can be launched just from the Downloads folder. Then find the relaxing puzzle under Advanced > Miscellaneous.)

Maybe you also see another feature of a finished map such as my ‘after’ version above. There are parts that are merely hierarchical and do not add to the complexity and overlap problems. They are just ‘complicated’ (from Latin complicare, “to fold together”), not ‘complex’ (from Latin plectere, “weave, braid, twine”). As you rearrange the map, these parts gradually take their shape and gestalt by separating and isolating items and structures. (Which reminds us that the brain is a distinction engine, and one part of it is particularly good at isolating and focussing on the narrow contexts of deep knowledge about a frame of topics.)

The rest of the map cannot be reduced to such trees, but consists of associations connecting multiple topical areas. (This reminds us of metaphors, which were key to language development, and which are the strength of the opposite part of brain operation.) These true network structures sometimes cannot even get rid of their overlaps. Because complexity cannot be simplified without adulteration, it can only be made clearer by rearranging its map.
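A minimal sketch of this distinction, with invented data: repeatedly peel away degree-one nodes — the merely ‘complicated’, tree-like fringe — and whatever survives is the truly ‘complex’ core that no rearrangement can reduce.

```javascript
// Edges of a small map: 'a–b–c' is a tree-like tail attached to the
// cycle x–y–z, which is a genuinely complex (woven) core.
const edges = [['a', 'b'], ['b', 'c'], ['c', 'x'], ['x', 'y'], ['y', 'z'], ['z', 'x']];

function complexCore(edges) {
  let current = edges;
  while (true) {
    const degree = {};
    for (const [u, v] of current) {
      degree[u] = (degree[u] ?? 0) + 1;
      degree[v] = (degree[v] ?? 0) + 1;
    }
    // Drop every edge that touches a leaf (a degree-1 node).
    const pruned = current.filter(([u, v]) => degree[u] > 1 && degree[v] > 1);
    if (pruned.length === current.length) return current;
    current = pruned;
  }
}

console.log(complexCore(edges));
// → [['x','y'], ['y','z'], ['z','x']] — the tail is peeled away, the cycle remains.
```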

Posted in Visualization

Technically confirmed authorship?

The current unfortunate discussion about student essays is not the only one concerned with faked or confirmed authorship. I too want to know who thought up what I am reading. The only way to ensure it was a human is via successful networks of personal trust.

We are used to devising technical solutions for many authentication problems. To identify the human associated with a user ID, we tap their brain, where the password lives (or sometimes just a passphrase for unlocking a cryptographic private key, which is a password too long to remember, and hence stored). But how to tap the thoughts within the brain, when thinking is now thoroughly separated from writing, and can be delegated or substituted?

I think the problem cannot be solved by adapting the citation practice, which was focussed on guaranteeing validity, bibliometric merits, and copyrights rather than honest attribution of ideas, inspiration, and pointers leading to relevant sources like this. By contrast, early blogger networks shared stuff from other authors that was not only trusted to be truthful but also trusted to be sufficiently relevant. And I think the same ‘ripple’ networks might also guarantee human authorship.

A decentralized network topology, with nodes in the proximity of one red node having icon colors and connector colors in tones of decreasing warmth.

I am repeating my picture of such a decentralized ripple network from a previous post from the #ethics21 MOOC, because I do think that the way trust spreads is essentially the same as the way a personal ethics is learned.

Via such trusted links, authorship might be sufficiently identified, to a reasonable extent. Achieving 100% human authorship will be as difficult as avoiding GMO food, where one field can always be influenced by the wind blowing from a neighboring field. Similar pollution by sources that are not 100% human will gradually increase.
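A minimal sketch of such a ripple, with invented names and an invented decay factor: trust flows outward from me along trusted links, attenuating with each hop and accumulating where several paths meet, so distant strangers end up with only faint, indirect trust.

```javascript
// Who trusts whom (an invented follower graph).
const trusts = {
  me:    ['alice', 'bob'],
  alice: ['carol'],
  bob:   ['carol', 'dave'],
  carol: ['eve'],
  dave:  [],
  eve:   [],
};

// Breadth-first ripple: each hop multiplies trust by a decay factor;
// trust arriving over several paths adds up. (A simplification — real
// trust propagation would need cycle handling and saturation.)
function rippleTrust(source, decay = 0.5) {
  const trust = { [source]: 1 };
  const queue = [source];
  while (queue.length) {
    const node = queue.shift();
    for (const neighbor of trusts[node] ?? []) {
      if (!(neighbor in trust)) queue.push(neighbor);
      trust[neighbor] = (trust[neighbor] ?? 0) + trust[node] * decay;
    }
  }
  return trust;
}

console.log(rippleTrust('me'));
// → { me: 1, alice: 0.5, bob: 0.5, carol: 0.5, dave: 0.25, eve: 0.25 }
```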

What will also increase is our tolerance in what we expect. For example, I have always valued and admired the capability to find what is salient among a vast deluge of sources, and good pointers were as valuable to me as good writing. I was never able to do this with, e.g., too many Twitter followees, and I had to limit myself to 100 because I set my timeline to “Latest” to avoid the patronizing algorithm.

But there will soon be more AI involved. I still have not tried Feedly’s personal AI recommender Leo, because I don’t like to channel all my clicking activity through their interface just to enable them to learn my preferences. This would feel constraining to me, a bit like a proctoring app. Furthermore, I still doubt whether it could really recommend serendipitous, distant resources to me, for diverging thoughts, in addition to converging on my observed click pattern. My rule of thumb for AI is: let it sort, not rank — I would prefer an app that does not filter my stuff but just categorizes it.
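A minimal sketch of “sort, not rank” (the keyword buckets are invented for illustration): every item is kept and merely grouped; nothing is scored, hidden, or re-ordered by predicted preference.

```javascript
// Invented keyword buckets; a real categorizer could be smarter, but
// it must neither drop nor rank anything.
const categories = {
  AI:            /\b(AI|LLM|chatbot|copilot)\b/i,
  Visualization: /\b(maps?|diagram|visual)\b/i,
};

function sortNotRank(items) {
  const grouped = { Other: [] };
  for (const name of Object.keys(categories)) grouped[name] = [];
  for (const item of items) {
    const bucket = Object.keys(categories).find((name) => categories[name].test(item.title));
    grouped[bucket ?? 'Other'].push(item); // every item survives
  }
  return grouped;
}

const feed = [
  { title: 'A new chatbot benchmark' },
  { title: 'Concept maps for complexity' },
  { title: 'Gardening notes' },
];
console.log(sortNotRank(feed));
// → { Other: [gardening…], AI: [chatbot…], Visualization: [concept maps…] }
```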

So, practice may be changing. But of course, trusting the sharing authors also involves trusting their use of various tools.

Posted in 57, Tools

No prediction

At this end of the year, I cannot venture a prediction of what will change, just that much will change. Big changes are impending because the development of ChatGPT has certainly marked a big watershed: like “the Pill” separated sex from reproduction, AI has now thoroughly separated talking from thinking. But the consequences might lead in either of two very different directions:

  • either bullshitters and gaslighters will be rumbled and will have a harder time, because now everyone can see that there is no substantial thought behind their eloquent babbling, which the Large Language Models can now do equally well,
  • or tech will lure us even more effectively into relying on patronizing prostheses, instead of finding a reasonable division of labor for cooperating with the tools, towards the Augmentation of Human Intellect that Engelbart dreamed of.

Technology has rarely contented itself with helping us to cope with nature. Instead, its ever-improving effectiveness tends to quickly take on a life of its own. Instead of complementing nature to overcome some deficiencies, it quickly strives to master and dominate nature.

(That tech tools are used for dominance and power is no coincidence, given their ownership: they are mostly associated with investment or ‘capital’ of some sort, whose scarcity constitutes economic power, as game theory explains.)

The common theme of tech dominating nature also extends to thinking. It is difficult to escape the commercial pressures and find or promote tools that honestly complement human thinking (cooperating with nature) rather than trying to outperform it (competing, and seducing and substituting). I have been observing this for quite a while within the segment of Tools-for-Thought. Even though few users will admit it, there is a tacit hope that they will get smarter without much effort because somehow the tool will do most of the thinking. (And paying for the hyped tools fosters an entitlement attitude.) Perhaps one possible prediction for next year is that this hype is heading for its trough of disillusionment…

So far, however, human thinking has only been dominated by a human tool: language, which has often been conceived of as a tool (a ‘technology’) for thinking, not least because it is controlled from a brain area right next to the area that controls the ‘grasping’ and manipulating right hand. But with the separation of thinking and talking (human thinking and artificial talking), this relationship is profoundly shaken and shattered, with big consequences that are difficult to guess.

Man scratching his head, sitting beneath two diagrams, one showing a curve with ascending slope and one with descending slope, and a big question mark in between.

Incidentally, I was just learning about the philosophy of technology and its relationship to nature, when Jenny Mackness recommended a book about a human way of cooperation with Nature instead of competition and exploitation. And it became clear to me that we are very bad at cooperation between technology and (human) nature, and need to get much better.

Posted in 57, Tools