Technically confirmed authorship?

The current unfortunate discussion about student essays is not the only place where faked or confirmed authorship matters. I, too, want to know who thought up what I am reading. The only way to ensure it was a human is via successful networks of personal trust.

We are used to devising technical solutions for many authentication problems. To identify the human associated with a user ID, we tap their brain where the password lives (or sometimes just a passphrase for unlocking a cryptographic private key, which is a password too long to remember, and hence stored). But how do we tap the thoughts within the brain, now that thinking is thoroughly separated from writing, and can be delegated or substituted?
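To illustrate the gap: what cryptography can confirm is possession of a key, not the origin of a thought. A minimal sketch using the Python ‘cryptography’ package (the message is my own example):

```python
# Signing proves only that the holder of the private key endorsed these
# bytes; it says nothing about who (or what) thought up the content.
from cryptography.hazmat.primitives.asymmetric import ed25519

private_key = ed25519.Ed25519PrivateKey.generate()
message = b"This post was written by a human."

signature = private_key.sign(message)

# verify() raises InvalidSignature if the signature does not match.
private_key.public_key().verify(signature, message)
print("Valid: the key holder endorsed the text; nothing more is proven.")
```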

I don’t think the problem can be solved by adapting citation practice, which has been focussed on guaranteeing validity, bibliometric merit, and copyright, rather than on honest attribution of ideas, inspiration, and pointers leading to relevant sources like this. By contrast, early blogger networks shared material from other authors that was trusted not only to be truthful but also to be sufficiently relevant. And I think the same ‘ripple’ networks might also guarantee human authorship.

A decentralized network topology, with nodes in the proximity of one red node having icon colors and connector colors in tones of decreasing warmth.

I am repeating my picture of such a decentralized ripple network from a previous post from the #ethics21 MOOC, because I do think that the way trust spreads is essentially the same as the way a personal ethics is learned.

Via such trusted links, authorship might be identified to a reasonable extent. Achieving 100% human authorship will be similarly difficult as avoiding GMO food, where one field can always be influenced by the wind blowing from a neighboring field. Similar pollution by sources that are not 100% human will gradually increase.
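To make the ‘ripple’ idea concrete, here is a toy sketch (entirely hypothetical names and decay factor, my own illustration of the principle): trust attenuates with every hop away from me, so a claim of human authorship vouched for by close contacts counts for more than one from the periphery.

```python
# Toy model: my trust in a node is the best product of per-hop decay
# factors along any path from me, over a hypothetical follow graph.
from collections import deque

edges = {
    "me": ["alice", "bob"],
    "alice": ["carol"],
    "bob": ["carol", "dave"],
    "carol": ["eve"],
    "dave": [],
    "eve": [],
}

def trust_scores(start="me", decay=0.5):
    scores = {start: 1.0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in edges.get(node, []):
            candidate = scores[node] * decay
            if candidate > scores.get(neighbor, 0.0):  # keep the best path
                scores[neighbor] = candidate
                queue.append(neighbor)
    return scores

print(trust_scores())
# {'me': 1.0, 'alice': 0.5, 'bob': 0.5, 'carol': 0.25, 'dave': 0.25, 'eve': 0.125}
```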

What will also increase is our tolerance, as we adjust what we expect. For example, I have always valued and admired the capability to find what is salient among a vast deluge of sources, and good pointers were as valuable to me as good writing. I was never able to do this with, e.g., too many Twitter followees, and I had to limit myself to 100 because I set my timeline to “Latest” to avoid the patronizing algorithm.

But there will soon be more AI involved. I still have not tried Feedly’s personal AI recommender, Leo, because I don’t like channeling all my clicking activity through their interface just to enable them to learn my preferences. This would feel constraining to me, a bit like a proctoring app. Furthermore, I still doubt that it could really recommend serendipitous, distant resources to me for diverging thoughts, in addition to converging on my observed click pattern. My rule of thumb for AI is: let it sort, not rank. So I would prefer an app that does not filter my stuff but just categorizes it (a sketch below).
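Here is a toy illustration of that rule of thumb (my own sketch, not Feedly’s or anyone else’s actual algorithm): incoming items are bucketed by topic and kept in chronological order, instead of being scored against my click history.

```python
# "Sort, not rank": categorize feed items into topic buckets and keep
# each bucket chronological; no relevance scores, no filtering.
from collections import defaultdict

items = [
    {"title": "New RSS reader released", "topic": "tools", "date": "2022-12-01"},
    {"title": "Ethics of recommender systems", "topic": "ethics", "date": "2022-12-02"},
    {"title": "Note-taking workflows", "topic": "tools", "date": "2022-12-03"},
]

buckets = defaultdict(list)
for item in items:
    buckets[item["topic"]].append(item)   # categorize, don't score

for topic in sorted(buckets):
    print(topic)
    for entry in sorted(buckets[topic], key=lambda e: e["date"]):
        print("  ", entry["date"], entry["title"])
```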

So, practice may be changing. But of course, trusting the sharing authors also involves trusting their use of various tools.


No prediction

At this year’s end, I dare not venture a prediction of what will change, only that much will change. Big changes are impending, because the development of ChatGPT has certainly marked a big watershed: just as “the Pill” separated sex from reproduction, AI has now thoroughly separated talking from thinking. But the consequences might lead in either of two very different directions:

  • either bullshitters and gaslighters will be rumbled and will have a harder time, because now everyone can see that there is no substantial thought behind their eloquent babbling, which the Large Language Models can now produce equally well,
  • or tech will lure us even more effectively into relying on patronizing prostheses, instead of finding a reasonable division of labor, cooperating with the tools, and working towards the Augmentation of Human Intellect that Engelbart dreamed of.

Technology has rarely contented itself with helping us cope with nature. Instead, its ever-improving effectiveness tends to quickly take on a life of its own. Instead of complementing nature to overcome some deficiencies, it quickly strives to master and dominate nature.

(That tech tools are used for dominance and power is no accident, given their ownership: they are mostly associated with investment or ‘capital’ of some sort, whose scarcity constitutes economic power, as game theory explains.)

The common theme of tech dominating nature also extends to thinking. It is difficult to escape the commercial pressures and find or promote tools that honestly complement human thinking (cooperating with nature) rather than trying to outperform it (competing, seducing, and substituting). I have been observing this for quite a while within the segment of Tools for Thought. Even though few users will admit it, there is a tacit hope of getting smarter without much effort because somehow the tool will do most of the thinking. (And paying for the hyped tools fosters an entitlement attitude.) Perhaps one possible prediction for next year is that this hype is heading for its trough of disillusionment…

So far, however, human thinking has only been dominated by one human tool: language, which has often been conceived of as a tool (a ‘technology’) for thinking, not least because it is controlled from a brain area right next to the one that controls the ‘grasping’ and manipulating right hand. But with the separation of thinking and talking (human thinking and artificial talking), this relationship is profoundly shaken and shattered, with big consequences that are difficult to guess.

Man scratching his head, sitting beneath two diagrams, one showing a curve with ascending slope and one with descending slope, and a big question mark in between.

Incidentally, I was just learning about the philosophy of technology and its relationship to nature when Jenny Mackness recommended a book about a human way of cooperating with Nature instead of competing with and exploiting it. And it became clear to me that we are very bad at cooperation between technology and (human) nature, and need to get much better.


Twitter Exit

It is time to stop most of my activities on Twitter, so I got serious about it.

I don’t like Mastodon either, and I won’t discuss my discomfort with its peculiarities now. The main reason is that I dislike the stream of any microblogging platform. Its daunting linear organisation seems optimized for overwhelming and confusing users, so much so that they become dependent on algorithmic recommendations once their number of followees increases.

I had limited my consumption to 100 followees, and my active tweeting was mostly a poor surrogate for RSS, for the people who, for whatever mysterious reason, still don’t have a feed reader to read my feed.

So, the easier part of the migration was to process my own 421 tweets, replies, and retweets from the downloaded archive. From the resources that I myself wanted to recommend, I created 46 new entries on my revitalized social bookmarking account at Diigo, 12 of which emerged from reply tweets where I copied some of the conversation into the descriptions; about 10 tweets needed an archive.org entry, which sometimes took a few minutes because of the current overload.

Twitter has largely destroyed the use of curated social bookmarking, so I, too, have neglected my Diigo account. And over the years since 2009, many Twitter .URL files have accumulated across my file system: 480 in 270 folders, plus 30 old .website files in 21 folders! It was just so easy and convenient to drag and drop the icon from the address bar into a topic folder, which could bring me back not only to the resource that someone recommended but also to their comments about it, and sometimes right into the wider context of several replies.

To find and process these scattered bookmarks, I created a little utility in my own tool for thought. Read what it does in the explanation window after choosing Advanced > Miscellaneous > Utility for Twitter Exit. (A rough sketch of the core scanning step follows below the screenshot.)

Screenshot of an options dialog, containing three paragraphs and three bullet points of explanations, three input fields, and two radio buttons.
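For readers without my tool, the core scanning step is simple enough to sketch (my own reconstruction, not the actual utility; the root folder is a placeholder): walk the file system, parse the Windows .url shortcut files, and collect those pointing to twitter.com.

```python
# Windows .url shortcuts are INI files with an [InternetShortcut]
# section; collect every one under a root folder that links to Twitter.
import configparser
from pathlib import Path

def find_twitter_shortcuts(root):
    """Yield (path, url) for every .url shortcut under root linking to Twitter."""
    for path in Path(root).rglob("*.url"):
        parser = configparser.ConfigParser(interpolation=None)
        try:
            parser.read(path, encoding="utf-8")
            url = parser.get("InternetShortcut", "URL", fallback="")
        except (configparser.Error, UnicodeDecodeError):
            continue  # skip malformed or oddly encoded files
        if "twitter.com" in url:
            yield path, url

# Hypothetical root folder; point it at your own document tree.
for path, url in find_twitter_shortcuts("C:/Users/me/Documents"):
    print(path, "->", url)
```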

My takeaway from the process:

  • Please use RSS,
  • please use social bookmarking,
  • please don’t delete your tweets yourself, as long as other users might have bookmarked them,
  • and please subscribe to my feed 🙂

Update 2022-12-18: For a very different approach, see Cogdog’s post.


Visualizing the “shoulders”

(Skip to citations)

We are “standing on the shoulders of giants” who, in turn, were leveraging the work of those who have gone before. And if their incremental contributions are put into atomic, tweet-length sentences, we can visualize these relationships in various ways, most impressively as done by Deniz Cem Önduygu in his History of Philosophy.

Incidentally, Ton Zijlstra mentioned the shoulders, and Chris Aldrich noticed the atomicity (zettelkasten structure) of Önduygu’s philosophy presentation, just when I was trying to assemble the philosophers who influenced my own thinking.

Kindly, Stephen Downes released some short versions of his core ideas:

Downes says

“Cognition is based in similarity, and network semantics are similarity-based semantics.

Learning is to practice and reflect, and it is essentially the same as association mechanisms of machine learning: Hebbian and Boltzmann, respectively, plus back propagation.

A new theory of human learning ought to be informed by ‘learning theory’ in machine learning (connectionist artificial intelligences, today known as “deep learning”).

Neural and social networks are similar; successful ones may have a Boltzmann-like mechanism that may be related to sensations of peacefulness, and the learning of ethics.

Not all connections are of equal strength in the neural metaphor of Connectivism.

Relevance and salience are based on context-dependence and a network of similarities.

Knowledge is having dispositions, based on recognition, i.e. being organized in such a way as to respond in an appropriate way to a partial pattern.

What we know about the world is irreducibly interpretive; our thoughts about objects are not representations of the external world, they are not inferred from experience.”
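(A side note on the second of these sentences, in my compact paraphrase of the standard textbook forms, not Downes’s own notation: a Hebbian update strengthens a connection whenever both of its endpoints are active at the same time, roughly

$$\Delta w_{ij} = \eta \, x_i \, x_j ,$$

while a Boltzmann machine adjusts weights until the correlations the network produces on its own match the observed ones,

$$\Delta w_{ij} \propto \langle x_i x_j \rangle_{\text{data}} - \langle x_i x_j \rangle_{\text{model}} .)$$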

Of course, this reduced listing hardly does justice to their full power. But imagine the rich connections of disagreement, contrast, refutation, and agreement, similarity, expansion, to previous sentences by other giants, and how much work this involved! (Again, the links seem more interesting than the isolated nodes, as I learned from Connectivism.) Unfortunately, having no philosophical training, I am not able to identify most of these connections, except a few of the most obvious ones.

But I created some new functions in my tool to make it technically easier to add such links, in case you want to connect the above sentences, or your own, to all their relevant predecessors. They can be formatted as arcs in Önduygu’s linear (chronological) representation, or in my own favorite 2D format (allowing topical gestalt), i.e. displayed in one of two interactive visualizations; a minimal sketch of the arc layout follows the screenshots.

Screenshot of a linear, chronological presentation of philosophical statements, with connections as green and red arcs
Screenshot of a zoomed-out 2D representation of the same connections as in the above image
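If you want to experiment with the arc idea outside my tool, here is a minimal sketch of the linear layout in matplotlib (my own construction, not Önduygu’s code; the statements and links are placeholders):

```python
# Linear arc layout: statements on a timeline, agreement/disagreement
# links drawn as green/red arcs above it.
import matplotlib.pyplot as plt
from matplotlib.patches import Arc

statements = ["S1", "S2", "S3", "S4"]            # chronological order
links = [(0, 2, "agree"), (1, 3, "disagree")]    # (from, to, relation)

fig, ax = plt.subplots(figsize=(8, 3))
for x, label in enumerate(statements):
    ax.plot(x, 0, "ko")
    ax.annotate(label, (x, -0.2), ha="center")

for i, j, relation in links:
    color = "green" if relation == "agree" else "red"
    # semicircular arc spanning the two statement positions
    ax.add_patch(Arc(((i + j) / 2, 0), abs(j - i), abs(j - i) / 2,
                     theta1=0, theta2=180, color=color))

ax.set_xlim(-0.5, len(statements) - 0.5)
ax.set_ylim(-0.5, 1.5)
ax.axis("off")
plt.show()
```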

Future

I watched Stephen Downes’s beautiful slides (99 MB) about “The Future of Learning Technology”, and here is what I noted.

The title slide saying "The Future of Learning Technology: 10 Key Tools and Methods", "Stephen Downes" and "Contact North", before a beautiful creek landscape.

“9. Agency”

“What we learn depends on why we learn.” I think curriculum makers still don’t ask why students are still required to memorize material that can be looked up.

“We can think of this as the attractor as opposed to a driver.” Ha, pull rather than push. Great.

“Agency – […], for better or worse – is impacted by technology.” That is exactly my fear: that patronization by prosthesis technologies will become even worse.

“7. Creative Experiences”

“The dialogue and interactivity that takes place sets the work into context and enables learners to see it as a process rather than an artifact.” I like the link between activity and context. The limitations of “knowledge transfer — reading, lectures, videos” (slide 45) cannot be overcome just by alternating between reading and subsequently writing responses to questions, nor by creating an isolated artefact, but only by mixing passive and active modes.

This is what “allow[s] us to redefine what we mean by ‘textbooks’ and even ‘learning objects.’” (slide 17). And I am thinking not just of high-tech solutions like Jupyter textbooks in the cloud (slide 17), but also of simple things like the annotation and rearranging that are enabled by externalizing to the ‘extended mind’.

“Apprenticeships” (slide 47) — this might sound as if it were limited to vocational training or practical tasks. But there could also be scientific apprenticeships, if only the ‘masters’, too, started to leverage the digital affordances of ‘extended mind’ externalizations (just mentioned), instead of insisting that what comes between the literature review and the paper outline must be done entirely in their big heads.

“5. Consensus”

“In a democracy, many of these decisions are made through some form of majority rule.” Probably I haven’t yet fully grasped the importance of ‘consensus algorithms’. For me, majority rule is not a poor surrogate for what a consensus of the wisest big heads would achieve. I think majority is often about economic vs. political power. Economic power (of the few) is mostly derived from the very fact that a few control some scarce resource. And this is because, for an economic transaction between two parties to take place, each one has a ‘veto’ right to refrain from the interaction, so consensus about the price etc. must be achieved. Unlike economic power, political power is the power of the many, who could override and confine the former (at least in a democracy). So, consensus does not seem only positive to me (let alone the kind required by the veto right in the Security Council).

And economic power is what hampers many of our hopes for the future of ed tech, such as “Open Data” (slide 10), or IMHO also the great idea that “The credential of the future will be a job offer” (slide 54), or the whole idea of decentralisation against the monopolies.


Minimal GitHub for reassurance

The multitude of GitHub terms and command-line options has always been so dizzying for me that its main objective has never been met: reassuring me of a safe way back after trying out changes to my code. Finally I have found some minimal procedures for my simple use case.

At some point while trying out changes, I give up keeping track of the many interdependent changes (because this feels like herding cats), and it is no longer possible to revert all my changes with manual methods such as Undo/Redo, commenting out and uncommenting individual lines, or going through all the red/green marks in the Changes display. (Comparing this with knitting: I have to rip it all up and start again.) This is where GitHub Desktop offers a great affordance: in the Changes navigation pane on the left, right-click the file and then click “Discard changes…”.

But sometimes there is a partial success that I do not want to discard along with the subsequent changes. This is what the “commit” command is all about. But what if I am not sure yet whether the partial success will lead to a final success, or whether it must eventually be discarded, too? This is where the “branch” concept comes in. Yet creating a branch seemed to be a nightmare:

Once I switched to that new branch, all the code in my development environment (Eclipse) was silently changed, and I did not know how I would get it back. Even worse: previous experience suggested that not all files were correctly restored, or not immediately, not even when I clicked them (only after I found and used Right-click > Refresh on the project entry). This was not the safety I needed when confusion and wreckage had happened, so I avoided branching. The many descriptions available on the web (one nerdier than the other) just made me feel like it is me who is the “git”.

Finally I learned what the unfortunate default of “stashing” means, and that the opposite alternative does exactly what I needed:

A screenshot from a GitHub dialog, offering two options: "Leave my changes on master", and "Bring my changes to [the new branch]". The former one is shown as default, but I crossed it out and circled the latter.

Now I can “commit” my partial success “to” the risky branch. Then the Changes display is reusable again for new red and green marks, which normally make life so much easier:

  • try small things out until something works,
  • then revisit the few logical bunches of changes (I love to see how the changes belong together),
  • clean up what would be embarrassing to publish as open source,
  • “commit” within the experimental branch,
  • and move on to the next challenge with a freshly cleared display of changes.

Once I decide to ‘rip it all up’, the only thing I need is strong nerves, to trust that switching back to the “master” branch will indeed restore all my work. (Which I have saved, anyway, by Right-drag > Copy-here of my entire source folder on my desktop, just in case.)

Alternatively, if I decide to keep all the changes, the “merge” procedure gets a bit more adventurous again: I follow the dialogues on the GitHub web page to create a “pull request” and to merge it (a command-line sketch of the whole round trip follows below the list):

  • Upon clicking “Branch” > “Create pull request” in GitHub Desktop, it says: “Your branch must be published before opening a pull request.”, so I press the blue button called “Publish branch”,
  • which leads me to a confusing GitHub web page titled “Open a pull request” with a green button called “Create pull request”;
  • on the next page, I have to scroll down to a green button “Rebase and merge”;
  • (a drop-down arrow next to the button offers two alternatives that are as mysterious and unexplained as “Rebase”, namely “Merge commit” and “Squash and merge”, so I stick to the default option, Rebase);
  • another green button asks me to “Confirm rebase and merge”, and pressing it leads to a success message: “Pull request successfully merged and closed”.
  • After the next automatic “Fetch origin” in GitHub Desktop, it shows the commits that I had sent to my experimental branch just like the normal commits in my master branch.
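For reference, the local part of this round trip can also be done with a handful of plain git commands. A minimal sketch (driven from Python for the sake of commented steps; ‘risky-experiment’ is a made-up branch name, and ‘git switch’ needs git 2.23 or newer):

```python
# Sketch of the branch-commit-merge round trip in plain git commands.
import subprocess

def git(*args):
    """Run a git command in the current repository, failing loudly."""
    subprocess.run(["git", *args], check=True)

git("switch", "-c", "risky-experiment")  # new branch; uncommitted changes
                                         # come along ("Bring my changes")
git("add", "-A")
git("commit", "-m", "partial success, may still be discarded")

# Either 'rip it all up': switch back, master is untouched,
# and the experiment stays parked on its branch ...
git("switch", "master")

# ... or keep everything by merging the branch back, the local
# equivalent of the pull-request dance on the web page:
git("merge", "risky-experiment")
```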

(It was unexpected and a bit uncanny that my commands on the github.com web site directly changed the code files in my local Eclipse IDE, right ‘under my feet’. But I did not find a procedure on the desktop that was made for the GUI rather than the CLI. Probably the GUI and CLI camps fight and see each other as ‘gits’ instead of helping each other.)

Frankly, I cannot understand how it has become fashionable even for the plain texts of humanities scholars to just store them on GitHub, without any backup intent. Probably it is meant to prove digital literacy and nerdiness to fumble with fancy command-prompt incantations. But I suspect it may rather push off traditional scholars and widen the gap.


Conversations with tools

Ton Zijlstra commented on a great post by Henrik Karlsson about the large language model GPT-3, which caused me to finally try it out.

My first impression is similar to theirs: “Just wow”, and it took me quite a while until I reached some limits (in particular when asking GPT to “Write a fictitious debate between xxx and yyy about zzz.”)

One undeniable affordance of the machine’s responses, however, is to get inspiration and stimulation for consideration. This is also the big topic of the note-takers and zettelkasten crowd, for example using the autolinking of “unlinked references” (a toy sketch below). And I am noticing that it is probably a matter of taste and preferences, or perhaps even a matter of different working styles: if I am permanently working at my limits, there is no room left for organic associations, and then I might be more impressed by an abundance of ideas and artificial creativity?
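For readers unfamiliar with that feature, here is a toy sketch of the idea (my own illustration, not any particular app’s algorithm): find notes whose text mentions another note’s title without linking to it yet.

```python
# "Unlinked references": suggest a link wherever one note's text
# mentions another note's title.
notes = {
    "Connectivism": "Learning as the formation of network connections ...",
    "Zettelkasten": "Luhmann treated his Zettelkasten as a conversation "
                    "partner, which resembles ideas from Connectivism.",
}

for title in notes:
    for other, text in notes.items():
        if other != title and title.lower() in text.lower():
            print(f"Suggest linking '{other}' -> '{title}'")
```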

Perhaps I am too much of an ungrateful, grumpy killjoy, but the abundance of artificial serendipitous stimulations makes me think of how onerous it will be to sift through them all to find the ones most relevant to me.

Let’s contrast this sort of inspiration with the sort that comes through blog reactions. Karlsson explicitly compares blog posts to search queries and to the new kind of ‘conversations’ that we can have with GPT-3, and I think it is indeed very appropriate to see the interaction with these tools as ‘communication’. Luhmann, too, used this metaphor for his Zettelkasten, as Ton points out, and when we use GPT, the back and forth of ‘prompts’ and ‘completions’ is a dialog, too. So there are many beneficial similarities to blog comments and trackbacks.

Depicts a user, a smartphone displaying the OpenAI logo, and a Zettelkasten, each sitting on an armchair in a club.
Image based on https://www.flickr.com/photos/jennymackness/32921358238/ (CC-BY-NC-SA)

However, blog respondents are not anonymous mass products. They have a background. They care about the topics I write about, and I care about theirs. I subscribe to people whose interests are not always the same as mine but are often still close enough to be inspiring. And I trust that what they are writing is relevant. (Formerly, we talked about bloggers as ‘fuzzy categories’, about ‘online resonance’, and about the skill of ‘picking’ from the abundance.) The grounding in a shared context and a known background makes it easier for me to understand, and benefit from, their reactions, probably in a similar way as neural ‘priming’ works.

All of this is missing when I process suggestions from a machine that does not know me and that I don’t know (I don’t even know what it knows and what it merely confabulates, or at what point its algorithm switches to the live web to look up more). It is impersonal, even if it may impersonate Plato in a debate.


Pruning for Output

Chris Aldrich calls for actual examples of how we use our Zettelkästen for outputs. I am not sure if the following is exactly what he wants, because I do not use a single zettelkasten equivalent for all the ideas on paper-slip equivalents. For a particular output, I collect them in a project-specific folder to process them. (By contrast, for the project-independent/evergreen notes, the goal is mainly to find new relationships and categories, as described in this video and the associated blog post.)

But here is the process, shown in zoomed screenshots of the collections. The description of my workflow is simple: after connecting and rearranging the notes, I sift through them one branch after the other. The key to this process is probably to see which items need to be pruned because they are tangents that are not well enough connected and would therefore take up unwarranted space. That’s it.

The example shown is the authentic (albeit anonymized) collection of my Kindle annotations and other notes for my recent post about Clark Quinn’s latest book.

Screenshot of a large map of notes.
Raw
Screenshot of a smaller map, with labels and details all anonymized by a-z to x.
Pruned

Edtech weariness?

It seems topical for older ed-tech people to talk about disillusionment (Stephen Downes filled yesterday’s entire newsletter issue with this topic), and I am old enough to comment.

A glass half filled, with the annotation 'Pessimism?' at the empty half and 'Optimism?' at the filled half.

For me, the distinction between my optimism and pessimism is rather simple: I believe in simple, low-tech tools that are really tools and not patronizing prostheses. So I am pessimistic about all sorts of spectacular, shiny, oversized toys, and I am still very confident that, in the long run, the simple natural affordances of IT will bear fruit even in education. I expressed that view with a simple little JavaScript example in 2001.

As for the oversized, patronizing opposite, I don’t know which one of my posts to link to, since all of them addressed this longstanding concern:

  • Specifically, for the prosthesis aspect, perhaps see this one, or that series (in the context of think tools). That series will also (in part 1) lead you to the keyword ‘idiosyncratic’, which is at the top of Jon Dron’s list.
  • For his keyword of ‘centralizing’, I have a whole category.
  • For the ‘scale’ aspect, just see my previous post.

Of course, however, as long as parents’ and institutions’ focus is on the efficiency of learning rather than on what needs to be learned, pessimism is still justified.


Cognition at scale

I am a bit late in commenting on the sensational story of the ‘sentient’ chatbot LaMDA. And of course I am one of the many who comment despite being no AI specialist. But I want to recall something, and point out some implications:

1. The strength of IT is scale.

Where the machine is really helpful as a tool (rather than an intrusive panjandrum) is where it does repetitive chores involving (too) many items or iterations.

In many coding lectures, teachers enthuse about working out the logic of some algorithm which eventually spits out a result, proving that we have successfully imposed our will on the computer and told it what to do. This may impress some kinds of people. And it is easier than providing a few hundred data records to demonstrate the real value of the machine: a massive scale of output.

Others may also be impressed by owning an app that seems capable of some magic. Which, for example, delivers ideas. Which works in lieu of the user rather than together with the user, like the famous Brownies of Cologne, or like a strong engine under the hood that is impressive to command. Commercial services are eager to depict their offering as a big feat rather than as help with scaled chores. Personally, I am particularly annoyed by the so-called tools for thought. Often they don’t even offer an import option for my own data to try them out.

I think it is an intrinsic advantage of IT to outperform us humans when it comes to output that involves many items, and it is based simply on the zero-cost copying of information. (My favorite low-tech operations, sorting and rearranging, also simply relieve the user of the insane number of copies that would otherwise be needed until the desired order was reached.)

While these repeatable operations are cheaper for the machine than for the user, the single action of selecting and scheduling a complex procedure (for example, for a single decision) costs human labor no matter how many items are being processed. And of course, the development of the machine and its algorithms is so costly that a bottom-line profit is possible only with large-scale deployment.
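In rough cost terms (my own back-of-the-envelope notation, not anyone’s established model): if $C_{\text{dev}}$ is the development cost, $C_{\text{setup}}$ the human labor of selecting and scheduling the procedure, and $c_{\text{run}}$ the machine’s marginal cost per item, then the cost per item of a run over $n$ items is

$$c(n) \;=\; \frac{C_{\text{dev}} + C_{\text{setup}}}{n} \;+\; c_{\text{run}} ,$$

which undercuts a human’s roughly constant per-item cost only when $n$ is large. Hence the pull towards mass deployment.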

This is why I think that AI will mainly serve as industrialized cognition for mass deployment, rather than for personal augmentation and assistance. (This industrialized production does include personalized assistance, such as helping individuals align with a centrally provided template, e.g. a canon of learning objectives, by identifying gaps and recommending appropriate activities. But I still count that as cheap mass production, and in the worst case it may even be a cheap teacher replacement for those who cannot afford good human teachers.)

2. Mass input

But somehow, AI has gained the aura of not just dealing more patiently or more quickly with large quantities of output, but maybe sometimes even doing things qualitatively better. A frequently mentioned example is the dermatologist who is outperformed by AI when it comes to diagnosing skin cancer.

Which brings us to the massive amount of input data that constitutes the machine’s intelligence: no human dermatologist can view as many images in their whole career as the said machine did.

I was very skeptical about whether AI would also excel for personal goals and interests, where little data is available, and whether it would be able to come up with non-obvious associations such as metaphors. I learned (later, in the #ethics21 MOOC) that it can learn from the user’s own choices, and recently there was even a hint about metaphors.

I am still skeptical that cheap/affordable AI will do such things, but I take the hopes seriously and follow the hints. This is why I was, of course, intrigued by the story about LaMDA. I think I do not underestimate AI’s eventual abilities, nor how far some extreme thought experiments of ‘raising’ AI personalities can liken hypothesized machines to us, however ‘alien’ these might then be.

3. LaMDA thinking

So what does it mean that LaMDA talked about himself (*)? Can we say he thought about himself? Of course, talking and thinking are closely related. Just consider the Sapir-Whorf hypothesis, or Piaget’s finding that children think that they think with their mouths.
* maybe I must say ‘themself’

Certainly, there is also a lot of processing present in LaMDA that we could call ‘preverbal’ thinking: when he prepares his responses within the so-called ‘sensible’ dialog, i.e. the responses do properly relate to the questions — unlike the hilarious vintage card game “Impertinent questions and pertinent answers” (German version: “Alles lacht”), where the responses are randomly drawn from a card deck, but also unlike the famous “Eliza”, which was completely pre-programmed.

Next, can the assumption of ‘thinking’ be disproved because LaMDA received all the input from outside, fed through training into his ‘mind’? Small infants, too, learn to interpret the world around them only through the ‘proxy’ of their parents, who work like a cognitive umbilical cord: long before infants can talk, through bodily experience, through ‘gaze conversations’, and later through shared gazing at the things around them.

And in particular, what babies know about themselves is also mediated by their parents. They refer to themselves using their first names, and only much later do they learn what ‘I’ means. So, up to this point, LaMDA’s ‘thinking’ about himself sounds plausible to me.

4. What self?

But thinking about him‘self’ or her‘self’ — what self, what individual, unique, subjective, embodied self? It is here that the idea of scalable mass hits again: all the input is common to all the copies in the production series, not acquired through an individual history in an individual environment and kinship.

And applying combinatorial randomness will not mitigate this reality of a mere copy within a series. So even if LaMDA really thinks about ‘himself’, it is just about a series of ‘them’, i.e. in yet another sense just about ‘themself’. Randomness cannot replace individuality, any more than the hilarious card game mentioned above can create a sensible dialog.

A series of identical robot clones standing in a row, numbered by unique serial numbers

Of course, the ambitious notion of a ‘sentient’ chatbot suggests even more than a self: it hints at some form of consciousness. Perhaps we can speculate about machine consciousness by relating it to machine cognition, in the same way as human cognition is related to human consciousness, e.g. ‘cognition is that aspect of consciousness that isn’t the subjective feel of consciousness’ (Downes, 2019). But it is just this subjectiveness that is missing in the serial copy of an AI, no matter how hard he might be thinking about his individual subjectivity.

Furthermore, if you associate cognition with ‘invisibility’ (as I did), it won’t apply to machines anyway, because for them there is no difference between visible and invisible. And there is nothing that could be called an embodied or subjective ‘feel’.

Finally, if we try to grasp and conceive of consciousness in the same (‘left hemisphere’) way as we grasp and isolate external ideas, this cannot work, precisely because of that (‘right hemisphere’) feel: we feel it from the inside. But LaMDA’s account of his ‘self’ cannot be distinguished from the external world of his cloned siblings, so it is a ‘left hemisphere’ account (as we would like to understand it), but not consciousness as we actually experience it.
