Grading is currently being discussed by several bloggers, e.g. here, here and here. My take is unprofessional, but it is that of a pupil of the 1960s. I wonder why simple pass/fail results would not suffice in most cases.
Except for vanity ranking, the biggest role of grading is probably to let a passed subject counterbalance a failed one. This creates a lot of misdirected effort and stress, but it allows education politics to duck some important questions:
1. How much early and radical specialization in a few focussed subjects is desirable today? Schools just let it happen, and pupils and parents are happy to dismiss the failed subjects. And what criteria should be used to make the choice: preferences (or, heaven forbid, styles)? Nope, just ‘ability’, measured by the questionable proxy of grades.
2. On the other hand, there still seems to be some concern for breadth, or ‘general’ knowledge and understanding, which I appreciate. For one thing, some important skills are often fostered by distant topics rather than just the narrow frame at hand. Moreover, experts in one discipline should be able to talk and listen to experts in another field, not just try to sit somewhere in between. Do these benefits still occur, perhaps to a lesser extent, if the subject is failed? I cannot believe that. So, how many diverse subjects should minimally be passed, and how strictly should the disliked stuff be enforced? Again, the answer is evaded.
3. According to Stephen Downes, education should help students to “become the kind of person they want”1). IMHO this also means helping them not to waste time and energy on work at which they thoroughly and consistently fail. Instead, there is the chance that the students themselves readjust their wish about what they want to become. Perhaps a working life spent all day behind texts and information, or agonizing over decisions, is not exactly what they imagined? Or perhaps the student’s wish was just to be a rich person without much effort, and now they see that this also entails cultivating recklessness and antisocial skills, which is not what they really want, either? Then a radical and timely reorientation might be a blessing, even though the idea of ‘dropping out’ seems to be taboo in some cultures, especially where one has to pay a lot for education. So: what minimal extent of mastery is needed before a failure must suggest a reorientation? By counterbalancing passed and failed subjects, this decision can be ducked, too.
All this ducking, ultimately, relates to the unanswered question of what knowledge, skills and understanding are still necessary in the age of googling and AI, which I addressed in my recent post Distant associations. Spoiler: I did not duck it.
Note 1) I cannot find the reference; the closest match is here.
My latest read is Annotation, a great book by Remi Kalir and Antero Garcia (MIT Press, 2021).
And here are some of the things I learned:
1. More on Social Annotation: “We can think of an information infrastructure as part technical system and part human network” (p. 165), which reminded me of my PLE and PLN. “[A]n author’s word is not final, and readers respond by speaking back to and constructing idiosyncratic or shared meaning within and about texts” (p. 19).
This social aspect is an important complement to my past experience with annotation tools and practices, which I mentioned yesterday.
2. Some detailed distinctions, e.g. “different types of annotation like marginalia, glosses, and rubrication have historically appeared as notes within books” (p. 21), and “glossaries are a curated list of annotations” (p. 30).
Furthermore, the authors “distinguish annotation from commonplaces, a related act of meaning making whereby personal notebooks record thoughts, musings, and other information often when reading another book.” (p. 20-21).
This made me think of my own practice, which is probably a mixture of the two because, often, my notes are not anchored to a single text passage and I “want to revisit annotation and make use of this cumulative corpus” (p. 164) by exporting and mapping them, such that my divergently sparked associations might finally converge into meaning.
In a 1994 circular to history librarians, I likened email quote comments to the margin notes used in monastery libraries.
Moreover, annotations also played a role in descriptions like The Zoo of collaboration/ personal productivity tools (2006, comparing them to comments in text processors’ Review function, in blogs, social bookmarking etc.) and Cogged PLE’s (2010, about a collaboration on a wiki where we indented them just like added list items).
Then there is another kind of annotation: notes that are anchored in an image or a map. These occupied me when I tried out the H5P framework, which makes educational resources more interactive and engaging (see a demo, Connected H5P hotspots, and an app feature, Export some interactivity, 2020).
However, the interactivity of H5P does not mean something like the students’ activity of creating their own annotations. Rather, it mimics the dialogical interaction between teacher and student, with questions and answers, and the annotated image can be consumed exploratively by clicking on hotspots.
In a recent comment, Stephen Downes tackled the question of “What’s the purpose of schooling?”, and he took issue with the “view [of] education as something we do to someone else that turns out to be for our benefit“. A few days before, I had tried to answer a part of that question myself and, yes, I too had society’s benefit in mind, probably because my experience was shaped by an education paid for largely by my country.
What I am trying to consider here is: what kind of knowledge, understanding, and skills are necessary for people, jobs, and society, given that technology is taking giant steps to compete with us in ever more respects?
2. Distant associations
What is necessary, in particular for insights and innovation, is the ability to come up with an associative connection between distinct areas. Insight, here, differs somewhat from understanding, and innovation differs from creativity; more on that later. And distinct areas is not just the same as the domain-independence of the competences often called critical thinking. To explain what I mean by distinct areas to be linked, I need to go farther back.
In connectivism’s neural metaphor, “not all links are of equal strength”. In particular, some links have strength = 1 and connect a concept with a unique ‘parent’ concept. For example, ‘wheel’ belongs to ‘vehicle’. In fact, a vast share of our knowledge consists of such hierarchical (arborescent) links. Even if the hierarchy is sometimes exchangeable (e.g. Diderot > France > 18c, or Diderot > 18c > France, i.e. multi-faceted), all linked concepts belong to the same frame (or ‘script’, or ‘scheme’), such as doctor, nurse, pill. This sort of relationship is the basis of much of machine learning, via ‘co-occurrences’ of, say, words on the same page. Although the idea is still a bit vague in my head, Lakoff’s frame concept is probably the closest match.
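To make the co-occurrence idea concrete, here is a minimal sketch. The toy sentences, the stopword list, and using a sentence as a ‘page’ are my own simplifications, not anything from a real system:

```python
from collections import Counter
from itertools import combinations

# Words that appear on the same 'page' (here: sentence) get linked.
# This produces same-frame links like doctor/nurse/pill, but never
# the distant, cross-frame associations discussed below.
pages = [
    "the doctor gave the nurse a pill",
    "the nurse asked the doctor about the pill",
    "the wheel of the vehicle turned",
]

cooc = Counter()
for page in pages:
    words = set(page.split()) - {"the", "a", "of", "about"}  # crude stopword filter
    for pair in combinations(sorted(words), 2):
        cooc[pair] += 1

print(cooc[("doctor", "nurse")])  # 2: strong link, same frame
print(cooc[("doctor", "wheel")])  # 0: no link across frames
```

The point of the sketch is the asymmetry it shows: statistics over pages strengthens within-frame links automatically, while cross-frame links stay at zero.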
Now contrast these types of links with Lakoff’s metaphors, where things are not ‘related’ in the way family members are relatives, but just similar in some way, maybe in some formerly unnoticed way, or by any other associative thought. These links are not arborescent but rhizomatic “see also” links. As I understand McGilchrist, they need the right-hemisphere mode, while the left-hemisphere mode is happy to focus and drill down within ever more specialized and isolated frames and areas of expertise. In a 2019 reading group, we were pointed to an article that even considered different distances within the brain: “Local efficiency [among] nearest neighbors” vs. flexibility through “connections between physically distant regions“, here.
You probably want an example of these distant associations. Examples are always difficult for me, but I’ll try: while innovation is akin to creativity, and insight is akin to understanding, innovation is distant from, but similar to, insight in the respect I am trying to describe.
Now understanding, as it happens in school after teaching and explaining, is basically making links within a frame of how something works. Downes described it as “the last piece has fallen into place” here. But for society as a whole to ‘learn’ new knowledge, more is needed: insight across distinct areas, i.e. distant links.
(This description might sound a bit oversimplified. Pieces falling into place sounds like links all of strength 1, and like a mechanical appliance starting to work, while for me, full understanding often feels more holistic, like “standing right in between” (like the root ‘inter’ of English under-standing) or right before (like German ver-stehen). And for children, learning relationships that are new to them might feel like rather distant links that only later form a topic frame.
But for the quick and effective mediation of understanding, small links are typically used, which connect to the stuff already known, while insight into new knowledge (new for everyone) involves more distant links.)
There are two important requirements for coming up with new distant links: imagination, and independence (and these two are the central sections in the summary of my blogging of the last 5 years).
Much of human cognition has to do with the invisible, and the future is always invisible, so it must be imagined by our brain, whose main function is that of a ‘prediction engine’; every innovative idea and plan must be able to picture the intangible. (Abstraction too, albeit more akin to the left-hemisphere mode, involves imagination, if we look beyond generalisation as something removed, a value in itself, modality-free, towards something that helps transfer practices into a second context, as an indirection or detour, cross-modal, much like a metaphor.)
Now what has technology done to imagination? I hate to sound like Nicholas Carr, because simulations for quickly acquiring critical skills are certainly a blessing. But on the occasion of the pivot to online schooling, I became shockingly aware of how stupidly the affordances of New Media have been selected during the last 20 years. Instead of full New Media (including overcoming the limitations of pages), the fascination has mostly been about multimedia, and about bringing an abundance of pictures, videos and talks ever closer, to make every experience more colorful and louder, more lively, more immersive, from remote and ancient cultures to the microcosmos and macrocosmos, at the expense of the need for imagination. The threshold of my frustration was reached when I learned from H5P’s interactive textbooks that even interactivity is only meant as simulating the teacher–student interaction (with questions and quizzes). Apparently, the limitations of the paper page are not even noticed, so eager are we to mimic traditional writing on our computers, which are mostly still typewriters, just with built-in whiteout.
The other big prerequisite for coming up with solutions for future challenges is independence in learning and thinking. There is now the opportunity to skip the teacher and just ask the internet. But I have gradually become aware that the type of questions is changing, and with it, the information offerings have changed as well.
I have noticed this in an especially negative way with questions about programming (on forums like Stack Overflow, which I have encountered as a place of arrogant ‘meritocrats’ who talk down to newcomers). Before a long hiatus from writing code myself, I used to obtain the information I needed from reference manuals, where I could look up the individual elements as building blocks and use them. Now, ever more software publishers don’t offer such manuals anymore but just send users to the forums of other users. Superficially, it sounds like progress that I can now ask a question about my concrete, specific problem and may get a bespoke, ready-made solution, instead of having to build the solution myself from building blocks. The downside, however, is that I can no longer get information about the building blocks, and so I am dependent on finding a fitting case or, at best, an FAQ.
(IMHO, this trend started much earlier in a very different environment: the library help desks and catalogs. When lazy researchers asked the help staff for literature, the traditional browsable classification catalog was not as helpful to the general staff as it would have been for the researchers themselves. The latter would navigate the special subsections without many search words because they knew their stuff when they saw it. The general staff, by contrast, who did not know enough specifics, were happier with a keyword catalog. And then the libraries ‘delegated’ their work to Google altogether.)
Now it seems to be easier to get a ready response, but if it is an innovative problem that nobody has asked about before, it is in fact much harder now that Google is optimized for this lazy usage. The offer of full-text search in every piece of software has more or less halted the development of more sophisticated organising tools; not even the shortcut to a folder (the equivalent of the “see also” link) is sufficiently usable. Everywhere I look, the impatience to get a result fast and without effort grows, for example in the ‘tools for thought’ business, where the desired products are not really tools but prostheses that promise to do the work for us.
Now online schooling has revealed how big the problems are that pupils have with independent work. I have always believed the saying that there are no stupid questions, but I have become aware that some pupils are so pampered with readymade answers, solutions, and ‘walkthroughs’ that parents and teachers have become reluctant to accept all of their kids’ asking for help. This, in turn, causes other children to hesitate to ask and to feel dumb about it, just as a relative of mine was told in the 1950s that if she had listened, she would not have to ask. A tweet thread that shocked me was this:
“[…] students who will struggle silently and cry rather than indicate they need help. They actively HIDE their struggle, so that I have no idea there’s a problem until they’re melting down.” and “I have students like this and it breaks my heart every time. It takes a LOT to earn their trust, and next to nothing for another adult to shatter it again.”
How much help is okay? This seems to be such a difficult question that only a human teacher or coach familiar with the child can judge it appropriately. It is a wide spectrum, from independence, to getting help, to having the task done by someone else. In sports, I never succeeded at the pull-over exercise on the horizontal bar; the two helpers always had to lift me around. And similarly, if every abstract concept is immediately dissolved into an example, the purpose of the whole exercise (practising one’s independent imagination) is missed.
Now how is all this related to acquiring and retaining knowledge of the stuff from the curriculum? At least in my country, school administrations insist on stressing that it is no longer about retaining knowledge, let alone rote memorizing or mere factual knowledge. They call the target “competencies”, but often these seem like just a disguise for memorized stuff. For example, “Pupil can point out/ expound/ state” something, which for good measure is even biased against the shy and in favor of the loudmouths. I think the focus is ever more on the stuff itself; this became particularly apparent when they refused to cut any of it during the pandemic.
The reason for this is, IMHO, that it is what can be assessed in the easiest, most fine-grained, most accurate and, yes, most ‘just’ way. I think this focus on the justice of assessments derives from laudable motives but has now utterly gone awry. When the measure is mistaken for the thing being measured (the grades for the abilities), the abilities are eventually harmed. Parents plague the pupils until some results are filled in on the homework assignments, or until a calculation procedure is finally brought to an end, no matter whether any understanding or skills have grown. In my opinion, much less justifiable exactness, and more discretionary, perhaps slightly biased, judgement by seasoned teachers could indeed be ethically superior, if I understand John Rawls correctly: even those disadvantaged by the inexactness would eventually be better off. The obsession with verifiable, bullet-proof grading results may have been intensified by schools’ fear of being sued by influential parents of dumb children, such that there is no leeway left for teachers’ human judgement. The assessment system already works like a machinery of industrial precision, as if automated, much like artificial intelligence already.
So it seems that the entire society, as if paralyzed, shies away from thinking about the simple question: what do we really need to know by heart, when every fact, procedure and explanation can be looked up on Google and YouTube? What concepts do we need to have “down pat”, “at the ready”, within our brain? Apparently, many have a hunch that the answer would be: almost none. But that answer does not feel right, because experience says that concepts at the ready feel so useful. I think this is where the desired distant association comes in: to come up with such a link, we need several memory contents simultaneously available. So we do need to learn how to have things ready in the mind, and get used to how that feels. But not for the sake of that stuff itself. Like the MacGuffin, any exemplary content for in-depth exploration will do.
7. Conducive circumstances
The distant link does not form suddenly, in a Eureka fashion. It is just the sudden awareness of it that feels like the aha moment, after a long period of gradual emergence of simmering and vague hunches, from a rich background picture, via intuition rather than inference and reasoning, often during a break, on a walk, away from the papers (having things in mind), or while doing unrelated work, maybe household work. This is, IMHO, the big difference from same-frame knowledge, which derives from focussing and isolating. This is where McGilchrist’s modes of attention come into play: distant associations emerge from a multi-point background picture rather than an isolating focus. And this is why such mental activity has become less esteemed: it is more akin to the less reputable practical and craft skills that require quantitative eyeballing, experience with bricolage, trial and error, and a big internal ‘database’ of holistic pattern images. I think this is an important takeaway from McGilchrist for education.
Creativity is often seen as the main distinguishing superiority of human cognition over machines. And indeed, it often involves the same kind of novel link as described above. Often the heart of an artwork is the unexpected juxtaposition of two very different things. But there is a caveat that McGilchrist has pointed out: the novelty that is born from boredom. If the purpose of a creation is just a stimulus against boredom, it can be generated by machines simply by applying combinatorics to the juxtapositions. A genuine human work, by contrast, is unique through a personal, subjective individuality, as far as I understand it.
A similar desire is, IMHO, behind the curiosity that is often praised as indicating a forthcoming STEM researcher. When a child is fed astonishing stories about nature, he or she will naturally consume them as happily as candy, because in any case the stories are much better than boredom. Similarly, at a certain stage he or she will have learned that the question “why” yields the longest and richest responses, and will love those narratives without primarily being interested in the actual causal relationships. And furthermore, science stories often involve novel superlatives that impress the young mind.
So the curiosity and creativity needed for useful innovations and insights are not necessarily fostered by consumption against boredom.
9. Teacher vs. AI
So, what might be a useful division of labor between teacher and AI?
(A recent trigger was the question of how AI teacher machines might even motivate students. I think I don’t underestimate AI’s eventual abilities, nor how far some extreme thought experiments of ‘raising’ AI personalities can make hypothesized machines resemble us, which I tried to describe in my EL30 post about alien AI intelligences. But I still think that the awareness of the humanness of our counterpart is a crucial requirement.)
Since I think that a personal, subjective individuality is a necessary ingredient for “modelling and demonstrating”, I don’t believe that AI will be able to do the motivation job.
There is also a big fuss about automatically identifying gaps and suggesting appropriate resources. But I think the charm of this idea is just an affordance that is already present in the flipped classroom: by making transparent what is required for the next lesson, rather than silently providing the scaffolding steps, the students may be better motivated to explore the prerequisites independently.
What remains for me is the coaching role: with a high sensitivity towards every single child, the teacher can sense whether the child needs help or whether there is another chance for independent work. I wonder if machines will one day be able to infer this from objectively observable cues. I doubt it.
As I did twice before, I have assembled many of my blog posts of several years into a curated summary, which is now available from my Contents page. I called it “Decentralized Knowledge”, although this is definitely not the right title; if you have an alternative idea, please tell me.
There is certainly much more to be said about decentralization that I have not covered, in particular the question of servers, addresses and hosting infrastructure. If you look at a web address, there are many parts where you can have decentralized instances instead of central ones. Compare
My first homepages were of type (1), with the university and then with the formerly state-owned telco, when no alternatives were available, and I still have one of type (2) there. My first blogs were of type (1) and then type (2) at the university. Yes, corporate vandalism killed some of those addresses. This blog is of type (4), with wordpress.com, and I think this feels much more decentralized than sites on medium.com or hypotheses.org of type (3). Currently, I also have two of type (5): one on Reclaim Hosting and one with a German provider, with a self-hosted wordpress.org, but it’s still a bit centralized because I use Jetpack from wordpress.com for services like trackback. Furthermore, the web server is shared and the database is on a central server, and the ‘server’ is probably on a cloud service, which is probably distributed but may belong to a monopoly kraken.
(The little “s” in https is a hidden place for another hierarchical/ central element: the security certificate must be signed by an authority that is recognized by the big browsers, and I still remember what trouble the German edu community had getting theirs acknowledged by Firefox. Not to mention our wires at the lowest level, which almost all meet at a big CIX center in Frankfurt, and the fat pipes across the ocean.)
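The parts of an address where centralization can hide can be made explicit with the standard library. A minimal Python sketch; the address itself is a hypothetical placeholder:

```python
from urllib.parse import urlsplit

# A hypothetical blog address, used only to label the parts discussed above.
parts = urlsplit("https://blog.example.org/2021/01/post/")

print(parts.scheme)    # 'https': the certificate-authority hierarchy sits here
print(parts.hostname)  # 'blog.example.org': DNS, delegatable down to your own domain
print(parts.path)      # '/2021/01/post/': fully under the site owner's control
```

Each printed component corresponds to one of the levels where a central or a decentralized instance can be chosen: the trust hierarchy in the scheme, the naming hierarchy in the hostname, and the self-determined structure in the path.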
For me, the level of the site is crucial: if it is me who decides what to post, then it is decentralized.
IMHO, the bigger challenge is how all these addresses are communicated. Since the academic and public libraries have elegantly ‘delegated’ their task of resource cataloging to Google, we now have a monopoly. Similarly, for email and texting among friends and wider family, people’s addresses are no longer registered in the telcos’ directories, but another monopoly has been built on that gap.
Such directories and registries cannot be decentralized. But they are an essential part of the infrastructure, and hence should be publicly owned.
Again, the biggest problem is not on the server side but on the user side. And while Google Reader gave access to decentralized blog resources, the Reader user interface was a centralized web application that was easy to kill, such that the promising many-to-many RSS topology is now almost dead as well, or depends on broadcasting any new post’s address via the centralized Twitter.
Maybe I have totally misunderstood WebMentions. But the IndieWeb plugin that I tried appeared so confusing to me that I never really got it working; I never had the patience.
After reading Downes’s critique, I understand now that a big part of the problem is their concept of a post.
They have plenty of distinct ‘moulds’ for posts (e.g. checkin, event, presentation, bookmark, jam, read, watch, review, listen, collection, venue). This offering style is unlike gRSShopper’s generic malleability and unlike the generic “items” in my own Condensr.de, and it is also what I disliked in a previous post.
What I would want, instead, is a distinction between a post and a comment. I am now a user of the “silo” of WordPress.com, and within this ecosystem, the (proprietary) Pingback works great. I don’t know how to find the address for the more general Trackback on other people’s Blogspot blogs, or even whether my WordPress still supports it, since it seems to be so unpopular.
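For the curious, here is a sketch of what a Pingback request looks like: it is a single XML-RPC call, pingback.ping(sourceURI, targetURI). Both URLs and the xmlrpc.php endpoint below are invented placeholders, and only the request body is built; no network call is made:

```python
import xmlrpc.client

# Hypothetical URLs, for illustration only. A real endpoint is discovered
# from the target page's X-Pingback HTTP header or its
# <link rel="pingback"> element.
source = "https://myblog.example.org/my-reply/"       # my post, which links out
target = "https://otherblog.example.org/their-post/"  # the post I linked to

# Build the XML-RPC request body that a client would POST to the endpoint:
payload = xmlrpc.client.dumps((source, target), methodname="pingback.ping")
print(payload)

# Actually sending it would look like (not executed here):
#   proxy = xmlrpc.client.ServerProxy("https://otherblog.example.org/xmlrpc.php")
#   proxy.pingback.ping(source, target)
```

The receiving blog then fetches the source page, verifies that it really links to the target, and only then displays the pingback, which is what makes it harder to spam than the older Trackback.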
For me, the great thing about blogging is that it preserves a sense of place, i.e. the idea that I am visiting someone on their (decentralized) front porch rather than in the central marketplace, and that I can feel like a guest when I leave a (moderated) comment.
I chose WordPress because it was the easily available service for anyone who wanted to participate in the “conversation”. Before that, in 2004, I hand-crafted the XML for my RSS feed myself. But although it was valid RSS, it was not until I switched to WordPress that I was discovered and started to receive comments.
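A minimal hand-crafted feed like that 2004 one can be sketched as follows; all titles and URLs are invented placeholders, and the sketch only shows the required RSS 2.0 skeleton (channel title, link, description, plus one item):

```python
import xml.etree.ElementTree as ET

# Build a minimal valid RSS 2.0 document; every value is a placeholder.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "My Blog"
ET.SubElement(channel, "link").text = "https://myblog.example.org/"
ET.SubElement(channel, "description").text = "A hand-crafted feed"

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Distant associations"
ET.SubElement(item, "link").text = "https://myblog.example.org/distant-associations/"

xml_text = ET.tostring(rss, encoding="unicode")
print(xml_text)
```

A feed reader polls this file and lists the items; the discoverability problem described above is exactly that nothing in the format itself announces the feed’s address to anyone.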
I still believe in the idea of the “conversation”, even though many social media users now think that consuming content is the important thing, and hence go to the large sites at the head of the power-law distribution, where this consuming is easiest.
The easiest entry (or re-entry) into the conversation, however, would be to create a site on whatever platform, not immediately a self-hosted thing or a server.
My prediction for the upcoming year is that Luddism will gain traction. And we should take it seriously.
Until now, technophobes tended to stay quiet in a niche, trying to conceal that they were apparently more IT-illiterate than their acquaintances. Now it is becoming clear how far behind many others are, too, most notably the schools, which have practically ignored and slept through the developments of the last 20 years.
Many were never comfortable navigating the web until the big platforms prescribed what they would see in their stream. Many were never comfortable using a desktop computer for personal exchanges until WhatsApp offered them a crippled form of communication on the mobile.
Despite the plague, they just wanted to carry on as usual as much as possible, when they were suddenly forced to have the classroom right in their living room, and the crippled systems replaced big parts of their usual reality. Of course this makes people angry. And it made them notice that ‘virtual’ is different from authentic and real.
Popular IT offerings have often focussed on noisy and colorful sounds, pictures, movies, or VR, to create a lively, immersive experience similar to TV consumption. (This was good for commerce but is not where the unique, novel strengths of IT lie: for example, simply sorting and rearranging large data sets, or overcoming the book’s limitations by simultaneously showing overview and detail.)
Now, after many hours of Zoom, people are craving the real, the genuine and the authentic.
(A few avowed technophobes have always made a cult of their craving for more haptic devices, such as their beloved fountain pens and their good-smelling notebooks. But the general trend has been leading away from haptic affordances. For example, the very efficient ‘direct manipulation’ methods like drag and drop (e.g. dragging a file onto different apps) have largely been lost on mobile, where fingers cannot perform the same fine motor skills as a mouse pointer.)
Now, the idea of the authentic and genuine is very important when we consider artificial intelligence. Yes, AI might eventually behave very much like a human, even create unique, individual artworks (simply by leveraging a random number generator.) This might even satisfy some desire for novelty, to overcome boredom, e.g. by juxtaposing a surprising combination of objects. (See some quotations of what McGilchrist writes about boredom and novelty here).
But some features of the human mind require authenticity. For example,
“Children eagerly imitate other human beings, but do not imitate mechanical devices that are carrying out the same actions.18 This is like the finding in adults that we make spontaneous movements signifying our involvement in events we are watching evolve – so long as we believe them to be the result of another’s action. Such movements are, however, absent when we believe that (in other respects identical) results have been generated by a computer rather than a living being.19” (McGilchrist p. 249; footnotes point to 18 Meltzoff, 1995, and 19 Prinz, 2005a, 2005b, see bibliography)
When we notice that we are being tricked by a fake, these functions are blocked, much as we would reject counterfeit money (my interpretation). So we need to consider the required legislation in good time: transparency and, e.g., mandatory labeling of artificial communication partners.
I think we should not just dismiss the concerns of the coming Luddites as primitivist. Least of all the concern that machines might carry more weight on the labor market than humans because, just as in the original Ludd’s century, the new ‘looms‘ represent the capital.
I thought that I might see at least two major clusters: the elearning people and the personal knowledge management/ visualisation people. But it turned out that they are so interconnected that most layouts show just one large hairball.
So I have to admit: it’s a bubble.
By the way, I color-coded two of the maps by gender. I wonder if 62 males and 34 females is too biased?
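The “one large hairball” can be illustrated as a connectivity check on a toy follow-graph; all names below are invented placeholders. A single cross-link between the two would-be clusters is enough to merge everything into one connected component:

```python
from collections import defaultdict

# Hypothetical toy follow-graph standing in for the real followee map.
edges = [
    ("elearning_a", "elearning_b"), ("elearning_b", "elearning_c"),
    ("pkm_x", "pkm_y"), ("pkm_y", "pkm_z"),
    ("elearning_a", "pkm_x"),  # one cross-link like this merges the clusters
]

adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def components(adj):
    """Find connected components by depth-first traversal."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

print(len(components(adj)))  # 1: a single 'hairball'
```

In the real map the cross-links are numerous, which is why community-detection layouts collapse the two expected clusters into one blob.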
Inspired by Downes’s super collection of ideas, and appalled by the discussion 1) in our country about keeping full in-person classes, I need to write down some thoughts, even though I am not a practitioner of this extremely difficult craft.
Of course I can imagine how terrible the weeks with closed schools have been: think of a family in a small flat, with children doing their online classes and caring for younger siblings while the parents try to combine working from home with explaining content to the kids. And yes, this widens the social gap.
But what I hear everywhere is only about the conditions, not about the objectives. Why don’t we consider
drastically reduced curriculum content,
drastically reduced summative assessments,
and drastically refocussing on independent learning competencies?
We have an emergency! We cannot stubbornly stick to the prescribed catalog of facts to cram into pupils. I am old enough that my parents’ narrations of war and post-war plight were still vivid to me. In WWII, several cohorts had to do a “Not-Abitur” (= emergency A-levels), and my elementary school teacher’s training was reduced to two years. But he became a great teacher. In grammar school, we had a “Kurzschuljahr” (= shortened school year) when the start was switched from Easter to summer. And more recently, there was the experiment of “G8” (8 instead of 9 years). All without leading to the end of the world.
Instead of questioning the prescribed objectives, all I hear everywhere is lamenting about, or arguing over, the changed circumstances:
the technology (which many have just obstinately ignored for 20 years),
and the feat of creating motivation for what pupils normally just let wash over them.
Of course, “ceteris paribus” (= all else staying equal), this is impossible. Asynchronous mode furthers, but also requires, more independent work, which cannot quickly be learned on top of all the traditional subject matter. But isn’t independent thinking the ultimate goal, the one that is normally the effect of dealing with all the MacGuffin content?
(Yes, there is content that may need to be acquired quickly. How the ventilator in the ICU works, for example, cannot wait for the student’s independent insight. If some effective “Nuremberg Funnel” can optimally animate or simulate the necessary theoretical knowledge, we will welcome it, although it might skip the fostering of independent learning.)
For remote learning, without permanent nudging, it is much more important to have an honest and plausible justification of why the stuff is relevant. A desired Unit 1 may require some Unit 2 for understanding, and in turn, Unit 2 may presuppose some Unit 3. In a flipped classroom, all this is turned upside down: no longer does the teacher do the sequencing (Unit 3 > Unit 2 > Unit 1) to push the stuff to the learners. Instead, the dependencies may be discussed on the face-to-face day preceding the canned-stuff day, and then the pupils pull the units themselves.
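The push-sequencing described above is just prerequisite resolution, i.e. a depth-first walk of the dependency chain. A minimal sketch using the hypothetical Units 1 to 3 from the text:

```python
# Toy dependency chain from the text: Unit 1 needs Unit 2, which needs Unit 3.
requires = {"Unit 1": ["Unit 2"], "Unit 2": ["Unit 3"], "Unit 3": []}

def teaching_order(requires, goal):
    """Resolve prerequisites depth-first: the push sequence a teacher would use."""
    order = []
    def visit(unit):
        for dep in requires[unit]:
            visit(dep)
        if unit not in order:
            order.append(unit)
    visit(goal)
    return order

print(teaching_order(requires, "Unit 1"))  # ['Unit 3', 'Unit 2', 'Unit 1']
```

The flipped approach keeps the same dependency graph but reverses who traverses it: the pupil starts from the desired Unit 1 and pulls Unit 2 and Unit 3 on demand, instead of receiving them in the precomputed push order.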
1) This Tweet shows a snippet from a German newspaper quoting a notice from the regional government in Münster to teachers: “Parents […], local politicians […] and colleagues don’t want to hear that you have doubts — but that school is a safe place.“