2022-10-14 Patreon letter - Lessons from summer 2022’s mnemonic medium prototype

Private copy; not to be shared publicly; part of Patron letters on memory system experiments

Over the past few weeks, I’ve run twenty live observation sessions for my most recent mnemonic medium prototype, and I’ve read through a stack of diaries from asynchronous testers. I’d like to share some of what I’ve learned, and where I might go next.

For better or worse (and we’ll see a bit of both), I had trouble with tester screening during this iteration. I suspect a lot of people were just eager to see what I was working on, so they ignored my instructions about tester requirements. I wasted too much time talking to people who didn’t have a good reason to carefully read the test material, or who had never heard of spaced repetition systems (SRS). Of course, the system will eventually need to explain itself to people who aren’t familiar with SRS, but I wasn’t trying to test that this time—more on that later.

At the highest level, when testers fit my intended eligibility criteria (the tester authentically wanted to internalize the material, and they had prior SRS familiarity), the system worked much as I hoped. In other cases, it rarely did. By “worked”, let’s say: appeared to deliver substantial net benefit to the reader; the reader was vocally appreciative of that help and acted on the interface accordingly; the reader said they wanted to review their saved prompts subsequently; the reader (usually) expressed that they wanted this sort of interaction in a wider range of texts.

That said, “did it work?” blurs away most of the interesting insight. Let’s dig in.

What have I learned about the potential scope of the mnemonic medium?

One of the key questions I’m trying to answer is: how broadly might the ideas of the mnemonic medium apply? Someday, might we use affordances like these in every informational text? What does the medium want to become, to reach its full potential? The medium seems to work well in Quantum Country… but should we conclude that its range is limited to technical primers? I tested this latest prototype with two books: Introduction to Modern Statistics (IMS)—a formal, technical textbook; and Shape Up—an informal, non-technical book on product management.

At first glance, IMS seems quite similar to Quantum Country. But Quantum Country’s mechanics wouldn’t have worked well in this book. Quantum Country is a focused primer, meant for an audience ready to put themselves completely in the author’s hands. Large textbooks like IMS aren’t meant to be read linearly or uniformly. All of my test readers had studied statistics (albeit sometimes long ago); many wanted to pick and choose just a few prompts about new material. Still others wanted to review everything, but without actually saving prompts about familiar material. Neither of those workflows would have been possible with Quantum Country’s design. Last year, when I tried to extend that design to texts like IMS, I saw users routinely frustrated by the system’s authoritarian rigidity. But in my most recent tests, I observed non-linear workflows working happily, with little friction. I’m now fairly confident that the mnemonic medium can be effectively applied to a broad array of technical textbooks—not just linear primers. Because IMS is technical, I can’t yet say anything about the medium’s performance in non-technical textbooks.

Shape Up is instructional, but it’s informal and non-technical. Success with this text would expand the medium’s domain several notches. Here, results were more mixed. Readers in my target pool (material relevant, SRS familiar) did appreciate the medium’s help, but my qualitative sense was that they were getting noticeably less benefit than IMS readers. It seemed more like a nice-to-have than a transformative augmentation.

In hindsight, it helps to forget spaced repetition for a moment and ask: what would be the ideal personal high-growth environment for Shape Up? What really matters here—as several readers pointed out—is that you meaningfully change your product creation practices. This means you probably want plenty of hands-on exercises and activities, personal mentorship, and perhaps ongoing reflection/application prompts. You might want salience prompts, to help you connect the book’s ideas to events in the moment. Traditional retrieval practice would be quite helpful, too. I found myself forgetting the specifics of the book’s methods before I had a chance to apply them. But it seems clearly less critical than those other modalities. You don’t need cybernetic help with absorbing earlier chapters in order to understand later ones.

In a funny sense, Shape Up is a lot like a self-help book. In fact, many informal, non-technical instructional books are like self-help books. In genres like psychology, philosophy, or business, popular books are often really about changing your life in some way. And so augmentation should be about that too. As a reader with an established SRS practice, I do want the mnemonic medium for books like this. Retrieval practice really does help me bring these books’ ideas into my life. But as a researcher, my instinct is that this genre isn’t a strategic next target for the mnemonic medium. If I want to expand the scope of the mnemonic medium into less technical texts, I should try adapting softer science books, in fields like psychology or political science. If I want to aim for less formal texts, I should try adapting informal “explainers” about technical topics. And if I want to augment self-help-ish texts like Shape Up, I should focus on other support mechanisms, like timeful texts—ones meant to help people reshape their lives around the text’s ideas.

The impact of margin prompts on the reading experience

This latest prototype moved authors’ prompts into the page’s margin. That totally transformed the reading experience. Readers consistently noted that the margin prompts signal particularly important passages and hint at what to focus on. Readers felt that the prompts made them slow down and pay more attention. Most—but not all—readers welcomed that influence.

I’ll share a more concrete story, one that several readers described. Upon first read, a passage didn’t seem to contain anything important. Then they noticed a prompt marker in the margin and felt unsure: wait, was there a key detail here after all? In response, they re-read the passage more closely, guided by the prompt’s focus. In each instance, the reader admitted that they’d missed the point highlighted by the prompt. They appreciated having that corrected.

We heard these same sorts of observations from Quantum Country readers. The embedded review sessions made people re-read certain sections, or read more carefully. But those sentiments were stronger and more frequent in this prototype, where margin prompts make their presence felt continuously.

Prompts as summaries
In Shape Up, when the reader’s screen is large enough, prompts’ “front” sides are always visible in the margins. But there’s not always room for that. On smaller screens, and in IMS, prompts are “collapsed”. That is, readers see only a symbol indicating the presence of a prompt. When they mouse over the symbol, the prompt text is displayed. This distinction made a big difference in the reading experience!

When prompt text was always visible, many readers used the prompts as lightweight summaries of the adjacent passages, reading prompts before reading their associated passages. These readers often used the prompt text to decide whether to read the associated passage. What I’d intended was the inverse: people would read the main text, and when something particularly struck their interest, they could scan horizontally into the margin to read and perhaps save the adjacent prompt.

I’m worried about this summary-oriented behavior. Prompt-first reading will often omit meaningful details: the full text contains narrative material which provides necessary context for later sections. More broadly, mnemonic medium prompts aren’t exactly summaries. And they aren’t meant to work in isolation. Prompts contain the information which should be reinforced through retrieval practice, but they lean on structure and detail in the associated narrative. In fact, that connection is part of how the mnemonic medium aims to solve a central problem: outside of rote material, studying other people’s SRS “decks” usually doesn’t work very well! Such prompts usually feel atomized, disconnected from real understanding. By contrast—at least aspirationally—when you recall the information on a mnemonic medium prompt, you’re involved with more than just its raw text. The review resurfaces the much richer narrative context where you found that prompt.

Some of these summary-oriented readers didn’t really care about the retrieval practice mechanics at all. They really just wanted paragraph- and section-level summaries of the text. That does seem like an interesting reading affordance to explore, but if I were trying to solve that design problem, I don’t think my solution would double as spaced repetition prompts. Summaries and prompts are related—but distinct—mediums. A related observation: some readers who were struggling with a passage mentioned that a prompt helped them by offering an alternate wording. This likewise strikes me as a nice second-order effect, but prompts are not the right tool for that job, either.

Completionist prompt-reading
Even when readers weren’t using the prompts as summaries, most people with large screens read every prompt in Shape Up, where prompts’ “front” sides were persistently displayed in the margins. I’m sure this was in part due to the novelty of the prototype. People were curious. But after half an hour, this behavior struck me as a touch compulsive—like they felt an obligation; like they weren’t “reading correctly” if they didn’t read the prompt text. Completionist prompt-reading worries me. It seems like a substantial disruption to the reading experience, like reading a heavily footnoted text. Your eyes scan erratically over the page; your attention jumps in and out of the narrative. Maybe the prompts helpfully guide your reading, but does that make up for the distraction? And is helpful guidance truly the reason why you let your eyes dart back and forth—or is the behavior more compulsive, a dutiful completionism?

Another surprising effect of engaging with prompts while reading: that behavior sometimes amounts to implicit, on-the-spot retrieval practice! That is, when people read the prompt text in the margin, challenge themselves to produce a response, then mouse into the prompt to read the author’s response, they’re doing the same thing they’d do in the “real” review session—they’re just not “grading” themselves. In fact, this may be a more natural way to review while reading than the “traditional” mnemonic medium embedded review box, which can feel like an obtrusive interruption. The catch is that these on-the-spot reviews are likely much less effective. They’re too soon. You just read the sentence containing that idea, so it’s (usually) easy to supply the response—probably from short-term memory. The spacing effect literature suggests that this immediate review probably won’t result in much memory consolidation. Better to wait a few minutes; or, probably, a few hours. Also, at least in this prototype, readers don’t inform the system whether their retrieval was successful or not. That means we can’t set the initial review interval appropriately. But maybe none of that matters. I believe the most important thing to get right in spaced repetition memory systems is the emotional experience. They’re plenty efficient, even with naive scheduling; the problem is that people don’t like to use them. Maybe it’s fine to accept less efficiency here if these on-the-spot reviews feel much more natural than the relatively obtrusive review boxes.

When prompts were “collapsed” (in IMS, and in Shape Up on smaller screens), the behaviors I’ve described shifted dramatically. A small handful of readers moused over each prompt marker to read its contents. But most readers only interacted with the prompts in response to some impulse, like when they found a passage particularly difficult or interesting. As a designer, I flinch at the notion of imposing extra interaction costs… but reading behaviors in the “collapsed” mode do strike me as healthier. Or at least closer to what I’d intended.

What to do about all this? My instinct is to make the “collapsed” behavior a user-controlled setting, and to have it default to “collapsed”. As I apply the medium to more texts and run more user observations, I’ll randomize that setting in each session and continue to watch how people behave.

Is “saving a prompt” the right primitive verb?

Prompt saving as all-or-nothing gesture
In this new design, saving a prompt is a lightweight gesture which expresses your interest in a passage. “I care about this detail enough that I want to practice recalling it, so that I internalize it deeply and reliably.” It’s nice that the new design makes that gesture so fluid, spontaneous, and situated. But no matter how light the interaction, that’s a pretty intense desire to express. You’re signing up to expose yourself to repeated testing on that detail. Sure, if we do a good job with triage tools in the review interface, you can incrementally ditch prompts which bore you later. And sure, “enlightened” SRS users know that prompts are dirt-cheap—maybe thirty seconds in the first year and half that thereafter—so you’re not committing yourself to much. But that’s often not how it feels in the moment, when you’re making the decision to save a prompt floating in the margin.

Test readers constantly found themselves wanting to gesture at important passages—some way to emphasize, to express “this is important!”. But often their impulse was mismatched with the notion of “saving a prompt.” They had an instinctive emotional response, and they were looking for an expressive outlet. They weren’t necessarily looking to sign themselves up for future retrieval practice. My prototype forced their impulse into one of two distant choices: save (or create) a spaced repetition prompt about that passage, or else leave the text totally untouched.

I saw too many impulses fall into the chasm between those choices. I’d see readers hesitate, perhaps select a phrase… then ask for a highlighter, or to be able to “bookmark” a passage, or to extract it “for safe keeping” into some notes system. On a few occasions, readers created a dummy prompt “as a placeholder” to mark a passage of interest. Not good, but who could blame them? It’s all they could do!

This is a problem with my prototype, but it’s also a problem with digital reading in general. It’s like reading a book behind glass! The web is particularly bad. Even when you’re using a browser extension with the usual support for highlights and notes, it’s like reading a book with your hands wrapped in enormous mittens. You can’t really scribble in the margins: your notes hide behind some icon or in some non-spatialized sidebar. Your highlighter’s expressive range on an EPUB or a web page is: you can make little yellow rectangles, sometimes, where there’s text. Arbitrary markup? “Gosh, what would happen on reflow?!” Quiet, engineer—make it work. (Apple Pages sort of did! For PDFs, see LiquidText; for the web, see academic systems iAnnotate and SpaceInk.) Few digital readers have anything resembling the expressive range they’d get with a real paper book and such exotic tools as a pen, a highlighter, sticky notes, and a legal pad. Yes, hypertext is nice; search is nice; copying and pasting excerpts is nice. But on a computer, I still feel like I’m reading through a thick pane of glass. I can insert myself into a text only by filing a form in triplicate. Then, maybe, a yellow rectangle will show up.

I suspect this rant touches the heart of what many testers seemed to be feeling. I gave them a glimpse of something they didn’t realize they’d wanted: a spatialized tool for interacting with a web book, a way to “scribble in the margins”. Then it turned out to be yet another formal tool—literally, in this case, another form to be filed! Oops.

Incrementalism
Expressivity aside, there’s another good reason to smooth out this prototype’s all-or-nothing interaction: incrementalism. In some cases, the trouble was that the reader simply wasn’t yet familiar with spaced repetition, or that they had deluded beliefs about the cognitive value of highlighting. More often, the reader just wasn’t yet ready to commit to anything more than a simple highlight. They were interested—a bit. But they weren’t sure how much yet. Often you can’t clearly appraise what matters to you until you’ve read a whole section. Saving an author-provided prompt may be a bigger gesture than you wanted to make. The situation’s worse if no prompt is provided: for most readers, writing a new prompt requires enormous investment.

A highlight can be a provisional first step of a longer sequence. Some readers wanted to make a quick first pass with a highlighter; then, after they’d built a holistic grasp of the section, they’d re-read the most important areas more carefully. Others wanted to make a quick first pass, then to let the material marinate. If they found themselves thinking about it in the coming days, they’d use those highlights to guide further work with the text.

What if Orbit could facilitate this incremental approach? Here’s a quick sketch of how that might work. You make a first pass on an article, highlighting sections which strike you, perhaps jotting a few short notes. Then a week later, you’re presented with your excerpts and notes, maybe using an approach like LiquidText’s to show that material fluidly in the context of the full text. If you feel a renewed surge of interest in any of those markings, you could put in some more work to turn them into prompts, or more generally give the text more attention. Taylor Rogalski referred to this approach as “inverted Orbit”: you’re starting with a distant relationship, then bringing the ideas that matter into tighter orbits.

I’m instinctively excited about an incremental workflow, but prior attempts here make me wary. Readwise implements a similar model: highlight, then (in response to periodic emails) revise the most interesting material into prompts or deeper notes. I’ve talked to many Readwise users, but I’ve met none who make much use of its incremental elaboration tools. I’d need to understand that better before pursuing this idea further. Speaking for myself, I notice that once days have gone by, I’ve often lost my emotional connection to the text. A few highlights are rarely enough to rekindle that interest. Sometimes I’ll mark up a physical book, intending to write prompts about it later. But next week, when I see the book on my desk, it feels like I’ve created homework for myself. By contrast, during the reading experience, the narrative creates a strong emotional connection to the text. That’s often enough to make me enthusiastic about writing prompts while I read. The next day, prompt-writing feels like a chore. I usually have to re-read the text for a while to get myself interested again. If author-provided prompts were already available for the associated passages, this emotional distance might not matter. I might just need to click a button to accept the author’s prompt. But I worry that the emotional issues broadly remain a barrier here.

Another important prior work comes from Piotr Wozniak, one of the contemporary originators of spaced repetition. His SuperMemo system includes “incremental reading”, which aims to fill a similar need. This design has a small but enthusiastic community of users. For me, at least, it’s not quite the right set of primitives. In SuperMemo’s incremental reading workflow, the main actions you take are to compress and excerpt. You start by reducing a full article to a few short excerpts which deserve more attention. In a later session, you’ll see only those excerpts—each in isolation, out of context—and you might edit them into summaries focused on the elements you find most salient. Then in yet another session, you might transform those summaries into spaced repetition prompts. Typically these are cloze deletions, which are easy to make, but which rarely seem to work well. I like the incrementalism; I like Piotr’s exhortation to stop reading a passage as soon as you feel bored or unfocused. But I don’t like that the primary verbs are all about decontextualization. In SuperMemo’s conceptual framework, texts are wordy baskets of raw information, waiting to be strip-mined for Platonic nuggets you can “keep”. But for me, texts are narrative, texts are prose, texts are structure, texts are voice. Prompts are helpful cues, but they’re subordinate to the richer original material. I don’t want to whittle down the source text; I want to layer lenses on top of it.

Act on ideas, rather than acting on prompts?
My instinct is that focusing on incrementalism isn’t quite enough to solve the emotional problems I observed during testing, but we might be able to make some progress by aligning the core verb more closely with readers’ expressive intent. Here’s one approach I’d like to explore.

My latest prototype’s intended workflow is: read the main text until you hit an idea that feels important; look in the margin for an associated prompt; read and evaluate it; save it, if it captures the idea as you hoped. What if the primary workflow involved acting on ideas, rather than acting on prompts? Here’s how that might work: read the main text until you hit an idea that feels important; select some relevant text; click “Save”. That’s it! The text you selected is visually highlighted; it’s saved to some personal library of excerpts. Probably you can jot a quick associated note too.

So far, I’ve just described a typical annotation tool. The mnemonic medium twist is: if your saved text had associated author-provided prompts, those would be automatically saved too. The prompts would surface in future reviews as usual, perhaps with some extra design elements to ground them in your saved text (and its context). You’d see those prompts in the margin once they’re saved (and a hint of them while you’re selecting), so you can evaluate and edit/remove them if you like. But my intent is that you usually wouldn’t bother thinking about the prompts. You’d just provisionally accept author-provided prompts associated with your highlights; we’d make it cheap to ditch them later. If there are no author-provided prompts, you’d notice that via the empty margin. You could choose to write one immediately, but you might also refine the text into prompts later, through some separate resurfacing workflow. (I’m not yet sure how to solve the problems which Readwise users experience with the latter.)
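To make the mechanics a little more concrete, here’s a rough sketch in TypeScript (Orbit’s implementation language) of how a saved highlight might automatically pick up author-provided prompts anchored to overlapping text. All of the names and structures here are hypothetical—this is just an illustration of the overlap rule, not a description of the prototype’s actual internals.

    // Hypothetical data model: prompts and highlights are both anchored to
    // character ranges within a chapter's text.
    interface TextRange {
      start: number; // character offset within the chapter
      end: number;
    }

    interface AuthorPrompt {
      id: string;
      anchor: TextRange; // the passage this prompt is about
      front: string;
      back: string;
    }

    interface SavedHighlight {
      range: TextRange;
      note?: string;
      attachedPromptIDs: string[]; // provisionally accepted author prompts
    }

    function rangesOverlap(a: TextRange, b: TextRange): boolean {
      return a.start < b.end && b.start < a.end;
    }

    // When the reader selects text and clicks "Save", record the highlight and
    // provisionally attach any author prompts whose anchors overlap the
    // selection. The reader never has to evaluate prompts at save time;
    // declining them later, during review, should be cheap and guilt-free.
    function saveHighlight(
      range: TextRange,
      authorPrompts: AuthorPrompt[],
      note?: string
    ): SavedHighlight {
      const attachedPromptIDs = authorPrompts
        .filter((prompt) => rangesOverlap(prompt.anchor, range))
        .map((prompt) => prompt.id);
      return { range, note, attachedPromptIDs };
    }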

I’ve not exactly solved the stated problem. Readers wanted some lower-stakes way to express what they found important in the text, without making decisions about “saving prompts”, or committing themselves to future retrieval practice. But I think this workflow hints at an important observation: readers often just want to express their interest in an idea. Making people act on “prompts associated with their idea of interest” creates extra weight and indirection. If we could remove that distraction from the reading experience, I suspect people wouldn’t feel much need to explicitly pick and choose prompts. Serious readers might welcome high-quality prompts associated with their highlights in later review sessions—particularly if those prompts felt grounded in the readers’ selected text; if the prompts felt like provisional offerings rather than commitments; and if declining those offerings felt cheap, guilt-free, and reversible. We have plenty of latitude to tune the affective knob of opt-in vs. opt-out here.

An idea-centric interaction model can still produce many of the positive passive effects that I described earlier. Maybe un-saved text with associated prompts is very subtly highlighted, or there’s still some icon in the margins—some way to give you peripheral vision into this extra layer of the text, to subtly absorb “something important here!”. And if you’d like to take an actively studious stance towards a text, perhaps you could flip a switch to show unsaved prompts in the margins, like this prototype does.

On that note, it’s interesting to observe that an idea-centric design is mostly helpful when you’re not reading studiously, when you don’t necessarily care very much about the text. One could reasonably argue that I shouldn’t focus on such cases. This doesn’t play to the strengths of the medium—serious people internalizing difficult material. In fact, when you’re reading quite studiously, you probably want to save every author prompt by default; any highlighting interaction is a fiddly nuisance. But my instinct at the moment is that the line is blurry; people’s stance towards a text will shift back and forth in ad-hoc ways.

This is just a sketch, and there are lots of unsolved problems. The most serious one is: how to handle higher-level summary or distillation prompts, which pertain to entire passages rather than to specific phrases? These are often the most useful prompts. One simple solution would be to include those prompts if you highlight any phrase within the long passage they cover. But I’m not satisfied with that yet.

More practically, I’m not excited about the prospect of creating yet another annotation and excerpting tool, particularly when I consider all the subsequent library management and integration workflows which users would naturally expect. I would probably only pursue this path if I found some way to avoid that.

In-context review
This idea-centric design direction rekindles my interest in an idea I’ve been tossing around for a while: can the review experience somehow happen in the context of the book? Right now, you read a mnemonic text; you save prompts; then later those prompts appear one by one, totally divorced from their source. You can click a link to return to the source location, but that means leaving the review for a separate interface and workflow. Review itself is completely isolated from the text, diminishing the prompts’ emotional connection to the original narrative. This separation also creates frictions around curiosity and remediation. For instance, if you find yourself recalling an answer but not quite understanding what it means, you should be able to fluidly and instantaneously peek into the source context, without disrupting the flow of your review. Likewise if you find yourself curious to see an illustration which you remember sat near the source text.

I’ve made a few attempts at designs like this over the past four (!!) years, but I’ve never been able to make them work. Here are a few notes on what I’ve found.

  • You don’t want to provide too strong a cue for retrieval practice. So maybe the context only appears when the answer is revealed.
  • All that extra text feels overwhelming and distracts from engaging with the answer. So maybe it’s progressively blurred but sharpens on touch or gesture.
  • Review usually takes place on mobile devices, where you need to make significant tradeoffs between screen real estate for the answer and for the context. So maybe the context lies blurred “behind” the answer along the Z axis and can be “brought to the surface” on touch.
  • One outlandish approach would be to display every prompt as a cloze deletion in context of the text. Review would consist of a shifting sequence of windows into source texts, each with some segment blacked out. But that’s far too much cueing, and cloze prompts don’t seem to work nearly as well as question/answer prompts.
  • Finally, a broader problem: as you understand a topic better, your sense of the prompt often transcends any one source and becomes more about connections between them and your own ideas. Naively keeping a prompt visibly anchored in a single source text might actually restrain this process.

My instinct is that some good solution is possible here, and that it would radically transform the feeling of review. I think it would also help smooth the boundary between resurfacing highlights and resurfacing prompts, since both interactions would now be anchored in the context of the source text. Smoothing that boundary might in turn help smooth the in-text “saving” interaction. A solution might point towards a kind of “incremental reading” which helps you distill key ideas and connections as in SuperMemo, but while retaining the rich context of the source text.

“Onboarding”

Contrary to my instructions, half of my test readers either didn’t know what spaced repetition was, or didn’t really see why it might be relevant to them as professional knowledge workers, outside of language learning. This irked me at first—I didn’t intend to test the system on SRS-naive readers—but, as I’ll explain, it was ultimately quite instructive. To make something of those sessions, I gave SRS-naive testers a 5-10 minute pitch on the value of spaced repetition for internalizing conceptual material. I had some success: among SRS-naive testers for whom the material was truly relevant, almost all ended up engaging seriously. But these conversations clearly illustrated the enormous challenges facing me in “onboarding” design.

In Quantum Country, we integrated a long introduction to the medium into the first essay—about two thousand words. We explained it over time, a section here and there, interleaved into the larger structure of the first essay, and contextualized alongside the concrete interface elements. Then the follow-up emails and end-of-review summaries did more explaining, incrementally over time. This really did seem to work, but I have no idea how to translate Quantum Country’s approach into a general system which can be layered onto every text. By default, users will speed-run interface text. Quantum Country readers only read our long explanations because they were written in the voice of the book’s authors; the authors had already built trust with the reader before discussing the medium; and the explanations were presented both stylistically and structurally as part of the primary text.

Some of my testers had already read this Quantum Country text. They understood my current prototype immediately. It’s good at least to see that the onboarding “transfers”. Other testers hadn’t seen the mnemonic medium, but they had read Michael Nielsen’s “Augmenting Long-Term Memory” or Nicky Case’s “How to Remember Anything Forever-ish” or Gwern’s “Spaced Repetition for Efficient Learning”. These testers also understood the potential benefits of the mnemonic medium immediately. That’s more evidence that lengthy introductory essays can do the job. However, each of those essays is much longer than the medium-centric material in Quantum Country. We’re not gaining much practical ground.

A final cluster of testers had extensive SRS experience from learning a language or from some similar rote subject matter (e.g. anatomy, pharmacology). These testers had more varied reactions to the mnemonic medium. One common reaction was: spaced repetition was so effective for learning languages, but I had no idea how to apply it to anything else—wow, this is great! But another common reaction was: spaced repetition was a useful hassle; it was all about memorizing rote piles of information; I don’t see how that relates to understanding or to anything I’m interested in now (sotto voce: and I don’t buy your explanation that it does). For these users, and for others with school-induced traumas, I suppose there’s some un-onboarding to do.

My impulse is to distance myself from the word “review”, and even the word “repetition”. People are (rightly) much more interested in “internalizing” a text’s ideas than in “remembering” them. Aspirationally, the system is about marination, about establishing a powerful (but lightweight!) ritual for deepening your relationship with ideas you find important. Not about “studying” or “reviewing” or even “practicing”. I’m not thinking about these substitutions in terms of some kind of facile rebranding: I want to shift the mechanics and feeling of the system to better reflect the words I’m highlighting.

Diction aside, how to actually make the system explain itself to SRS-naive readers? For the moment, I have no idea—and I’m inclined to punt. My intention for the near future is to focus on demos with carefully prepared texts, which means I can at least partially follow Quantum Country’s pattern. I think I’ll put together a sharp paragraph of introduction, which I’ll give to authors to present “in their voice” in the text (rewriting as necessary). I’ll link there and in the UI to a longer, essay-like explanation “on Orbit’s site.”

Longer-term, the “right” onboarding path will depend a great deal on context. If I’m exploring author-integrated mnemonic essays, then the author should introduce the medium, and I’ll help them do that. If I’m working on texts to be used in the context of a course or program, then the medium should be introduced by the facilitators, and I’ll help them do that. If I’m experimenting with sharable user-generated “layers” on arbitrary texts, I’ll need to rely on those communities to write and circulate canonical introductions like Nicky’s, Michael’s, and Gwern’s, or to refer to those. Maybe some canonical YouTube introduction videos will get made at some point—perhaps when I start working on mnemonic video?

What’s next

Apart from the more conceptual discussion above, I have some mundane design issues to resolve. For example, people expected prompts they saved to appear somehow in the prompt lists at the end of chapters. And when people went out of their way to cherry-pick relevant prompts in IMS, they were confused that inline review still contained every prompt from each section. The behavior of the “skip” button made sense to pretty much no one. And so on. These issues all seem tractable, and I’d like to resolve them before I do any more tests.

Then I’d like to explore some of the middle ground between IMS and Shape Up: less-formal “explainers” on technical topics, and serious texts on less technical topics. I’d also like to find a more authentic context to test the system, one where the readers really need to learn this material, perhaps as part of a self-motivated program or course.

Meanwhile, I’ll start producing design concepts around the more conceptual problems and opportunities I’ve described above. We’ll see where that goes.


I’d like to thank Taylor Rogalski for helpful discussions around prompt-centric vs. idea-centric interaction design. My thanks also to Hammad Bashir, who joined me in implementing this most recent design.

Last updated 2023-07-13.