Mnemonic medium readers sometimes feel impeded by authors’ wording choices

In the feedback to How to write good prompts, a number of readers report that they’ve had difficulty answering some prompts because the prompts are framed in terms of my wording. Not that my wording is bad or wrong—it’s just that these readers feel that if they had worded the question themselves, they’d have chosen different phrasing which would have stuck better for them, personally.

We didn’t get this feedback in reports on Quantum Country; I think it’s an issue specific to this being a non-technical text. The wording complaint arises because there isn’t a fixed, pre-specified way to refer to some notion, which makes it impossible to follow the Expert response heuristic, after Issa Rice. The author had to come up with terms themselves, and the reader feels they might have used different ones. This happens in technical texts, too, but less frequently: the referents are more often precisely specifiable.

This complaint was mostly focused on cloze deletion prompts, which are extra-dependent on issues of wording: Cloze deletions often create frustrations in non-technical mnemonic texts

More broadly speaking, The mnemonic medium should give readers control over the prompts they collect.


mduncs:

something that caught my attention with the mnemonic essay format and some review with Orbit is the difficulty in remembering prompts that aren’t written in my voice. little differences in phrasing and vocabulary help things stick out but hinder them sticking in my mind.

Geoffrey Litt:

One thing I noticed: while prompts were generally good, some answers weren’t phrased the way I would have written. Which leaves me unsure how well I’ll remember.

Soren Bjornstad:

On that note, the prompts about terminology seem to be particularly hard for me. My impression was that some of the terms are only included in clozes, and not first prompted for by questions, which might contribute given the above…but I don’t actually have an easily accessible list of all the prompts so I can’t be objective about this. This might also be a sort of cognitive interference in that it’s harder to take to terms that aren’t for new concepts, and I already have my own terms? So I’m not sure if any of this experience would translate to the primary intended audience of this article.

me: Interestingly, quite a few people have expressed the same trouble with clozes, particularly those which end up feeling essentially linguistic. I think it’s pretty important that I understand this better. This type of prompt works reasonably well (but not amazingly well) when I write it for myself. I think part of the issue is that when I write it for myself, I’m acknowledging: “listen, what I’m basically doing here is memorizing a piece of terminology the author has chosen to use, because I think the wording itself is helpful.” But when the author asks you to do it, it feels like an arbitrary imposition—you’re being asked to memorize specific wording, which feels like the wrong level of abstraction.

This feels like an accurate assessment to me. And an update on the results of a month or so of reviewing: I’ve picked up on many of them, but the ones I’m continuing to have trouble with all fall in this category.

Stephen Malina:

Inability to edit cards. I often find that my first writing of (especially) answers to prompts doesn’t fit the way I actually end up recalling the answer, even though the actual way I recall the answer is also valid. As a result, with my own Anki cards I often edit answers to be phrased in the way that I tend to recall the answers. I find that otherwise every time I see the card, my scrupulous tendencies get triggered and I question whether my different phrasing means the same thing as the answer as stated on the card. On the other hand, with Orbit, not only are the prompt answers uneditable, but they’re phrased in your (or Jose’s) vocabulary rather than my own. While this may seem like a minor thing, it adds a small but real burden to going through prompts.

Inability to add and remove cards. I think you did a great job with the prompt cards, yet I still find that a small fraction of them just don’t make sense to me. For example, there’s a card about “Focus: “ that I just consistently misanswer. Normally, I’d deal with this by either removing the card or adding additional cards to fill in whatever gaps I think prevent me from getting these cards (as you also recommend), but with Orbit I can’t so I find myself stuck with these cards as leeches. I’ve found consistently seeing these leeches and getting them wrong seems to disproportionately impact the emotional experience of reviewing cards.

Contrary take

NothingIsntTrivial:

I feel like @Meaningness’s writing (in general) echoes a lot of my own thinking and experience. Our “abstraction stacks” could be very similar. I think this means it’s easier for prompts to be “good” (right amount of tractable/effortful). Answering prompts feels a lot more like building on and riffing on a line of thought I already have, rather than engaging with something entirely new.

Last updated 2023-07-13.