I have mixed feelings about this question because I feel the text hasn’t quite earned it: we don’t actually yet know why this information is worth learning. What’s the significance of this difference? Why are these concepts given special terms? Perhaps we need to analyze them in different ways? And so on.
It’s interesting that the process of writing prompts surfaces this kind of observation.
Hypothesis annotation for openintro-ims.netlify.app
Noticed while reading Introduction to Modern Statistics:
Problem here: this paragraph introduces the notion of a "simple random sample", and because it's bolded here, the reader would naturally look for prompts to add in this section. But the discussion here is pretty incomplete; the text returns to this term a bit later. Offering no prompts at all here would be strange. But offering prompts which include information introduced later would also be strange. This is a tricky case.
If I were authoring the text at the same time as the prompts, I'd probably take this as a nudge to signal in the text that we'll return to this term in more detail shortly.
Notes from adapting the IPFS paper:
You may be interested to hear (or anyway, I’m interested to discover!) that writing prompts is forcing me to revise my text, and it’s clearly improving it. I’m having to make key points explicit where sometimes I left them for the reader to infer.
Adam Wern gave me some good feedback on subjects and subjectivity:
“How often should you be able to answer prompts in review sessions?”
‘Should’ according to whom? This question makes me reflect instead of retrieve (which is good and bad ;)
“Why do we consider this question to be ‘focused’? …”
Who is ‘we’? Reader and author informally? Or just the author(s)?
Prompts with a clear subject (author/article/recipe/reference/viewpoint etc) are significantly easier to grasp (and agree with in case of subjective material).
I’ve run into this again and again as I revise prompts for the prompt-writing guide: it’s extremely easy to write prompts which are too subjective.
Really feels like a slog in the conceptual section and the open-list section. This knowledge is so tentative, so conceptual — quite hard to encode. I find I have to revise the text to give myself structure: concept handles (after Alexander), more explicit lists.
Boy, it takes a long time to write these! 30 minutes for one section’s prompts.
I’m finding it very difficult to write prompts for procedural knowledge, probably because my technique there is sort of an open list: a bunch of things to watch out for. I’m not being specific enough in my advice, and writing prompts is making me notice that. In the end, I was forced to rewrite the section to make it more concrete.
I find I also want to create reified handles to use in the prompts: noun phrases I can refer to.
Many revisions of the “attributes and tendencies” prose to support the adjacent prompt.
In this project (for “Translating knowledge into spaced repetition prompts”) I’m writing the prompts after I’ve already drafted the manuscript.
In one spot, writing prompts forced me to revise the manuscript to choose more precise wording. I was trying to write prompts which captured the “focused” property I’d described, which I said “produced hazy activations.” It was hard to write a recall prompt around that phrase because it’s so vague. I couldn’t imagine asking someone to reproduce it exactly. So I revised the phrase to “stimulate vague retrievals.”