It’s been interesting to compare the reader accuracies on Quantum Country’s questions to those of the several non-technical mnemonic essays published so far. In Quantum Country, 109/112 (97%) questions have >50% in-essay accuracy; 97/112 (86%) have >75%.
But the other essays have much lower accuracy rates (fraction of questions with >50% accuracy, fraction with >75% accuracy):
(I should probably just visualize these distributions—this is hard to read)
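A minimal sketch of that visualization, assuming the per-question in-essay accuracies are available as plain lists of floats (the essay names and values below are placeholders, not the real data):

```python
import matplotlib.pyplot as plt

# Hypothetical per-question in-essay accuracies (fraction of readers
# answering correctly on the first in-essay review), one list per essay.
accuracies = {
    "Quantum Country": [0.97, 0.92, 0.88, 0.61, 0.55],       # placeholder values
    "Non-technical essay A": [0.81, 0.64, 0.52, 0.41, 0.29],  # placeholder values
}

# Fractions of questions above the 50% and 75% thresholds, as in the text.
for essay, accs in accuracies.items():
    above_50 = sum(a > 0.5 for a in accs) / len(accs)
    above_75 = sum(a > 0.75 for a in accs) / len(accs)
    print(f"{essay}: {above_50:.0%} of questions >50%, {above_75:.0%} >75%")

# Overlaid histograms make the shape of each distribution easy to compare.
plt.hist(list(accuracies.values()), bins=10, range=(0, 1),
         label=list(accuracies.keys()))
plt.xlabel("In-essay accuracy")
plt.ylabel("Number of questions")
plt.legend()
plt.show()
```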
I note that Matt Clancy’s Orbit-using microeconomics course has distributions of question accuracy closer to Quantum Country’s than to these non-technical essays.
The failure modes are also worse: QC’s worst questions are forgotten about half the time; these essays’ worst questions are forgotten 70+% of the time.
This is likely one quantitative reflection of the observation that Mnemonic medium readers sometimes feel impeded by authors’ wording choices.
We can probably learn a lot by trying to revise all these prompts so that they’re almost all above 50%.
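One way to triage that revision work, sketched over the same hypothetical data structure (prompt IDs and values are placeholders): list every prompt whose in-essay accuracy falls below 50%, worst first.

```python
# Hypothetical: map each prompt's ID to its in-essay accuracy.
prompt_accuracies = {
    "prompt-17": 0.28,  # placeholder values
    "prompt-04": 0.46,
    "prompt-31": 0.63,
}

# Prompts below the 50% threshold, worst first, as a revision worklist.
to_revise = sorted(
    (acc, pid) for pid, acc in prompt_accuracies.items() if acc < 0.5
)
for acc, pid in to_revise:
    print(f"{pid}: {acc:.0%}")
```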
See also Using spaced repetition systems to see through a piece of mathematics - Michael Nielsen:
Mathematics is particularly well suited to deep Ankification, since much of it is about precise relationships between precisely-specified objects. Although I use Anki extensively for studying many other subjects, I haven’t used it at anything like this kind of depth.