It’s been interesting to compare the reader accuracies on Quantum Country’s questions to those of the several non-technical mnemonic essays published so far. In Quantum Country, 109/112 (97%) questions have >50% in-essay accuracy; 97/112 (86%) have >75%.

But the other essays have much lower accuracy rates (share of questions with >50% accuracy, share with >75%):

- Me, How to write good prompts: 83%, 33%
- Eggplant, Maps, the territory, and metarationality: 86%, 50%
- Nintil, Was Planck right? The effects of aging on the productivity of scientists: 90%, 77%

(I should probably just visualize these distributions—this is hard to read)
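As a minimal sketch of such a visualization, using only the summary shares quoted above (the per-question distributions aren't reproduced here, and the shortened essay labels are mine), quick text bars are enough to make the gap visible:

```python
# Share of questions above each in-essay accuracy threshold,
# taken from the figures quoted in this note (QC: 109/112 and 97/112).
shares = {
    "Quantum Country":           {">50%": 109 / 112, ">75%": 97 / 112},
    "How to write good prompts": {">50%": 0.83, ">75%": 0.33},
    "Maps & metarationality":    {">50%": 0.86, ">75%": 0.50},
    "Was Planck right?":         {">50%": 0.90, ">75%": 0.77},
}

# One text bar per (essay, threshold) pair, on a 40-character scale.
for essay, row in shares.items():
    for threshold, share in row.items():
        bar = "#" * round(share * 40)
        print(f"{essay:28} {threshold:>4} {bar} {share:.0%}")
```

A proper chart (e.g. a grouped bar plot, or histograms of the full per-question distributions) would be better, but even this makes the >75% column's spread obvious.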

I note that Matt Clancy’s Orbit-using microeconomics course has distributions of question accuracy closer to Quantum Country’s than to these non-technical essays.

The failure modes are also worse: QC’s worst questions are forgotten about half the time; these essays’ worst questions are forgotten 70+% of the time.

This is likely one quantitative reflection of the observation that Mnemonic medium readers sometimes feel impeded by authors’ wording choices.

We can probably learn a lot by trying to revise all these prompts so that almost all of them are above 50% accuracy.

See also Using spaced repetition systems to see through a piece of mathematics - Michael Nielsen:

Mathematics is particularly well suited to deep Ankification, since much of it is about precise relationships between precisely-specified objects. Although I use Anki extensively for studying many other subjects, I haven’t used it at anything like this kind of depth.