GPT-3 can generate shallow variations of spaced repetition prompt questions

Rather than trying to generate full question/answer pairs, a much easier task within Using machine learning to generate good spaced repetition prompts from explanatory text is to use Large language models to generate variations on question text, so that a prompt is phrased differently each time you see it while still reinforcing the same basic retrieval pattern. That is, this is one way to automate Would spaced repetition memory systems perform better with varied question texts? and to accomplish Spaced repetition memory prompts should be written to discourage shallow “pattern matching”.
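A minimal sketch of the idea, assuming a chat-style completion API: give the model the original question, the answer it must still elicit, and an instruction to rephrase. The system prompt wording, model name, and temperature here are illustrative assumptions, not the note's actual setup; the payload is built as a plain dict so it could be sent via any chat-completions client.

```python
# Sketch: build a chat-completion request that asks a model to rephrase
# a flashcard question while preserving the same retrieval target.
# Model name and instruction wording are assumptions for illustration.
import json

def make_variation_request(question, answer, model="gpt-3.5-turbo"):
    """Build a chat-completion payload asking for a rephrased question."""
    system = (
        "You rewrite spaced repetition prompts. Rephrase the user's "
        "question so it reads differently but still requires recalling "
        "exactly the same answer. Return only the rephrased question."
    )
    # Include the answer so the model knows what must remain retrievable,
    # but instruct it not to leak the answer into the new question text.
    user = f"Question: {question}\nAnswer (do not reveal): {answer}"
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        "temperature": 1.0,  # higher temperature -> more varied phrasings
    }

payload = make_variation_request(
    "At what temperature does water boil at sea level?",
    "100 °C",
)
print(json.dumps(payload, indent=2))
```

Regenerating a fresh variation at each review (rather than caching one rewrite) is what breaks the pattern-matching on surface features.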

2023-06-14: this is working pretty well in simple experiments (20230614100259), with both GPT-4 and GPT-3.5-turbo (slightly better results from the former)

My guess is that a lesser language model would also perform adequately at this task, since the rewriting is so constrained.

See also

References

Giacomo Randazzo proposed this idea in conversation on 2020-09-02

Last updated 2023-07-13.