In prompt generation, LLMs may perform better when given prompt-writing principles

The spaced repetition memory system community has identified many important attributes of good spaced repetition memory prompts. When I provide those to GPT-4, it seems to do a better job of generating spaced repetition prompts. Chain-of-thought-style prompting with respect to those hints may also help (i.e. “explain how the prompt fulfills each principle…”).
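As a concrete illustration, the approach above can be sketched as assembling a request that embeds the principles and a chain-of-thought instruction. The specific principles listed here are illustrative examples, not a canonical set, and `build_prompt` is a hypothetical helper:

```python
# Illustrative principles; any real set would come from the spaced
# repetition community's writing on good prompt design.
PRINCIPLES = [
    "Focused: each prompt should test one detail.",
    "Precise: the prompt should admit one correct answer.",
    "Tractable: the prompt should be answerable from memory.",
]

def build_prompt(source_text: str) -> str:
    """Assemble a prompt-generation request embedding the principles
    and a chain-of-thought-style instruction."""
    principles = "\n".join(f"{i + 1}. {p}" for i, p in enumerate(PRINCIPLES))
    return (
        "Write spaced repetition prompts for the passage below.\n"
        "Follow these principles:\n"
        f"{principles}\n\n"
        # Chain-of-thought-style instruction: ask the model to explain
        # how each prompt fulfills each principle before answering.
        "For each prompt, explain how it fulfills each principle "
        "before stating the final question/answer pair.\n\n"
        f"Passage:\n{source_text}"
    )
```

The resulting string would be sent as the user message to a model like GPT-4; whether the explanation step actually improves output quality is exactly the open question discussed below.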

It’s not clear how strong or reliable these effects are. In my informal experiments, they sometimes seem to matter a lot and sometimes not. A dataset of expert-written prompts would help evaluate and develop prompt generation systems.

See example 20230614114329.

References

This suggestion was first made to me by someone on Twitter (I’m sorry, I can’t find the message!), and then again with a concrete prompt by Yuval Milo in May 2023.

Last updated 2023-07-13.