Published: March 13, 2019
The Bitter Lesson
By Rich Sutton
Via Gwern
Many AI researchers like to believe that improvements will come from clever representations of domain knowledge, but the gains of the last several decades have come from Moore's Law playing out through general-purpose methods. This has been true in chess, in Go, in speech recognition, in computer vision, and so on.
Sutton claims the field hasn't fully internalized this lesson, in large part because it's unappealing. It's satisfying to get results from your personal understanding of a problem space, but it seems much less satisfying to get results by identifying an approach that scales effectively to enormous quantities of computation (e.g. search and learning, as in reinforcement learning).
Sutton argues that the dominance of compute should push us to stop building in our theories of how minds work, and instead build in "only the meta-methods that can find and capture this arbitrary complexity." Leave the representation discovery to the algorithms.