What kinds of learning will be valuable in a world with powerful AI?

For the purpose of this note, I’ll assume an AI with 90%+ performance on GPQA Diamond (a graduate-level, Google-proof QA set; as of 2024-03, SotA is ~50% with Claude 3 Opus, while PhDs score 65% in their own domain (74% when discounting clear mistakes identified in retrospect) and 34% outside it) and MMMU (a multimodal reasoning benchmark; as of 2024-03, SotA is ~59% with Claude 3 Opus / Gemini Ultra, versus ~83% for the median human expert).

In a world like this, what kinds of knowledge and abilities will still be meaningful for people to acquire?

I find it very helpful to focus my attention on hard boundaries on what an AI can do—boundaries which don’t really come from its reasoning ability. For instance, right now, when I ask GPT-4 to translate code from JavaScript to Python, it makes mistakes; so I need enough understanding of both those languages to supervise it (I sketch an example of the kind of mistake after the list below). But that’s a parochial state of affairs. The model had all the information it needed to perform the task; there was an objective “right” answer; it just wasn’t “smart” enough. It could have been. (Though I suppose that, strictly speaking, program equivalence is undecidable, so no program can verify that a translation is 100% correct in every case.) So, maybe, the knowledge I need to supervise this translation won’t be needed in the future. But some knowledge is not like this:

  • Knowledge necessary to supervise a model’s work on ill-defined problems—software design, urban planning, public policy, composition of all kinds
    • You have to be able to communicate what you want
    • vocabulary, notation, representations, patterns
    • Often you figure out what you want by doing the task, making the process fundamentally interactive, incremental
    • What knowledge is needed to guide these kinds of processes?
    • Reminds me of Schön—very reflection-in-action
    • Critically: taste
    • What is taste, anyway? I presume there is a literature on this.
    • Is it just Anderson-style pattern compilation?
    • To what extent is creating stuff important to developing taste?
  • Relatedly: knowledge needed to reason about questions for which answers are necessarily personal (e.g. moral philosophy)
  • Knowledge for crafts/arts/practices where the pleasure comes from being the one to act: dance, playing the cello, solving crossword puzzles
  • Knowledge which is pleasurable for its own sake—what does a hypercube look like?
  • Knowledge of teamwork and collaboration strategies
    • How will the knowledge needed to collaborate with an AI differ from what’s needed to collaborate with other people?
  • Reflective ability which can produce data that the AI cannot perceive—for instance, interoception! How does your gut react to this proposal?
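
Returning to the translation example above: here’s a small, hypothetical sketch (mine, not an actual GPT-4 transcript) of the kind of semantic divergence between JavaScript and Python that a supervisor needs to know both languages to catch; a naive line-by-line translation silently changes behavior in each case.

    # Hypothetical illustrations of JavaScript/Python divergences a reviewer must know.

    # JavaScript: [10, 9, 2].sort() -> [10, 2, 9]  (default sort compares as strings)
    print(sorted([10, 9, 2]))   # Python: [2, 9, 10]  (numeric sort)

    # JavaScript: Math.max() -> -Infinity
    try:
        print(max([]))          # Python: raises ValueError on an empty sequence
    except ValueError as err:
        print("ValueError:", err)

    # JavaScript: "5" + 1 -> "51"  (implicit coercion to string)
    try:
        print("5" + 1)          # Python: raises TypeError instead of coercing
    except TypeError as err:
        print("TypeError:", err)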

Library/internet knowledge-splitting parallel

Long ago, we relied on memory as our primary individual store of knowledge. With the advent of writing, we began to partially outsource: scholars keep personal libraries; they don’t just keep everything in their heads. More recently, that distribution has shifted somewhat with the internet. We keep some things in our heads; some things in our personal libraries; and some things accessible via the internet. With very knowledgeable AIs, I expect this will shift again. But how? What will we want to keep in our minds, versus accessible via texts (whether in a personal library or through internet search) or via an AI?

We might look for parallels in collective intelligence and cultural knowledge transmission. Prior to computers and AI, we also chose to keep some knowledge in the minds of the people around us; we can communicate with them to draw on that knowledge, at some cost. Is the AI like that, but with somewhat altered characteristics?

Negation

Consider the negation: what would be lost if we outsourced all knowing to AI systems?

Here I think we can benefit from classic arguments against technopoly (per Neil Postman). Ivan Illich’s arguments in Tools for Conviviality also come to mind: when we understand our tools, we can shape our environment and society according to our own values, rather than being (exclusively) shaped by our tools/environment. Autonomy and freedom seem particularly at risk in a world where all knowledge is external and opaque. How can a person learn to develop judgment and critical capacity if they don’t know anything? I presume this degradation of human capacity would erode the social fabric in important ways, though it’s hard for me to predict how.

Can we really delimit some small “preserve” of knowledge which would suffice to maintain human autonomy, judgment, and social fabric, apart from all domain/topic knowledge? Maybe one just studies the Great Books… but wouldn’t that entail abdicating human power to determine our destiny in technical matters? This seems particularly bad in a world where technical matters are increasingly central to power and to our future.

Perception and attention—a cognitivist approach

Cognitive psychology has demonstrated that, in many unexpected ways, our perception of “low-level” stimuli (shapes, sounds) is substantially determined by “high-level” phenomena (memory, beliefs). For example, tachistoscope tests find much higher recognition for a target letter when it is presented embedded within a real word (or a plausible pseudoword) in the subject’s language than when it’s embedded in a “nonsense” string.

Relatedly, attention is significantly modulated by prior knowledge. Gary Klein’s studies of framing among intelligence officers find that when they’re thinking with different frames, they notice very different things in their data; and expert officers owe their improved performance in large part to the larger repertoire of frames they work with.

Surprisal obviously depends on prior knowledge. Consider the experience of listening to jazz. A very impressive performance often sounds like a confusing cacophony to a novice. But an experienced musician might enjoy it precisely because of the way it subverts their expectations of the harmonic form.
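
To make that “obviously” slightly more precise (my gloss; the note doesn’t formalize this): in information-theoretic terms, the surprisal of an event x under a predictive model p is

    I_p(x) = -\log p(x)

so surprisal is only defined relative to a model. The novice’s model assigns the performance very low probability, yielding surprisal high enough to read as cacophony; the expert’s internalized model of the harmonic form assigns it moderate probability, keeping the surprisal in a range they can enjoy.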

All this suggests that we can’t really separate knowledge from our basic sensory experience of the world. But it doesn’t really help me say much about how the distribution of value shifts with increasingly powerful AI.

Knowing-in-action: pre-verbal knowledge

Often, as we work with complex and unique situations, we have some vague feel for what we want to try, but we can’t necessarily externalize it. That feeling may guide action long before we can, through reflection, articulate a theory. If using AI in our work requires legible, explicit communication, then AI can’t be used in these situations until they’re already well understood! See, e.g., The Reflective Practitioner - Donald Schön.

Last updated 2024-04-01.