Preventing AI proliferation seems hard

I’m wary of solutions to AI risk that involve a small number of organizations maintaining exclusive control over frontier AI systems:

  • This seems like an unstable equilibrium, prone to defection and slip-ups.
  • Steady algorithmic advances will continuously lower the barriers to training “indie models”.
  • Massive security breaches of high-value computer systems occur somewhat routinely.
  • What about all the ex-employees of the big AI labs? They carry the relevant expertise with them wherever they go.

Last updated 2023-07-13.