Can humanity live with a misaligned ASI?

Say that we get lucky and get a slow take-off, but we don’t figure out alignment very quickly either. Can we learn to live with the misalignment risk posed by this nascent superintelligence?

Kevin Lacker argues that we can, because:

  • corporations and governments are already smarter and more powerful than individuals
    • Sort of? But this doesn’t really address the concerns I actually have about ASI.
  • we can incentivize them to compete
    • I don’t see any particular reason why the resulting balance of power should benefit us or avoid our destruction.
  • we can define some rules, like “no violence”
    • This seems very difficult in practice; that was the whole point of the stories in “I, Robot”.

Last updated 2023-07-13.