Aligned AI still poses terrible risk

AI alignment might be part of our solution to the problem of AI risk, but it’s not enough to prevent catastrophe.

For example, there’s AI misuse: an AI that is perfectly aligned with a terrorist’s values can still create a horrific bioweapon. To avoid this problem using alignment alone, we would have to ensure that AI remains aligned to, and absolutely and perfectly controlled by, some entity with desirable values. But preventing AI proliferation seems hard.

Last updated 2023-07-13.