Here I mean specifically the technical alignment problem: building a system that does what its operator intends.
This is often used interchangeably with “AI safety”, or described as “the name for the solution to AI risk”, but that conflation is misleading. Aligned AI still poses terrible risks: an AI that faithfully carries out a reckless or malicious operator’s intent is aligned, but not safe.
Of course, there’s also the very serious problem of “aligned with what?!”