AI risk arising from an AI (typically an ASI, artificial superintelligence, in these discussions) acting against its operators’ wishes. Those concerned about this risk usually worry that it could cause human extinction, though intermediate harms are of course also possible. AI alignment is the effort to prevent this risk.
Related:
Ngo et al., “The Alignment Problem from a Deep Learning Perspective” (2023)