Let’s think about slowing down AI - by Katja Grace

Katja Grace argues that we should take the idea of slowing down AI much more seriously. Her central point is that the AI community seems to have somehow internalized the idea that this is impossible, useless, unthinkable, not even worth trying. She tries to make the case that it’s more plausible and ordinary than we might think, that we haven’t thought about it enough to reject it so strongly, and that it may be much more helpful than we imagine.

Q. Katja’s point about the wildly different standards of ambition applied to AI capability vs. slowing down AI research?
A. AI capability work is extraordinarily ambitious—let’s create a god!—but responses to the idea of slowing down AI research are often strikingly unambitious (“coordinating people seems hard”)

Q. Give a few examples of technologies which likely have big economic value but where progress/uptake are slower than could be, for safety reasons:
A. e.g. medical research, nuclear energy, genetic anything, reproductive innovation, recreational drugs, geoengineering, intelligence research

Q. Give a few examples of ways to slow down AI which aren’t terrorism:
A. e.g. don’t advance capabilities yourself, convince others, organize people, formulate policies, create social norms, alter incentives, etc…

Q. Give a few of Katja’s funny examples of unlikely worldwide coordination.
A. Not eating sand. Eschewing bestiality. Not wearing Victorian attire on the street.

Q. How does Katja rebut arms-race arguments for capabilities acceleration?
A. She builds a game-theoretic model and shows that the best strategy for avoiding apocalypse is quite sensitive to many parameters, in unintuitive ways; many reasonable parameter choices suggest that one shouldn’t race.
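The post’s model has more moving parts, but a toy version conveys the point. The sketch below is my own illustrative construction, not Katja’s actual model: two players each pick a “speed” (less safety work buys a better chance of deploying first but a worse chance of alignment), and the parameters risk and loser_value are invented for illustration.

```python
# A toy two-player AI race, illustrative only (not the post's exact model).
# Each player picks a speed in (0, 1]:
#   - P(I deploy first) = my_speed / (my_speed + their_speed)
#   - The deployer's AI is misaligned with probability speed * risk,
#     in which case everyone gets payoff 0.
#   - An aligned win pays the winner 1 and the loser `loser_value`
#     (how bad is it, really, if the other side gets there first?).

def expected_utility(my_speed, their_speed, risk, loser_value):
    p_win = my_speed / (my_speed + their_speed)
    win_payoff = (1 - my_speed * risk) * 1.0            # aligned win, else doom
    lose_payoff = (1 - their_speed * risk) * loser_value
    return p_win * win_payoff + (1 - p_win) * lose_payoff

def best_speed(their_speed, risk, loser_value, grid=100):
    """Grid-search the speed that maximizes my expected utility."""
    speeds = [(k + 1) / grid for k in range(grid)]
    return max(speeds, key=lambda s: expected_utility(s, their_speed, risk, loser_value))

# The optimal response flips as the parameters move:
for risk, loser_value in [(0.2, 0.0), (0.5, 0.0), (0.9, 0.8)]:
    s = best_speed(their_speed=0.5, risk=risk, loser_value=loser_value)
    print(f"risk={risk}, loser_value={loser_value} -> best speed {s:.2f}")
# risk=0.2, loser_value=0.0 -> best speed 1.00  (race flat out)
# risk=0.5, loser_value=0.0 -> best speed 0.62  (just edge out the rival)
# risk=0.9, loser_value=0.8 -> best speed 0.25  (go well below the rival's 0.50)
```

Against the same fixed rival, flat-out racing, mild racing, or deliberate slowing can each be the best response depending on how deadly speed is and how much you actually lose if the other side wins—the kind of parameter sensitivity the answer above describes.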

Q. Katja’s reaction to people’s concerns that one can’t coordinate with China?
A. She thinks people have an unconsidered assumption that it’s impossible to take action outside the US, and especially in Asia—she wrote to a ton of Asian ML researchers, and “it was a lot like interacting with people in the US.”

Q. Why does Katja believe that convincing people doesn’t seem that hard?
A. The median ML researcher already believes in AI risk and is open to discussing it; they just haven’t altered their behavior.

Q. Katja’s response to the argument that ignorant US officials couldn’t regulate AI?
A. Obstruction doesn’t need discernment: regulation regularly slows complex things down just fine.

Q. Katja’s argument against the idea that “the room where AI happens will afford good options for a person who cares about safety”?
A. If you believe the ASI won’t be value-aligned and will overpower us, then the values of whoever is in the room seem like a small factor, so long as they still create the ASI.

Q. Why does Katja feel slowing down AGI is not luddism?
A. If you think AGI is likely to destroy the world, it’s a bad technology, and opposing it is no more luddite than refusing to use radioactive toothpaste.

Q. Katja’s argument about how we can still achieve techno-utopia without AGI?
A. Narrow AIs are arguably more valuable for many of the things we care about (e.g. longevity).

Last updated 2023-07-13.