“The schism followed differences over the group’s direction after it took a landmark $1bn investment from Microsoft in 2019, according to two people familiar with the split.”
“Anthropic’s goal is to make the fundamental research advances that will let us build more capable, general, and reliable AI systems, then deploy these systems in a way that benefits people.”
Daniela Amodei: "We really just wanted the opportunity to get that group together to do this focused research bet of building steerable, interpretable, and reliable AI systems, with humans at the center of them."
Dario Amodei: "We were all working at OpenAI and trying to make this focused bet on basically scaling plus safety, or safety with a lens towards scaling being a big part of the path to AGI. We felt we were making this focused bet within a larger organization, and we eventually came to the conclusion that it would be great to have an organization that, top to bottom, was just focused on this bet and could make all strategic decisions with this bet in mind."
In what ways has Anthropic caused net acceleration?
On publication, from Chris Olah: “We don't consider any research area to be blanket safe to publish. Instead, we consider all releases on a case by case basis, weighing expected safety benefit against capabilities/acceleratory risk. In the case of difficult scenarios, we have a formal infohazard review procedure.” (source)
March 2023 overview of policy work from Jack Clark: source