2023-04-30 Patreon post - Ethics of AI-based invention - a personal inquiry

Hofstadter’s Law wryly captures my experience of difficult work: “It always takes longer than you expect, even when you take into account Hofstadter’s Law.” He suggested that law in 1979, alongside some pessimistic observations about chess-playing AI: “…people used to estimate that it would be ten years until a computer (or program) was world champion. But after ten years had passed, it seemed that the day…was still more than ten years away.”

Ironically, my experience observing the last ten years of AI research has been exactly the opposite. The pace has been extraordinary. Each time I’m startled by a new result, I update my expectations of the field’s velocity. Yet somehow, I never seem to update far enough—even when I take that very fact into account. My own ignorance is partly to blame; AI has been a side interest for me. But my subjective experience is of an inverse Hofstadter’s Law.

No surprise, then: GPT-4’s performance truly shocked me. This is a system that can outperform a well-educated teenager at many (most?) short-lived cognition-centric tasks. It’s hard to think about anything else. Inevitably, I now find myself with an ever-growing pile of design ideas for novel AI-powered interfaces. But I’ve also found myself with a gnawing concern: what are my moral responsibilities, as an inventor, when creating new applications of AI models with such rapidly accelerating capabilities?

If today's pace continues, the coming decade’s models are likely to enable extraordinary good: scientific breakthroughs, creative superpowers, aggregate economic leaps. Yet such models also seem very likely to induce prodigious harm—plausibly more than any invention produced in my lifetime. I’m worried about mass job displacement and the resulting social upheaval. I’m worried about misuse: cyberattacks, targeted misinformation and harassment campaigns, concentration and fortification of power, atrocities from “battlefield AI.” I’m worried about a rise in bewildering accidents and subtle injustices, as we hand ever more agency to inscrutable autonomous systems. I’m not certain of any of this, but I don’t need much clairvoyance to be plenty concerned, even without the (also worrying) specter of misaligned superintelligence.

In sum, these systems’ capabilities seem to be growing much more quickly than our ability to understand or cope with them. I wouldn’t feel comfortable working on AI capabilities directly today. But I’m not an AI researcher; I’m not training super-powerful models myself. So until recently, the harms I’ve mentioned have been abstract concerns. Now, though, my mind is dreaming up new kinds of software built atop these models. That makes me a moral actor here.

If I worry that our current pace is reckless, then I shouldn’t accelerate that pace by my own actions. More broadly, if I think these models will induce so much harm—perhaps alongside still greater good!—then do I really want to bring them into my creative practice? Does that make me party to something essentially noxious, sullying? Under what circumstances? Concretely: I have some ideas for novel reading interfaces that use large language models as an implementation detail. What moral considerations should guide my conduct, in development and in publication? What sorts of projects should I avoid altogether? “All of them”?

One trouble here is that I can’t endorse any fixed moral system. I’m not a utilitarian, or a Christian, or a neo-Aristotelian. That would make things simpler. Unfortunately, I’m more aligned with John Dewey’s pragmatic ethics: there is no complete moral framework, but there are lots of useful moral ideas and perceptions. We have to figure things out as we go, in context, collaboratively, iteratively, taking into account many (possibly conflicting) value judgments.

In that spirit, this essay will mine a range of moral traditions for insight about my quandary. There’s plenty I dislike in each philosophy, so I’ll make this a moral buffet, focusing on the elements I find helpful and blithely ignoring the rest. And I’ve skipped many traditions which were less instructive for me. I’m not an expert in moral philosophy; I’ll be aiming for usefulness rather than technical accuracy in my discussion.

Before we begin, let me emphasize that this is a personal moral inquiry. This essay explores how I ought to act; it does not assert how you ought to act. That said, I do have one “ought” for you: if you’re a technologist, this is a serious moral problem which you should consider quite carefully. Most of the time, in most situations, I don’t think we need to engage in elaborate moral deliberation. Our instincts are generally fine, and most ethical codes agree in everyday circumstances. But AI is a much thornier terrain. The potential impacts (good and ill) are enormous; reasoning about them is difficult; there’s irreducible uncertainty; moral traditions conflict or offer little guidance. Making matters worse, motivated reasoning is far too easy and already far too pervasive—the social and economic incentives to accelerate are enormous. I think “default” behaviors here are likely to produce significant harm. My reflections here are confused and imperfect, but I hope they will help inspire your own deliberation.

Utilitarianism

Let’s warm up with a familiar moral tradition: the utilitarianism which surrounds me in San Francisco. When I tell people here about my moral confusion, I’m usually met with bewilderment. For most utilitarians, my problem seems straightforward. Add up the benefits; subtract the costs. As one person told me somewhat sheepishly: “Listen, Andy… you’re just not that important! Things are already moving so quickly that any acceleration you cause will be imperceptible. A speculative reading interface seems harmless.”

I’m not deluded. I think that assessment is basically right, in terms of my direct counterfactual impact. I also think that utilitarianism often produces terrible conclusions. But even utilitarianism has more to contribute than this.

Popularization, normalization, dissemination, investment

Trends, fashions, and flashy exemplars push around the computing world. Plenty of young technologists and designers look up to me. It’s easy to imagine myself expanding a young person’s understanding of how AI can be used to design interfaces, shifting their career to emphasize AI-based invention. I’ve already had that kind of influence in my prior work. Likewise, my work has inspired lots of copycats. Those copycats are actually part of my theory of change: I depend on others to productize and scale my research. But I certainly don’t expect a startup to adopt my ethics. Copycats also draw more funding into the AI space. Venture capital is, by nature, an almost pure force of acceleration; few investors seem influenced by any ethical considerations of AI. All these indirect impacts add up to more counterfactual harm than might initially seem obvious.

I can damp my impact a little: zero hype; anti-marketing; make the work anti-flashy; focus on the conceptual design rather than the AI components; be proactive in public writing about harms; be reticent about capabilities. Still, I can’t fully mitigate my influence. There’s unavoidable cost here. So even utilitarianism isn’t as permissive as it initially seemed: any of my projects must pass some strong “benefit” bar to be worth pursuing, at least in public. I commit to not sharing AI-based work in progress until I feel confident it passes that bar.

I’ll also set a high bar for amplifying others’ AI-based tinkering; I won’t gush about exciting AI papers on social media. And I’ll set an even higher bar for associating myself with AI in public events or groups, for instance by being some kind of featured guest. These constraints may sound a bit frivolous—oh no, my follower count!—but they come with real costs: I use social media and public conversation to deepen my emotional connection with ideas, and to incrementally develop my thoughts. I’ll need to use private conversation for those purposes in this domain.

Economic dislocation

In terms of misuse and accidents, a user interface for reading seems innocuous. So far, we’ve asked about the ethics of specific acts: what would happen if I did this one design project? That kind of utilitarianism often produces reasoning like “I’m just one small drop in a big ocean.” In such cases it can be helpful to rephrase the question as a general rule, and to ask: what would happen if everybody obeyed it?

Let’s try this rule: “So long as you’re making something in an ostensibly low-risk domain (like reading), and the concept seems highly beneficial, AI-based interface invention is fine.”

If everyone followed this rule, we seem pretty likely to displace jobs at a startling, unprecedented rate over the coming decade. That’s a lot of suffering for our utilitarian calculus—particularly if the economic dislocation produces dangerous social unrest. A utilitarian might argue that this rule’s benefits would outweigh such costs. I think it’s quite hard to tell.

Yet, consider the inverse rule: “Do not invent tools which displace jobs.” Written so broadly, the cost seems far too high. I’m glad that the steam engine exists. I’m glad that the electric motor exists. I’m glad that the personal computer exists. How should I think about this, if only from a utilitarian perspective?

Some kinds of displacement seem to cause less suffering than others. One factor is clearly time. An invention adopted over a generation will cause much less disruption than one adopted overnight. Some people will retire unaffected, or a little early; others will slowly shift into other work; others will have enough warning to avoid that career before they begin it.

Another factor lies in the relationship between the displaced jobs and new jobs created by the invention. Desktop publishing displaced many jobs in the typesetting industry, but that knowledge might have transferred gracefully to new roles in digital graphic design. On the other hand, I expect robotics in manufacturing has been less kind to most assembly line workers. Large language model-based customer support bots will probably create new jobs, but not ones which most existing customer service representatives can easily access. And, in these last two cases, I’d guess that the invention creates fewer new jobs than it displaces in those industries.

When displacement comes with greater economic productivity and higher aggregate incomes, we should see increased labor demand and new jobs in service sectors. This probably occurred during the era of mass production at the start of the twentieth century. Mass production sharply decreased middle-class families’ material costs, and many likely found themselves with much more disposable income. By contrast, automated customer support phone systems probably replaced countless human operators with minimal impact on aggregate income.

Finally, what kind of social safety nets are available for these displaced workers? If utilitarians decide that the benefit to society from economic productivity justifies the suffering from job displacement, how will we as a society help those harmed? Social technologies like unemployment insurance and universal basic income shift the utilitarian equilibrium here.

In summary, a good rule will depend on a pile of highly contingent parameters. Like: “Do not invent tools—even in a low-risk domain, even highly beneficial ones—which are likely to displace jobs much more rapidly than retraining, service sector demand, and social safety nets are likely to accommodate within a relatively short time.” The trouble is that this rule is impossible to evaluate, both in principle and for most particular instances. It requires clairvoyance, and interpretation of nebulous words like “accommodate.” (This isn’t just a problem for AI impacts—it’s a problem with utilitarianism and analytical moral systems in general.)

Some personal conclusions; caution around job-displacing AI

I can still draw a few personal conclusions from that mess of a rule. Given the extraordinary current pace around AI, I’m nowhere near confident that the constraints in that rule are satisfied. So: I won’t work on any AI application which will plausibly cause meaningful direct job displacement, until social safety nets seem likely to become much stronger, or until some other regulation or analysis eases my concerns of widespread economic upheaval.

I bite this bullet: I wouldn’t work on self-driving cars at the moment. I don’t like that conclusion, given the auto accident death toll. But this doesn’t mean I accept that we never get autonomous vehicles. It means that I want more analysis, or a policy change, or something which will make me less worried about rapidly displacing 2-3% of the US workforce amid a wave of other AI-driven unemployment.

I often hear utilitarian AI arguments like: “The printing press caused wars and atrocities, but surely you’re glad we have it.” But we’re in a very different moral situation. First, I don’t expect that Gutenberg and his contemporaries had any idea of the suffering they would unleash. Second, even if they did, I can’t imagine what action I would have preferred they take. By contrast, in the case of autonomous vehicles, we know (some of) the harm we’ll cause, and there are plausible ways to mitigate that harm.

For example, these vehicles are expected to cause huge economic productivity gains; so let’s tax them, and create pensions and programs for displaced workers. There’s a halo of other revenue sources: maybe we’ll tax insurers a fraction of what they’ll save paying medical bills associated with accidents. Shape the taxes to phase in as the technology is adopted, and out after a generation. Yes, we’d overtax in some instances and under-compensate some of those affected. I see that as probably fine. Something half as ambitious as this would probably be fine; lots of other creative solutions would probably be fine. “Let’s wait and see” seems less fine. There’s essential uncertainty, but the finance world is used to that; clever risk-mitigating instruments and if-then regulation can soften the edges here.

Look: I want to live in a society where as much meaningless work as possible is automated, and where, as a result, people can spend most of their time doing whatever they find meaningful. But there’s a great deal of path dependence in getting to that world. We’re on a path with very high costs, and I think we can find our way to a much better one. Until then, I’m comfortable with a policy which would defer the benefits of my contributions to projects like autonomous vehicles.

We run into similar tensions with rules like “Don’t invent anything which could be turned into a weapon.” Wouldn’t that forbid much high-energy applied physics work, and much synthetic biology? “Don’t invent anything which could lead to endangering the human species.” Wouldn’t that forbid nanotechnology research? Like the unemployment rule, these rules can only be rescued with a litany of complicated parameters. These rules are much further from any project I’m contemplating, so I’ll simply leave them in a broken state, and commit to not touching any project along those lines for now.

Christianity

“Love thy neighbor as thyself” is pretty great moral advice. In this formulation, it’s not just a statement about how you should act. It’s a statement about how you should feel. That’s important because the classic formulation of the Golden Rule—“do unto others as you would have them do unto you”—doesn’t seem to constrain my actions in this space very much. I’m in a relative position of power. I’m quite happy to have large swaths of my work automated away; I’m confident that I can find something else to do. I don’t mind if my work is co-opted in a commercial data set. And so on.

But, no—love thy neighbors. One thing that bothers me about much discussion around the harms of AI is that it’s easy to treat people as abstractions. Which regulations should we adopt? How should we fix situations where the model gives undesirable outputs? Which people should we allow to use the model, and under what terms?

“Love thy neighbor” pushes away from abstractions, and towards the particular. It says: no amount of utilitarian number-crunching justifies cruel indifference. It makes me connect with individual lives. One practical consequence is that I’m now actively collecting individuals’ stories of AI-driven job displacement and misuse. When reading stories like this one about an artist who feels their work has become much less meaningful, I can easily summon love for that person. What effect does that have on my actions? I can’t produce a systematic rule, but the feeling profoundly shapes the way I think about embarking (or not) on projects in this space. Positive personal stories of impact also provide a helpful influence: the point of all this work, in my mind, is to create flourishing.

More contemporarily, this is something I like about the “ethics of care” proposed by Carol Gilligan: moral philosophy tends to focus on duty to abstract rules, or to theoretical people. But so much of what we actually find good in the world depends on individual people’s relationships, their attention and care for particular other individuals they love. Part of the problem with making scalable software systems is that it automatically puts me in a stance of abstraction, of de-particularizing. Maybe this observation should push me toward Robin Sloan’s notion of software as a home-cooked meal: I’m not very concerned about doing harm with an AI-based invention I create for the sole use of three friends.

Insofar as Christianity considers groups of people, rather than individuals, it focuses our attention on the least fortunate. Classic formulations of utilitarianism sum over all people, a kind of egalitarianism. But I’m sympathetic to Christianity’s emphasis here. So even when I do utilitarian calculations about the costs of my projects, I commit to weighting impacts on the least fortunate more heavily.

Another nice Christian maxim is: “Speak up for those who cannot speak for themselves”. Decisions about AI-based systems are mostly getting made by a small group of technologists and venture capitalists. I feel a moral impulse to represent the views and interests of the people who aren’t present. This essay is one small example of that.

Rounding out our Christian buffet: “If your right hand causes you to sin, cut it off and throw it away! It is better to lose one of your members than to have your whole body go into hell.” My AI-directed creative impulses are my right hand, here. If I find that those impulses are leading me to make moral decisions that I regret, I should just ditch them. Christianity emphasizes personal sacrifice for the good of others. It sometimes asks for more than I would endorse, but I commit to this particular sacrifice if it seems at all requisite.

Buddhism

Traditional Buddhism might not really have an ethical system, but I still find its ideas quite helpful in this deliberation. For example, in the Buddhist tradition, there are three “poisons” which keep us trapped in suffering: attachment, aversion, and ignorance.

Attachment is a hungry, clinging desire. In the present dilemma, I struggle with attachment to achieving, to “producing output”, to others’ validation (“wow, amazing project!”), to being perceived as competent and innovative, to novelty. These sorts of attachments can never really be sated, and they motivate “unwholesome” action.

Buddhism’s proposed antidote to attachment is to free myself from these cravings: notice them as percepts, without identifying with them; then act from equanimity. That’s the project of a lifetime, but I find it quite a useful frame when thinking about potential AI-based work. It’s easy to notice: oh, yes, I’m drawn to that idea in this moment because I want approval. And as soon as I pay attention to it in that way, the hunger loses much of its power.

Aversion is an impulsive negative reaction to painful or unpleasant things, a flinching away that gives rise to fear, resentment, and anger. Here, I feel aversion around “falling behind”, being “stuck” in my work, being perceived as “soft” or “obsolete”. I’ll confess that I also feel aversion to constraining my work at all—the whole project of this essay—alongside aversion to being judged as morally “bad”.

One proposed antidote to aversion is “non-aversion”, which, like non-attachment, involves cultivating perception and equanimity through mindfulness. It’s pretty easy for me to notice aversion’s hand guiding my impulses around AI-based design when I pay attention. Another effective antidote is “loving-kindness”. This is like Christianity’s “love thy neighbor”, but bigger, embodied: love everyone; love yourself; love every living thing; stoke that feeling viscerally and bodily; cultivate an earnest wish for all to experience happiness and freedom from suffering. It’s a joyful feeling, and it absolutely keeps aversion at bay.

The third poison, ignorance, refers to Buddhism’s claims about the nature of reality. These ideas do bear on moral questions, but I’ll skip them to keep us from getting too deep into “philosophy seminar” territory.

The “poisons” aren’t really a virtue ethic, as I understand them. The claim isn’t that an action is righteous if and only if it’s done without attachment, aversion, or ignorance. But paying attention to these ideas has helped me think much more clearly as I consider my AI-based projects, and, I think, produce more ethical conclusions. A simple way to think about these poisons is: they’re a lens which reveals places where I’m acting from selfishness.

Tantra, via David Chapman

I only understand the Tantric tradition of Buddhism second-hand, through David Chapman. But I understand it to suggest another consideration: my creative impulses are important, morally! By “impulses”, I don’t mean my grasping attachment-based urges to “produce”, but the wide-eyed, curious, playful excitement for creation. To ignore or unilaterally flatten these impulses is to engage in a kind of self-destruction. No, this doesn’t give my creative spirit unlimited license—but it’s a legitimate party to the moral deliberation.

Chapman’s interpretation is that “being ‘morally correct’ in an ordinary, unimaginative, conformist way may be an excuse for avoiding the scary possibility of extraordinary goodness, or greatness.” He describes the higher aspiration as nobility: “the aspiration to manifest glory for the benefit of others.” I think this is very beautiful, though words like “glory” and “benefit” must do a lot of work to guide appropriate action in difficult situations like the ones we’re discussing here.

Aristotelianism

For Aristotle, the ethical thing to do in a given situation is what a supremely virtuous person would do. And he proposes that a virtuous person is one who has found a happy mean between excess and deficiency—for example: gentleness (the mean), rather than irritability (excess) or servility (deficiency).

I want to mention Aristotle’s virtue of courage, rather than rashness (excess) or cowardice (deficiency). For a while, I felt so confused and overwhelmed by the ethics of this situation—so afraid that I’d make a harmful choice—that I felt like running away from the problem. Just throwing up my hands and having nothing to do with AI. But if I make that decision, I don’t want to do it out of fear or overwhelm; I want to make that choice explicitly, courageously. This is an ambiguous, unknowable situation. I’m not going to be able to reason my way to certainty. I need to take a stand.

Confucianism

One important idea I take from the Analects is that society itself has moral patienthood. This notion is surprisingly absent in most other ethical frameworks. I don’t endorse Confucianism’s proposed resolution—that we should maintain harmony by fulfilling our “natural social roles”—but I do think AI threatens society.

Threatening an evil tyrant can be just, so it can be just to threaten an evil society. But I don’t necessarily accept the premise that our society merits that threat, and even if I did, it’s far from clear that AI will reform society in a corrective direction. If I think of society as a person, what might it mean to love it as a neighbor? To want it to grow, sure—but also to grieve its suffering?

How does this play out for my proposed reading user interface? I really don’t know. I think it mostly pushes around my utilitarian calculus, and makes me apply the precautionary principle somewhat more strongly to potential harms.

John Dewey’s pragmatism

I first encountered John Dewey through his writing on education reform, but he’s also written some of my favorite moral philosophy. He argues that in a modern, dynamic society, there can be no fixed ethical code. Rather than searching for some kind of ultimate answer, we should focus on finding ways to improve our moral judgments. Then we can apply those methods iteratively. His proposed methods are rooted in democratic deliberation. We should draw together the value judgments and experiences of those affected by a decision, ensure that feedback will flow to decision-makers, and make moral decision-makers accountable to those they affect.

We’re far from these democratic ideals in the creation and deployment of AI-based systems. People affected by these systems have effectively no voice or recourse. Dewey emphasizes continuous democratic involvement through constant social interaction; instead, we have a small, insular group, mostly in San Francisco, making decisions which affect all.

I’d like to experiment with ways to make my work in this space better embody Dewey’s democratic ideals. Before deploying any AI-based systems of my own, I’ll create some channel for public participation in that decision. The channel will remain open, so that I can learn from feedback, and so that I can un-deploy the system if the public decides that’s the right course. I really don’t know what the appropriate details might be here, but I expect to develop them iteratively, in public.

Sprinkling the word “democratic” here doesn’t guarantee that my work won’t do harm. One problem for Dewey’s philosophy, particularly for AI, is that it emphasizes experimentation and learning from experience. But with sufficiently powerful systems, a single iteration can do tremendous damage. I see this lens as one piece of a larger approach to moral discovery, and not one I’d emphasize when a decision has large, irreversible consequences.

Democracy also requires informed participation. If people don’t understand what these systems are, or what they can do—both for good and for ill—it’ll be hard to involve their views in the moral deliberations. I’d like to find a way to help here, perhaps by making some good explanatory media.

Phenomenology

I’ll admit: I’ve struggled to deeply grasp Husserl and his successors. But I’m willing to mischaracterize phenomenology if it produces some insight that seems helpful. Applying its ideas here, my understanding is that I should ask: what is it like to be the one making this ethical decision? What are the qualities of that experience? Do I feel like I’m trying to “get away with” something? Do I feel a sense of pride and capacity? Those feelings, in those circumstances, contain real, meaningful moral cues.

A month ago, the prospect of working on an AI-based design of any kind made me feel internally quite contorted. After much deliberation and the tentative commitments I’ve outlined in this essay, I notice that my internal experience is much more settled. I still feel the nagging hint of motivated reasoning, but it’s subtler, and a sense of generosity to myself and others is more central.

Positive moral obligations for helping with AI impacts?

Most of this essay has focused on moral constraint: actions I must avoid taking, motives I must avoid having. I know many people in this space who have reasoned themselves into moral obligation. If AI could cause such tremendous harm, and they could conceivably help avert that, they feel a duty to contribute. If I’m so worried, why aren’t I shifting my research focus to directly mitigate AI impacts?

I’m instinctively wary of ethics which create strong positive obligations to act. There are lots of meaningful things I could be doing. I feel I should spend my time working on something useful and good, which also aligns well with my interests and capabilities. I’ve been watching “AI safety” and its adjacent fields from the sidelines for years, and I haven’t yet spotted any opportunities which check those boxes. But it’s also not reasonable to expect those ideas to fall into my lap. If any are to be found, they’ll come from tinkering, reading, and discussion. I’ve ramped up the time I’m spending on such things, though still as a secondary activity for now.

One challenge here is that most “AI safety”-adjacent projects are more ambiguous, morally, than they might seem. Efforts to democratize and open-source models might help avoid oligarchic concentration, but they also exacerbate misuse threats. Technical projects which improve our understanding or control over large models also seem likely to accelerate those models’ capabilities. For example, RLHF was invented to help align large models with human preferences, but it also unlocked much more capable models. In the end I may find myself contributing to one of these projects, even if it might do some harm, but I want to highlight the issue: “AI safety” vs. “AI capabilities” is a pervasive and misleading dichotomy.

Conclusion

Where does all this leave me? Friends aware of my moral deliberations have asked for my “take-aways.” Well, one of the take-aways is that this issue does not compress neatly into take-aways. There is no systematic conclusion. I won’t abstain completely from inventing AI-based systems, but I’ve limited myself to a pretty narrow subset. I think my speculative AI-based reading interface is OK for now, with numerous caveats.

I’ve made some initial commitments:

  • I won’t publish or deploy AI-based systems unless I feel they’re likely to be of significant social benefit.
  • I won’t uncritically amplify others’ AI-based systems or their capabilities on social networks.
  • I won’t work on any AI-based system likely to cause meaningful direct job displacement, or which could be directly weaponized, or which could produce disastrous accidents.
  • I’m collecting a public list of personal stories of AI impact.
  • I’ll experiment with some channel of democratic participation for my first AI-related project’s dissemination.

And some broader resolutions:

  • I’ll bar myself from AI-related work if I believe it’s morally corroding me.
  • In dissemination: zero hype; anti-marketing; make the work anti-flashy; focus on the conceptual design rather than the AI components.
  • When using a utilitarian lens, weight impacts on the least fortunate more heavily.
  • I’ll cultivate awareness of my attachments and fears around projects in this space, and avoid actions which seem driven by those distortions.
  • I’ll ramp up my reading and conversations around AI impact mitigation projects, looking for opportunities to contribute.
  • I’ll also be looking for ways to contribute to public understanding of AI systems and their impacts, perhaps through some explanatory media project.

These points are all tentative: the goal, for me, has been to find a solid starting point for iteration and experimentation. This space is so dynamic that I’m sure my views will evolve rapidly as events unfold, and as I learn more.


I’d like to thank Avital Balwit, Ben Reinhardt, Catherine Olsson, Danny Hernandez, Joe Edelman, Leopold Aschenbrenner, Matthew Siu, Nicky Case, Sara LaHue, and Zvi Mowshowitz for helpful discussion; and particularly David Chapman, Jeremy Howard, and Michael Nielsen for conversations which have substantially shaped my views here. None of these people should be understood to endorse my positions.

Last updated 2023-07-13.