Ethical AI in The New Yorker (and elsewhere)

#data-science #ethics #personal

*New York City: Bird's-Eye View* (1920) by Joaquín Torres-García, a gouache and watercolor painting of an abstract, colorful New York City scene; via the Yale University Art Gallery.

In a recent issue of The New Yorker, Andrew Marantz takes a detailed look at two prominent communities locked in an ideological battle over AI. “Doomers” are convinced that advances in AI may spell the (possibly literal) end of the world, while the “accelerationists” believe that it’s the doomers who are threatening humanity by stifling innovation and world-improving technological progress.1 “Among the A.I. Doomsayers” (or, as it’s titled in the print issue, “O.K., Doomer”) is an engrossing and well-reported piece that I recommend to anyone interested in the present and future of artificial intelligence, but it left me feeling that important voices were missing.

After reading the piece, I dashed off an email to The New Yorker with a few of my own thoughts on the topic. I was pleased to learn that my letter was chosen for publication in the April 8th, 2024, issue. My somewhat slapdash note was edited from a rambling 300 words to a tight 199, which you can find on the New Yorker website or below, quoted in full:

Ethical A.I.

Andrew Marantz’s appraisal of two Silicon Valley camps that hold conflicting ideas about A.I.’s development—“doomers,” who think it may spell disaster, and “effective accelerationists,” who believe it will bring unprecedented abundance—offers a fascinating look at the factions that have dominated the recent discourse (“O.K., Doomer,” March 18th). But readers should know that these two vocal cliques do not speak for the entire industry. Many in the A.I. and machine-learning worlds are working to advance technological progress safely, and do not suggest (or, for that matter, believe) that A.I. is going to lead society to either utopia or apocalypse.

These people include A.I. ethicists, who seek to mitigate harm that A.I. has caused or is poised to inflict. Ethicists focus on concrete technical problems, such as trying to create metrics to better define and evaluate fairness in a broad range of machine-learning tasks. They also critique damaging uses of A.I., including predictive policing (which uses data to forecast criminal activity) and school-dropout-warning algorithms, both of which have been shown to reflect racist biases. With this in mind, it can be frustrating to watch the doomers fixate on end-of-the-world scenarios while seeming to ignore less sensational harms that are already here.

(To Marantz’s credit, he makes a passing parenthetical reference to the less sensational middle ground of the AI culture war: “And then there are the normies, based anywhere other than the Bay Area or the Internet, who have mostly tuned out the debate, attributing it to sci-fi fume-huffing or corporate hot air.”)
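
As a brief aside for the data-science crowd: the fairness metrics mentioned in the letter can be surprisingly simple to state in code, even if choosing the right one is anything but. Here's a hypothetical sketch of demographic parity, one of the most basic criteria, which asks whether a model's positive-prediction rate is the same across groups. The function and toy data are my own illustration, not drawn from the letter or from any particular fairness library:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap between the highest and lowest positive-prediction rates
    across groups: 0.0 means every group is selected at the same rate."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Toy predictions: the model flags 75% of group "a" but only 25% of group "b".
preds = np.array([1, 1, 1, 0, 1, 0, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(demographic_parity_difference(preds, groups))  # 0.5
```

Even this toy hints at why the work is hard: demographic parity, equalized odds, calibration, and other common criteria are in mathematical tension and generally can't all be satisfied at once, which is part of what makes "defining and evaluating fairness" a genuine research problem rather than a box to tick.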

The talented editors at The New Yorker deftly excised the wordy fluff from my letter, so I’ll try not to add it all back here and instead direct you to others who have written incisively on the work done to uncover and remediate harms created by AI systems today.


  1. Confusingly and sometimes frustratingly, these groups often overlap. A one-sentence “Statement on AI Risk” that compares the risks of AI to nuclear war is signed by Sam Altman (among many others), whose company OpenAI exists in a quantum superposition of preaching about the risks of AI while also making a lot of money off of the technology, and lobbies both for and against regulation. Several OpenAI employees quit the company, in part over safety concerns, to form their own AI behemoth, Anthropic, where employees agonize daily over the implications of their work. Marantz quotes “rationalist” blogger Scott Alexander on how the factions have become strange bedfellows: “Imagine if oil companies and environmental activists were both considered part of the broader ‘fossil fuel community.’”