Michelle Goldberg
He Studied Cognitive Science at Stanford. Then He Wrote a Startling Play About A.I. Authoritarianism.
Feb. 16, 2026
Karan Brar, who plays Maneesh in “Data.” Credit: Rachel Papo for The New York Times
When I saw “Data,” a
zippy Off Broadway play about the ethical crises of employees at a
Palantir-like A.I. company, last month, I was struck by its prescience. It’s
about a brilliant, conflicted computer programmer pulled into a secret project
— stop reading here if you want to avoid spoilers — to win a Department of
Homeland Security contract for a database tracking immigrants. A brisk
theatrical thriller, the play perfectly captures the slick, grandiose language
with which tech titans justify their potentially totalitarian projects to the
public and perhaps to themselves.
“Data is the language of
our time,” says a data analytics manager named Alex, sounding a lot like the
Palantir chief Alex Karp. “And like all languages, its narratives will be
written by the victors. So if those fluent in the language don’t help democracy
flourish, we hurt it. And if we don’t win this contract, someone else less
fluent will.”
I’m always on the
lookout for art that tries to make sense of our careening, crisis-ridden
political moment, and I found the play invigorating. But over the last two weeks,
as events in the real world have come to echo some of the plot points in
“Data,” it’s started to seem almost prophetic.
Its protagonist, Maneesh, has created
an algorithm with frighteningly accurate predictive powers. When I saw the
play, I had no idea whether such technology was really on the horizon. But this
week, The Atlantic reported on
Mantic, a start-up whose A.I. engine outperforms many of the best human
forecasters across domains from politics to sports to entertainment.
I also wondered how many
of the people unleashing A.I. tools on us really share the angst of Maneesh and
his co-worker, Riley, who laments, “I come here every day and I make the world
a worse place.” That’s what I think most people who work on A.I. are doing, but it
was hard to imagine that many of them see it that way, immersed as they are in a culture that
lauds them as heroic explorers on the cusp of awe-inspiring breakthroughs in
human — or maybe post-human — possibility. As a New York magazine review of
“Data” put it, “Who gets so far at work without thinking through — and long
since justifying — the consequences?”
But last week, Mrinank
Sharma, a safety researcher at Anthropic, quit with the sort of open letter
that would have seemed wildly overwrought in a theatrical script. “The world is
in peril,” he wrote, describing
constant pressure at work “to set aside what matters most.” Henceforth, said
Sharma, he would devote himself to “community building” and poetry. Two days
later Zoë Hitzig, a researcher at OpenAI, announced her resignation in The New York Times,
describing the way the company’s chatbot could use people’s intimate data to target them
with ads.
I reached out to the
writer of “Data,” Matthew Libby, because I was curious about how he got so much
so right, and learned that before he worked in theater, he studied cognitive
science at Stanford. More specifically, he has a degree in symbolic systems, an
interdisciplinary program that combines subjects including computer science,
philosophy and psychology. He always intended to be a writer, he said, but
wanted to make sure he had something to write about.
Not surprisingly, Libby,
who graduated in 2017, felt the pull of Silicon Valley, at one point
interviewing for an internship at Palantir. He was heartbroken when he didn’t
get it. But when he came across a 2017 Intercept story headlined “Palantir
Provides the Engine for Donald Trump’s Deportation Machine,” he wondered what
he would have done if he’d worked there, which is how “Data” was born.
Perhaps the most interesting thing
about “Data” isn’t its insight into those who leave companies making dangerous
A.I., but into the majority who stay, and the stories they tell themselves
about what they’re building. “My experience of the tech industry is just that
there’s always this air of inevitability,” said Libby. “You know, ‘We can’t
pause any of this because it’s coming no matter what, and don’t you want to be
the person doing it?’”
Among technologies, A.I.
is unique in that those who are creating it — and profiting off it — will from
time to time warn that it could destroy humanity. As Sam Altman said in 2015,
shortly before helping found OpenAI, “I think that A.I. will probably, most
likely, sort of lead to the end of the world. But in the meantime, there will
be great companies created with serious machine learning.” A slightly truncated
version of that quote appears as an epigraph in Libby’s script.
Just last month Dario
Amodei, who leads Anthropic, the most seemingly responsible of the A.I. giants,
published an essay titled
“The Adolescence of Technology,” about potential A.I. apocalypses. A.I.
systems, he wrote, could turn against humankind or help to create biological
weapons. They could be used to build a digital panopticon more comprehensive
than anything existing today, or develop propaganda so precisely tailored to
its users that it would amount to brainwashing.
But as Amodei sees it,
these hellish possibilities are less reasons to slow A.I. development, or to
keep it out of the hands of the surveillance state, than to make sure that the
United States stays ahead of China. “It makes sense to use A.I. to empower democracies
to resist autocracies,” he wrote. “This is the reason Anthropic considers it
important to provide A.I. to the intelligence and defense communities in the
U.S. and its democratic allies.” His argument would be sounder if the United
States were still, in any meaningful sense, part of a coalition of democracies,
rather than a nation ruled by an aspiring autocrat who is propped up in no
small part by the tech industry.
In “Data,” Alex makes a similar
argument for bidding on the Department of Homeland Security contract. “We’re
the fighters protecting democracy,” he says. “China already has an automated
social credit system they’re exporting to developing nations. Russia has the
most targeted disinformation infrastructure known to man. That’s what they’re innovating towards. If we stop innovating? We lose our
lead.” The threat of authoritarianism abroad becomes a rationale for building
the tools of digital authoritarianism at home. Too bad it’s not just fiction.