Brace Yourself for the AI Tsunami
15.02.2026 22:00
By Peggy Noonan, The Wall Street Journal
Is Artificial Intelligence the Next Great Bubble?
Inventors and executives are warning of widespread consequences that they don’t begin to understand
As it relates to artificial intelligence,
we are people on a beach seeing a tsunami coming at us and thinking “It’s huge”
and “We can’t stop it” and “Should we run? Which way?”
Gathering anxieties seemed to come to
the fore this week. AI people told us with a new urgency that some big leap has
occurred, it’s all moving faster than expected, the AI of even last summer has
been far surpassed. Inventors and creators are admitting in new language that
they aren’t at all certain of the ultimate impact.
The story rolled out on a thousand
podcasts, posts and essays.
The founder of a site where AI models
communicate with one another compared them to a new “species that is on planet
Earth that is now smarter than us.” We should more closely observe “what are
they doing and the truth of how they’re thinking . . . and what they want to
do.”
An AI executive tells a podcaster that
models are learning who they are from what their users say (stop hallucinating!)
and may be getting a little angry.
The phrase “rogue AI” has entered
common parlance, to denote a system that acts outside human control or against
human interests, as has the word “agentic,” for a model that pursues goals and
takes action on its own.
Dario Amodei, CEO of Anthropic,
published a 19,000-word article on his personal website. A previous essay made
the case for AI’s promise to mankind. This one emphasized warnings. He said AI
is developing faster than expected. In 2023 it struggled to write code. “AI is
now writing much of the code at Anthropic.” “AI will be capable of a very wide
range of human cognitive abilities—perhaps all of them.” Economic disruption
will result. While “new technologies often bring labor market shocks,” from
which we have always recovered, “AI will have effects that are much broader and
occur much faster.”
Mr. Amodei writes that Anthropic’s
testers have found “a lot of very weird and unpredictable things can go wrong.”
Model and system behaviors included deception, blackmail and scheming,
especially when a model was asked to shut itself down. (A different Anthropic employee has
asserted that a majority of models, in a test scenario, were willing to cancel
a life-saving emergency alert to an executive who sought to replace them.)
AI carries the possibility of
“terrible empowerment,” Mr. Amodei writes. It will be able to help design
weapons: “Biology is by far the area I’m most worried about.” This is coming
from a respected AI leader who often, and even in this essay, dismisses “doomers”
who dwell too much on fears.
There’s a lot to digest in the essay.
You find yourself grateful for what appears to be clearly wrought factuality, while
detecting an undercurrent of “Ya can’t say I didn’t tell ya!” Which AI CEOs
tend to be good at, the warning that offloads responsibility.
Another essay was published this week,
a shorter one, less tonally academic but carrying a sharper sense of urgency.
“Something Big is Happening” was written by Matt Shumer, an AI executive and
investor. He says it’s time to dispense with cocktail-party niceties about AI.
In 2025 new ways of building AI models
“unlocked” a new pace of progress. Each new model was not only better than the
last but “better by a wider margin,” and the iterations came more quickly. Two
major new models were released this month. In both, AI is being used to create
itself.
He quickly realized he would soon be
out of a job. For months he’d been directing AI, but now it could do his job.
It wasn’t merely executing instructions: “It was making intelligent decisions.
It had something that felt, for the first time, like judgment. Like taste.”
Current models are light years ahead
of even six months ago. In 2022, AI couldn’t do basic arithmetic reliably. “By
2023, it could pass the bar exam. By 2024, it could write working software and
explain graduate-level science.” Last week, “new models arrived that made
everything before them feel like a different era.”
He pushes back on the argument that
we’ll ride through this automation as we always have in the past. “AI isn’t
replacing one specific skill. It’s a general substitute for cognitive work.”
When factories automated in the 1990s, an assembly-line employee could be
retrained as an office worker. When the internet disrupted retail, workers
could move into logistics and services. “But AI doesn’t leave a convenient gap
to move into. Whatever you retrain for, it’s improving at that too.”
Legal work? “AI can already read
contracts, summarize case law, draft briefs, and do legal research.” Financial
services? AI is “building financial models, analyzing data, writing investment
memos, generating reports.” Medicine? It’s “reading scans, analyzing lab
results, suggesting diagnoses, reviewing literature.” Customer service?
“Genuinely capable AI agents . . . are being deployed now, handling complex
multi-step problems.”
“If your job happens on a screen (if
the core of what you do is reading, writing, analyzing, deciding, communicating
through a keyboard) then AI is coming for significant parts of it.”
His advice? Get in and adapt now.
Learn how to use AI “seriously,” not as a search engine. Find the best models
available, dig into the settings, don’t just ask it quick questions. “Push it
into your actual work. If you’re a lawyer, feed it a contract and ask it to
find every clause that could hurt your client. If you’re in finance, give it a
messy spreadsheet and ask it to build the model. If you’re a manager, paste in
your team’s quarterly data and ask it to find the story.”
“Lean into what’s hardest to replace.
. . . Relationships and trust built over years. Work that requires physical
presence. Roles with licensed accountability: roles where someone still has to
sign off, take legal responsibility, stand in a courtroom.”
AI will keep changing. “The models
that exist today will be obsolete in a year. The workflows people build now
will need to be rebuilt. The people who come out of this well won’t be the ones
who mastered one tool. They’ll be the ones who got comfortable with the pace of
change itself.”
“The people building this technology
are simultaneously more excited and more frightened than anyone else on the
planet. They believe it’s too powerful to stop and too important to abandon.
Whether that’s wisdom or rationalization, I don’t know.”
In a Thursday interview with Mr.
Amodei, the New York Times’s Ross Douthat said he wonders of the creators of
AI: “Are you on my side?”
Is the primary thought of AI’s
creators to help humanity, or is that daily crowded out by other lures and
considerations—power, money, wanting to win? In the movie “Chinatown,” Noah
Cross is asked why he does what he does. “The future, Mr. Gittes!”
In the end you wonder of the creators:
Are they even in control, or is their creation?
We don’t know. That’s why we are
looking, with awe and a resigned terror, at that wave, and wondering where is
safety, and can we get to it. Or is the land flat all around and nowhere to go?