The AI Hype Machine is Running on Empty.
After dumping hundreds
of billions of dollars into AI startups, investors are discovering that the
payoff to date has been extremely underwhelming.
EXPERT OPINION BY HOWARD TULLMAN, GENERAL MANAGING PARTNER, G2T3V AND CHICAGO HIGH TECH INVESTORS @HOWARDTULLMAN1
JUL 23, 2024
In a column
in January I noted that in the practical world of business, where real results
matter rather than hype and bragging rights, the smart players were
starting to back away from their
substantial commitments and investments in generative AI tools and projects.
Especially the guys who write the checks and keep score. Yeah, they were all
still talking a good game, but fewer and fewer of them were putting their money
where their mouths were.
The main reason seems to
be that the near-term prospects for seeing concrete growth and improvement in
revenues as opposed to cosmetic reductions in admittedly overbuilt headcounts
aren't very encouraging. In many cases, any paths to eventual bottom-line
benefits weren't even apparent because the operating costs of these new large
language model (LLM) engines are so high that the businesses were spending
serious capital dollars to generate digital dimes - if they were lucky.
Compelling, substantive use cases for these tools as opposed to novelties,
chatbots and toys have been few and far between.
What has really been
emerging is the fact that, in addition to AI being ridiculously costly and
resource-intensive, after all the manipulation of the underlying data is done,
you still need to hand the output off to a human being to actually get something
done. Instructions aren't the endpoint of virtually any process; whether it's
manufacturing, medicine, or movement, it's real-world implementation and
execution by people that ultimately gets the tasks done.
Things might be getting
done faster, but it's by no means certain that the outputs are better. And it's
absolutely clear that these outputs aren't new or innovative because they're
ultimately constrained by the limits of their training data to making what
amounts to best guesses at what's next based on what's happened in the past.
You still can't Google the future. And if you're not smart and sharp enough
to ask exactly the right
questions in your prompts, you get garbage for an answer.
There's no big prize for even having the best answer to the wrong question.
Bumping the speed and
the scope of analysis or review may create some efficiencies, but these
"improvements" don't add "intelligence" until the outputs
are evaluated and employed by the human end users. No one's willing to turn
these systems loose until their results, findings and conclusions have been
vetted and fine-tuned by humans. Hallucinations might be a more polite term
than lies or fabrications by the machines, but ultimately no one is going to
trust these systems with their lives or their livelihoods any time soon.
There's still talk about
the next generation or newest black box that will work some kind of magic that
not only scales but shrinks costs as well, but there's no evidence that it's
anything more than a pipe dream about a new version of Moore's Law. One of the
flaws in this analogy is that Moore's Law rests on experience gained in
production over time, which enabled exponential improvements in circuitry.
Sadly, to date, it's clear that in the GPT world
we're interested bystanders at best and, while it's fascinating to watch, we
rarely learn much of anything beneficial that will let us improve the process.
Nor is there evidence that simply by adding more data and more computational
power, we do anything to improve or expand the outputs so that they become
self-effectuating and autonomous.
Interestingly enough, we
are finally starting to see that even the shameless hucksters and promoters on
Wall Street are taking a hard second look and changing their tunes from rabid
generative AI boosterism to a far more tentative endorsement that smells more
defensive than aggressive. Research reports, press conferences, and
presentations from most of the leading financial firms, led by Goldman Sachs,
are beginning to observe and report on the empirical evidence in the field,
which suggests that they may have completely misunderstood what's happening
with this latest technology. Two key things are becoming obvious and each of
them is largely contrary to the speeches and spiels we've been hearing from
these guys for the last two or three years.
First, there are entire
industries where the ultimate impact of these kinds of tools will be largely
immaterial - construction is a good example. Even Goldman Sachs suggests that
only around 6% of the fieldwork in construction and extraction businesses will
be automated, and the productivity gains would most likely be a
wash against the additional costs. There may be some augmentation, but even those
tasks will continue to be directed and executed by onsite workers. Fast food,
customer service, and transportation will be other areas where it will be very
difficult - without sacrificing the quality of engagement and experience - to
dramatically reduce personnel. We've already seen all of the major QSR players,
including McDonald's, take steps to back away from some of their initial AI
implementations.
Second, the most likely
jobs to be eliminated in large numbers through substantial task automation
(30% to 50%) are NOT likely to be the low-paying positions (no collar and blue
collar), which require physical labor and direct interaction with customers and
co-workers. Instead, white collar and new collar (knowledge workers) positions
including administrative jobs, legal work, financial analysts, marketing, and
writers and editors will take the hit. Unilever in Europe is already leading the pack in this
workforce pruning. The two critical defining characteristics of the
targeted jobs will be that (1) it is very difficult in many of these cases to
directly measure productivity and (2) senior managers looking for easy cuts and
economies with only passing concerns for content, originality, innovative
analysis and quality will happily trade out these positions for machine-created
material that may well be drawn and lifted from other similarly situated
creators.
When you look closely,
as everyone has finally begun to do, the only conclusion you can draw is that
unless you're Nvidia and basically producing the picks and shovels for this
industry (and largely without competitors), there's unlikely to be very much there
there. And what is there will inure (as usual in tech) to the biggest of the big
guys. As the saying goes, when the elephants start dancing, the
grass takes a beating.