Don't Be Frightened--or Fooled--by the
A.I. Monster
A dark "secret" from my
past is resurfacing in the form of businesses that are selling nonsense and
calling it artificial intelligence.
I’m embarrassed. I
feel a little bit like Frankenstein’s father. The “monster” I built is somewhat
more mundane than the big guy of fiction. On the other hand, my creature is
real - it’s alive! - and has taken on a life of its own, morphing into
something that’s just as evil and mendacious. Worse yet, my creation is
spawning a whole new generation of artificial intelligence impostors and other
simple macros masquerading as intelligent machines.
I wrote briefly about this problem and the rampant confusion in
a recent post, but I think it needs some further
explanation so we can all try to get on the same page and set some basic ground
rules about this A.I. stuff.
About 40 years ago, I built a relatively simple system that I
named the “consultant in a box.” The system linked specific numerical scores
and behavioral rankings with phrases and texts, which were then combined by a
word processor into what appeared to be evaluative paragraphs prepared by a
sociologist or psychologist. We sometimes jokingly called this glorified Wang
program our “shrink on a stick” because it created frighteningly convincing
formulations that could fool most readers and reviewers into thinking the
reports were the product of thorough research and thoughtful analysis. They were
instead pro forma pap
being poured out of a production line.
On its very best days,
my little monster machine would put out dozens of slick little synopses that
ranked and rated salespeople and job seekers. These rankings were no better
than, and about halfway between, horoscopes and fortune cookies. But they were
completely convincing because we had figured out how to quickly, easily and
inexpensively tell a whole lot of people just what they thought they
wanted to hear.
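For anyone curious about just how thin the trick really was, here is a minimal, purely hypothetical sketch of the mail-merge logic behind a system like that one; the traits, score bands and canned phrases below are invented for illustration and aren't the actual content we used.

```python
# Hypothetical sketch of a "consultant in a box" style report generator.
# The traits, score bands and canned phrases are invented for illustration;
# the real trick was simply mapping scores to prewritten text and stitching
# the results into something that read like bespoke analysis.

CANNED_PHRASES = {
    "drive": {
        "high": "displays an exceptional, self-starting drive to close",
        "mid": "shows steady motivation with room for further coaching",
        "low": "may need close supervision to sustain momentum",
    },
    "empathy": {
        "high": "reads prospects with unusual sensitivity",
        "mid": "builds adequate rapport in most selling situations",
        "low": "tends to push product before establishing trust",
    },
}

def band(score: int) -> str:
    """Collapse a 1-100 score into a crude band."""
    if score >= 70:
        return "high"
    if score >= 40:
        return "mid"
    return "low"

def generate_report(name: str, scores: dict[str, int]) -> str:
    """Stitch canned phrases into what looks like a bespoke evaluation."""
    sentences = [
        f"{name} {CANNED_PHRASES[trait][band(score)]}."
        for trait, score in scores.items()
    ]
    return " ".join(sentences)

print(generate_report("Pat", {"drive": 82, "empathy": 45}))
```

That's the whole engine: lookups and concatenation, dressed up in full sentences.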
In the years that followed, I incorporated my evaluation engine
into a variety of different technical and mechanical environments. It even
eventually directed the way various characters reacted and responded to choices
made by each player in one of my more successful CD-ROM computer games, called
“ERASER TURNABOUT” and published by Warner Bros. Interactive. The way the
player initially responded to a detailed “interactive” video conversation with
the psychiatrist in the game determined how the game progressed, how the
player’s journey unfolded, and his or her likely success. The system
essentially produced a different variation of the game every time it was
played. These days - literally decades later - companies like Narrative Science turn baseball box scores into newspaper
stories and stock stats into portfolio analyses, albeit with a touch more
science and a little less schmaltz.
But my shabby past
came painfully back to haunt me just recently during a board meeting, as we sat
reviewing reports on prospective candidates while trying to find a great new
head of sales for one of my portfolio companies. One of the participants
started trying to parse and analyze a couple of the boilerplate comments buried
in a bogus report that was put together by the company’s HR team. She might
just as well have been reading tea leaves. Turns out the HR guys used some
outside consulting/recruiting firm that was in turn using a system just like my
old one to crank out this crap and try to convince its clients that what they
were reading had some actual connection to reality.
If it hadn’t been so
ludicrous (no offense to Ludacris), it would have just been pitiful to see such
a waste of time and money. I wouldn’t rely on a program like this to pick a
horse in the fifth race at Pimlico, much less to select the person you were
trying to hire to help you build your business. But there we were watching
someone trying to make sense out of sentences arranged by a software program that
brought about as much substance to the job as a server aligning the silverware brings to the
task of setting the table. The server knows exactly where to place the spoon,
but hasn’t the slightest idea of whether you’re going to use it to eat your
soup or your spumoni. It’s all a matter of placement and proximity - location
and language - and not about performance or personality. Just because you know
where to stick a fork doesn’t mean that you understand what to do with it.
And that’s what got me
thinking again about what’s so wrong about the way too many people are talking
about artificial intelligence. As I’ve said before, true A.I. - when it
arrives - won’t be about business process automation. This is the easy
stuff that bots ought to be doing already in a bunch of big businesses. A.I. is
not as simple as predetermined pattern recognition (or tagging a million pix
for future matching), which is really all about accessing memory. Simply asking
a machine to find and match text in a database that aligns with the content of
queries initiated by a user isn’t moving the needle forward. It’s certainly not
to be confused with “reading” or with exhibiting any actual intelligence. And
finally, there’s nothing to get all excited about regarding accelerated data
sourcing, which is nothing more than rapid recall and retrieval. So, what’s a
simple test for the real A.I. of the future?
I think A.I. comes down to two simple words: Extraction and
Extrapolation. A true A.I. system will perform both functions without
supervision or ongoing direction. It will have procedural rules, some data
management protocols, and guard rails, but no a priori restrictions or limitations.
Extraction in
this context means that the system will continuously review the flow of data
(which is basically unstructured) and from the data flow it will derive and
identify behaviors, frequencies and trends - not by comparing them to
pre-existing models or patterns, but instead by finding new ones that were
previously unknown, unbounded or otherwise unidentified. The determination of
the ranges and boundaries of these new “objects” will be among the most
critical chores of the new systems, which will need to ascertain the extent,
perimeters and parameters of the new patterns and objects by applying new
measurements of power, density and frequency to the data flows. As the power
and presence of the new objects diminish at the margins, the boundaries of
the new phenomena will be ascertained and locked in.
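To make that a bit more concrete, here is a hedged sketch in Python; it borrows an off-the-shelf density-based clustering routine (scikit-learn's DBSCAN) purely as a stand-in for that kind of unsupervised boundary-finding, and the "data flow" it chews on is synthetic.

```python
# A sketch of "extraction" as described above: finding previously unknown
# clusters in an unstructured data flow without pre-existing models, and
# letting the cluster boundaries fall where density diminishes at the margins.
# DBSCAN is used only as a familiar stand-in for the idea; the data is synthetic.

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Simulate an unlabeled "data flow": two dense regions of behavior plus scattered noise.
flow = np.vstack([
    rng.normal(loc=(0, 0), scale=0.3, size=(200, 2)),
    rng.normal(loc=(4, 4), scale=0.5, size=(200, 2)),
    rng.uniform(low=-2, high=6, size=(40, 2)),   # background noise
])

# No a priori categories: eps and min_samples act as the "density and frequency"
# thresholds that decide where a new object begins and ends.
labels = DBSCAN(eps=0.6, min_samples=10).fit_predict(flow)

for label in sorted(set(labels)):
    members = flow[labels == label]
    name = "noise" if label == -1 else f"new object {label}"
    print(f"{name}: {len(members)} points, "
          f"centered near {members.mean(axis=0).round(2)}")
```

The point isn't DBSCAN itself; it's that nothing told the system in advance how many objects to find or where their edges would be.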
Extrapolation in
this context means that the system will have the independent capacity to
capture these new patterns and objects and, beyond that, to rationally build
upon them, expand them and - most particularly - generalize their patterns
and behavior into other areas, both adjacent and remote. Critically, the fundamental
activity will not be the incremental expansion of prior experiences and
analytical results, but the creation and development of new projections and
anticipatory expectations of future behaviors and activities.
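And to illustrate the second half, here is an equally toy-sized sketch: once a pattern has been extracted, the system projects its behavior forward instead of merely replaying past matches. The weekly counts are invented for the example.

```python
# A toy stand-in for "extrapolation": once a new pattern has been extracted,
# project its behavior forward rather than just recalling what already happened.
# The weekly counts below are invented for illustration.

import numpy as np

weeks = np.arange(1, 9)                          # eight weeks of observed activity
counts = np.array([3, 4, 6, 9, 13, 18, 24, 31])  # how often the new pattern appeared

# Fit a simple quadratic trend to the observed behavior...
coeffs = np.polyfit(weeks, counts, deg=2)
trend = np.poly1d(coeffs)

# ...then anticipate the next four weeks instead of waiting to see them.
future_weeks = np.arange(9, 13)
projection = trend(future_weeks)

for week, value in zip(future_weeks, projection):
    print(f"week {week}: expected ~{value:.0f} occurrences")
```

A real system would generalize the pattern across adjacent and remote domains rather than fit one curve, but the direction of travel is the same: anticipation, not recall.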
The bottom line: if
you already know why, it ain’t A.I.