Tuesday, January 28, 2025

NEW INC. MAGAZINE COLUMN ON A.I. FROM HOWARD TULLMAN

 

AI is becoming a part of our everyday lives, whether we like it or not. 

EXPERT OPINION BY HOWARD TULLMAN, GENERAL MANAGING PARTNER, G2T3V AND CHICAGO HIGH TECH INVESTORS @HOWARDTULLMAN1

JAN 28, 2025

Senior education officials, regulators, and media mavens all over the world have been focused for some time on the question of how teachers will be able to distinguish between materials written by students and those created by technologies driven by artificial intelligence. Interestingly, the majority of educators who work in the field every day with students don’t think this is much of a concern. They know their students, they know their respective abilities and capacities, and frankly they only wish their students were smart, motivated, and talented enough to attempt such a feat of prevarication.

Another interesting discussion is taking place in the work world, and it’s everywhere. Consider the controversy in the movie business, which is now all atwitter (no pun intended) over the AI-based voice-enhancement technology used to improve the authenticity of the Hungarian dialogue in The Brutalist. Then there’s the magazine world: the not-too-distant but humiliating discovery in 2023 that articles in Sports Illustrated were actually AI-written and attributed to non-existent authors who, by the way, came complete with headshots. No one complained about the content of the stories; people were apparently just horrified by the process of computers replacing copywriters.

All these concerns stem from fears arising in two different areas. First, there is anxiety across industries about job elimination through automation and AI implementation. And second, there is the increasingly prevalent sense that, in so many ways, we are all less able to tell the difference between men and machines.

Plenty has been written about job losses, but we’re just beginning to realize how exposed we are, and how little we’ve noticed the extent to which our expanding and encroaching technologies have subtly and unobtrusively invaded and subsumed so many aspects of our day-to-day lives. One of the simplest and most obvious examples is CAPTCHAs. We now take it for granted, without irony, that it’s become our daily job to repeatedly prove to computers that we are real human beings before they permit us to get on with so many different activities and transactions. For the moment, it seems that we’re all stuck with technology, when all we really want is stuff that works.

Real-World Insight

The problem is that our technology development work is so completely focused on the future that we seldom, if ever, look backward. Rather than learning from our mistakes, we are doomed to keep repeating them and forgetting the lessons we should have painfully learned by now. As a result, we quickly come to depend on these new modes of assistance and support, and at the same time we become fearful because we know there are aspects of their operation and abilities that we can’t entirely control. I’m not talking about Skynet and Arnold, but about more subversive undertakings that are superficially attractive and, for the moment, clearly less threatening: systems designed to replicate, impersonate, and deal directly with other machines and computers “as if” they were human.

With the announced and accelerating rollouts of agentic tech, I believe that we’re on the cusp of another deep technology rabbit hole that we’re largely unprepared for and ill-equipped to deal with successfully. What we never seem to appreciate is that when we develop new disruptive tools and technologies, we immediately seize on the initial implementations and put them into action before we remotely understand them in their entirety, much less consider their unforeseen and consequential longer-term effects, or even appreciate how long and costly a process will be required to figure out how best to put them to use. Every new technology is a package deal, one that brings its own negatives right along with all its benefits.

The recent unveiling by OpenAI of its new agentic offering, called Operator, is the latest clear step forward, for better or for worse. Built around a computer-using agent, along with the ability to interpret and act upon handwritten lists and other images, Operator – for all intents and purposes – looks to other computers like a human operator using both a keyboard and a mouse. Already connected to OpenTable and Instacart, among other apps and services, Operator can seamlessly book tables and reservations, order tickets, select groceries, and initiate regularly scheduled tasks with very limited, if any, human intervention once the process is set in motion. Only at the final moments, and specifically when payment information and confirmation are required, does the system pause and ask for approval before proceeding. It’s only a short further step to complete autonomy and to reaching the point where, as the late great singer-songwriter Jim Croce sang in his hit “Operator,” “There’s no one there I really wanted to talk to.”

AI Anxiety

If this prospect doesn’t recall the frightening scenes from Fantasia, where the unstoppable brooms carrying buckets of water march ceaselessly forward and step right over poor Mickey, the Sorcerer’s Apprentice, then you’re simply not old enough or not a fan of classic Disney movies. Embedded in that fantasy is a real warning with even more direct and important application today. It’s not difficult to imagine even more sophisticated and fully automated onslaughts launched against ticket sellers, or new and more convincing scams and frauds built on data and imagery extracted by these new tools.

A photo of a handwritten shopping list – as used in the Operator demo video – seems innocent and harmless until you realize that you’ve just provided the digital world with the ability to readily replicate your cursive signature. This may matter less as we move forward and schools completely abandon any effort to teach our kids how to sign their names on documents, or even to write properly, settling instead for block printing.

Bottom line: Here we go again on a wild chase into the future without any clear end in sight or a sufficient understanding of the risks involved or how they might be limited or circumscribed. We’re buying the ticket, closing our eyes, and taking the ride. As the late Hunter Thompson used to say: “There is no honest way to describe the edge because the only people who really know where it is are the ones who have gone over.”

 
