What ChatGPT Needs Is Context
Like the calculator, the spreadsheet, and the internet, AI-driven LLM tools are likely to change HOW we do our work, but not the fact that humans will still be the ones doing the work in the first place.
As part of my involvement at LeadDev NYC, I had the opportunity to record a short video message that would be part of a montage played for folks between the live talks. I decided to speak about the way engineers are enabling the future of products (you can watch it here).
It seems to me that questions like “how can engineers affect the future of (whatever)” sometimes come from a place of anxiety. And these days, there’s no greater source of that anxiety than the advances, and the impacts we imagine coming from those advances, in large language models (LLMs), more broadly billed as artificial intelligence (AI).
But LLMs and AI are techniques. Nobody in tech ever lost their job because of a new technique, but plenty of folks become anxious when techniques grow into full-on implementations that take the world by storm. I’m speaking, of course, of tools like Bard, DALL-E, and ChatGPT.
It’s inarguable that the things these tools can accomplish are both impressive and diverse. But we, both as tech practitioners and as humans moving through the world, encounter a lot of tools that are impressive and versatile. My argument about AI-driven tools isn’t that they’re worthless. My frustration is with the claims about how they’ll change the world, replacing the work done by entire swaths of professionals.
This point of view is entirely unfounded in experience, if not fact.
Nathan Hamiel recently dug into this on ModernCISO.com, and honestly I can’t provide better examples than he does. What he drills down to time and time again is that LLM-based tools can accomplish wondrous things, but only when given highly specific, tightly bounded directions, directions that require the operator to have intimate knowledge of both the subject and the desired outcome, and that often take multiple attempts to perfect before the result “just works like magic.” In fact, reading his essay reminded me of the scene in the movie “Sully” where Sully asks how many times the simulator pilots practiced the maneuver before they pulled it off.
Every time I read a blog breathlessly proclaiming “I used ChatGPT to accomplish this really hard thing,” I’m left wondering, like Sully, “How many times? How many practice attempts did they make before successfully pulling it off?”
I’m not saying these tools are invalid or the claims overblown. I simply want to put their achievements into a realistic context. And that context should be familiar to us, because we’ve already seen a similar thing happen — on both a larger and yet also smaller scale — in our lifetimes.
The first example I encountered was as simple and unassuming as it was revolutionary: the calculator.
The first pocket calculator came onto the market in 1971. Advances in what that tiny yet powerful device could do were both rapid and impressive. It wasn’t just the improvements in speed, or size, or interface. There were massive leaps in the types of operations that could be performed.
“This is going to change the world!” proclaimed a chorus of voices from supporters. Ironically, the detractors were shouting the same thing, although with a decidedly different intonation. In what should sound eerily familiar to us in 2023, this tool was banned from schools, job interviews, and other settings for fear it would diminish children’s ability to learn; that it would make identifying truly skilled individuals impossible; that it would upset the balance of merit in the school, in the workplace, and in society at large.
With the benefit of over 50 years of perspective, we can see how foolish such fears were. Calculators proved themselves to be useful and versatile tools, but they were limited by the math skills of the operator.
To put it plainly, no matter how powerful the calculator, if I am using the square root function while balancing my checkbook, something has gone horribly wrong, and it’s probably not my finances which are to blame.
In the hands of a novice, a scientific calculator is far more likely to be used to spell out “fart” than it is to find the proof of Fermat’s Last Theorem. (Use hex mode; put in “46415254.” Don’t ask me why I know this.)
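If you’re curious why that particular string works, here’s a quick sketch (in Python, purely for illustration): the hex digits pair up into the ASCII codes for each letter.

```python
# Split the calculator entry into hex byte values: 46 41 52 54,
# then decode each byte as an ASCII character.
print(bytes.fromhex("46415254").decode("ascii"))  # prints: FART
```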
Like the calculator, the spreadsheet, and the internet, AI-driven LLM tools are likely to change HOW we do our work, but not the fact that humans will still be the ones doing the work in the first place.
The shift will come as people learn how to use the tool to its best advantage: translating from our own context into the context we know the tool needs.
Which brings me back to the beginning. What is it, then, that we as engineers, developers, designers, and creative people can do to affect the future of products? And I think I’ve laid out an answer that is both satisfying and filled with hope:
We should endeavor to be fully present as thinking, feeling, context-seeking humans in all of our work. To embrace new tools and use them to their best advantage, while also being clear about their limitations. We affect the future of products when the bulk of the work we do is not with our brains or our hands, but with our hearts.
Published at DZone with permission of Leon Adato.