AI Not Living Up to the Hype? It’s a Communications Problem
The limiting factor is our inability to tell AI what to do without resorting to technical details and code.
There can no longer be any doubt that artificial intelligence (AI) is the next frontier for the digital economy. While it may not have fully penetrated the public consciousness yet, AI is already working behind the scenes to enhance a wide variety of systems and services, fueling everything from chatbots to full-stack IT automation.
Much of this progress, however, has been driven by the rapid expansion of computing power, not by the technologies that are intrinsic to AI. Functionally, AI consists of statistical data analysis using the same basic mathematical models developed in the 1990s and early 2000s. Even futuristic-sounding developments like artificial neural networks, which imply a process that mimics the neurons in the brain, amount to nothing more than sequential matrix multiplication: fairly basic math, scaled to a high degree.
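To see how modest the underlying math is, consider a minimal sketch in plain NumPy (the layer sizes and random values are arbitrary, and no particular framework is assumed): a two-layer network's forward pass is nothing more than two matrix multiplications separated by an element-wise function.

```python
import numpy as np

# A tiny two-layer "neural network" forward pass: each layer is just a
# matrix multiplication followed by a simple element-wise operation.
rng = np.random.default_rng(0)

x  = rng.standard_normal(4)        # input vector with 4 features
W1 = rng.standard_normal((8, 4))   # layer-1 weights
W2 = rng.standard_normal((3, 8))   # layer-2 weights

hidden = np.maximum(0, W1 @ x)     # matrix multiply, then ReLU
output = W2 @ hidden               # another matrix multiply
print(output)                      # three raw scores: scaled-up linear algebra, nothing more
```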
This is part of the reason AI’s potential has been so overhyped of late. Computing power can only take AI so far before the economics break down, and that level will fall far short of the many promises being made, like fully autonomous software testing and intuitive learning. Those things are possible only if we achieve a number of breakthroughs in the AI models and algorithms themselves. And because those breakthroughs will take a while longer, it’s fair to say the world will be underwhelmed when AI fails to meet expectations in the near term, and then thoroughly amazed as the technology progresses.
At the moment, the key limiting factor in AI is the difficulty of getting humans and AI to communicate. Ideally, we should be able to tell an AI model what we want it to do without having to dig into the technical details. We should be able to define a problem and let AI not only come up with the right solution but also gather the necessary resources and write the proper code to fulfill the objective on its own.
Declarative vs Imperative
The field of software development, and quality assurance (QA) in particular, offers a clear example of how this is playing out in the enterprise. Even though automation has become endemic throughout much of the development process, testing remains firmly entrenched in the manual era.
A recent GitLab survey of more than 4,000 developers bears this out, with a majority of respondents citing testing as the top cause of delays in release schedules. One respondent even went so far as to say, “Testing delays everything.” While there has been ample discussion of automating the entire QA process with AI, which in theory should unburden testing teams from the many rote, repetitive tasks required to get code to operational readiness, there is no reason to expect this will happen in the near future.
In the vast majority of cases, testing staff cannot effectively communicate their needs to the AI model without specifying precise elements within the code or describing highly technical configurations. Only in low-code environments is it even remotely possible to design a test using visual flow-chart methods. In nearly all other cases, current AI models simply cannot understand the requirements of a given software product unless the language barrier between humans and algorithms is broken down first.
At its core, this barrier lies in the difference between the way the two sides form an understanding of the world they occupy. AI functions in a strictly imperative manner: it is told, in effect, “perform this very specific action,” and it does so. The human mind works in a more declarative way, taking in the “what” and the “why” of a goal and then working out the “how,” which requires technical expertise and a deep understanding of the broader context surrounding the action. Rather than just doing one thing as instructed, a person first identifies the ultimate goal, relates it to other actions, locates the proper tools, and applies them precisely to achieve the desired result.
In a real-world application, such as an autonomous car, we can see how the imperative style contrasts with the declarative. To get to a destination, a human driver must start the car, put it in drive, turn the wheel, apply the brakes and perform a string of precise actions to achieve their goal. We could call this “imperative” driving. A self-driving car using a declarative approach simply needs to be told where to go, and the algorithm fills in all the details of the steps required to get there.
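In a testing context, the same contrast can be sketched in a few lines of code. The example below is purely illustrative: the toy login function standing in for the system under test and the engine that would consume the declarative spec are both invented for the sketch and do not reflect any real framework.

```python
# Hypothetical illustration of imperative vs. declarative testing.
# The "application" is a trivial stand-in so the example can actually run.

def login(username, password):
    """Toy system under test: accepts exactly one known credential pair."""
    return "Welcome, alice" if (username, password) == ("alice", "s3cret") else "Access denied"

# Imperative: the tester spells out every step and every expected value.
def imperative_login_test():
    assert login("alice", "s3cret") == "Welcome, alice"
    assert login("alice", "wrong") == "Access denied"

# Declarative: the tester states the requirement; a (hypothetical) engine
# would be expected to derive the concrete steps and assertions itself.
login_requirement = {
    "goal": "a registered user can log in and is greeted by name",
    "also": "invalid credentials must be rejected",
}

imperative_login_test()
print("Imperative test passed; the declarative spec still needs an engine to interpret it.")
```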
Why Testers Shouldn’t Be Expected to Code
My interest in this particular problem dates back to my time at an investment bank, where I specialized in systems architecture, continuous delivery, live troubleshooting, and performance optimization for the firm’s social trading platform. As can be expected, testing was a vital component in this environment, which was built around high-speed, high-quality trading tools that could cause severe financial disruption, even bankruptcy, if the software contained flaws.
My initial thought was to create a simple programming language that would let testers write their own tests without having to circle the process back to programmers. But even though our testing staff was populated with highly trained experts who had a deep understanding of the complexity of our software, they were not coders, so asking them to do even rudimentary programming was not the right answer.
Clearly, what was needed was a more intuitive means of building tests. The light bulb went off one day during a whiteboard presentation of a flow chart. It immediately struck us all as a clear, concise way of conveying processes with multiple, complex dependencies.
Bridging this gap with AI is far more difficult than with an ordinary computing platform, but by taking advantage of model-based testing and other emerging techniques, I believe it will become possible. By presenting the requirements for a given system as a digital twin of the software being tested, model-based testing shows great promise for translating declarative requirements into the imperative steps an AI can act on. The key stumbling block at this point is that all models must be built on a clearly defined set of requirements, which are difficult to describe to an AI platform, particularly when the work involves abstract ideas.
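To make the model-based idea more concrete, here is a minimal sketch, assuming a flow-chart-style model of a login-and-order workflow (the states, actions, and depth limit are invented for illustration): candidate test cases fall out of simply walking the model, which is exactly the detail-filling work we would like an AI to take over from declarative requirements.

```python
# A flow-chart-like model of the system under test: each state maps to the
# actions available in that state and the state each action leads to.
# The workflow below is invented purely for illustration.
model = {
    "start":           [("enter valid credentials", "logged_in"),
                        ("enter invalid credentials", "login_error")],
    "logged_in":       [("place order", "order_confirmed"),
                        ("log out", "start")],
    "login_error":     [("retry", "start")],
    "order_confirmed": [],
}

def derive_tests(state, path=(), depth=4):
    """Enumerate action sequences (candidate test cases) by walking the model."""
    transitions = model.get(state, [])
    if not transitions or depth == 0:
        yield path
        return
    for action, next_state in transitions:
        yield from derive_tests(next_state, path + (action,), depth - 1)

for test_case in derive_tests("start"):
    print(" -> ".join(test_case))
```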
Naturally, this will involve a substantial amount of research and innovative approaches to AI modeling, but it is clearly the direction in which software testing is headed. And in the end, it will enable AI to fulfill the promises that are fueling its rapid adoption today.