How to Build an API Testing Program with MuleSoft Anypoint
What is an API testing program, and why do you need one?
In a recent study by Dimensional Research, 91% of respondents reported that they are using or will adopt microservices-based architectures. Many companies building such large-scale software systems employ some form of orchestration or ESB to organize how services talk to each other, and MuleSoft Anypoint is a popular, enterprise-ready solution for these goals. While some purists may call this approach an anti-pattern, its advantages are undeniable, not least for accelerating go-to-market speed.
However, 76% of respondents also reported that it takes longer to resolve issues in microservices environments. These numbers point to a trend that cannot be kicked down the road: It has never been easier to build APIs, yet the testing and debugging of APIs is getting exponentially more challenging.
Let's explore how MuleSoft Anypoint (and many other API managers) excels in certain areas of API quality, but also leaves gaps that may hold you back from solving the quality-at-speed problem.
A Better Way to Productize APIs
MuleSoft's Anypoint Platform includes a native testing framework (MUnit) that allows Mulesoft experts to conduct unit and API tests on Mule apps. You can also mock APIs to run tests (shift left) before going live.
However, MUnit specifically tests Mule flows. The reality is that today's average business transaction involves 35 or more API connections. While MUnit frees developers and engineers to productize APIs easily in Anypoint Studio, it does not extend testing coverage to the APIs outside of your Anypoint platform. If you depend solely on MUnit for global API quality, your team will not have the clarity to uphold internal and external SLAs for API uptime and performance.
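To illustrate the kind of check that status-code monitoring alone cannot give you for APIs outside Anypoint, here is a minimal sketch in Python. The endpoint path, required fields, and status values are hypothetical assumptions, but the pattern applies to any HTTP API: assert on the payload's shape and values, not just the response code.

```python
import json
from urllib.request import urlopen

# Hypothetical contract for an /orders endpoint; adjust to your own API.
REQUIRED_FIELDS = ("id", "status", "items")
VALID_STATUSES = {"pending", "shipped", "delivered"}

def validate_order_payload(body: dict) -> list[str]:
    """Return a list of failure messages; an empty list means the payload passed."""
    failures = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in body]
    if body.get("status") not in VALID_STATUSES:
        failures.append(f"unexpected status: {body.get('status')!r}")
    return failures

def check_order_endpoint(base_url: str, order_id: str) -> list[str]:
    """Fetch one order and validate the status code and the payload together."""
    with urlopen(f"{base_url}/orders/{order_id}") as resp:
        if resp.status != 200:
            return [f"unexpected HTTP status: {resp.status}"]
        return validate_order_payload(json.load(resp))
```

Separating the payload validation from the HTTP call means the same assertions can run in unit tests, CI pipelines, and live monitors.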
The ultimate goal of modern API testing is to ensure that functional, integration, performance, and data-driven tests capture the entire API consumer flow. This is primarily to catch the most common root cause of API problems: human error. Consider the following:
One of my recent clients was a large publisher that was monitoring its APIs with single calls and status-code checks. The monitoring data fed a centralized analytics dashboard that reported nothing wrong with API uptime for weeks, yet partners kept complaining of outages. So my team changed their monitors to data-driven, multi-step tests that reproduced their partners' normal flows. Not long after, we discovered that every Monday morning, for two hours, the publisher's API listed hundreds of bad ISBNs. They had configured their API management platform to cache common endpoints for performance, but when the database was updated on Mondays, hundreds of out-of-print ISBNs were served from one of those endpoints. The problem was fixed, but in the postmortem the publisher could not say which business or technical stakeholder owned it.
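A data-driven, multi-step test of the kind described above might look like the following sketch. The `/catalog` endpoint and its JSON shape are illustrative assumptions; the ISBN-13 checksum, however, is the standard one. The point is that the test walks the real consumer flow, listing the catalog and then validating every entry, instead of pinging a single URL.

```python
import json
from urllib.request import urlopen

def is_valid_isbn13(isbn: str) -> bool:
    """ISBN-13 check: 13 digits whose weighted sum (weights 1,3,1,3,...)
    must be divisible by 10. Hyphens and spaces are ignored."""
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    return sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits)) % 10 == 0

def check_catalog_flow(base_url: str) -> list[str]:
    """Multi-step flow: fetch the catalog listing, then validate every ISBN
    it returns. Returns the list of bad ISBNs (empty means the flow passed).
    The endpoint path and response shape are hypothetical."""
    with urlopen(f"{base_url}/catalog") as resp:
        isbns = json.load(resp)["isbns"]
    return [isbn for isbn in isbns if not is_valid_isbn13(isbn)]
```

Run as a scheduled monitor, a test like this would have flagged the publisher's Monday-morning cache problem within one polling interval.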
With human error behind most of your current and future API quality headaches, you must ask whether siloed testing efforts, even if they are bridged by sending test-result data to a platform like Elastic, can connect the dots to detect human error. If there are faults in the testing and monitoring strategy itself, you will see false positives for a long time before recognizing there is an issue.
Gartner Inc. estimates that up to 95% of cloud breaches occur due to human errors such as configuration mistakes.
Whether just launching your first Scrum sprints, or trying to evolve an intricate service mesh toward a serverless future, you can benefit immensely from a standardized API testing program that is independent of the team and platform you use to build the APIs. Just think of this article: It had to be reviewed by multiple independent editors before being posted. Why wouldn’t your APIs require at least that same level of independent review?
Checklist: Building Your API Testing Program
By "program," I simply mean to enforce a consistent and standardized policy towards testing. A program involves the following critical elements:
Bring in Testing Experts: Leaving the functional testing of an API to the person who developed it will always result in problems. Further, a good testing strategy involves connecting various APIs owned by different teams.
Expand Coverage to Full User Flows: End-to-end testing of websites and applications is most effective when it captures real world conditions. Similarly, API testing must also capture normal API consumer flows through various transactions.
Go Beyond Uptime: API monitoring must be a measure of more than up or down: it must collect evidence that the APIs are functionally working as expected. This can only be done by using functional tests as monitors. When possible, use the same tests from your testing automation stack.
Centralize and Standardize Testing: Business and technical owners on distributed teams should be able to work together effortlessly with confidence that their API testing data is accurate. If one person leaves, your team should not struggle to understand what they were doing. Smooth transitions regarding testing should be consistent across the organization.
Detect Performance Flaws Early: Tools today should allow you to run load tests as part of your automated stack. While these performance tests aren’t as robust as the final load tests, they can offer early indicators of things like memory leaks that are harder to fix once the code has been promoted to near-production. Again, those performance tests shouldn’t just be single hits: they should test hundreds of thousands of complete API consumer flows.
Diagnose API Flaws Quickly: Sufficient coverage must be coupled with highly detailed reporting that can help you detect signals from a lot of noise. With the prevalence of APIs today, it has never been easier to export your data to visualization tools like Kibana to discover actionable insights.
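The "detect performance flaws early" item above can be prototyped with very little code. The sketch below (Python, standard library only) runs any callable test flow concurrently and reports latency percentiles; the run counts are illustrative, not recommendations, and a real load test would layer ramp-up, think time, and full consumer flows on top of this skeleton.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def load_test(flow, runs: int = 200, workers: int = 10) -> dict:
    """Execute `flow` (any zero-argument callable, e.g. one complete API
    consumer flow) `runs` times across `workers` threads and summarize
    the observed latencies in seconds."""
    def timed(_):
        start = time.perf_counter()
        flow()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(timed, range(runs)))

    return {
        "p50": statistics.median(latencies),
        "p95": statistics.quantiles(latencies, n=20)[-1],  # 95th percentile
        "max": max(latencies),
    }
```

Wiring a flow such as `check_catalog_flow` into this harness inside CI gives early trend data: a creeping p95 across builds is the kind of signal (a leak, a cache miss pattern) that is far cheaper to fix before code reaches near-production.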
Ultimately, MuleSoft Anypoint (and Anypoint Service Mesh) offers great capabilities for designing and managing APIs throughout their life cycle. But Anypoint is not meant to test all API user flows, including legacy and other APIs outside of Anypoint. Start with MUnit, but evolve to a universal API testing tool from providers such as API Fortress, SmartBear, or Parasoft. This is the best way to build an API testing program with MuleSoft Anypoint.
Opinions expressed by DZone contributors are their own.