Monitoring at the Edge of the Third Act of the Internet
The edge is a new frontier. It throws up many new challenges that enterprises must consider in relation to their monitoring strategies.
Whether you’re in tech, media, retail, or any other business, with or without a digital presence, the biggest challenge you face is how to deliver across the last mile. If I own a grocery store, it’s easy for me to keep a big warehouse where I store and sell goods, but no one will drive there if it isn’t convenient. This is the reason stores are located close to their customers, so anyone can stop on the way home and pick up their weekly groceries. The biggest challenge for everyone has been how to deliver any product or service as conveniently and as quickly as possible to the end user.
Amazon has disrupted the retail industry with its "same day" delivery, setting a very high bar for "last mile" delivery. Along the same lines, its acquisition of Whole Foods Market shows that it sees a big opportunity in disrupting the perishable goods industry by streamlining the delivery chain and offering a more convenient experience for getting weekly groceries. Given these examples, it is apparent that "edge" is not just a term for the computing industry; the concept applies to every customer-facing industry.
If we now look at IT and computing specifically, one of the unavoidable technical limitations we face is that digital information cannot travel faster than the speed of light. Although that’s incredibly fast, the farther a digital user is from where the information is sent, the longer it takes to arrive. We call this delay "latency."
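As a rough illustration, here is a back-of-the-envelope calculation of the best-case round-trip time physics allows over fiber, where light travels at roughly two-thirds of its vacuum speed; the distances are illustrative, and real-world figures are higher once routing and queuing are added:

```python
# Back-of-the-envelope: the best-case round-trip time (RTT) imposed by
# physics. Light in optical fiber travels at roughly 2/3 of its vacuum
# speed, about 200,000 km/s. Real latency is higher once network hops,
# best-effort routing, and queuing are added.

SPEED_IN_FIBER_KM_S = 200_000  # approximate propagation speed in fiber

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time, in milliseconds."""
    return (2 * distance_km / SPEED_IN_FIBER_KM_S) * 1000

for label, km in [("edge site 10 km away", 10),
                  ("regional datacenter 500 km away", 500),
                  ("cross-country datacenter 4,000 km away", 4_000)]:
    print(f"{label}: at least {min_rtt_ms(km):.2f} ms")
```

Even in this idealized model, moving content from a cross-country datacenter to a nearby edge site cuts the floor on round-trip time from tens of milliseconds to a fraction of a millisecond.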
Edge Computing Represents the Third Act of the Internet
The Internet has already gone through a series of transformations to handle the last mile problem. We have moved on from the monolithic approach of the early days, when everything was delivered from a single centralized datacenter. Akamai pioneered the Second Act of the Internet when it launched the Content Delivery Network (CDN) to bring commonly used content closer to end users by caching it in decentralized datacenters. These datacenters, although closer to end users, are still a significant distance away. Data packets have to travel hundreds of miles and thus remain subject to delays from network hops, best-effort routing, and, indeed, latency.
The next iteration of the Internet, currently being built out by companies like Amazon, VaporIO, and Packet, will enable some of the incredible technologies that are yet to come, such as true virtual reality (VR), true virtual learning, and the plethora of innovations within the Internet of Things (IoT) industry. Because of their latency requirements and demand for super-fast connectivity, these applications need to get super close to the consumer.
The Third Act of the Internet is therefore all about how to get content and apps as close as possible to the digital user, whether a human, an autonomous car, or an item of wearable technology. That’s what the edge is about: How do we enable the smart revolution in response to the demand for super low latency?
Edge Technology Is an Enabler of a More Connected World
Edge computing will change the way we interact with technology; it will become even more ubiquitous than it is today. Imagine waking up and, without asking Siri what the weather will be like, being given the answer: “Don’t forget your umbrella,” Siri will say before you walk out the door without it. This kind of "intelligent" service requires edge computing. Ultimately, technology is just an enabler for a more connected, more AI-driven world in which the digital user is constantly getting feedback and receiving recommendations.
It’s always the applications that drive technological innovation. Think about the last decade and the applications we’ve seen emerge, such as Uber and Siri. These new technologies and applications have driven 4G adoption as well as innovation in cloud computing technologies. Similarly, the next iteration of apps, which demand ultra-low latency, will drive the growth of edge computing and 5G adoption.
The Edge Will Become an Extension of the Cloud
Edge compute is essentially the next step in the evolution of distributed computing.
Edge computing tackles the problem of how to take the compute experiences that currently run in big datacenters or with a cloud provider and move them to hundreds of micro-datacenters located close to the end user. Such a location could be Grand Central Terminal in NYC. Hundreds of thousands of people pass through Grand Central each day, constantly sending and receiving data as they go. We need that data to be instantly available and the applications they’re using to be super-fast, so we can’t depend on hosting them in a datacenter in upstate New York, over a hundred miles away. Who will put "your" data in Grand Central to enable this? The edge is about pushing the limits and bringing the apps closer to where your users are.
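To make the idea concrete, here is a minimal sketch of the routing decision an edge platform makes: send each user to the nearest micro-datacenter. The site names and coordinates below are illustrative, not real deployments:

```python
# A minimal sketch of edge routing: pick the micro-datacenter closest
# to the user by great-circle distance. Sites here are hypothetical.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # Earth's mean radius ~6,371 km

EDGE_SITES = {
    "grand-central-nyc": (40.7527, -73.9772),
    "newark-nj": (40.7357, -74.1724),
    "albany-ny": (42.6526, -73.7562),
}

def nearest_edge(user_lat, user_lon):
    """Pick the edge site with the smallest great-circle distance."""
    return min(EDGE_SITES,
               key=lambda s: haversine_km(user_lat, user_lon, *EDGE_SITES[s]))

print(nearest_edge(40.7580, -73.9855))  # a user in midtown -> grand-central-nyc
```

Real platforms route on network topology and load rather than raw geography, but the principle is the same: serve each request from the closest viable point of presence.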
Still, despite this evolution, the edge won’t eliminate the need for cloud computing. Instead, the two will coexist. Edge computing is more distributed and lightweight; it’s about whatever needs to happen to get things closer to the end user. Some data processing will take place on your handheld device (your iPhone or your Android) or on the kind of AI chips that Tesla is deploying in its cars, and since you can’t put thousands of servers at the edge and must instead rely on maybe only 20 or 30, big processing will still reside in the big datacenters, both cloud and traditional.
State of the Edge points out that “as the demand for edge applications grows, the cloud will drift closer to the edge.” Indeed, we are beginning to see cloud companies such as AWS and Microsoft enter the emerging edge ecosystem and announce edge compute resources that move data processing closer to the end user. The edge will become an extension of the cloud, an extension of the big datacenters. To put it in terms of another everyday analogy: you can have a big IKEA in an old Brooklyn port that houses and sells everything, while the little IKEA store in midtown Manhattan sells only the most common items.
Monitoring at the Edge
The edge is a new frontier. It throws up many new challenges that enterprises must consider in relation to their monitoring strategies.
One of these is dealing with the amount of data that monitoring systems will need to collect, which will be enormous as more and more “things” are connected to the Internet. IDC’s latest forecast estimates there will be 41.6 billion connected IoT devices by 2025, generating 79.4 zettabytes of data.
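To put those forecast numbers in perspective, a quick bit of arithmetic shows what they imply per device:

```python
# Rough arithmetic behind IDC's 2025 forecast: what do 79.4 zettabytes
# across 41.6 billion devices work out to per device?
ZETTABYTE = 1e21            # bytes
total_bytes = 79.4 * ZETTABYTE
devices = 41.6e9

tb_per_device_per_year = total_bytes / devices / 1e12
gb_per_device_per_day = total_bytes / devices / 1e9 / 365

print(f"~{tb_per_device_per_year:.1f} TB per device per year")  # ~1.9 TB
print(f"~{gb_per_device_per_day:.1f} GB per device per day")    # ~5.2 GB
```

Roughly two terabytes per device per year is far more than it makes sense to haul back to a central datacenter, which is exactly why so much collection and filtering will have to happen at the edge.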
Solutions for gathering analytics from edge datacenters are beginning to emerge, such as the edge solutions recently announced by Dell, which will include enhanced telemetry management and a streaming analytics engine as well as micro-datacenters and new edge servers. There are challenges around analysis, too: the more data we collect, the more we will need rigorous machine learning and artificial intelligence systems to help process it.
A second significant monitoring challenge is access. Because high-frequency radio waves struggle to travel long distances and penetrate objects, a full 5G deployment will require many more cell sites than exist today, with antennas as close as 500 feet apart. How many of these small cell sites do you monitor from? Similarly, how do you choose which edge datacenters to monitor from? How extensive does the monitoring footprint need to be to truly cover the edge in all its manifestations?
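For a sense of scale, here is a rough, illustrative count that assumes a simple square grid of small cells at 500-foot spacing; real deployments follow demand and terrain, so treat this as a ballpark only:

```python
# A ballpark count of 5G small cells on a square grid with 500-foot
# spacing, i.e., how many potential vantage points a monitoring
# footprint might have to cover in a dense urban area.
FEET_PER_MILE = 5280
spacing_ft = 500

sites_per_linear_mile = FEET_PER_MILE / spacing_ft   # ~10.6
sites_per_sq_mile = sites_per_linear_mile ** 2       # ~112 on a square grid

manhattan_sq_miles = 22.8  # approximate land area of Manhattan
print(f"~{sites_per_sq_mile:.0f} small cells per square mile")
print(f"~{sites_per_sq_mile * manhattan_sq_miles:,.0f} to blanket Manhattan")
```

Even this simplified model yields over a hundred sites per square mile, which makes clear that monitoring from every vantage point is impractical and a sampling strategy is unavoidable.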
Developing the Right Strategy
Developing the right strategy to gain a comprehensive picture of how things are performing from an edge perspective is critical. The monitoring industry has broadly evolved toward digital experience monitoring (DEM). This involves a significant shift from monitoring the health of a network or an application to monitoring the desired outcome: what the user actually experiences.
Traditional monitoring tools focus on the infrastructure and applications you directly control, which leaves blind spots around the other critical services and networks that lie between the hosting infrastructure and the end user. A good DEM solution takes a much more holistic approach, looking at the digital performance of the entire IT infrastructure involved in delivering an application or service to its end users.
With the emergence of edge computing, taking a user-centric monitoring approach is more critical than ever before! If you have an app hosted at the edge but are monitoring it from a centralized AWS or Azure datacenter, that monitoring won’t tell you how your "edge app" is performing or whether your customers can even access it. It is critical that the monitoring solution be location aware. It's also critical that your monitoring solution lets you bring context to the data you collect and offers a way to test specific parts of your application, or specific edge locations, with minimal changes to production.
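As a concrete illustration, here is a minimal sketch of a location-aware synthetic check, assuming it runs on monitoring agents deployed at different vantage points; the EDGE_LOCATION variable and the URL are placeholders, not any specific vendor’s API:

```python
# A minimal sketch of a location-aware synthetic check, assuming it runs
# on monitoring agents deployed at different vantage points. The
# EDGE_LOCATION variable and the URL are placeholders. Error handling
# is omitted for brevity.
import os
import time
import urllib.request

def run_check(url: str) -> dict:
    """Time one request and tag the result with this agent's location."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        status = resp.status
        resp.read()  # include the full download time in the measurement
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {
        "location": os.environ.get("EDGE_LOCATION", "unknown"),
        "status": status,
        "latency_ms": round(elapsed_ms, 1),
    }

print(run_check("https://example.com/"))
```

Running the same check from many locations and comparing the tagged results is what surfaces a problem that only users near one particular edge site are experiencing.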
We are already seeing some services running at the edge from CDNs such as Akamai, Fastly, and Cloudflare. To ensure that API services (which currently power most AI recommendations) are always on and as reliable as possible, the big CDN providers are beginning to offer edge services that move API traffic onto their edge networks so that they can serve API responses from edge servers instead of origin servers. This is where API monitoring is already critical, not just from an availability perspective but also in verifying that your API calls return the correct responses, to ensure the integrity of the service.
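Here is a sketch of what such a correctness check might look like; the URL and expected fields are hypothetical, not a real API:

```python
# A sketch of API monitoring that validates correctness, not just
# availability: the check fails if the payload is malformed even when
# the edge server answers 200 OK. The URL and field names are placeholders.
import json
import urllib.request
from urllib.error import HTTPError

def check_api(url: str) -> list:
    """Return a list of problems; an empty list means the check passed."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = json.load(resp)
    except HTTPError as err:
        return [f"unexpected status {err.code}"]
    except json.JSONDecodeError:
        return ["response body is not valid JSON"]
    problems = []
    # Availability alone isn't enough: verify the response content too.
    for field in ("id", "recommendations"):
        if field not in body:
            problems.append(f"missing field: {field}")
    if not isinstance(body.get("recommendations", []), list):
        problems.append("recommendations is not a list")
    return problems

issues = check_api("https://api.example.com/v1/recommendations")
print("OK" if not issues else issues)
```

A stale or truncated response served from an edge cache can pass a simple uptime check while still breaking the application, which is exactly what content validation like this catches.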
End-to-end monitoring is essential to gain a full picture and understand where things might be broken. This will continue to be the case as momentum for edge computing continues to grow.