Workload-Centric Over Infrastructure-Centric Development
This article examines why a workload-centric approach improves developer productivity compared to an infrastructure-centric one.
This article is meant to be a starting point for sharing observations, experiences, and ideas: a way to begin a conversation, exchange experiences, and shape a product vision we can work towards as a community.
Our initial product vision and definition reads as follows:
“Score is an open source project that provides a developer-centric and platform-agnostic workload specification to improve developer productivity and experience. It eliminates configuration inconsistencies between local and remote environments.”
A lot of words, I know—stick with me as we take a step back to understand why and how we got here. We’ll keep referring to the terms I just mentioned, so at the end, you’ll be able to paint the full picture, and we can understand what Score stands for together.
Infrastructure-Centric Development
The Enemy Is Complexity
Above, we talk about “configuration inconsistencies between local and remote environments.” To understand what we mean by that, we have to understand infrastructure-centric development: the source of all evil.
In an infrastructure-centric workflow, developers are concerned with the platforms and tooling their workloads run on. Locally, this will be a lightweight tool such as Docker Compose. So far, so good. Remotely, however, you’ll likely be confronted with tooling such as Helm, Kustomize, Terraform, Argo CD, or similar. Developers having to promote their workload from their local setup to production environments that rely on a different set of tech and tools will run into issues surrounding:
- Environment-specific configuration: Your workload might have many environment variables configured—some of which are environment-specific, while others should remain unchanged. For complex applications, it is often not clear whether `CPX_API_KEY` or `SRD_MEM_CAP` should be updated for the test environment. What if you miss the update of `DBURL`, and it is still pointing at the development database?
- Platform-specific configuration: A workload with a dependency on a database might point to a Postgres Docker image or mock server in lower environments. On its way to production, however, a database has to be provisioned and allocated via Terraform. How do you ensure that everything is configured and provisioned correctly?
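Drift like this is easy to miss because each environment encodes the same settings in a different construct. A sketch of what that can look like (all filenames, service names, keys, and values below are made up for illustration):

```yaml
# compose.yaml (local): environment variables live under the service
services:
  api:
    environment:
      CPX_API_KEY: local-test-key
      DBURL: postgres://localhost:5432/devdb

# values-test.yaml (remote, Helm): the same settings live in a values file
# with its own shape -- a forgotten update of DBURL here silently keeps the
# test environment pointing at the development database
env:
  CPX_API_KEY: test-key
  DBURL: postgres://dev-db.internal:5432/devdb
```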
The fact that production environments require specialized knowledge and operational expertise increases the risk of wrongly specified or inconsistent configurations. The question of how things are reflected appropriately between environments is answered differently across teams. There might be a few super helpful people who end up being the “go-to people” for getting others unstuck. There might be a page on the internal wiki explaining how to get help, or a full-blown ticketing system to track and route requests. The course of action also depends on the complexity of the task at hand: a variable change is easier to manage than satisfying a dependency on a database across various environments.
Workload-Centric Development
Compartmentalizing Complexity
Above, we say that Score attempts to “improve developer productivity and experience.” To understand what we mean, we must understand workload-centric development: the solution to all of your problems.
By embracing a workload-centric approach to software development, developers no longer have to worry about environment-specific implementation details when promoting their workloads towards production. In a workload-centric world, developers declare what their workload requires to run successfully and independently of a platform or environment.
For example: “My workload has one container, which should be made available on a TCP port and relies on a Postgres database.” This declaration deliberately leaves open questions such as:
- What is the image pull secret?
- What is the TCP port number?
- What is the exact number of replicas?
- How is the database provisioned and allocated?
The platform in the target environment is responsible for answering these questions and ensuring that everything is configured and injected accordingly. This offers the potential of a reality in which developers can focus on local development without worrying about different configuration constructs, rules, and values in remote environments.
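As a sketch of how such a declaration might look in a Score file (the workload name, image, port, and variable names are illustrative, and the exact properties a resource exposes depend on the Score implementation in the target environment):

```yaml
apiVersion: score.dev/v1b1
metadata:
  name: my-workload
service:
  ports:
    www:
      port: 8080   # "made available on a TCP port" -- no replicas, no pull secret
containers:
  main:
    image: my-app  # illustrative image name
    variables:
      # placeholders are resolved by the target platform at deploy time
      DB_URL: "postgresql://${resources.db.username}:${resources.db.password}@${resources.db.host}:${resources.db.port}/${resources.db.name}"
resources:
  db:
    type: postgres # "relies on a Postgres database" -- provisioning is the platform's job
```

The open questions listed above (port numbers, replicas, pull secrets, provisioning) are deliberately absent: they are answered per environment, not in the workload spec.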
Workload-centric development allows developers to focus on their workload’s architecture rather than the tech stack in the target environment. It establishes a clear contract between Dev and Ops. The operations team is provided with a comprehensive set of configurational requirements, which—if met—ensure that the workload runs as intended. Code is passed through the fence rather than being thrown over it.
Score Principles
So far, we talked about the issue of developers struggling with “configuration inconsistencies between local and remote environments” and how “to improve developer productivity and experience” in this context. We’ll now look at how Score attempts to put this into practice by providing “a developer-centric and platform-agnostic workload specification.”
A while ago, the Score team formulated a set of workload-centric development principles based on which the Score specification was eventually developed:
- Establish a single source of truth for workload configuration that serves as the main point of reference for development teams when trying to understand a workload’s runtime requirements and allows them to apply changes in a one-directional and standardized manner. To qualify as an SSOT, it has to be platform-agnostic—meaning it’s not tied to an orchestrator, platform, or tool such as Kubernetes, Google Cloud Run, or Nomad—and environment-agnostic—meaning it captures the configuration that should be the same across environments.
- Provide a tightly scoped workload spec that shields developers from the configurational complexity of container orchestrators and tooling. Container orchestration systems such as Kubernetes expose a massive number of options that can be configured for a single workload. Developers may find it challenging to understand which properties are needed, which are optional, and which can be disregarded. By keeping the workload spec tightly scoped and only exposing core application constructs, developers can keep their focus.
- Implement a declarative approach for infrastructure management that allows developers to describe a workload’s resource dependencies without worrying about by whom, when, and how they will be provisioned in the target environment.
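To make the “tightly scoped” point concrete, compare the slice of a raw Kubernetes Deployment a developer would otherwise maintain with what Score asks of them (a sketch with illustrative names, not a full manifest):

```yaml
# Kubernetes Deployment: selectors, labels, replicas, and more to keep in sync
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello
          image: busybox
```

The Score file in the example further below expresses the same workload in a handful of lines: a named container and its image, and nothing else.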
Looking at our opening statement of it being “developer-centric” and “platform-agnostic,” we can now add to the list of words without sounding overwhelming: “environment-agnostic,” “declarative,” and “tightly scoped.” The name itself offers a less wordy explanation: the Score specification, just like a musical score, makes developers the conductors of their workload, which is deployed across an orchestra of tech and tools.
How Is It Implemented?
Our story doesn’t end with the Score specification. Its counterpart is the Score implementation, a CLI tool against which the Score specification file (`score.yaml`) is executed. It is tied to a platform such as Docker Compose (score-compose) or Helm (score-helm) and generates a platform-specific configuration file such as a `docker-compose.yaml` or a Helm `values.yaml` file.
With Score, the developers’ workflow stays largely uninterrupted. The `score.yaml` file is saved alongside the source code, and the required configuration is generated with a single command.
Example
In this example, we are working with a simple Docker Compose service that is based on a busybox image. The `score.yaml` we created for it looks as follows:
```yaml
apiVersion: score.dev/v1b1
metadata:
  name: hello-world
containers:
  hello:
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo Hello World!; sleep 5; done"]
```
To convert the `score.yaml` file into an executable `compose.yaml` file, we simply run the score-compose CLI tool:

```shell
$ score-compose run -f ./score.yaml -o ./compose.yaml
```
The generated `compose.yaml` will contain a single service definition:
```yaml
services:
  hello-world:
    command:
      - -c
      - 'while true; do echo Hello World!; sleep 5; done'
    entrypoint:
      - /bin/sh
    image: busybox
```
The service can now be run with `docker-compose` as usual:

```shell
$ docker-compose -f ./compose.yaml up hello-world
[+] Running 2/2
 ⠿ Network compose_default          Created  0.0s
 ⠿ Container compose-hello-world-1  Created  0.1s
Attaching to compose-hello-world-1
compose-hello-world-1  | Hello World!
compose-hello-world-1  | Hello World!
```
The same `score.yaml` can be run with any other Score implementation CLI. With all platform configuration files being generated from the same specification, the risk of configuration inconsistencies between environments decreases significantly. A change to `score.yaml` will automatically be reflected in all environments without the developer needing to intervene manually.
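For instance, a Helm-based environment could be targeted from the same file with score-helm (a sketch: the chart path and release name are illustrative, and the exact flags may differ between CLI versions):

```shell
# Generate a Helm values file from the same workload specification
score-helm run -f ./score.yaml -o ./values.yaml

# Hand the generated values to an existing chart (illustrative names)
helm upgrade --install hello-world ./my-chart -f ./values.yaml
```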
Check out our current suite of available CLI tools here.
Conclusion
Score wants to make developers’ lives easier by allowing them to focus on developing features instead of fighting with infrastructure. It simplifies the promotion of workloads from local to remote development environments by automating repetitive configuration work.
We’re curious to hear: What experiences have you had working with complex microservice applications? Do you have ideas on how to improve the Score specification? Or would you like to support us in building the next generation of Score implementations? Get involved here!
Thanks for reading!
Published at DZone with permission of Susanne Tuenker. See the original article here.