Building Jenkins Infrastructure Pipelines
Learn how to create and configure Jenkins infrastructure pipelines to automate building, testing, and deploying applications.
Jenkins allows you to automate everything, from building and testing code to deploying to production. Jenkins works on the principle of pipelines, which can be customized to fit the needs of any project.
After installing Jenkins, we launch it and navigate to the web interface, usually available at http://localhost:8080. On the first launch, Jenkins asks for the initial admin password, which is printed in the console log or stored in a file on the server. After entering the password, you are redirected to the plugin setup page.
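On a typical Linux package install, the password file lives under the Jenkins home directory; the exact path can vary by installation method (for the official Docker image it is under /var/jenkins_home instead), but a common location is:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword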
To work with infrastructure pipelines, you will need the following plugins:
- Pipeline: The main plugin for creating and managing pipelines in Jenkins.
- Git plugin: Necessary for integration with Git and working with repositories.
- Docker Pipeline: Allows you to use Docker within Jenkins pipelines.
Also, in the Jenkins settings there is a section for configuring version control systems, where you add your repository. For Git, this means specifying the repository URL and the credentials for accessing it.
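In the pipeline itself, a private repository is then checked out by referencing the stored credentials by their ID. A minimal sketch, where the URL, branch, and the credentials ID 'git-credentials' are placeholders for your own values:

git url: 'https://your-repository-url.git', branch: 'main', credentialsId: 'git-credentials'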
Now you can create an infrastructure pipeline, which is a series of automated steps that transform your code into production-ready software. The main goal is to make the software delivery process as fast and repeatable as possible.
Creating a Basic Pipeline
A pipeline consists of a series of steps, each of which performs a specific task. Typically, the steps look like this:
- Checkout — extracting the source code from the version control system
- Build — building the project using build tools, such as Maven
- Test — running automated tests to check the code quality
- Deploy — deploying the built application to the target server or cloud
Conditions determine the circumstances under which each pipeline stage should or should not be executed. Jenkins Pipeline has a "when" directive that allows you to restrict the execution of stages based on specific conditions.
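For example, a deploy stage can be restricted to builds of a particular branch. A minimal sketch (the branch condition assumes a multibranch pipeline; the stage body is a placeholder):

stage('Deploy') {
    when {
        // run this stage only for builds of the main branch
        branch 'main'
    }
    steps {
        echo 'Deploying from main'
    }
}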
Triggers determine what exactly starts the execution of the pipeline (a sketch of a triggers block follows the list):
- Push to repository — the pipeline is triggered every time new commits are pushed to the repository.
- Schedule — the pipeline can be configured to run on a schedule, for example, every night for nightly builds.
- External events — the pipeline can also be configured to run in response to external events, such as a webhook call from the Git server or the completion of another job.
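In a declarative pipeline, schedule- and SCM-based triggers can be declared directly in the Jenkinsfile with the "triggers" directive. Note that a true push trigger is usually configured as a webhook on the Git server side; pollSCM is a simple approximation. A minimal sketch:

pipeline {
    agent any
    triggers {
        // poll the repository for new commits every five minutes
        pollSCM('H/5 * * * *')
        // run a nightly build around 2 AM (H spreads load across the hour)
        cron('H 2 * * *')
    }
    stages {
        stage('Build') {
            steps {
                echo 'Triggered build'
            }
        }
    }
}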
To make all this work, you need to create a Jenkinsfile — a file that describes the pipeline. Here's an example of a simple Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git 'https://your-repository-url.git'
            }
        }
        stage('Build') {
            steps {
                sh 'mvn clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
        stage('Deploy') {
            steps {
                // a steps block must contain at least one step; replace this
                // placeholder with your actual deployment commands
                echo 'Deploying the application'
            }
        }
    }
    post {
        success {
            echo 'The pipeline has completed successfully.'
        }
    }
}
This Jenkinsfile describes a basic pipeline with four stages: checkout, build, test, and deploy.
Parameterized Builds
Parameterized builds allow you to pass values into a pipeline at run time and adjust its behavior accordingly.
To start, define the parameters in the Jenkinsfile using the "parameters" directive, which supports several parameter types (string, choice, booleanParam, etc.).
pipeline {
    agent any
    parameters {
        string(name: 'DEPLOY_ENV', defaultValue: 'staging', description: 'Target environment')
        choice(name: 'VERSION', choices: ['1.0', '1.1', '2.0'], description: 'App version to deploy')
        booleanParam(name: 'RUN_TESTS', defaultValue: true, description: 'Run tests?')
    }
    stages {
        stage('Initialization') {
            steps {
                echo "Deploying version ${params.VERSION} to ${params.DEPLOY_ENV}"
                script {
                    if (params.RUN_TESTS) {
                        echo "Tests will be run"
                    } else {
                        echo "Skipping tests"
                    }
                }
            }
        }
        // other stages
    }
}
When the pipeline is started, Jenkins prompts the user to fill in the parameters via the "Build with Parameters" page, according to their definitions.
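Parameterized builds can also be started remotely through Jenkins' HTTP API. A sketch, assuming a job named my-pipeline and API token authentication (the job name, user, and token are placeholders):

curl -X POST "http://<JENKINS_URL>/job/my-pipeline/buildWithParameters" \
     --user "user:apiToken" \
     --data "DEPLOY_ENV=production&VERSION=2.0&RUN_TESTS=false"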
You can use parameters to conditionally execute certain pipeline stages. For example, only run the testing stages if the RUN_TESTS parameter is set to true.
The DEPLOY_ENV parameter can be used to dynamically select the target environment for deployment, allowing you to use the same pipeline to deploy to different environments, such as staging or production.
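Putting both ideas together, here is a sketch of stages gated and parameterized this way (the deploy.sh script receiving the environment name is hypothetical):

pipeline {
    agent any
    parameters {
        string(name: 'DEPLOY_ENV', defaultValue: 'staging', description: 'Target environment')
        booleanParam(name: 'RUN_TESTS', defaultValue: true, description: 'Run tests?')
    }
    stages {
        stage('Test') {
            // executed only when RUN_TESTS is true
            when {
                expression { params.RUN_TESTS }
            }
            steps {
                sh 'mvn test'
            }
        }
        stage('Deploy') {
            steps {
                // hypothetical deployment script that takes the target environment
                sh "./deploy.sh ${params.DEPLOY_ENV}"
            }
        }
    }
}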
Dynamic Environment Creation
Dynamic environment creation allows you to automate the process of provisioning and removing temporary test or staging environments for each new build, branch, or pull request. In Jenkins, this can be achieved using pipelines, Groovy scripts, and integration with tools like Docker, Kubernetes, Terraform, etc.
Let's say you want to create a temporary test environment for each branch in a Git repository, using Docker. In the Jenkinsfile, you can define stages for building a Docker image, running a container for testing, and removing the container after the tests are complete:
pipeline {
    agent any
    stages {
        stage('Build Docker Image') {
            steps {
                script {
                    // For example, the Dockerfile is located at the root of the project
                    sh 'docker build -t my-app:${GIT_COMMIT} .'
                }
            }
        }
        stage('Deploy to Test Environment') {
            steps {
                script {
                    // run the container from the built image; host port 8080 may
                    // conflict with Jenkins itself, so adjust the mapping if needed
                    sh 'docker run -d --name test-my-app-${GIT_COMMIT} -p 8080:80 my-app:${GIT_COMMIT}'
                }
            }
        }
        stage('Run Tests') {
            steps {
                script {
                    // steps to run tests
                    echo 'Running tests against the test environment'
                }
            }
        }
        stage('Cleanup') {
            steps {
                script {
                    // stop and remove the container after testing
                    sh 'docker stop test-my-app-${GIT_COMMIT}'
                    sh 'docker rm test-my-app-${GIT_COMMIT}'
                }
            }
        }
    }
}
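One caveat: if the test stage fails, the pipeline aborts before the Cleanup stage runs, leaving the container behind. A post block with an always condition guarantees teardown regardless of the build result. A minimal sketch, reusing the container name from the example above:

pipeline {
    agent any
    stages {
        stage('Build and Test') {
            steps {
                // build, run, and test steps as in the example above
                echo 'build, deploy, test'
            }
        }
    }
    post {
        always {
            // runs whether the build succeeded or failed; -f removes the
            // container even if it is still running, and || true keeps cleanup
            // from failing the build when the container does not exist
            sh 'docker rm -f test-my-app-${GIT_COMMIT} || true'
        }
    }
}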
If Kubernetes is used to manage the containers, you can dynamically create and delete namespaces to isolate the test environments. In this case, the Jenkinsfile might look like this:
pipeline {
    agent any
    environment {
        KUBE_NAMESPACE = "test-${GIT_COMMIT}"
    }
    stages {
        stage('Create Namespace') {
            steps {
                script {
                    // create a new namespace in Kubernetes
                    sh "kubectl create namespace ${KUBE_NAMESPACE}"
                }
            }
        }
        stage('Deploy to Kubernetes') {
            steps {
                script {
                    // deploy the application to the created namespace
                    sh "kubectl apply -f k8s/deployment.yaml -n ${KUBE_NAMESPACE}"
                    sh "kubectl apply -f k8s/service.yaml -n ${KUBE_NAMESPACE}"
                }
            }
        }
        stage('Run Tests') {
            steps {
                script {
                    // test the application
                    echo 'Running tests against the Kubernetes environment'
                }
            }
        }
        stage('Cleanup') {
            steps {
                script {
                    // delete the namespace and all associated resources
                    sh "kubectl delete namespace ${KUBE_NAMESPACE}"
                }
            }
        }
    }
}
Integrating Prometheus
Install the Prometheus Metrics plugin through "Manage Jenkins" -> "Manage Plugins."
After installation, open the Jenkins settings and, in the Prometheus section, enable the exposure of metrics.
The plugin will be accessible by default at the URL http://<JENKINS_URL>/prometheus/, where <JENKINS_URL> is the address of the Jenkins server.
In the Prometheus configuration file prometheus.yml, we add a new job to collect metrics from Jenkins:
scrape_configs:
  - job_name: 'jenkins'
    metrics_path: '/prometheus/'
    static_configs:
      - targets: ['<JENKINS_IP>:<PORT>']
Then, in Grafana, we add Prometheus as a data source and visualize the data.
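If you provision Grafana from configuration files rather than through the UI, a minimal data source definition might look like the sketch below (the Prometheus address is an assumption; the file typically goes under Grafana's provisioning/datasources directory):

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    # assumed address of the Prometheus server; adjust to your setup
    url: http://<PROMETHEUS_IP>:9090
    isDefault: true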
The Prometheus integration allows you to monitor various Jenkins metrics, such as the number of builds, job durations, and resource utilization. This can be particularly useful for identifying performance bottlenecks, tracking trends, and optimizing your Jenkins infrastructure.
By leveraging the power of Prometheus and Grafana, you can gain valuable insights into your Jenkins environment and make data-driven decisions to improve your continuous integration and deployment processes.
Conclusion
Jenkins is a powerful automation tool that can help streamline your software delivery process. By leveraging infrastructure pipelines, you can easily define and manage the steps required to transform your code into production-ready software.