Getting Started With CockroachDB on Red Hat OpenShift
In this article, we are going to cover the requirements and process of installing CockroachDB on the OpenShift platform.
I write about CockroachDB alongside a wide range of systems, frameworks, and integrations. Here is a list of my previously published CockroachDB articles.
Prerequisites
- OpenShift CodeReady Containers: 4.5.9
- CockroachDB: 20.1.5
- CockroachDB Kubernetes Operator: 1.0.1
- Note: I'm using CodeReady Containers from Red Hat; the download link requires a Red Hat login.
OpenShift CRC Installation
Feel free to refer to the Red Hat docs to install the CodeReady Containers environment:
crc setup
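If you're installing from scratch, getting the crc binary onto your PATH before running crc setup looks roughly like this on macOS (the archive name below is illustrative; the actual download from the Red Hat portal requires login):
tar -xf crc-macos-amd64.tar.xz          # extract the downloaded archive (name is illustrative)
sudo mv crc-macos-amd64/crc /usr/local/bin/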
We should now be able to confirm the crc version:
crc version
CodeReady Containers version: 1.16.0+bf72d3a
OpenShift version: 4.5.9 (embedded in binary)
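A note before starting: CockroachDB's three nodes can be tight on the CRC VM defaults. If your machine has headroom, you can give the VM more memory and CPUs before the first start; a hedged example using CRC's config keys (verify what your version supports with crc config --help):
crc config set memory 12288   # VM memory in MiB
crc config set cpus 6         # virtual CPUs for the VM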
Start a Cluster
crc start
INFO Checking if running as non-root
INFO Checking if HyperKit is installed
INFO Checking if crc-driver-hyperkit is installed
INFO Checking file permissions for /etc/hosts
INFO Checking file permissions for /etc/resolver/testing
? Image pull secret [? for help] ****************
INFO Extracting bundle: crc_hyperkit_4.5.9.crcbundle ...
crc.qcow2: 9.86 GiB / 9.86 GiB [----------------------------------------] 100.00%
INFO Checking size of the disk image /Users/artem/.crc/cache/crc_hyperkit_4.5.9/crc.qcow2 ...
INFO Creating CodeReady Containers VM for OpenShift 4.5.9...
INFO CodeReady Containers VM is running
INFO Generating new SSH Key pair ...
INFO Copying kubeconfig file to instance dir ...
INFO Starting network time synchronization in CodeReady Containers VM
INFO Verifying validity of the cluster certificates ...
INFO Restarting the host network
INFO Check internal and public DNS query ...
INFO Check DNS query from host ...
INFO Starting OpenShift kubelet service
INFO Configuring cluster for first start
INFO Adding user's pull secret ...
INFO Updating cluster ID ...
INFO Starting OpenShift cluster ... [waiting 3m]
INFO Updating kubeconfig
WARN The cluster might report a degraded or error state. This is expected since several operators have been disabled to lower the resource usage. For more information, please consult the documentation
Started the OpenShift cluster
To access the cluster, first set up your environment by following 'crc oc-env' instructions.
Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'.
To login as an admin, run 'oc login -u kubeadmin -p secret https://api.crc.testing:6443'.
You can now run 'crc console' and use these credentials to access the OpenShift web console.
crc oc-env
eval $(crc oc-env)
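For the curious, crc oc-env simply prints shell exports that put the bundled oc binary on your PATH, so the eval just applies them; the output looks roughly like this (the path varies by user):
export PATH="/Users/artem/.crc/bin/oc:$PATH"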
Log In as kubeadmin
oc login -u kubeadmin -p secret https://api.crc.testing:6443
Login successful.
You have access to 57 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
Create a CockroachDB Namespace
oc create namespace cockroachdb
namespace/cockroachdb created
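You can confirm the namespace exists with:
oc get namespace cockroachdb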
Open CRC Console
crc console
This will open the OpenShift console in a browser window. We will use the kubeadmin credentials to log in.
Install the CockroachDB Operator
Navigate to the OperatorHub in the OpenShift web console.
Search for cockroach and select the CockroachDB tile. Do not click the Marketplace-labeled tile, as it requires a subscription.
Click the tile, and a self-guided dialog will appear. Click Install, and another dialog will appear.
We're going to leave everything as is except for the namespace drop-down: we'll install the CockroachDB Operator into the namespace we created earlier. Click Install.
Once the operator is installed, go to the Installed Operators section, select CockroachDB, and click Create Instance.
We're going to leave everything as is; by default, this deploys a three-node secure cluster. Click Create.
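Under the hood, Create Instance submits a CrdbCluster custom resource to the operator. For orientation, a rough sketch of what the defaults amount to is below (field names per operator 1.0.1; treat this as illustrative and compare with the YAML view in the console):
apiVersion: crdb.cockroachlabs.com/v1alpha1
kind: CrdbCluster
metadata:
  name: crdb-tls-example
  namespace: cockroachdb
spec:
  tlsEnabled: true                       # secure mode: the operator generates node and root client certs
  nodes: 3                               # three CockroachDB pods
  image:
    name: cockroachdb/cockroach:v20.1.5  # same image we reuse later for the client pod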
Now we can navigate to the Pods section and observe how our pods are created.
At this point, you may use the standard kubectl commands to inspect your cluster.
kubectl get pods --namespace=cockroachdb
An equivalent OpenShift command is:
oc get pods --namespace=cockroachdb
NAME                                  READY   STATUS    RESTARTS   AGE
cockroach-operator-65c4f6df45-h5r5n   1/1     Running   0          6m11s
crdb-tls-example-0                    1/1     Running   0          3m15s
crdb-tls-example-1                    1/1     Running   0          103s
crdb-tls-example-2                    1/1     Running   0          89s
If you prefer to set a default namespace instead of typing it with each command, feel free to issue the command below:
oc config set-context --current --namespace=cockroachdb
Context "default/api-crc-testing:6443/kube:admin" modified.
The kubectl equivalent is:
kubectl config set-context --current --namespace=cockroachdb
Validate the namespace:
oc config view --minify | grep namespace:
Or with kubectl:
kubectl config view --minify | grep namespace:
namespace: cockroachdb
From this point on, I will stick to the OpenShift nomenclature for cluster commands.
Create Secure Client Pod To Connect to the Cluster
Now that we have a CockroachDB cluster running in the OpenShift environment, let's connect to the cluster using a secure client.
My pod YAML for the secure client looks like so:
apiVersion: v1
kind: Pod
metadata:
  name: crdb-client-secure
  labels:
    app.kubernetes.io/component: database
    app.kubernetes.io/instance: crdb-tls-example
    app.kubernetes.io/name: cockroachdb
spec:
  serviceAccountName: cockroach-operator-role
  containers:
  - name: crdb-client-secure
    image: cockroachdb/cockroach:v20.1.5
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: client-certs
      mountPath: /cockroach/cockroach-certs/
    # Keep the pod open indefinitely so kubectl exec can be used to get a shell to it
    # and run cockroach client commands, such as cockroach sql, cockroach node status, etc.
    command:
    - sleep
    - "2147483648" # 2^31
  # This pod isn't doing anything important, so don't bother waiting to terminate it.
  terminationGracePeriodSeconds: 0
  volumes:
  - name: client-certs
    projected:
      sources:
      - secret:
          name: crdb-tls-example-node
          items:
          - key: ca.crt
            path: ca.crt
      - secret:
          name: crdb-tls-example-root
          items:
          - key: tls.crt
            path: client.root.crt
          - key: tls.key
            path: client.root.key
      defaultMode: 256 # decimal 256 = octal 0400: keys readable only by the owner, as cockroach requires
In the Pods section, click the Create Pod button and paste the YAML above.
Click Create and wait for the pod to run.
Going back to the Pods section, we can see our new client pod running.
Connect to CockroachDB
oc exec -it crdb-client-secure --namespace cockroachdb -- ./cockroach sql --certs-dir=/cockroach/cockroach-certs/ --host=crdb-tls-example-public
#
# Welcome to the CockroachDB SQL shell.
# All statements must be terminated by a semicolon.
# To exit, type: \q.
#
# Server version: CockroachDB CCL v20.1.5 (x86_64-unknown-linux-gnu, built 2020/08/24 19:52:08, go1.13.9) (same version as client)
# Cluster ID: 0813c343-c86b-4be8-9ad0-477cdb5db749
#
# Enter \? for a brief introduction.
#
root@crdb-tls-example-public:26257/defaultdb> \q
And we're in! Let's step through some of the intricacies of what we just did.
We spun up a secure client pod in the same namespace and with the same labels as our CockroachDB StatefulSet, using the same image as the cluster and mounting the operator-generated certs. If at this point you're not seeing the same results, it helps to shell into the client pod and check whether the certs are available.
oc exec -it crdb-client-secure sh --namespace=cockroachdb
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
# cd /cockroach/cockroach-certs
# ls
ca.crt  client.root.crt  client.root.key
The other piece that can trip you up is the host to pass to the --host flag. You can use the public service name.
oc get services --namespace=cockroachdb
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
crdb-tls-example          ClusterIP   None             <none>        26257/TCP,8080/TCP   14m
crdb-tls-example-public   ClusterIP   172.25.180.197   <none>        26257/TCP,8080/TCP   14m
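Alternatively, each pod is addressable through the headless service's per-pod DNS names, which is exactly what the workload connection string in the next section uses. A hedged example of connecting to one specific node:
oc exec -it crdb-client-secure --namespace cockroachdb -- ./cockroach sql --certs-dir=/cockroach/cockroach-certs/ --host=crdb-tls-example-0.crdb-tls-example.cockroachdb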
Run Sample Workload
We are going to initialize the MovR workload:
oc exec -it crdb-client-secure --namespace cockroachdb -- ./cockroach workload init movr 'postgresql://root@crdb-tls-example-0.crdb-tls-example.cockroachdb:26257?sslcert=%2Fcockroach%2Fcockroach-certs%2Fclient.root.crt&sslkey=%2Fcockroach%2Fcockroach-certs%2Fclient.root.key&sslmode=verify-full&sslrootcert=%2Fcockroach%2Fcockroach-certs%2Fca.crt'
- Pro tip: navigating to the CockroachDB logs in the OpenShift console will reveal the JDBC URL for CockroachDB.
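Those %2F sequences are simply URL-encoded slashes. Decoded, the connection string is easier to read:
postgresql://root@crdb-tls-example-0.crdb-tls-example.cockroachdb:26257?sslcert=/cockroach/cockroach-certs/client.root.crt&sslkey=/cockroach/cockroach-certs/client.root.key&sslmode=verify-full&sslrootcert=/cockroach/cockroach-certs/ca.crt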
Running the MovR workload is as follows:
oc exec -it crdb-client-secure --namespace cockroachdb -- ./cockroach workload run movr --duration=3m --tolerate-errors --max-rate=20 --concurrency=1 --display-every=10s 'postgresql://root@crdb-tls-example-0.crdb-tls-example.cockroachdb:26257?sslcert=%2Fcockroach%2Fcockroach-certs%2Fclient.root.crt&sslkey=%2Fcockroach%2Fcockroach-certs%2Fclient.root.key&sslmode=verify-full&sslrootcert=%2Fcockroach%2Fcockroach-certs%2Fca.crt'
Demonstrate Resilience
While the workload is running, kill a node and see how the cluster heals itself.
Notice the start time.
We can also kill a node using the CLI:
oc delete pod crdb-tls-example-1 --namespace=cockroachdb
pod "crdb-tls-example-1" deleted
Let's also look at the CockroachDB Admin UI for health status. Port-forward the Admin UI in OpenShift:
oc port-forward --namespace=cockroachdb crdb-tls-example-0 8080 &
Forwarding from [::1]:8080 -> 8080
Create a SQL user to access the Web UI.
oc exec -it crdb-client-secure --namespace cockroachdb -- ./cockroach sql --certs-dir=/cockroach/cockroach-certs/ --host=crdb-tls-example-public
CREATE USER craig WITH PASSWORD 'cockroach';
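Still in the SQL shell, a quick check that the user exists (SHOW USERS is standard CockroachDB SQL):
SHOW USERS;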
Navigate to the Admin UI and enter user craig and password cockroach to log in.
Notice the replication status:
After the cluster heals itself, check the replication status again.
And that's our overview of the CockroachDB operator for OpenShift. I would like to thank our engineering team, with personal thanks to Chris Ireland and Chris Love for their guidance in getting the secure client working, and to Chris Casano for the idea of running a sample workload.
Until next time!