Go Microservices, Part 8: Centralized Configuration With Viper and Spring Cloud Config
Learn how to centralize essential aspects of your microservices project, such as configuration, with Spring Cloud Config.
Centralizing something when dealing with microservices may seem a bit off, given that microservices, after all, are about decomposing your system into separate, independent pieces of software. However, what we're typically after is the isolation of processes. Other aspects of microservice operations should be dealt with in a centralized way. For example, logs should end up in your logging solution such as the ELK stack, and monitoring goes into a dedicated monitoring solution. In this part of the blog series, we'll deal with externalized and centralized configuration using Spring Cloud Config and git.
Handling configuration for the various microservices that our application consists of in a centralized manner is actually quite natural as well. Especially when running in a containerized environment on an unknown number of underlying hardware nodes, managing config files built into each microservice image or from mounted volumes can quickly become a real headache. There are a number of proven projects to help deal with this, for example etcd, Consul, and ZooKeeper. However, it should be noted that those projects provide a lot more than just serving configuration. Since this blog series focuses on integrating Go microservices with the Spring Cloud/Netflix OSS ecosystem of supporting services, we'll be basing our centralized configuration on Spring Cloud Configuration, a piece of software dedicated to providing exactly that.
Spring Cloud Config
The Spring Cloud ecosystem provides a solution for centralized configuration, not-so-creatively named Spring Cloud Config. The Spring Cloud Config server can be viewed as a proxy between your services and their actual configuration, providing a number of really neat features such as:
- Support for several different configuration backends such as git (default), file systems, and plugins for using etcd, Consul, and ZooKeeper as stores.
- Transparent decryption of encrypted properties.
- Pluggable security.
- A push mechanism using git hooks / the REST API and Spring Cloud Bus (e.g. RabbitMQ) to propagate changes in config files to services, making live reload of configuration possible.
For a more in-depth article about Spring Cloud Config in particular, take a look at my colleague Magnus's recent blog post.
In this blog post, we will integrate our "accountservice" with a Spring Cloud Config server backed by a public git repository on GitHub, from which we'll fetch configuration, encrypt/decrypt a property, and also implement live reload of config properties.
Here's a simple overview of the overall solution we're aiming for:
Overview
Since we're running Docker in Swarm mode, we'll continue using Docker mechanics in various ways. Inside the Swarm, we should run at least one (preferably more) instance of the Spring Cloud Config server. When one of our microservices starts up, the only things it needs to know are the following:
- The logical service name and port of the config server. I.e., we're deploying our config servers on Docker Swarm as services; let's say we name that service "configserver." That name is the only thing the microservice needs to know about addressing in order to make a request for its configuration.
- What its own name is, e.g. "accountservice."
- What execution profile it is running, e.g. "dev", "test" or "prod". If you're familiar with the concept of spring.profiles.active, this is a home-brewed counterpart we can use for Go.
- If we're using git as the backend and want to fetch configuration from a particular branch, that needs to be known up front. (Optional)
Given the four criteria above, a sample GET request for configuration could look like this in Go code:
resp, err := http.Get("http://configserver:8888/accountservice/dev/p8")
I.e.:
protocol://url:port/applicationName/profile/branch
Setting Up a Spring Cloud Configuration Server in Your Swarm
For part 8, you'll probably want to clone branch p8 since it includes the source for the config server:
git clone https://github.com/callistaenterprise/goblog.git
git checkout p8
You could probably set up and deploy the config server in other ways. However, for simplicity, I've prepared a /support folder in the root /goblog folder of the source code repository of the blog series, which will contain the requisite 3rd-party services we'll need further on.
Typically, each required support component will either be a simple Dockerfile for conveniently building and deploying components we can use out of the box, or it will be (Java) source code and configuration (Spring Cloud applications are usually based on Spring Boot) we'll need to build ourselves using Gradle. (No worries, all you need is to have a JDK installed.)
(Most of these Spring Cloud applications were prepared by my colleague Magnus for his microservices blog series.)
Let's get started with the config server, shall we?
RabbitMQ
What? Weren't we about to install the Spring Cloud Configuration server? Well, that piece of software depends on having a message broker to propagate configuration changes using Spring Cloud Bus backed by RabbitMQ. Having RabbitMQ around is a very good thing anyway, which we'll be using in a later blog post, so we'll start by getting RabbitMQ up and running as a service in our Swarm.
I've prepared a Dockerfile inside /goblog/support/rabbitmq that uses a pre-baked image, which we'll deploy as a Docker Swarm service.
We'll create a new bash (.sh) script to automate things for us if/when we need to update things.
In the root /goblog folder, create a new file support.sh:
#!/bin/bash
# rabbitmq
docker service rm rabbitmq
docker build -t someprefix/rabbitmq support/rabbitmq/
docker service create --name=rabbitmq --replicas=1 --network=my_network -p 1883:1883 -p 5672:5672 -p 15672:15672 someprefix/rabbitmq
(You may need to chmod it to make it executable.)
Run it and wait while Docker downloads the necessary images and deploys RabbitMQ into your Swarm. When it's done, you should be able to open the RabbitMQ admin GUI and log in using guest/guest at:
open http://$managerip:15672/#/
Your web browser should open and display something like this:
If you see the RabbitMQ admin GUI, we can be fairly sure it works as advertised.
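If you prefer a scripted sanity check over the GUI, a minimal Go sketch using the streadway/amqp client (which we'll use properly later in this post) could look like the following. The broker address is an assumption; substitute your own Swarm manager IP.

package main

import (
    "log"

    "github.com/streadway/amqp"
)

func main() {
    // Assumption: RabbitMQ's AMQP port 5672 is published on the Swarm manager
    // and the default guest/guest credentials are still in place.
    conn, err := amqp.Dial("amqp://guest:guest@192.168.99.100:5672/")
    if err != nil {
        log.Fatalf("could not connect to RabbitMQ: %v", err)
    }
    defer conn.Close()
    log.Println("RabbitMQ is up and accepting AMQP connections")
}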
Spring Cloud Configuration Server
In /support/config-server you'll find a Spring Boot application pre-configured to run the config server. We'll be using a git repository for storing and accessing our configuration using YAML files.
Feel free to take a look at /goblog/support/config-server/src/main/resources/application.yml, which is the config file of the config server:
---
# For deployment in Docker containers
spring:
  profiles: docker
  cloud:
    config:
      server:
        git:
          uri: https://github.com/eriklupander/go-microservice-config.git
# Home-baked keystore for encryption. Of course, a real environment wouldn't expose passwords in a blog...
encrypt:
  key-store:
    location: file:/server.jks
    password: letmein
    alias: goblogkey
    secret: changeme
# Since we're running in Docker Swarm mode, disable Eureka service discovery
eureka:
  client:
    enabled: false
# Spring Cloud Config requires RabbitMQ, use the service name.
spring.rabbitmq.host: rabbitmq
spring.rabbitmq.port: 5672
We see a few things:
- We're telling the config server to fetch configuration from our git repo at the specified URI.
- A keystore for encryption (self-signed) and decryption (we'll get back to that).
- Since we're running in Docker Swarm mode, Eureka service discovery is disabled.
- The config server is expecting to find a RabbitMQ host at "rabbitmq", which just happens to be the Docker Swarm service name we just gave our RabbitMQ service.
The Dockerfile for the config server is quite simple:
FROM davidcaste/alpine-java-unlimited-jce
EXPOSE 8888
ADD ./build/libs/*.jar app.jar
ADD ./server.jks /
ENTRYPOINT ["java","-Dspring.profiles.active=docker","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
(Never mind that java.security.egd stuff, it's a workaround for a problem we don't care about in this blog series.)
A few things of note here:
- We're using a base Docker image based on Alpine Linux that has the Java unlimited cryptography extension installed. This is a requirement if we want to use the encryption/decryption features of Spring Cloud Config.
- A home-baked keystore is added to the root folder of the container image.
Build the Keystore
To use encrypted properties later on, we'll configure the config server with a self-signed certificate. (You'll need to have keytool on your PATH.)
In the /goblog/support/config-server/ folder, run:
keytool -genkeypair -alias goblogkey -keyalg RSA \
  -dname "CN=Go Blog,OU=Unit,O=Organization,L=City,S=State,C=SE" \
  -keypass changeme -keystore server.jks -storepass letmein \
  -validity 730
This should create server.jks. Feel free to modify any properties/passwords, just remember to update application.yml accordingly!
Build and Deploy
Time to build and deploy the server. Let's create a shell script to save us time if or when we need to do this again. Remember, you need a Java Runtime Environment to build this! In the /goblog folder, create a file named springcloud.sh. We will put all things that actually need building (and that may take some time) in there:
#!/bin/bash
cd support/config-server
./gradlew build
cd ../..
docker build -t someprefix/configserver support/config-server/
docker service rm configserver
docker service create --replicas 1 --name configserver -p 8888:8888 --network my_network --update-delay 10s --with-registry-auth --update-parallelism 1 someprefix/configserver
Run it from the /goblog folder (you may need to chmod +x it first):
> ./springcloud.sh
This may take a while; give it a minute or two and then check whether it's up and running using docker service ls:
> docker service ls
ID            NAME            MODE        REPLICAS  IMAGE
39d26cc3zeor  rabbitmq        replicated  1/1       someprefix/rabbitmq
eu00ii1zoe76  viz             replicated  1/1       manomarks/visualizer:latest
q36gw6ee6wry  accountservice  replicated  1/1       someprefix/accountservice
t105u5bw2cld  quotes-service  replicated  1/1       eriklupander/quotes-service:latest
urrfsu262e9i  dvizz           replicated  1/1       eriklupander/dvizz:latest
w0jo03yx79mu  configserver    replicated  1/1       someprefix/configserver
Try to manually load the "accountservice" configuration as JSON using curl:
> curl http://$managerip:8888/accountservice/dev/master
{"name":"accountservice","profiles":["dev"],"label":"master","version":"b8cfe2779e9604804e625135b96b4724ea378736",
"propertySources":[
   {"name":"https://github.com/eriklupander/go-microservice-config.git/accountservice-dev.yml",
    "source":
     {"server_port":6767,"server_name":"accountservice dev"}
   }]
}
(Formatted for brevity.)
The actual configuration is stored within the "source" property, where all values from the .yml file appear as key-value pairs. Loading and parsing the "source" property into a usable configuration in Go is the centerpiece of this blog post.
The YAML Config Files
Before moving on to Go code, let's take a look inside the root folder of the p8 branch of the configuration repo:
accountservice-dev.yml
accountservice-test.yml
both these files are currently very sparsely populated:
server_port: 6767
server_name: accountservice test
the_password: (we'll get back to this one)
The only thing we're configuring at this point is the HTTP port we want our service to bind to. A real service will probably have a lot more stuff in it.
Using Encryption/Decryption
One really neat thing about Spring Cloud Config is its built-in support for transparently decrypting values encrypted directly in the configuration files. For example, take a look at accountservice-test.yml, where we have a dummy "the_password" property:
server_port: 6767
server_name: accountservice test
the_password: '{cipher}aqb1bmfcu5uscctwuweqt293npq0elefhhp5b2szy8m4kuzzqxofsmxhah7sthnnjoudgxrvkppzekdgo6ajfsprzvf04sxovz6rjg6hml1sakly/k1r/e0wp0rrgysbgh9nnebhzqjz8ogadvrdho5vxzzgx8uj5kn+x6nrqobbiv6xtyvj9csqj/btf/u1t8/oj54vhwi5h1gsvdox67teta0vdpin2askkz6w5lyqocrjbonuuhyp5roconw0pklp+2zhrmcy0mxhcjsnjohvqazmprukygcjcy3lhjd39s2eoydmyz944tkhei6rwtcfozlcir/wazwotd5siua9q8a9ng2gppclgk7x649ayqynl+ruy1q7t7fbw/tzsbg='
By prefixing the encrypted string with {cipher}, our Spring Cloud Config server will know how to automatically decrypt the value for us before passing the result to the service. In a running instance with everything configured correctly, a curl request to the REST API to fetch this config would return:
...
"source": {
"server_port": 6767,
"server_name": "accountservice test",
"the_password": "password"
....
Pretty neat, right? The "the_password" property can be stored as an encrypted string on a public server (if you trust the encryption algorithm and the integrity of your signing key), and the Spring Cloud Config server (which may not under any circumstances be made available unsecured and/or visible outside of your internal cluster!) transparently decrypts the property into the actual value 'password'.
Of course, you need to encrypt the value using the same key that Spring Cloud Config uses for decryption, something that can be done over the config server's HTTP API:
curl http://$managerip:8888/encrypt -d 'password'
aqclkemzqsgivpkx+vx6vz+7ww00n... (rest omitted for brevity)
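If you would rather call the /encrypt endpoint from Go than from curl, a minimal sketch could look like the following; the manager IP is a placeholder for your own environment, and the snippet is purely illustrative.

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "strings"
)

func main() {
    // Ask the config server to encrypt a value with its configured key.
    // Placeholder address: substitute your own Swarm manager IP.
    resp, err := http.Post("http://192.168.99.100:8888/encrypt", "text/plain",
        strings.NewReader("password"))
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    cipherText, _ := ioutil.ReadAll(resp.Body)
    // Prefix the result with {cipher} before putting it into the YAML file.
    fmt.Printf("the_password: '{cipher}%s'\n", cipherText)
}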
Viper
Our Go-based configuration framework of choice is Viper. Viper has a nice API to work with, is extensible, and doesn't get in the way of our normal application code. While Viper doesn't natively support loading configuration from a Spring Cloud Configuration server, we'll write a short snippet of code that does this for us. Viper also handles many file types as config sources, for example JSON, YAML, and plain properties files, and it can read environment variables from the OS for us, which can be quite neat. Once initialized and populated, our configuration is always available using the various viper.Get* functions. Very convenient, indeed.
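Before wiring Viper up to the config server, here is a minimal sketch of the Viper usage pattern we'll rely on: set values once, read them anywhere through the typed getters. The values below are just placeholders for illustration.

package main

import (
    "fmt"

    "github.com/spf13/viper"
)

func main() {
    // Values can come from files, environment variables, or plain code; here we
    // simply set them programmatically, which is exactly what our config loader
    // will do with the response from the config server.
    viper.Set("server_port", 6767)
    viper.Set("server_name", "accountservice dev")

    // Optionally let Viper also look up OS environment variables on Get* calls.
    viper.AutomaticEnv()

    // Typed getters are available anywhere in the codebase once values are set.
    fmt.Println(viper.GetInt("server_port"))    // 6767
    fmt.Println(viper.GetString("server_name")) // accountservice dev
}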
Remember the picture at the top of this blog post? Well, if not, here it is again:
We'll make our microservices do an HTTP request on startup, extract the "source" part of the JSON response, and stuff that into Viper so we can get the HTTP port for our web server from there. Let's go!
Loading the Configuration
As already demonstrated using curl, we can do a plain HTTP request to the config server where we just need to know our name and our "profile". We'll start by adding some parsing of flags to our "accountservice" main.go so we can specify an environment "profile" when starting, as well as an optional URI to the config server:
var appName = "accountservice"

// init function, runs before main()
func init() {
    // Read command line flags
    profile := flag.String("profile", "test", "Environment profile, something similar to Spring profiles")
    configServerUrl := flag.String("configserverurl", "http://configserver:8888", "Address to config server")
    configBranch := flag.String("configbranch", "master", "git branch to fetch configuration from")
    flag.Parse()

    // Pass the flag values into viper.
    viper.Set("profile", *profile)
    viper.Set("configserverurl", *configServerUrl)
    viper.Set("configbranch", *configBranch)
}

func main() {
    fmt.Printf("Starting %v\n", appName)

    // NEW - load the config
    config.LoadConfigurationFromBranch(
        viper.GetString("configserverurl"),
        appName,
        viper.GetString("profile"),
        viper.GetString("configbranch"))
    initializeBoltClient()
    service.StartWebServer(viper.GetString("server_port")) // NEW, use port from loaded config
}
The config.LoadConfigurationFromBranch(..) function goes into a new package we're calling config. Create /goblog/accountservice/config and the following file named loader.go:
// Loads config from for example http://configserver:8888/accountservice/test/p8
func LoadConfigurationFromBranch(configServerUrl string, appName string, profile string, branch string) {
    url := fmt.Sprintf("%s/%s/%s/%s", configServerUrl, appName, profile, branch)
    fmt.Printf("Loading config from %s\n", url)
    body, err := fetchConfiguration(url)
    if err != nil {
        panic("Couldn't load configuration, cannot start. Terminating. Error: " + err.Error())
    }
    parseConfiguration(body)
}

// Make HTTP request to fetch configuration from config server
func fetchConfiguration(url string) ([]byte, error) {
    resp, err := http.Get(url)
    if err != nil {
        panic("Couldn't load configuration, cannot start. Terminating. Error: " + err.Error())
    }
    body, err := ioutil.ReadAll(resp.Body)
    return body, err
}

// Pass JSON bytes into struct and then into Viper
func parseConfiguration(body []byte) {
    var cloudConfig springCloudConfig
    err := json.Unmarshal(body, &cloudConfig)
    if err != nil {
        panic("Cannot parse configuration, message: " + err.Error())
    }

    for key, value := range cloudConfig.PropertySources[0].Source {
        viper.Set(key, value)
        fmt.Printf("Loading config property %v => %v\n", key, value)
    }
    if viper.IsSet("server_name") {
        fmt.Printf("Successfully loaded configuration for service %s\n", viper.GetString("server_name"))
    }
}

// Structs having same structure as response from Spring Cloud Config
type springCloudConfig struct {
    Name            string           `json:"name"`
    Profiles        []string         `json:"profiles"`
    Label           string           `json:"label"`
    Version         string           `json:"version"`
    PropertySources []propertySource `json:"propertySources"`
}

type propertySource struct {
    Name   string                 `json:"name"`
    Source map[string]interface{} `json:"source"`
}
Basically, we're doing that HTTP GET to the config server with our appName, profile, and git branch, then unmarshalling the response JSON into the springCloudConfig struct we're declaring in the same file. Finally, we're simply iterating over all the key-value pairs in cloudConfig.PropertySources[0] and stuffing each pair into Viper so we can access them whenever we want using viper.GetString(key) or another of the typed getters the Viper API provides.
Note that if we have an issue contacting the configuration server or parsing its response, we panic() the entire microservice, which will kill it. Docker Swarm will detect this and try to deploy a new instance in a few seconds. The typical reason for this behavior is starting your cluster from cold, where the Go-based microservice starts much faster than the Spring Boot-based config server does. Let Swarm retry a few times and things should sort themselves out.
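If you would rather not lean entirely on Swarm restarts during a cold start, one alternative is a small retry loop around the fetch before giving up. This is only a sketch, not code from the repo; fetchConfigurationWithRetry is a hypothetical helper.

package config

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "time"
)

// Retry the configuration fetch a few times with a fixed delay, to tolerate a
// config server that starts more slowly than the Go service.
func fetchConfigurationWithRetry(url string, attempts int, delay time.Duration) ([]byte, error) {
    var lastErr error
    for i := 0; i < attempts; i++ {
        resp, err := http.Get(url)
        if err == nil {
            defer resp.Body.Close()
            return ioutil.ReadAll(resp.Body)
        }
        lastErr = err
        fmt.Printf("Config server not reachable (%v), retrying in %v...\n", err, delay)
        time.Sleep(delay)
    }
    return nil, lastErr
}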
We've split the actual work up into one public function and a few package-scoped ones for easier unit testing. The unit test checking that we can transform JSON into actual Viper properties looks like this, using the GoConvey style of tests:
func TestParseConfiguration(t *testing.T) {
    Convey("Given a JSON configuration response body", t, func() {
        var body = `{"name":"accountservice-dev","profiles":["dev"],"label":null,"version":null,"propertySources":[{"name":"file:/config-repo/accountservice-dev.yml","source":{"server_port":6767}}]}`

        Convey("When parsed", func() {
            parseConfiguration([]byte(body))

            Convey("Then Viper should have been populated with values from Source", func() {
                So(viper.GetString("server_port"), ShouldEqual, "6767")
            })
        })
    })
}
Run from goblog/accountservice if you want to:
> go test ./...
Updates to the Dockerfile
Given that we're loading the configuration from an external source, our service needs a hint about where to find it. That's done by supplying flags as command-line arguments when starting the container and service:
goblog/accountservice/Dockerfile:
FROM iron/base
EXPOSE 6767
ADD accountservice-linux-amd64 /
ADD healthchecker-linux-amd64 /
HEALTHCHECK --interval=3s --timeout=3s CMD ["./healthchecker-linux-amd64", "-port=6767"] || exit 1
ENTRYPOINT ["./accountservice-linux-amd64", "-configserverurl=http://configserver:8888", "-profile=test", "-configbranch=p8"]
Our ENTRYPOINT now supplies values making it possible to configure where to load configuration from.
Into the Swarm
You probably noted that we're not using 6767 as a hard-coded port number anymore, i.e.:
service.StartWebServer(viper.GetString("server_port"))
Use the copyall.sh script to build and redeploy the updated "accountservice" into Docker Swarm:
> ./copyall.sh
After everything's finished, the service should still be running exactly as it did before you started on this part of the blog series, with the exception that it actually picked its HTTP port from an external and centralized configuration server rather than having it hard-coded into the compiled binary.
(Do note that the ports exposed in Dockerfiles, HEALTHCHECK CMDs, and docker service create statements don't know anything about config servers. In a CI/CD pipeline, you'd probably externalize the relevant properties so they are injectable by the build server at build time.)
Let's take a look at the log output of our accountservice:
> docker logs -f [containerid]
Starting accountservice
Loading config from http://configserver:8888/accountservice/test/p8
Loading config property the_password => password
Loading config property server_port => 6767
Loading config property server_name => accountservice test
Successfully loaded configuration for service accountservice test
(Actually printing config values like this is bad practice; the output above is just for educational purposes!)
Live Configuration Updates
- "Oh, did that external service we're using for [some purpose] change their URL?"
- "Darn! No one told us!!"
I assume many of us have encountered situations where we need to either rebuild an entire application or at least restart it to update some invalid or changed configuration value. Spring Cloud has the concept of @RefreshScope, where beans can be live-updated with changed configuration propagated from a git commit hook.
This figure provides an overview of how a push to a git repo is propagated to our Go-based microservices:
In this blog post, we're using a GitHub repo, which has absolutely no way of knowing how to perform a post-commit hook operation against my laptop's Spring Cloud Config server, so we'll emulate a commit hook push using the built-in /monitor endpoint of our Spring Cloud Config server.
curl -H "X-Github-Event: push" -H "Content-Type: application/json" -X POST -d '{"commits": [{"modified": ["accountservice.yml"]}],"name":"some name..."}' -ki http://$managerip:8888/monitor
The Spring Cloud Config server will know what to do with this POST and send out a RefreshRemoteApplicationEvent on an exchange in RabbitMQ (abstracted by Spring Cloud Bus). If we take a look at the RabbitMQ admin GUI after having booted Spring Cloud Config successfully, that exchange should have been created:
How does an exchange relate to more traditional messaging constructs such as publisher, consumer, and queue?
publisher -> exchange -> (routing) -> queue -> consumer
A message is published to an exchange, which then distributes copies of the message to queue(s) based on routing rules and bindings, which may have registered consumers.
So in order to consume RefreshRemoteApplicationEvent messages (I prefer to call them refresh tokens), all we have to do now is make our Go service listen for such messages on the springCloudBus exchange and, if we are the targeted application, perform a configuration reload. Let's do that.
Using the AMQP Protocol to Consume Messages in Go
The RabbitMQ broker can be accessed using the AMQP protocol. There's a good Go AMQP client we're going to use called streadway/amqp. Most of the AMQP/RabbitMQ plumbing code should go into some reusable utility; perhaps we'll refactor that later on. The plumbing code is based on this example from the streadway/amqp repo.
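For orientation, here is a rough sketch of the kind of plumbing such a consumer wraps up: dial the broker, open a channel, declare the exchange, declare and bind a queue, then loop over deliveries handing each body to a handler. The flag values and the catch-all binding key are assumptions for illustration only; the repo's actual NewConsumer differs and follows the streadway example mentioned above.

package config

import (
    "log"

    "github.com/streadway/amqp"
)

func consume(amqpURL string, exchangeName string, queueName string, consumerTag string,
    handler func(body []byte, consumerTag string)) error {

    conn, err := amqp.Dial(amqpURL)
    if err != nil {
        return err
    }
    ch, err := conn.Channel()
    if err != nil {
        return err
    }
    // Topic exchange, matching what Spring Cloud Bus uses. The durable/auto-delete
    // flags must match whatever is already declared on the broker.
    if err := ch.ExchangeDeclare(exchangeName, "topic", true, false, false, false, nil); err != nil {
        return err
    }
    queue, err := ch.QueueDeclare(queueName, false, false, false, false, nil)
    if err != nil {
        return err
    }
    // "#" is a catch-all routing key; the real code binds with a specific key.
    if err := ch.QueueBind(queue.Name, "#", exchangeName, false, nil); err != nil {
        return err
    }
    deliveries, err := ch.Consume(queue.Name, consumerTag, true, false, false, false, nil)
    if err != nil {
        return err
    }
    for d := range deliveries {
        log.Printf("got %dB delivery: %s", len(d.Body), d.Body)
        handler(d.Body, consumerTag)
    }
    return nil
}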
In /goblog/accountservice/main.go, add a new line inside the main() function that will start an AMQP consumer for us:
func main() {
    fmt.Printf("Starting %v\n", appName)

    config.LoadConfigurationFromBranch(
        viper.GetString("configserverurl"),
        appName,
        viper.GetString("profile"),
        viper.GetString("configbranch"))
    initializeBoltClient()

    // NEW
    go config.StartListener(appName, viper.GetString("amqp_server_url"), viper.GetString("config_event_bus"))
    service.StartWebServer(viper.GetString("server_port"))
}
Note the new amqp_server_url and config_event_bus properties; they're loaded from the accountservice-test.yml configuration file we're fetching.
The StartListener function goes into a new file /goblog/accountservice/config/events.go. This file has a lot of AMQP boilerplate which we'll skip, so we can concentrate on the interesting parts:
func StartListener(appName string, amqpServer string, exchangeName string) {
    err := NewConsumer(amqpServer, exchangeName, "topic", "config-event-queue", exchangeName, appName)
    if err != nil {
        log.Fatalf("%s", err)
    }

    log.Printf("running forever")
    select {} // Yet another way to stop a goroutine from finishing...
}
The NewConsumer function is where all the boilerplate goes. We'll skip down to the code that actually processes an incoming message:
func handleRefreshEvent(body []byte, consumerTag string) {
    updateToken := &UpdateToken{}
    err := json.Unmarshal(body, updateToken)
    if err != nil {
        log.Printf("Problem parsing UpdateToken: %v", err.Error())
    } else {
        if strings.Contains(updateToken.DestinationService, consumerTag) {
            log.Println("Reloading Viper config from Spring Cloud Config server")

            // consumerTag is the same as the application name.
            LoadConfigurationFromBranch(
                viper.GetString("configserverurl"),
                consumerTag,
                viper.GetString("profile"),
                viper.GetString("configbranch"))
        }
    }
}

type UpdateToken struct {
    Type               string `json:"type"`
    Timestamp          int    `json:"timestamp"`
    OriginService      string `json:"originService"`
    DestinationService string `json:"destinationService"`
    Id                 string `json:"id"`
}
This code tries to parse the inbound message into an UpdateToken struct, and if the DestinationService matches our consumerTag (i.e. the appName "accountservice"), we'll call the same LoadConfigurationFromBranch function initially called when the service started.
Please note that in a real-life scenario, the NewConsumer function and the general message handling code would need more work with error handling, making sure only the appropriate messages are processed, etc.
Unit Testing
Let's write a unit test for the handleRefreshEvent() function. Create a new test file /goblog/accountservice/config/events_test.go:
var SERVICE_NAME = "accountservice"

func TestHandleRefreshEvent(t *testing.T) {
    // Configure initial viper values
    viper.Set("configserverurl", "http://configserver:8888")
    viper.Set("profile", "test")
    viper.Set("configbranch", "master")

    // Mock the expected outgoing request for new config
    defer gock.Off()
    gock.New("http://configserver:8888").
        Get("/accountservice/test/master").
        Reply(200).
        BodyString(`{"name":"accountservice-test","profiles":["test"],"label":null,"version":null,"propertySources":[{"name":"file:/config-repo/accountservice-test.yml","source":{"server_port":6767,"server_name":"accountservice reloaded"}}]}`)

    Convey("Given a refresh event received, targeting our application", t, func() {
        var body = `{"type":"RefreshRemoteApplicationEvent","timestamp":1494514362123,"originService":"config-server:docker:8888","destinationService":"accountservice:**","id":"53e61c71-cbae-4b6d-84bb-d0dcc0aeb4dc"}`

        Convey("When handled", func() {
            handleRefreshEvent([]byte(body), SERVICE_NAME)

            Convey("Then Viper should have been re-populated with values from Source", func() {
                So(viper.GetString("server_name"), ShouldEqual, "accountservice reloaded")
            })
        })
    })
}
I hope the BDD style of GoConvey conveys (pun intended!) how the test works. Note, though, how we use gock to intercept the outgoing HTTP request for new configuration and that we pre-populate Viper with some initial values.
Running It
Time to test this. Redeploy using our trusty copyall.sh script:
> ./copyall.sh
Check the log of the accountservice:
> docker logs -f [containerid]
Starting accountservice
... [truncated for brevity] ...
Successfully loaded configuration for service accountservice test    <-- look here!
... [truncated for brevity] ...
2017/05/12 12:06:36 dialing amqp://guest:guest@rabbitmq:5672/
2017/05/12 12:06:36 got connection, getting channel
2017/05/12 12:06:36 got channel, declaring exchange (springCloudBus)
2017/05/12 12:06:36 declared exchange, declaring queue (config-event-queue)
2017/05/12 12:06:36 declared queue (0 messages, 0 consumers), binding to exchange (key 'springCloudBus')
2017/05/12 12:06:36 queue bound to exchange, starting consume (consumer tag 'accountservice')
2017/05/12 12:06:36 running forever
Now, we'll make a change to the accountservice-test.yml file in the git repo and then fake a commit hook using the /monitor API POST shown earlier in this blog post:
I'm changing accountservice-test.yml and its server_name property, from accountservice test to temporary test string!, and pushing the change.
Next, use curl to let our Spring Cloud Config server know about the update:
> curl -H "X-Github-Event: push" -H "Content-Type: application/json" -X POST -d '{"commits": [{"modified": ["accountservice.yml"]}],"name":"what is this?"}' -ki http://192.168.99.100:8888/monitor
If everything works, this should trigger a refresh token from the config server, which our accountservice picks up. Check the log again:
> docker logs -f [containerid]
2017/05/12 12:13:22 got 195B consumer: [accountservice] delivery: [1] routingkey: [springCloudBus] {"type":"RefreshRemoteApplicationEvent","timestamp":1494591202057,"originService":"config-server:docker:8888","destinationService":"accountservice:**","id":"1f421f58-cdd6-44c8-b5c4-fbf1e2839baa"}
2017/05/12 12:13:22 Reloading Viper config from Spring Cloud Config server
Loading config from http://configserver:8888/accountservice/test/p8
Loading config property server_port => 6767
Loading config property server_name => temporary test string!
Loading config property amqp_server_url => amqp://guest:guest@rabbitmq:5672/
Loading config property config_event_bus => springCloudBus
Loading config property the_password => password
Successfully loaded configuration for service temporary test string!    <-- look here!
As you can see, the final line now prints "Successfully loaded configuration for service temporary test string!" The source code for that line:
if viper.IsSet("server_name") {
    fmt.Printf("Successfully loaded configuration for service %s\n", viper.GetString("server_name"))
}
We've dynamically changed a property value previously stored in Viper at runtime, without touching our running service! This is really cool!
Important note: While updating properties dynamically is very cool, that in itself won't update things like the port of our running web server, existing connection objects in pools, or (for example) the active connection to the RabbitMQ broker. Those kinds of "already-running" things take a lot more care to restart with new config values and are out of scope for this particular blog post.
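To make that distinction concrete, here is a hedged sketch (not code from the repo) illustrating which values benefit from a live reload: anything read from Viper at request time reflects the reloaded config, while values captured once at startup, such as the listen port below, do not.

package main

import (
    "fmt"
    "log"
    "net/http"

    "github.com/spf13/viper"
)

func main() {
    // These would normally come from the config server; set here for illustration.
    viper.Set("server_name", "accountservice test")
    viper.Set("server_port", "6767")

    http.HandleFunc("/whoami", func(w http.ResponseWriter, r *http.Request) {
        // Read at request time: reflects a config reload without a restart.
        fmt.Fprintf(w, "I am %v\n", viper.GetString("server_name"))
    })

    // Read once at startup: a reloaded server_port has no effect on this listener.
    log.Fatal(http.ListenAndServe(":"+viper.GetString("server_port"), nil))
}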
(Unless you've set things up with your own git repo, this demo isn't reproducible, but I hope you enjoyed it anyway.)
Footprint and Performance
Adding loading of configuration at startup shouldn't affect runtime performance at all, and it doesn't: 1K req/s yields the same latencies, CPU, and memory use as before. Just take my word for it or try it yourself. We'll just take a quick peek at memory use after the first startup:
CONTAINER                                   CPU %   MEM USAGE / LIMIT     MEM %    NET I/O           BLOCK I/O        PIDS
accountservice.1.pi7wt0wmh2quwm8kcw4e82ay4  0.02%   4.102MiB / 1.955GiB   0.20%    18.8kB / 16.5kB   0B / 1.92MB      6
configserver.1.3joav3m6we6oimg28879gii79    0.13%   568.7MiB / 1.955GiB   28.41%   171kB / 130kB     72.9MB / 225kB   50
rabbitmq.1.kfmtsqp5fnw576btraq19qel9        0.19%   125.5MiB / 1.955GiB   6.27%    6.2MB / 5.18MB    31MB / 414kB     75
quotes-service.1.q81deqxl50n3xmj0gw29mp7jy  0.05%   340.1MiB / 1.955GiB   16.99%   2.97kB / 0B       48.1MB / 0B      30
Even with AMQP integration and Viper as the configuration framework, we have an initial footprint of ~4 MB. Our Spring Boot-based config server uses over 500 MB of RAM, while RabbitMQ (which I think is written in Erlang?) uses 125 MB.
I'm fairly certain we could starve the config server down to a 256 MB initial heap size using some standard JVM -Xmx args, but it's nevertheless definitely a lot of RAM. However, in a production environment, I would expect us to be running ~2 config server instances, not tens or hundreds. When it comes to the supporting services from the Spring Cloud ecosystem, memory use isn't such a big deal, as we usually won't have more than one or a few instances of any such service.
Summary
In this part of the Go microservices blog series, we deployed a Spring Cloud Config server and its RabbitMQ dependency into our Swarm. Then, we wrote a bit of Go code that, using plain HTTP, JSON, and the Viper framework, loads config from the config server on startup and feeds it into Viper for convenient access throughout our microservice codebase.
In the next part, we'll continue to explore AMQP and RabbitMQ, going into more detail and taking a look at sending some messages ourselves.
Published at DZone with permission of Erik Lupander, DZone MVB. See the original article here.