Create Serverless Functions With OpenFaaS
OpenFaaS is a serverless functions framework that runs on top of Docker and Kubernetes. In this tutorial, you'll learn how to:
- Deploy OpenFaaS to a Kubernetes cluster
- Set up the OpenFaaS CLI
- Create, build, and deploy serverless functions using the CLI
- Invoke serverless functions using the CLI
- Update an existing serverless function
- Deploy serverless functions using the web interface
- Monitor your serverless functions with Prometheus and Grafana
Prerequisites
- A Kubernetes cluster. If you don't have a running Kubernetes cluster, follow the instructions from the Set Up a Kubernetes Cluster with Kind section below.
- A Docker Hub Account. See the Docker Hub page for details about creating a new account.
- kubectl. Refer to the Install and Set Up kubectl page for details about installing kubectl.
- Node.js 10 or higher. To check if Node.js is installed on your computer, type the following command:
node --version
The following example output shows that Node.js is installed on your computer:
v10.16.3
If Node.js is not installed or you're running an older version, you can download the installer from the Downloads page.
- This tutorial assumes basic familiarity with Docker and Kubernetes.
Set Up a Kubernetes Cluster With Kind (Optional)
With Kind, you can run a local Kubernetes cluster using Docker containers as nodes. The steps in this section are optional. Follow them only if you don't have a running Kubernetes cluster.
1. Create a file named openfaas-cluster.yaml and copy in the following spec:
# three node (two workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
2. Use the kind create cluster command to create a Kubernetes cluster with one control plane and two worker nodes:
kind create cluster --config openfaas-cluster.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.17.0)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
 ✓ Joining worker nodes
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Thanks for using kind!
Deploy OpenFaaS to a Kubernetes Cluster
You can install OpenFaaS using Helm, plain YAML files, or its own installer named arkade, which provides a quick and easy way to get OpenFaaS running. In this section, you'll deploy OpenFaaS with arkade.
1. Enter the following command to install arkade:
curl -sLS https://dl.get-arkade.dev | sudo sh
Downloading package https://github.com/alexellis/arkade/releases/download/0.1.10/arkade-darwin as /Users/andrei/Desktop/openFaaS/faas-hello-world/arkade-darwin
Download complete.
Running with sufficient permissions to attempt to move arkade to /usr/local/bin
New version of arkade installed to /usr/local/bin
Creating alias 'ark' for 'arkade'.
_ _
__ _ _ __| | ____ _ __| | ___
/ _` | '__| |/ / _` |/ _` |/ _ \
| (_| | | | < (_| | (_| | __/
\__,_|_| |_|\_\__,_|\__,_|\___|
Get Kubernetes apps the easy way
Version: 0.1.10
Git Commit: cf96105d37ed97ed644ab56c0660f0d8f4635996
2. Now, install openfaas with:
arkade install openfaas
Using kubeconfig: /Users/andrei/.kube/config
Using helm3
Node architecture: "amd64"
Client: "x86_64", "Darwin"
2020/03/10 16:20:40 User dir established as: /Users/andrei/.arkade/
https://get.helm.sh/helm-v3.1.1-darwin-amd64.tar.gz
/Users/andrei/.arkade/bin/helm3/darwin-amd64 darwin-amd64/
/Users/andrei/.arkade/bin/helm3/README.md darwin-amd64/README.md
/Users/andrei/.arkade/bin/helm3/LICENSE darwin-amd64/LICENSE
/Users/andrei/.arkade/bin/helm3/helm darwin-amd64/helm
2020/03/10 16:20:43 extracted tarball into /Users/andrei/.arkade/bin/helm3: 3 files, 0 dirs (1.633976582s)
"openfaas" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ibm-charts" chart repository
...Successfully got an update from the "openfaas" chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈ Happy Helming!⎈
VALUES values.yaml
Command: /Users/andrei/.arkade/bin/helm3/helm [upgrade --install openfaas openfaas/openfaas --namespace openfaas --values /var/folders/nz/2gtkncgx56sgrpqvr40qhhrw0000gn/T/charts/openfaas/values.yaml --set gateway.directFunctions=true --set faasnetes.imagePullPolicy=Always --set gateway.replicas=1 --set queueWorker.replicas=1 --set clusterRole=false --set operator.create=false --set openfaasImagePullPolicy=IfNotPresent --set basicAuthPlugin.replicas=1 --set basic_auth=true --set serviceType=NodePort]
Release "openfaas" does not exist. Installing it now.
NAME: openfaas
LAST DEPLOYED: Tue Mar 10 16:21:03 2020
NAMESPACE: openfaas
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To verify that openfaas has started, run:
kubectl -n openfaas get deployments -l "release=openfaas, app=openfaas"
=======================================================================
= OpenFaaS has been installed. =
=======================================================================
# Get the faas-cli
curl -SLsf https://cli.openfaas.com | sudo sh
# Forward the gateway to your machine
kubectl rollout status -n openfaas deploy/gateway
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
# If basic auth is enabled, you can now log into your gateway:
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin
faas-cli store deploy figlet
faas-cli list
# For Raspberry Pi
faas-cli store list \
--platform armhf
faas-cli store deploy figlet \
--platform armhf
# Find out more at:
# https://github.com/openfaas/faas
Thanks for using arkade!
3. To verify that the deployments were created, run the kubectl get deployments command. Specify the namespace and the selector using the -n and -l parameters as follows:
kubectl get deployments -n openfaas -l "release=openfaas, app=openfaas"
If the deployments are not yet ready, you should see something similar to the following example output:
NAME READY UP-TO-DATE AVAILABLE AGE
alertmanager 0/1 1 0 45s
basic-auth-plugin 1/1 1 1 45s
faas-idler 0/1 1 0 45s
gateway 0/1 1 0 45s
nats 1/1 1 1 45s
prometheus 1/1 1 1 45s
queue-worker 1/1 1 1 45s
Once the installation is finished, the output should look like this:
NAME READY UP-TO-DATE AVAILABLE AGE
alertmanager 1/1 1 1 75s
basic-auth-plugin 1/1 1 1 75s
faas-idler 1/1 1 1 75s
gateway 1/1 1 1 75s
nats 1/1 1 1 75s
prometheus 1/1 1 1 75s
queue-worker 1/1 1 1 75s
4. Check the rollout status of the gateway deployment:
kubectl rollout status -n openfaas deploy/gateway
The following example output shows that the gateway deployment has been successfully rolled out:
deployment "gateway" successfully rolled out
5. Use the kubectl port-forward command to forward all requests made to http://localhost:8080 to the pod running the gateway service:
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
[1] 78674
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Note that the ampersand sign (&) runs the process in the background. You can use the jobs command to show the status of your background processes:
jobs
[1] + running kubectl port-forward -n openfaas svc/gateway 8080:8080
6. Issue the following command to retrieve your password and save it into an environment variable named PASSWORD:
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
Set Up the OpenFaaS CLI
OpenFaaS provides a command-line utility you can use to build and deploy your serverless functions. You can install it by following the steps from the Installation page.
Create a Serverless Function Using the CLI
Now that OpenFaaS and the faas-cli command-line utility are installed, you can create and deploy serverless functions using the built-in template engine. OpenFaaS provides two types of templates:
- The Classic templates are based on the Classic Watchdog and use stdio to communicate with your serverless function. Refer to the Watchdog page for more details about how the OpenFaaS Watchdog works.
- The of-watchdog templates use HTTP to communicate with your serverless function. These templates are available through the OpenFaaS Incubator GitHub repository.
In this tutorial, you'll use a classic template.
1. Run the following command to see the templates available in the official store:
faas-cli template store list
NAME SOURCE DESCRIPTION
csharp openfaas Classic C# template
dockerfile openfaas Classic Dockerfile template
go openfaas Classic Golang template
java8 openfaas Classic Java 8 template
node openfaas Classic NodeJS 8 template
php7 openfaas Classic PHP 7 template
python openfaas Classic Python 2.7 template
python3 openfaas Classic Python 3.6 template
python3-dlrs intel Deep Learning Reference Stack v0.4 for ML workloads
ruby openfaas Classic Ruby 2.5 template
node10-express openfaas-incubator Node.js 10 powered by express template
ruby-http openfaas-incubator Ruby 2.4 HTTP template
python27-flask openfaas-incubator Python 2.7 Flask template
python3-flask openfaas-incubator Python 3.6 Flask template
python3-http openfaas-incubator Python 3.6 with Flask and HTTP
node8-express openfaas-incubator Node.js 8 powered by express template
golang-http openfaas-incubator Golang HTTP template
golang-middleware openfaas-incubator Golang Middleware template
python3-debian openfaas Python 3 Debian template
powershell-template openfaas-incubator Powershell Core Ubuntu:16.04 template
powershell-http-template openfaas-incubator Powershell Core HTTP Ubuntu:16.04 template
rust booyaa Rust template
crystal tpei Crystal template
csharp-httprequest distantcam C# HTTP template
csharp-kestrel burtonr C# Kestrel HTTP template
vertx-native pmlopes Eclipse Vert.x native image template
swift affix Swift 4.2 Template
lua53 affix Lua 5.3 Template
vala affix Vala Template
vala-http affix Non-Forking Vala Template
quarkus-native pmlopes Quarkus.io native image template
perl-alpine tmiklas Perl language template based on Alpine image
node10-express-service openfaas-incubator Node.js 10 express.js microservice template
crystal-http koffeinfrei Crystal HTTP template
rust-http openfaas-incubator Rust HTTP template
bash-streaming openfaas-incubator Bash Streaming template
☞ Note that you can specify an alternative store for templates. The following example command lists the templates from a repository named andreipope:
faas-cli template store list -u https://raw.githubusercontent.com/andreipope/my-custom-store/master/templates.json
2. Download the official templates locally:
faas-cli template pull
Fetch templates from repository: https://github.com/openfaas/templates.git at master
2020/03/11 20:51:22 Attempting to expand templates from https://github.com/openfaas/templates.git
2020/03/11 20:51:25 Fetched 19 template(s) : [csharp csharp-armhf dockerfile go go-armhf java11 java11-vert-x java8 node node-arm64 node-armhf node12 php7 python python-armhf python3 python3-armhf python3-debian ruby] from https://github.com/openfaas/templates.git
☞ By default, the above command downloads the templates from the official OpenFaaS GitHub repository. If you want to use a custom repository, then you should specify the URL of your repository. The following example command pulls the templates from a repository named andreipope:
faas-cli template pull https://github.com/andreipope/my-custom-store/
3. To create a new serverless function, run the faas-cli new command, specifying:
- The name of your new function (appfleet-hello-world)
- The --lang parameter followed by the programming language template (node)
faas-cli new appfleet-hello-world --lang node
Folder: appfleet-hello-world created.
___ _____ ____
/ _ \ _ __ ___ _ __ | ___|_ _ __ _/ ___|
| | | | '_ \ / _ \ '_ \| |_ / _` |/ _` \___ \
| |_| | |_) | __/ | | | _| (_| | (_| |___) |
\___/| .__/ \___|_| |_|_| \__,_|\__,_|____/
|_|
Function created in folder: appfleet-hello-world
Stack file written: appfleet-hello-world.yml
Notes:
You have created a new function which uses Node.js 12.13.0 and the OpenFaaS
Classic Watchdog.
npm i --save can be used to add third-party packages like request or cheerio
npm documentation: https://docs.npmjs.com/
For high-throughput services, we recommend you use the node12 template which
uses a different version of the OpenFaaS watchdog.
At this point, your directory structure should look like the following:
tree . -L 2
.
├── appfleet-hello-world
│ ├── handler.js
│ └── package.json
├── appfleet-hello-world.yml
└── template
├── csharp
├── csharp-armhf
├── dockerfile
├── go
├── go-armhf
├── java11
├── java11-vert-x
├── java8
├── node
├── node-arm64
├── node-armhf
├── node12
├── php7
├── python
├── python-armhf
├── python3
├── python3-armhf
├── python3-debian
└── ruby
21 directories, 3 files
Things to note:
- The appfleet-hello-world/handler.js file contains the code of your serverless function. You can use the cat command to display the contents of this file (a small variation of this handler is sketched after this list):
cat appfleet-hello-world/handler.js
"use strict"
module.exports = async (context, callback) => {
return {status: "done"}
}
- You can specify the dependencies required by your serverless function in the package.json file. The automatically generated file is just an empty shell:
cat appfleet-hello-world/package.json
{
"name": "function",
"version": "1.0.0",
"description": "",
"main": "handler.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
- The spec of the appfleet-hello-world function is stored in the appfleet-hello-world.yml file:
cat appfleet-hello-world.yml
version: 1.0
provider:
name: openfaas
gateway: http://127.0.0.1:8080
functions:
appfleet-hello-world:
lang: node
handler: ./appfleet-hello-world
image: appfleet-hello-world:latest
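Coming back to handler.js: in the classic Node template, the Classic Watchdog passes the request body to your handler as the context argument, and whatever the handler returns is sent back to the caller. As a quick illustration, here is a minimal, hypothetical variation of the generated handler that echoes the request body back (it is not part of the generated project):
"use strict"

// Hypothetical variation of the generated handler: echo the request body back.
// `context` holds the request body that the Classic Watchdog reads from stdin;
// the returned object is serialized and written back as the response.
module.exports = async (context, callback) => {
  return { echo: context, receivedBytes: Buffer.byteLength(context || "") }
}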
Build Your Serverless Function
1. Open the appfleet-hello-world.yml file in a plain-text editor, and update the image field by prepending your Docker Hub username to it. The following example prepends my username (andreipopescu12) to the image field:
image: andreipopescu12/appfleet-hello-world:latest
Once you've made this change, the appfleet-hello-world.yml file should look similar to the following:
version: 1.0
provider:
name: openfaas
gateway: http://127.0.0.1:8080
functions:
appfleet-hello-world:
lang: node
handler: ./appfleet-hello-world
image: <YOUR-DOCKER-HUB-ACCOUNT>/appfleet-hello-world:latest
2. Build the function. Enter the faas-cli build command, specifying the -f argument with the name of the YAML file you edited in the previous step (appfleet-hello-world.yml):
faas-cli build -f appfleet-hello-world.yml
[0] > Building appfleet-hello-world.
Clearing temporary build folder: ./build/appfleet-hello-world/
Preparing: ./appfleet-hello-world/ build/appfleet-hello-world/function
Building: andreipopescu12/appfleet-hello-world:latest with node template. Please wait..
Sending build context to Docker daemon 10.24kB
Step 1/24 : FROM openfaas/classic-watchdog:0.18.1 as watchdog
---> 94b5e0bef891
Step 2/24 : FROM node:12.13.0-alpine as ship
---> 69c8cc9212ec
Step 3/24 : COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
---> Using cache
---> ebab4b723c16
Step 4/24 : RUN chmod +x /usr/bin/fwatchdog
---> Using cache
---> 7952724b5872
Step 5/24 : RUN addgroup -S app && adduser app -S -G app
---> Using cache
---> 33c7f04595d2
Step 6/24 : WORKDIR /root/
---> Using cache
---> 77b9dee16c79
Step 7/24 : ENV NPM_CONFIG_LOGLEVEL warn
---> Using cache
---> a3d3c0bb4480
Step 8/24 : RUN mkdir -p /home/app
---> Using cache
---> 65457e03fcb1
Step 9/24 : WORKDIR /home/app
---> Using cache
---> 50ab672e5660
Step 10/24 : COPY package.json ./
---> Using cache
---> 6143e79de873
Step 11/24 : RUN npm i --production
---> Using cache
---> a41566487c6e
Step 12/24 : COPY index.js ./
---> Using cache
---> 566633e78d2c
Step 13/24 : WORKDIR /home/app/function
---> Using cache
---> 04c9de75f170
Step 14/24 : COPY function/*.json ./
---> Using cache
---> 85cf909b646a
Step 15/24 : RUN npm i --production || :
---> Using cache
---> c088cbcad583
Step 16/24 : COPY --chown=app:app function/ .
---> Using cache
---> 192db89e5941
Step 17/24 : WORKDIR /home/app/
---> Using cache
---> ee2b7d7e8bd4
Step 18/24 : RUN chmod +rx -R ./function && chown app:app -R /home/app && chmod 777 /tmp
---> Using cache
---> 81831389293e
Step 19/24 : USER app
---> Using cache
---> ca0cade453f5
Step 20/24 : ENV cgi_headers="true"
---> Using cache
---> afe8d7413349
Step 21/24 : ENV fprocess="node index.js"
---> Using cache
---> 5471cfe85461
Step 22/24 : EXPOSE 8080
---> Using cache
---> caaa8ae11dc7
Step 23/24 : HEALTHCHECK --interval=3s CMD [ -e /tmp/.lock ] || exit 1
---> Using cache
---> 881b4d2adb92
Step 24/24 : CMD ["fwatchdog"]
---> Using cache
---> 82b586f039df
Successfully built 82b586f039df
Successfully tagged andreipopescu12/appfleet-hello-world:latest
Image: andreipopescu12/appfleet-hello-world:latest built.
[0] < Building appfleet-hello-world done in 2.25s.
[0] Worker done.
Total build time: 2.25s
3. You can list your Docker images with:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
andreipopescu12/appfleet-hello-world latest 82b586f039df 25 minutes ago 96MB
Push Your Image to Docker Hub
1. Log in to Docker Hub. Run the docker login command with the --username flag followed by your Docker Hub username. The following example command logs you in as andreipopescu12:
docker login --username andreipopescu12
Next, you will be prompted to enter your Docker Hub password:
Password:
Login Succeeded
2. Use the faas-cli push command to push your serverless function to Docker Hub:
faas-cli push -f appfleet-hello-world.yml
The push refers to repository [docker.io/andreipopescu12/appfleet-hello-world]
073c41b18852: Pushed
a5c05e98c215: Pushed
f749ad113dce: Pushed
e4f29400b370: Pushed
b7d0eb42e645: Pushed
84fba0eb2756: Pushed
cf2a3f2bc398: Pushed
942d3272b7d4: Pushed
037b653b7d4e: Pushed
966655dc62be: Pushed
08d8e0925a73: Pushed
6ce16b164ed0: Pushed
d76ecd300100: Pushed
77cae8ab23bf: Pushed
latest: digest: sha256:4150d4cf32e7e5ffc8fd15efeed16179bbf166536f1cc7a8c4105d01a4042928 size: 3447
[0] < Pushing appfleet-hello-world [andreipopescu12/appfleet-hello-world:latest] done.
[0] Worker done.
Deploy Your Function Using the CLI
1. With your serverless function pushed to Docker Hub, log in to your local instance of the OpenFaaS gateway by entering the following command:
echo -n $PASSWORD | faas-cli login --username admin --password-stdin
2. Run the faas-cli deploy command to deploy your serverless function:
faas-cli deploy -f appfleet-hello-world.yml
Deploying: appfleet-hello-world.
WARNING! Communication is not secure, please consider using HTTPS. Letsencrypt.org offers free SSL/TLS certificates.
Handling connection for 8080
Handling connection for 8080
Deployed. 202 Accepted.
URL: http://127.0.0.1:8080/function/appfleet-hello-world
☞ OpenFaaS provides an auto-scaling mechanism based on the number of requests per second, which is read from Prometheus. For the sake of simplicity, we won't cover auto-scaling in this tutorial. To further your knowledge, you can refer to the Auto-scaling page.
3. Use the faas-cli list command to list the functions deployed to your local OpenFaaS gateway:
faas-cli list
Function Invocations Replicas
appfleet-hello-world 0 1
☞ Note that you can also list the functions deployed to a different gateway by providing the URL of the gateway as follows:
faas-cli list --gateway https://<YOUR-GATEWAY-URL>:<YOUR-GATEWAY-PORT>
4. You can use the faas-cli describe command to retrieve more details about the appfleet-hello-world function:
faas-cli describe appfleet-hello-world
Name: appfleet-hello-world
Status: Ready
Replicas: 1
Available replicas: 1
Invocations: 1
Image: andreipopescu12/appfleet-hello-world:latest
Function process: node index.js
URL: http://127.0.0.1:8080/function/appfleet-hello-world
Async URL: http://127.0.0.1:8080/async-function/appfleet-hello-world
Labels: faas_function : appfleet-hello-world
Annotations: prometheus.io.scrape : false
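☞ Both faas-cli list and faas-cli describe talk to the gateway's REST API. If you'd rather script these checks, the following minimal Node.js sketch (a hypothetical helper, not part of the tutorial project) lists the deployed functions via GET /system/functions, assuming the gateway is still port-forwarded to localhost:8080 and the admin password is in the PASSWORD environment variable you set earlier:
// list-functions.js - hypothetical sketch: query the gateway REST API for the
// functions it currently runs (the same information faas-cli list shows).
const http = require('http')

// Basic auth credentials: the admin user and the password saved in $PASSWORD.
const auth = 'Basic ' + Buffer.from(`admin:${process.env.PASSWORD}`).toString('base64')

http.get({
  hostname: '127.0.0.1',
  port: 8080,
  path: '/system/functions',
  headers: { Authorization: auth },
}, (res) => {
  let body = ''
  res.on('data', (chunk) => (body += chunk))
  res.on('end', () => console.log(res.statusCode, body))
}).on('error', console.error)
Run it with node list-functions.js after exporting PASSWORD in the same shell.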
Invoke Your Serverless Function Using the CLI
1. To see your serverless function in action, issue the faas-cli invoke command, specifying:
- The -f flag with the name of the YAML file that describes your function (appfleet-hello-world.yml)
- The name of your function (appfleet-hello-world)
faas-cli invoke -f appfleet-hello-world.yml appfleet-hello-world
Reading from STDIN - hit (Control + D) to stop.
2. Type some input (for example, appfleet), and then press CTRL+D. The following example output shows that your serverless function works as expected:
appfleet
Handling connection for 8080
{"status":"done"}
Update Your Function
The function you created, deployed, and then invoked in the previous sections is just an empty shell. In this section, we'll update it to:
- Read the name of a city from stdin
- Fetch the weather forecast from openweathermap.org
- Print the weather forecast to the console
1. Create an OpenWeatherMap account by following the instructions from the Sign Up page.
2. Log in to OpenWeatherMap, and then select API KEYS:
3. From here, you can either copy the value of the default key or create a new API key, and then copy its value:
4. Now that you have an OpenWeatherMap API key, you must use npm to install a few dependencies. The following command moves into the appfleet-hello-world directory and then installs the get-stdin and request packages:
cd appfleet-hello-world && npm i --save get-stdin request
5. Replace the content of the handler.js file with:
"use strict"
const getStdin = require('get-stdin')
const request = require('request');
let handler = (req) => {
request(`http://api.openweathermap.org/data/2.5/weather?q=${req}&?units=metric&APPID=<YOUR-OPENWEATHERMAP-APP-KEY>`, function (error, response, body) {
console.error('error:', error)
console.log('statusCode:', response && response.statusCode)
console.log('body:', JSON.stringify(body))
})
};
getStdin().then(val => {
handler(val);
}).catch(e => {
console.error(e.stack);
});
module.exports = handler
☞ To try this function, replace <YOUR-OPENWEATHERMAP-API-KEY> with your OpenWeatherMap API key.
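☞ Hard-coding the key is fine for a quick test, but you may prefer to read it from an environment variable so it doesn't end up in your Git history or Docker image. The snippet below is a hypothetical sketch of that approach; the variable name OPENWEATHERMAP_API_KEY is an assumption, not something the template defines, and you would need to provide it to the function yourself (for example through the environment section of the stack YAML file):
"use strict"
// Hypothetical variation: read the API key from an environment variable instead
// of hard-coding it. OPENWEATHERMAP_API_KEY is an assumed name.
const request = require('request')

const apiKey = process.env.OPENWEATHERMAP_API_KEY
if (!apiKey) {
  console.error('OPENWEATHERMAP_API_KEY is not set')
  process.exit(1)
}

const city = process.argv[2] || 'Berlin'
request(`http://api.openweathermap.org/data/2.5/weather?q=${city}&APPID=${apiKey}`,
  (error, response, body) => {
    if (error) return console.error('error:', error)
    console.log('statusCode:', response.statusCode)
    console.log('body:', body)
  })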
6. You can use the faas-cli remove command to remove the function you deployed earlier in this tutorial:
faas-cli remove appfleet-hello-world
Deleting: appfleet-hello-world.
Handling connection for 8080
Removing old function.
7. Now that the old function has been removed, you must rebuild, push, and deploy your modified function. Instead of issuing three separate commands, you can use the faas-cli up command as in the following example:
faas-cli up -f appfleet-hello-world.yml
[0] > Building appfleet-hello-world.
Clearing temporary build folder: ./build/appfleet-hello-world/
Preparing: ./appfleet-hello-world/ build/appfleet-hello-world/function
Building: andreipopescu12/appfleet-hello-world:latest with node template. Please wait..
Sending build context to Docker daemon 43.01kB
Step 1/24 : FROM openfaas/classic-watchdog:0.18.1 as watchdog
---> 94b5e0bef891
Step 2/24 : FROM node:12.13.0-alpine as ship
---> 69c8cc9212ec
Step 3/24 : COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
---> Using cache
---> ebab4b723c16
Step 4/24 : RUN chmod +x /usr/bin/fwatchdog
---> Using cache
---> 7952724b5872
Step 5/24 : RUN addgroup -S app && adduser app -S -G app
---> Using cache
---> 33c7f04595d2
Step 6/24 : WORKDIR /root/
---> Using cache
---> 77b9dee16c79
Step 7/24 : ENV NPM_CONFIG_LOGLEVEL warn
---> Using cache
---> a3d3c0bb4480
Step 8/24 : RUN mkdir -p /home/app
---> Using cache
---> 65457e03fcb1
Step 9/24 : WORKDIR /home/app
---> Using cache
---> 50ab672e5660
Step 10/24 : COPY package.json ./
---> Using cache
---> 6143e79de873
Step 11/24 : RUN npm i --production
---> Using cache
---> a41566487c6e
Step 12/24 : COPY index.js ./
---> Using cache
---> 566633e78d2c
Step 13/24 : WORKDIR /home/app/function
---> Using cache
---> 04c9de75f170
Step 14/24 : COPY function/*.json ./
---> Using cache
---> f5765914bd05
Step 15/24 : RUN npm i --production || :
---> Using cache
---> a300be28c096
Step 16/24 : COPY --chown=app:app function/ .
---> 91cd72d8ad7a
Step 17/24 : WORKDIR /home/app/
---> Running in fce50a76475a
Removing intermediate container fce50a76475a
---> 0ff17b0a9faf
Step 18/24 : RUN chmod +rx -R ./function && chown app:app -R /home/app && chmod 777 /tmp
---> Running in 6d0c4c92fac1
Removing intermediate container 6d0c4c92fac1
---> 1e543bfbf6b0
Step 19/24 : USER app
---> Running in 6d33f5ec237d
Removing intermediate container 6d33f5ec237d
---> cb7cf5dfab12
Step 20/24 : ENV cgi_headers="true"
---> Running in 972c23374934
Removing intermediate container 972c23374934
---> 21c6e8198b21
Step 21/24 : ENV fprocess="node index.js"
---> Running in 3be91f9d5228
Removing intermediate container 3be91f9d5228
---> aafb7a756d38
Step 22/24 : EXPOSE 8080
---> Running in da3183bd88c5
Removing intermediate container da3183bd88c5
---> 5f6fd7e66a95
Step 23/24 : HEALTHCHECK --interval=3s CMD [ -e /tmp/.lock ] || exit 1
---> Running in a590c91037ae
Removing intermediate container a590c91037ae
---> fbe20c32941f
Step 24/24 : CMD ["fwatchdog"]
---> Running in 59cd231f0576
Removing intermediate container 59cd231f0576
---> 88cd8ac65ade
Successfully built 88cd8ac65ade
Successfully tagged andreipopescu12/appfleet-hello-world:latest
Image: andreipopescu12/appfleet-hello-world:latest built.
[0] < Building appfleet-hello-world done in 13.95s.
[0] Worker done.
Total build time: 13.95s
[0] > Pushing appfleet-hello-world [andreipopescu12/appfleet-hello-world:latest].
The push refers to repository [docker.io/andreipopescu12/appfleet-hello-world]
04643e0c999f: Pushed
db3ccc4403b8: Pushed
24d1d5a62262: Layer already exists
adfa28db7666: Layer already exists
b7d0eb42e645: Layer already exists
84fba0eb2756: Layer already exists
cf2a3f2bc398: Layer already exists
942d3272b7d4: Layer already exists
037b653b7d4e: Layer already exists
966655dc62be: Layer already exists
08d8e0925a73: Layer already exists
6ce16b164ed0: Layer already exists
d76ecd300100: Layer already exists
77cae8ab23bf: Layer already exists
latest: digest: sha256:818d92b10d276d32bcc459e2918cb537051a14025e694eb59a9b3caa0bb4e41c size: 3456
[0] < Pushing appfleet-hello-world [andreipopescu12/appfleet-hello-world:latest] done.
[0] Worker done.
Deploying: appfleet-hello-world.
WARNING! Communication is not secure, please consider using HTTPS. Letsencrypt.org offers free SSL/TLS certificates.
Handling connection for 8080
Handling connection for 8080
Deployed. 202 Accepted.
URL: http://127.0.0.1:8080/function/appfleet-hello-world
☞ Note that you can skip the push or the deploy steps:
- The following example command skips the push step:
faas-cli up -f appfleet-hello-world.yml --skip-push
- The following example command skips the deploy step:
faas-cli up -f appfleet-hello-world.yml --skip-deploy
8. To verify that the updated serverless function works as expected, invoke it as follows:
faas-cli invoke -f appfleet-hello-world.yml appfleet-hello-world
Reading from STDIN - hit (Control + D) to stop.
Berlin
Handling connection for 8080
Hello, you are currently in Berlin
statusCode: 200
body: "{\"coord\":{\"lon\":13.41,\"lat\":52.52},\"weather\":[{\"id\":802,\"main\":\"Clouds\",\"description\":\"scattered clouds\",\"icon\":\"03d\"}],\"base\":\"stations\",\"main\":{\"temp\":282.25,\"feels_like\":270.84,\"temp_min\":280.93,\"temp_max\":283.15,\"pressure\":1008,\"humidity\":61},\"visibility\":10000,\"wind\":{\"speed\":13.9,\"deg\":260,\"gust\":19},\"clouds\":{\"all\":40},\"dt\":1584107132,\"sys\":{\"type\":1,\"id\":1275,\"country\":\"DE\",\"sunrise\":1584077086,\"sunset\":1584119213},\"timezone\":3600,\"id\":2950159,\"name\":\"Berlin\",\"cod\":200}"
9. To clean up, run the faas-cli remove command with the name of your serverless function (appfleet-hello-world) as an argument:
faas-cli remove appfleet-hello-world
Deleting: appfleet-hello-world.
Handling connection for 8080
Removing old function.
Deploy Serverless Functions Using the Web Interface
OpenFaaS provides a web-based user interface. In this section, you'll learn how you can use it to deploy a serverless function.
1. First, use the echo command to retrieve the password you stored in the PASSWORD environment variable:
echo $PASSWORD
49IoP28G8247MZcj6a1FWUYUx
2. Open a browser and visit http://localhost:8080. To log in, use the admin username and the password you retrieved in the previous step. You will be redirected to the OpenFaaS home page. Select the DEPLOY NEW FUNCTION button.
3. A new window will be displayed. Select the Custom tab, and then type:
- docker.io/andreipopescu12/appfleet-hello-world in the Docker Image input box
- appfleet-hello-world in the Function name input box
4. Once you've filled in the Docker image and Function name input boxes, select the DEPLOY button:
5. Your new function will be visible in the left navigation bar. Click on it:
You'll be redirected to the invoke function page:
6. In the Request body input box, type in the name of the city you want to retrieve the weather forecast for, and then select the INVOKE button:
If everything works well, the weather forecast will be displayed in the Response Body field:
Monitor Your Serverless Functions with Prometheus and Grafana
The OpenFaaS gateway exposes Prometheus metrics that track, among other things, how often your functions are invoked and how long each invocation takes.
In this section, you will learn how to set up Prometheus and Grafana to track the health of your serverless functions.
1. Use the following command to list your deployments:
kubectl get deployments -n openfaas -l "release=openfaas, app=openfaas"
NAME READY UP-TO-DATE AVAILABLE AGE
alertmanager 1/1 1 1 15m
basic-auth-plugin 1/1 1 1 15m
faas-idler 1/1 1 1 15m
gateway 1/1 1 1 15m
nats 1/1 1 1 15m
prometheus 1/1 1 1 15m
queue-worker 1/1 1 1 15m
2. To expose the prometheus deployment, create a service object named prometheus-ui:
kubectl expose deployment prometheus -n openfaas --type=NodePort --name=prometheus-ui
service/prometheus-ui exposed
☞ The --type=NodePort flag exposes the prometheus-ui service on each node's IP address. Also, a ClusterIP service is created. You'll use this to connect to the prometheus-ui service from outside the cluster.
3. To inspect the prometheus-ui service, enter the following command:
kubectl get svc prometheus-ui -n openfaas
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
prometheus-ui NodePort 10.96.129.204 <none> 9090:31369/TCP 8m1s
4. Forward all requests made to http://localhost:9090 to the pod running the prometheus-ui service:
kubectl port-forward -n openfaas svc/prometheus-ui 9090:9090 &
5. Now, you can point your browser to http://localhost:9090, and you should see a page similar to the following screenshot:
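☞ Besides browsing the Prometheus UI, you can query its HTTP API directly. The following minimal Node.js sketch (hypothetical, assuming the port-forward from the previous step and that the gateway exposes a metric named gateway_function_invocation_total) prints the raw query result:
// query-prometheus.js - hypothetical sketch: query the Prometheus HTTP API for
// an OpenFaaS gateway metric. Assumes localhost:9090 is port-forwarded as above.
const http = require('http')

const query = encodeURIComponent('gateway_function_invocation_total')

http.get(`http://127.0.0.1:9090/api/v1/query?query=${query}`, (res) => {
  let body = ''
  res.on('data', (chunk) => (body += chunk))
  res.on('end', () => console.log(body)) // JSON with status and data.result
}).on('error', console.error)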
6. To deploy Grafana, you'll use the stefanprodan/faas-grafana:4.6.3 image. Run the following command:
kubectl run grafana -n openfaas --image=stefanprodan/faas-grafana:4.6.3 --port=3000
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/grafana created
7. Now, you can list your deployments with:
kubectl get deployments -n openfaas
NAME READY UP-TO-DATE AVAILABLE AGE
alertmanager 1/1 1 1 46m
basic-auth-plugin 1/1 1 1 46m
faas-idler 1/1 1 1 46m
gateway 1/1 1 1 46m
grafana 1/1 1 1 107s
nats 1/1 1 1 46m
prometheus 1/1 1 1 46m
queue-worker 1/1 1 1 46m
8. Use the following kubectl expose deployment command to create a service object that exposes the grafana deployment:
kubectl expose deployment grafana -n openfaas --type=NodePort --name=grafana
service/grafana exposed
9. Retrieve details about your new service with:
kubectl get service grafana -n openfaas
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana NodePort 10.96.194.59 <none> 3000:32464/TCP 60s
10. Forward all requests made to http://localhost:3000 to the pod running the grafana service:
kubectl port-forward -n openfaas svc/grafana 3000:3000 &
[3] 3973
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
11. Now that you set up the port forwarding, you can access Grafana by pointing your browser to http://localhost:3000:
12. Log in to Grafana using the username admin and password admin. The Home Dashboard page will be displayed:
13. From the left menu, select Dashboards --> Import:
14. Type https://grafana.com/grafana/dashboards/3434 in the Grafana.com Dashboard input box. Then, select the Load button:
15. In the Import Dashboard dialog box, set the Prometheus data source to faas, and then select Import:
An empty dashboard will be displayed:
16. Now, you can invoke your function a couple of times using the faas-cli invoke command as follows:
faas-cli invoke -f appfleet-hello-world.yml appfleet-hello-world
17. Switch back to the browser window in which you opened Grafana. Your dashboard should update automatically and look similar to the following screenshot:
We hope this tutorial was useful for learning the basics of deploying serverless functions with OpenFaaS. For more great tutorials about Docker and Kubernetes, we recommend you visit our blog.
Thanks for reading!