
Deploying with Docker Compose

This section provides instructions to deploy each service one by one with Docker Compose, using the local, insecure non-HTTPS deployment option for ease of use. OpenZMS components follow common Docker and Docker Compose conventions:

  • Each service provides a deploy/docker-compose.yml service description, and a deploy/env/ subdirectory with environment variables you can override to change configuration.

  • You can create a deploy/.env file with environment variable overrides to change Docker Compose's interpretation of the deploy/docker-compose.yml file, since some of the service definitions are conditional on environment variables.

  • For instance, each service stores persistent data in /zms unless the DATADIR environment variable is overridden in the aforementioned .env file in the source repo's deploy/ subdirectory (see the example after this list). The per-service deploy/docker-compose.yml files create volumes in $DATADIR, usually named after the service.

  • Most services rely on sidecar database containers (postgres or postgis), and those create their own persistent data volumes in $DATADIR via the Docker Compose deploy/docker-compose.yml.

  • Each source code repository's service file uses the same Docker virtual network names, so you can deploy services one by one using each service's Compose file, and yet keep them on the appropriate networks for internal ("southbound", gRPC) and external ("northbound", RESTful) API communication.

  • Services are prefixed with their name (e.g. zms-frontend) and suffixed with one of three deployment options: -prod, -dev, and -local-dev. The latter option is appropriate only for local development and testing on non-public endpoints, since it uses default environment configuration, including tokens and passwords that are committed to the source repositories.

    DANGER

    Do not run the -local-dev variants on public endpoints unless you have overridden the default password and token configuration, and restricted access to debugging information and control (e.g. the server-side of the zms-frontend-local-dev service will leak information to web clients, so do not use it on public endpoints).

  • Most services provide an HTTP_ENDPOINT configuration option that can be overridden in the service's .env file, which defines the URL at which the service can be reached from outside the Docker Compose virtual network. This value is sent to the zms-identity service, which maintains a service directory. If you want to make the API endpoints publicly accessible, you will need to change these variables for each service; see the Public deployments section below.
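
For example, a minimal deploy/.env that overrides the persistent data directory could look like this (a sketch; /opt/zms is an arbitrary example location):

bash
DATADIR=/opt/zms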

Set up source repo storage

Create SRCDIR for checking out OpenZMS source code repositories. Change SRCDIR to your preferred location.

bash
export SRCDIR=~/openzms
mkdir $SRCDIR

Set up persistent storage

Create DATADIR for per-service persistent data. (If you created deploy/.env to override this environment variable, adjust the value accordingly in the following code block.)

bash
export DATADIR=/zms
sudo mkdir $DATADIR
sudo chown `id -un` $DATADIR

This directory will store per-service database contents and filesystem data (e.g. for the zms-dst digital spectrum twin service, RF observations, indexes, and propagation simulations).

Persisting OpenZMS tokens and URLs

As you deploy services, the instructions will have you append configuration details such as API tokens and service URLs to $DATADIR/env.sh. If you later switch to a new shell, source $DATADIR/env.sh to restore the environment variables you captured while deploying:

bash
. $DATADIR/env.sh

Install the Python CLI and library

You will want to install the OpenZMS zms-client-py Python library, which provides 1) an auto-generated library wrapper around the OpenZMS service RESTful APIs and 2) several CLI tools, including the general zmsclient-cli. We describe several installation options in the zms-client-py README.
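
For a quick start, one option is to install straight from the Git repository with pip (a sketch; see the README for the full set of supported installation methods):

bash
pip install git+https://gitlab.flux.utah.edu/openzms/zms-client-py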

Deploy zms-identity

Clone the zms-identity service source repository at https://gitlab.flux.utah.edu/openzms/zms-identity :

bash
cd $SRCDIR
git clone https://gitlab.flux.utah.edu/openzms/zms-identity
cd zms-identity

Next, build and deploy the local-dev variant of the zms-identity service, after customizing the environment variable configuration in deploy/env/zms-identity-local-dev.env (see in source repository). Read the comments above each variable for more information; note that these environment variables override the service config arguments, whose defaults you can browse in the source repository.

Before deploying, you may wish to change configuration options, especially the bootstrap tokens.

Bootstrap options

As the zms-identity service starts up, it checks to see if the bootstrap config option is enabled, and if so, will populate the identity database with initial elements, users, and tokens, so that the API may be used.

  • The zms-identity-local-dev.env file contains a bootstrap configuration for the identity service, including admin element and user element API tokens to use with the northbound RESTful API: one for the admin account in the admin element, and another for the powder-owner proto-user account in the powder element. (You should replace powder with the name of your primary non-admin operational element, i.e. your spectrum-consuming organization.)

    WARNING

    After bringing up the zms-identity service the first time, you should set BOOTSTRAP=false before subsequent restarts.

  • OpenZMS's token format is inspired by GitHub's. You can set the BOOTSTRAP_TOKEN and BOOTSTRAP_USER_ELEMENT_TOKEN variables to other tokens; use the zms-identity-cli tool to generate new tokens in the correct format:

    bash
    docker run --rm -it --entrypoint zms-identity-cli gitlab.flux.utah.edu:4567/openzms/zms-identity/zms-identity token-generate pat

Additional options

  • The local-dev env configuration enables automatic database migrations. Migrations are included in the service image you run, and can be automatically applied as the service is starting up.

  • The local-dev variant defaults to building the container image from the source repository checkout. If you don't want to do a full source build, you can first create zms-identity/deploy/.env with the following contents:

    bash
    PULL_POLICY=missing
    ZMS_IDENTITY_IMAGE=gitlab.flux.utah.edu:4567/openzms/zms-identity/zms-identity:latest

Deploy and verify

Build and create the zms-identity-local-dev service:

bash
docker compose -f deploy/docker-compose.yml up -d zms-identity-local-dev

View the containers:

bash
docker compose -f deploy/docker-compose.yml ps

Watch the main service logfile:

bash
docker compose -f deploy/docker-compose.yml logs -f zms-identity-local-dev

Construct a URL to the identity service's RESTful API endpoint on its private IP address, and save it to $DATADIR/env.sh:

bash
export IDENTITY_IP_HTTP=`docker inspect zms-identity-local-dev -f '{{ index . "NetworkSettings" "Networks" "zms-frontend-local-dev-net" "IPAddress"}}'`
export IDENTITY_HTTP=http://${IDENTITY_IP_HTTP}:8010/v1
echo $IDENTITY_HTTP

echo IDENTITY_HTTP="$IDENTITY_HTTP" >> $DATADIR/env.sh
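
The commands below assume $ADMIN_TOKEN holds the admin API token from your bootstrap configuration (e.g. the BOOTSTRAP_TOKEN value in zms-identity-local-dev.env). For example, you can persist it alongside the URL (the placeholder below stands in for your actual token):

bash
export ADMIN_TOKEN=<your-bootstrap-admin-token>
echo ADMIN_TOKEN="$ADMIN_TOKEN" >> $DATADIR/env.sh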

Inspect your admin token via the RESTful API (assumes you have the jq tool installed on your system):

bash
curl -s -k -X GET -H "X-Api-Token: $ADMIN_TOKEN" -H "X-Api-Elaborate: true" ${IDENTITY_HTTP}/tokens/this | jq

Inspect your admin token via zmsclient-cli (invocation may vary depending on how you installed it):

bash
zmsclient-cli --token=$ADMIN_TOKEN --elaborate token this get

(or, for instance, if you are running directly from a zms-client-py source tree:)

bash
python -m zmsclient.cli --token=$ADMIN_TOKEN --elaborate token this get

List the bootstrapped OpenZMS users via the RESTful API, but this time in a pretty-printed style instead of raw JSON:

bash
zmsclient-cli --token=$ADMIN_TOKEN --elaborate --output=pretty user list

List the bootstrapped OpenZMS elements via the RESTful API:

bash
zmsclient-cli --token=$ADMIN_TOKEN --elaborate element list

Deploy zms-zmc

Clone the zms-zmc service source repository at https://gitlab.flux.utah.edu/openzms/zms-zmc :

bash
cd $SRCDIR
git clone https://gitlab.flux.utah.edu/openzms/zms-zmc
cd zms-zmc

Next, build and deploy the local-dev variant of the zms-zmc service, after customizing the environment variable configuration in deploy/env/zms-zmc-local-dev.env (see in source repository). Read the comments above each variable for more information; note that these environment variables override the service config arguments, whose defaults you can browse in the source repository.

Before deploying for the first time, you should modify the Zone configuration options and enable the bootstrap option.

Bootstrap options

As the zms-zmc service starts up, it checks to see if the bootstrap config option is enabled, and if so, will populate the zmc database with an initial Zone object and its geographic boundaries.

  • The zms-zmc-local-dev.env file contains a bootstrap configuration for the zmc service which will automatically create a single Zone object. By default, this is the POWDER-RDZ zone and its rectangular polygon; you should adapt this to your local deployment. Set the ZONE_NAME and ZONE_DESCRIPTION variables to custom values, and change the ZONE_AREA variable to a semicolon-delimited list of lat,long WGS84 coordinates that defines the operational area of your zone (see the example after this list). The first and last point in the list must match to create a closed polygon.

    WARNING

    After bringing up the zms-zmc service the first time, you should set BOOTSTRAP=false before subsequent restarts.
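
For example, a hypothetical zone covering a small rectangular area might be configured in zms-zmc-local-dev.env like this (illustrative values only; note that the first and last points match to close the polygon):

bash
ZONE_NAME=my-rdz
ZONE_DESCRIPTION="My radio dynamic zone"
ZONE_AREA="40.770,-111.850;40.770,-111.820;40.740,-111.820;40.740,-111.850;40.770,-111.850"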

Additional options

  • The local-dev env configuration enables automatic database migrations. Migrations are included in the service image you run, and can be automatically applied as the service is starting up.

  • The local-dev variant defaults to building the container image from the source repository checkout. If you don't want to do a full source build, you can first create zms-zmc/deploy/.env with the following contents:

    bash
    PULL_POLICY=missing
    ZMS_ZMC_IMAGE=gitlab.flux.utah.edu:4567/openzms/zms-zmc/zms-zmc:latest

Deploy and verify

Build and create the zms-zmc-local-dev service:

bash
docker compose -f deploy/docker-compose.yml up -d zms-zmc-local-dev

View the containers:

bash
docker compose -f deploy/docker-compose.yml ps

Watch the main service logfile:

bash
docker compose -f deploy/docker-compose.yml logs -f zms-zmc-local-dev

Construct a URL to the zmc service's RESTful API endpoint on its private IP address, and save it to $DATADIR/env.sh:

bash
export ZMC_IP_HTTP=`docker inspect zms-zmc-local-dev -f '{{ index . "NetworkSettings" "Networks" "zms-frontend-local-dev-net" "IPAddress"}}'`
export ZMC_HTTP=http://${ZMC_IP_HTTP}:8010/v1
echo $ZMC_HTTP

echo ZMC_HTTP="$ZMC_HTTP" >> $DATADIR/env.sh

Source the $DATADIR/env.sh file into your current shell if you haven't already done so, and inspect the auto-created Zone object:

bash
. $DATADIR/env.sh

List the bootstrapped OpenZMS Zone via the RESTful API:

bash
zmsclient-cli --token=$ADMIN_TOKEN --elaborate zone list
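
Equivalently, you can query the zmc RESTful API directly with curl; note that the /zones collection path used here is an assumption, mirroring the identity service's URL pattern:

bash
curl -s -X GET -H "X-Api-Token: $ADMIN_TOKEN" -H "X-Api-Elaborate: true" ${ZMC_HTTP}/zones | jq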

Deploy zms-dst

Clone the zms-dst service source repository at https://gitlab.flux.utah.edu/openzms/zms-dst :

bash
cd $SRCDIR
git clone https://gitlab.flux.utah.edu/openzms/zms-dst
cd zms-dst

Next, build and deploy the local-dev variant of the zms-dst service, after customizing the environment variable configuration in deploy/env/zms-dst-local-dev.env (see in source repository). Read the comments above each variable for more information; note that these environment variables override the service config arguments, whose defaults you can browse in the source repository.

Configuration options

To use propagation simulations within the zms-dst service, you will want to configure and deploy the zms-dst-geoserver-local-dev service. This GeoServer instance stores the raw propagation simulation data and serves queries against it (e.g., checking for coexistence conflicts). GeoServer is required only if you want to store and query propagation simulation data.

Initialize the GEOSERVER_-prefixed variables according to your configuration (see the example after this list):

  • GEOSERVER_API should be set to a full URL. If you are using the default zms-dst-local-dev configuration, this value is simply http://zms-dst-geoserver-local-dev:8080/geoserver/rest. If you are deploying the APIs on a public network, change the hostname to the FQDN of your machine; if you are also using the caddy-all SSL-enabled reverse proxy detailed in the Public deployments section, change the URL scheme to https and the port to 8025 instead of 8080.

  • Create a random password in place of the default value for GEOSERVER_PASSWORD.

  • Set GEOSERVER_ANONYMOUS to true so that web map tiles may be accessed without authentication. (The OpenZMS services do not proxy authorization to the GeoServer API yet, so this option is currently required.)
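
For example, the relevant lines in zms-dst-local-dev.env might look like this for a default local deployment (illustrative values; replace the password placeholder with your own randomly generated value):

bash
GEOSERVER_API=http://zms-dst-geoserver-local-dev:8080/geoserver/rest
GEOSERVER_PASSWORD=replace-with-a-random-value
GEOSERVER_ANONYMOUS=true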

Additional options

  • The local-dev env configuration enables automatic database migrations, but the zms-dst service has not yet been migrated to Atlas migrations like the others. This support is still under development.

  • The local-dev variant defaults to building the container image from the source repository checkout. If you don't want to do a full source build, you can first create zms-dst/deploy/.env with the following contents:

    bash
    PULL_POLICY=missing
    ZMS_DST_IMAGE=gitlab.flux.utah.edu:4567/openzms/zms-dst/zms-dst:latest

Deploy and verify

Build and create the zms-dst-local-dev service:

bash
docker compose -f deploy/docker-compose.yml up -d zms-dst-local-dev

View the containers:

bash
docker compose -f deploy/docker-compose.yml ps

Watch the main service logfile:

bash
docker compose -f deploy/docker-compose.yml logs -f zms-dst-local-dev

Construct a URL to the dst service's RESTful API endpoint on its private IP address, and save it to $DATADIR/env.sh:

bash
export DST_IP_HTTP=`docker inspect zms-dst-local-dev -f '{{ index . "NetworkSettings" "Networks" "zms-frontend-local-dev-net" "IPAddress"}}'`
export DST_HTTP=http://${DST_IP_HTTP}:8010/v1
echo $DST_HTTP

echo DST_HTTP="$DST_HTTP" >> $DATADIR/env.sh

Source the $DATADIR/env.sh file into your current shell if you haven't already done so:

bash
. $DATADIR/env.sh

List the bootstrapped OpenZMS Observation objects via the RESTful API:

bash
zmsclient-cli --token=$ADMIN_TOKEN --elaborate observation list

(Initially this will return zero results, until you have connected a monitor.)

Deploy zms-alarm

Clone the zms-alarm service source repository at https://gitlab.flux.utah.edu/openzms/zms-alarm :

bash
cd $SRCDIR
git clone https://gitlab.flux.utah.edu/openzms/zms-alarm
cd zms-alarm

Next, build and deploy the local-dev variant of the zms-alarm service, after customizing the environment variable configuration in deploy/env/zms-alarm-local-dev.env.

Configuration options

The zms-alarm service does not currently offer extra configuration beyond the typical OpenZMS parameters. Simply change HTTP_ENDPOINT if you are creating a public deployment; otherwise, continue.

Additional options

  • The local-dev env configuration enables automatic database migrations, but the zms-alarm service has not yet been migrated to Atlas migrations like the others. This support is still under development.

  • The local-dev variant defaults to building the container image from the source repository checkout. If you don't want to do a full source build, you can first create zms-alarm/deploy/.env with the following contents:

    bash
    PULL_POLICY=missing
    ZMS_ALARM_IMAGE=gitlab.flux.utah.edu:4567/openzms/zms-alarm/zms-alarm:latest

Deploy and verify

Build and create the zms-alarm-local-dev service:

bash
docker compose -f deploy/docker-compose.yml up -d zms-alarm-local-dev

View the containers:

bash
docker compose -f deploy/docker-compose.yml ps

Watch the main service logfile:

bash
docker compose -f deploy/docker-compose.yml logs -f zms-alarm-local-dev

Construct a URL to the alarm service's RESTful API endpoint on its private IP address, and save it to $DATADIR/env.sh:

bash
export ALARM_IP_HTTP=`docker inspect zms-alarm-local-dev -f '{{ index . "NetworkSettings" "Networks" "zms-frontend-local-dev-net" "IPAddress"}}'`
export ALARM_HTTP=http://${ALARM_IP_HTTP}:8010/v1
echo $ALARM_HTTP

echo ALARM_HTTP="$ALARM_HTTP" >> $DATADIR/env.sh

Deploy zms-frontend

Clone the zms-frontend service source repository at https://gitlab.flux.utah.edu/openzms/zms-frontend :

bash
cd $SRCDIR
git clone https://gitlab.flux.utah.edu/openzms/zms-frontend
cd zms-frontend

You can deploy the OpenZMS frontend in a number of ways:

  • If you are creating a local-dev backend service deployment, but want to make your frontend service publicly accessible, you should deploy the local-dev-prod variant. This lets you keep debugging enabled in the backend services (by deploying their local-dev variants) while disabling it in the frontend.

  • If you are creating a local-dev private deployment, you can deploy the zms-frontend-local-dev variant, which enables debugging and mounts the source tree read-only into the container to support hot module reloading, a useful feature during development.

  • If you are creating a local-dev private deployment and want to run directly out of the source tree, first populate .env.local with variables set as in deploy/env/zms-frontend-local-dev.env, modified to your configuration as necessary, and then run:

    bash
    npx nuxi dev --dotenv .env.local

Configuration options

Before deploying the frontend, customize the environment variable configuration in deploy/env/zms-frontend-local-dev.env (see in source repository). Read the comments above each variable for more information.

  • If you are creating a public deployment, update the values of the API endpoints in the NUXT_PUBLIC_*_URL variables to use the publicly-accessible FQDN and ports.

  • If you are creating a public deployment, update the NUXT_AUTH_ORIGIN and AUTH_ORIGIN variables to point to your site (e.g. https://demo.openzms.net); see the sketch after this list. Make sure that this FQDN is aligned with your reverse proxy configuration, as described below in the Public deployments section.
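
For example, for a public deployment at https://demo.openzms.net, the origin variables might look like the sketch below; check deploy/env/zms-frontend-local-dev.env for the exact NUXT_PUBLIC_*_URL variable names and defaults:

bash
NUXT_AUTH_ORIGIN=https://demo.openzms.net
AUTH_ORIGIN=https://demo.openzms.net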

Additional options

  • The local-dev and local-dev-prod variants default to building the container image from the source repository checkout. If you don't want to do a full source build, you can first create zms-frontend/deploy/.env with the following contents:

    bash
    PULL_POLICY=missing
    ZMS_FRONTEND_IMAGE=gitlab.flux.utah.edu:4567/openzms/zms-frontend/zms-frontend:latest

Deploy and verify

Build and create the zms-frontend-local-dev-prod service:

bash
docker compose -f deploy/docker-compose.yml up -d zms-frontend-local-dev-prod

View the containers:

bash
docker compose -f deploy/docker-compose.yml ps

Watch the main service logfile:

bash
docker compose -f deploy/docker-compose.yml logs -f zms-frontend-local-dev-prod

Public deployments

If you want to deploy OpenZMS in production and make the API endpoints accessible via a reverse proxy (e.g. on the public Internet), you will want to set the HTTP_ENDPOINT variables in each component's deploy/env/<service-name>.env accordingly. When each service starts up, it registers with the zms-identity service, which maintains a service directory that users and services alike can use to discover API endpoints. For instance, if you want to make your zms-zmc service reachable at demo.openzms.net:8010, you would set HTTP_ENDPOINT=demo.openzms.net:8010 in zms-zmc/deploy/env/zms-zmc-local-dev.env. Each time you (re)start your zms-zmc service, it registers or updates itself at the zms-identity service.
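
For example, a sketch of pointing the zmc service at the public endpoint from the example above and re-registering it:

bash
# In zms-zmc/deploy/env/zms-zmc-local-dev.env:
#   HTTP_ENDPOINT=demo.openzms.net:8010
cd $SRCDIR/zms-zmc
docker compose -f deploy/docker-compose.yml up -d zms-zmc-local-dev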

You can easily deploy an automatic, SSL-enabled reverse proxy with caddy, which automatically obtains and renews SSL certificates for your hostname as long as ports 80 and 443 are available for it to use. The zms-identity/deploy/docker-compose.yml file provides an example caddy-all reverse proxy service and an example Caddyfile-example that you would need to modify with your public API endpoints (hostname and ports) and copy to $DATADIR/caddy-all/Caddyfile. This configuration is straightforward to deploy on a single host with a valid hostname and a public IP with ports 80, 443, 8000, 8010, 8020, 8025, and 8030 available. If you change the ports, you will need to change the caddy-all ports block correspondingly, as well as the per-service HTTP_ENDPOINT and HTTP_ENDPOINT_LISTEN variables in the individual service .env files.
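
For example, a sketch of staging the Caddyfile (this assumes Caddyfile-example lives in zms-identity/deploy/; adjust the path if your checkout differs):

bash
mkdir -p $DATADIR/caddy-all
cp $SRCDIR/zms-identity/deploy/Caddyfile-example $DATADIR/caddy-all/Caddyfile
# Now edit $DATADIR/caddy-all/Caddyfile to use your public hostname and ports.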

OpenZMS is supported by the National Science Foundation under Awards 2232463 and 2431961.