Digital Ocean
This is a step-by-step guide for deploying a Ship application. We will use Digital Ocean Managed Kubernetes, Managed MongoDB, and Container Registry, GitHub Actions for automated deployment, and CloudFlare for DNS and SSL configuration.
You need to create GitHub, CloudFlare, and Digital Ocean accounts and install the following tools on your machine before starting:
- kubectl - CLI tool for accessing the Kubernetes cluster (we recommend installing it via Docker Desktop);
- helm - CLI tool for managing Kubernetes deployments;
- kubectx - CLI tool for easier switching between Kubernetes contexts;
- jq - command-line JSON processor used to manipulate JSON data;
Run the following commands to ensure that everything is installed correctly:
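Each tool should print a version or usage message (the exact output differs between versions):

```bash
kubectl version --client   # kubectl client version
helm version               # helm version
kubectx --help             # prints usage if kubectx is installed
jq --version               # jq version
```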
You will also need git and Node.js if you don’t have them installed already.
Project setup
First, initialize your project: run npx create-ship-app init in the terminal, then choose the Digital Ocean Managed Kubernetes deployment type.
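In the terminal (the prompt labels may differ slightly between create-ship-app versions):

```bash
npx create-ship-app init
# when asked for the deployment type, choose "Digital Ocean Managed Kubernetes"
```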
You will get the following project structure.
Create a private GitHub repository and upload the source code.
Container registry
You need to create a Container Registry for storing Docker images. The deployment script will upload images to the Container Registry during the build step, and Kubernetes will automatically pull these images from the Container Registry to run a new version of a service during the deployment step.
After some time, you will get a registry endpoint.
Now you should configure the deployment script to point to the Container Registry: update the dockerRegistry.name value to registry.digitalocean.com/oigen43/my-app.
registry.digitalocean.com/oigen43/my-app consists of 2 values:
- registry.digitalocean.com/oigen43 - the registry endpoint;
- my-app - the project name;
Docker images for each service are stored in a separate repository. In Digital Ocean, repositories are created automatically when something is uploaded to a specific path. During the deployment process, the script will automatically create repository paths in the following format:
- API - registry.digitalocean.com/oigen43/my-app-api;
- Scheduler - registry.digitalocean.com/oigen43/my-app-scheduler;
- Migrator - registry.digitalocean.com/oigen43/my-app-migrator;
- Web - registry.digitalocean.com/oigen43/my-app-web;
Images for all environments will be uploaded to the same repository for each service.
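The deployment script handles the build and upload for you, but under the hood it amounts to tagging and pushing images to those paths. A rough sketch for the API service (the build path, image tag, and doctl usage here are assumptions, not the script’s exact commands):

```bash
# authenticate Docker with the Digital Ocean Container Registry (requires doctl)
doctl registry login

# build, tag and push the API image to its repository path (illustrative only)
docker build -t my-app-api ./apps/api
docker tag my-app-api registry.digitalocean.com/oigen43/my-app-api:latest
docker push registry.digitalocean.com/oigen43/my-app-api:latest
```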
Kubernetes cluster
Now let’s create a Managed Kubernetes cluster.
We recommend creating the cluster in the region where your end users are located: it will reduce the response time for incoming requests to all services. Also, if your cluster is located in the same region as the Container Registry, the deployment process will be faster.
Set the Node pool name and configure the Nodes. Digital Ocean recommends creating at least 2 nodes for the production environment.
The last step is to set a cluster name. A common practice is to use the project name for it. You can also add an environment suffix if you have separate clusters for each environment: my-app-staging, my-app-production.
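If you prefer the command line over the web console, the same cluster can also be created with doctl, Digital Ocean’s CLI (not in the tool list above); a sketch with an assumed region and node size:

```bash
# create a 2-node cluster with a node pool named pool-app (region and node size are examples)
doctl kubernetes cluster create my-app-production \
  --region fra1 \
  --node-pool "name=pool-app;size=s-2vcpu-4gb;count=2"
```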
Now you need to configure the node pool from the previous step in the deployment script: update the nodePool value to pool-app.
Accessing cluster from a local machine
You need to download the cluster’s kubeconfig; this file includes the information needed to access the cluster through kubectl.
Kubeconfig files can contain information about several clusters. You already have your own on the local machine; it should have been created during kubectl installation.
You need to add information about the new cluster to your config.
Find the .kube/config file on your machine and add the cluster, context, and user values from the downloaded config to it.
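Instead of copying the values by hand, you can let kubectl merge the two files; a sketch, assuming the downloaded file was saved as ~/Downloads/my-app-kubeconfig.yaml (the path is an assumption):

```bash
# back up the current config, then merge it with the downloaded one and flatten into a single file
cp ~/.kube/config ~/.kube/config.backup
KUBECONFIG=~/.kube/config:~/Downloads/my-app-kubeconfig.yaml \
  kubectl config view --flatten > /tmp/merged-kubeconfig
mv /tmp/merged-kubeconfig ~/.kube/config
```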
If everything is OK, you will be able to switch to your cluster. Type kubectx in the terminal and select your cluster.
Then try to get information about the pods installed in the cluster: type kubectl get pods -A in the terminal.
If you did all the steps correctly, you will see the cluster’s system pods listed in the terminal.
Personal access token
To upload Docker images to the Container Registry and later pull them from the cluster, we need a Digital Ocean Personal Access Token. This token was created automatically when you created the cluster.
Add the Write scope to the token and change the token’s name to the app name; it will be easier to find it in the future.
You can grab this token from the kubeconfig that we downloaded from Digital Ocean.
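For example (the downloaded file name is an assumption):

```bash
# the token is stored in the user section of the downloaded kubeconfig
grep 'token:' ~/Downloads/my-app-kubeconfig.yaml
```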
Be careful with the Personal Access Token: if someone steals it, they will get access to all resources in your Digital Ocean account.
Dependencies
Now we need to install our dependencies in the cluster.
| Dependency | Description |
| --- | --- |
| ingress-nginx | Ingress controller for Kubernetes that uses Nginx as a reverse proxy and load balancer |
| redis | An open-source, advanced key-value store; Redis is needed for the API service |
| regcred | A bash script that creates a Kubernetes Secret; the Secret is needed to authorize in the Container Registry when pulling images from the cluster |
You can read here how ingress-nginx works.
Configure the Helm values for ingress-nginx: update the doks.digitalocean.com/node-pool value to pool-app.
Open the deploy/bin folder and run the bash script.
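The script automates the installation of all three dependencies; roughly speaking, it amounts to something like the following (the chart sources, release names, and namespaces here are assumptions, not the script’s exact commands):

```bash
# ingress-nginx from its official chart repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# redis (the Bitnami chart is one common option)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis --namespace redis --create-namespace

# regcred: a Secret that lets the cluster pull images from the Container Registry,
# using the Personal Access Token as both username and password
kubectl create secret docker-registry regcred \
  --docker-server=registry.digitalocean.com \
  --docker-username=<personal-access-token> \
  --docker-password=<personal-access-token> \
  --namespace production
```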
You will be prompted to enter some values when installing the regcred dependency.
DNS and SSL
Once you deploy ingress-nginx, it will create a Load Balancer with an external IP. All incoming requests to services should be sent to the Load Balancer’s external IP; ingress-nginx will then route them to your services based on the domains from the Ingress configuration.
To get the Load Balancer IP, type kubectl get services -n ingress-nginx in the terminal and copy the EXTERNAL-IP of ingress-nginx-controller.
It takes some time for ingress-nginx to configure everything and provide the EXTERNAL-IP.
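For example:

```bash
kubectl get services -n ingress-nginx
# the EXTERNAL-IP of ingress-nginx-controller shows <pending> until the Load Balancer is provisioned
```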
We are using CloudFlare for setting DNS records. You can register a domain in CloudFlare or transfer it from another service.
Open the DNS tab in CloudFlare and create two A records, for Web and API, that point to the Load Balancer’s external IP.
Select the Proxied option, which will proxy all traffic through Cloudflare. It does a lot of awesome work; you can read more about it here. In our case, we use it for automatic SSL certificate generation.
If you are deploying to a staging/demo environment, add the corresponding suffix to the domain. Example: my-app-staging.
Now add your domains to the helm templates. In this example, we are deploying to the production environment; if you are deploying to staging, you will need to update the staging.yaml file.
Database
Now, let’s create a Managed MongoDB cluster. Select the latest MongoDB version and choose the same region as the Kubernetes cluster; it will reduce the latency of database requests.
Choose a database configuration.
The last step is to set a cluster name. A common practice is to use the project name for it. You can also add an environment suffix if you have separate clusters for each environment: my-app-staging, my-app-production.
After some time, the database cluster will be created. Copy the connection string and add it to the API config.
Change the database name in the connection string from admin to api-production or api-staging.
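An illustrative example (the host, credentials, and parameters below are placeholders; copy the real connection string from the Digital Ocean dashboard):

```
# Before (placeholder values):
mongodb+srv://doadmin:<password>@my-app-production-0000.mongo.ondigitalocean.com/admin?tls=true&authSource=admin
# After - database name changed to api-production:
mongodb+srv://doadmin:<password>@my-app-production-0000.mongo.ondigitalocean.com/api-production?tls=true&authSource=admin
```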
The MongoDB cluster is open to all incoming connections by default, which is not secure. We need to select the sources that are allowed to connect to the database. Open the Settings tab and add your Kubernetes cluster to Trusted Sources. If you want to connect to the database from your machine, also add your IP to Trusted Sources.
Environment variables
API
For the API deployment, you need to set up environment variables using Kubernetes secrets and configMaps.
Secrets in Kubernetes are used to store sensitive information, such as passwords, API tokens, and keys. They are encoded in Base64 format to provide a level of security. These can be mounted into containers as data volumes or used as environment variables.
ConfigMaps in Kubernetes are used to store configuration data in key-value pairs, such as environment variables, command-line arguments, or configuration files. They help decouple configuration from containerized applications.
Before deploying the app, make sure all necessary variables from the API config are set. Here are the default required variables:
- MONGO_URI (requires encoding)
- MONGO_DB_NAME
- API_URL
- WEB_URL
Open the deploy/bin folder and run the bash script.
Enter the stage name. Then you’ll be asked for the variable name, its value, and whether it should be encoded.
Repeat this process as required to create all the essential variables.
If you choose to encode a variable, it will be stored as a Secret in Kubernetes. Otherwise, it gets stored in a configMap.
The script works for initially creating secrets and configMaps, as well as for updates. When updating, you only need to input the variables that require changes, not all of them.
After updating variables, initiate a new deployment: pods read variable values at startup, so they need to be restarted for the changes to apply.
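To double-check what the script created and to restart pods after changing variables, something like this works (the secret, deployment, and namespace names are assumptions; use the ones from your cluster):

```bash
# list configMaps and secrets in the target namespace
kubectl get configmaps,secrets -n production

# decode a secret's values with jq to verify them
kubectl get secret api-production-secret -n production -o json \
  | jq '.data | map_values(@base64d)'

# restart the API deployment so pods pick up the new values
kubectl rollout restart deployment api -n production
```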
WEB
To modify environment variables in web, edit the .env.staging or .env.production file.
Avoid storing sensitive information in web environment files as they are not secure.
Port
To configure the web application to use port 3002, add the line ENV PORT=3002 to the web Dockerfile.
CI/CD
To automate deployment through GitHub Actions, you need to configure the GitHub Secrets used inside the workflow files.
The deployment will be triggered on each commit. Committing to the main branch will trigger a deployment in the staging environment, and committing to the production branch will trigger a deployment in the production environment.
To check the required Secrets, you can open the workflows in the .github folder at the root of your project.
To automate deployment to the production environment, you need to create the DIGITAL_OCEAN_TOKEN and KUBE_CONFIG_PRODUCTION secrets for the api-production.yml and web-production.yml workflows.
- DIGITAL_OCEAN_TOKEN - the Personal Access Token created earlier;
- KUBE_CONFIG_PRODUCTION - the kubeconfig of the production cluster (the file we downloaded from Digital Ocean);
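You can add these secrets in the repository settings on GitHub, or from the terminal with the GitHub CLI. A sketch, assuming the workflow expects the raw kubeconfig contents and the file path shown below (both are assumptions; gh is optional and not in the tool list above):

```bash
gh secret set DIGITAL_OCEAN_TOKEN --body "<personal-access-token>"
gh secret set KUBE_CONFIG_PRODUCTION < ~/Downloads/my-app-kubeconfig.yaml
```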
Now commit all changes to GitHub; this will trigger a deployment.
Done! The application is deployed and can be accessed at the configured domain.
If something went wrong, you can check the workflow logs on GitHub and use the kubectl logs and kubectl describe commands.
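For example (the namespace and pod name below are placeholders):

```bash
kubectl get pods -n production                 # check pod statuses
kubectl logs <pod-name> -n production          # application logs
kubectl describe pod <pod-name> -n production  # events: image pull errors, crashes, etc.
```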
Manual deployment
To deploy services to the cluster manually, you need to set the cluster authorization credentials inside the config.
Set environment and namespace to production or staging, and set your Personal Access Token in dockerRegistry.username and dockerRegistry.password.
Run the deployment script. It will do the same as the CI deployment, but you run it manually.
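A sketch of a manual run (the folder and script names below are hypothetical; check the deploy folder of your project for the actual ones):

```bash
cd deploy/script
npm install
npm run deploy-api   # hypothetical script name; see package.json in the deploy folder
```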