This is a step-by-step Ship deployment guide. We will use Digital Ocean Managed Kubernetes, Container Registry, MongoDB Atlas, GitHub Actions for automated deployment, and Cloudflare for DNS and SSL configuration. Before starting, create GitHub, Cloudflare, Digital Ocean, and MongoDB Atlas accounts, and install the following tools on your machine:
  • kubectl - CLI tool for accessing Kubernetes cluster;
  • kubectx - CLI tool for easier switching between Kubernetes contexts;
  • helm - CLI tool for managing Kubernetes deployments;
  • k8sec - CLI tool for managing Kubernetes Secrets easily;
Run the following commands to make sure everything is installed correctly:
kubectl

kubectx

helm

k8sec
You will also need git and Node.js if you don't have them already.
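As a quick sanity check, you can verify all required tools in one pass (a minimal sketch; extend the list as needed):

```shell
# Report whether each required CLI is available on PATH.
check() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: ok"
  else
    echo "$1: missing"
  fi
}

for tool in kubectl kubectx helm k8sec git node; do
  check "$tool"
done
```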

Setup project

First, initialize your project. Run npx create-ship-app@latest in the terminal, then choose the Digital Ocean Managed Kubernetes deployment type. You will get the following project structure:
/my-ship-app
  /.github
  /apps
    /api
    /web
  /deploy
  ...

Container registry

You need to create a Container Registry for storing Docker images. The deployment script will upload images to the Container Registry during the build step, and Kubernetes will automatically pull these images from the Container Registry to run a new version of a service during the deployment step. Name the container registry after your organization, which usually equals the project name: my-ship-app. After some time, you will get a registry endpoint: registry.digitalocean.com/my-ship-app, where my-ship-app is the registry name. Docker images for each service are stored in a separate repository. In Digital Ocean, repositories are created automatically when something is uploaded to a specific path. During the deployment process, the script will automatically create repository paths in the following format:
  • API - registry.digitalocean.com/my-ship-app/api;
  • Scheduler - registry.digitalocean.com/my-ship-app/scheduler;
  • Migrator - registry.digitalocean.com/my-ship-app/migrator;
  • Web - registry.digitalocean.com/my-ship-app/web;
Images for all environments will be uploaded to the same repository for each service.
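For illustration, the full image references pushed by the build step can be sketched like this (the tag value below is an assumption; your Ship version may tag images differently, e.g. by commit SHA):

```shell
# Hypothetical example: compose the full image reference for each service.
REGISTRY="registry.digitalocean.com/my-ship-app"
TAG="production-abc1234"   # assumed tag format: environment plus short commit SHA

for SERVICE in api scheduler migrator web; do
  echo "$REGISTRY/$SERVICE:$TAG"
done
```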

Kubernetes cluster

Now let’s create Managed Kubernetes cluster.
1

Select a region

Navigate to the cluster creation page here
We recommend creating the cluster in the region where your end users are located; this reduces response time for requests to all services.
Also, if your cluster is located in the same region as your Container Registry, the deployment process will be faster. You can find more information about regions here.
Cluster Region
2

Set Node pool name

Set a node pool name (e.g. pool-app) and configure the nodes. Digital Ocean recommends creating at least 2 nodes for a production environment. These settings affect the price of the cluster.
Cluster Capacity
3

Set cluster name

Set a cluster name (e.g. my-ship-app). A common practice is to use the project name.
Cluster Name
4

Review and Create

Click the Create Kubernetes Cluster button and wait for the cluster to be ready.
5

Integrate with created Container Registry

After the cluster is created, go to the Container Registry's settings and find the DigitalOcean Kubernetes integration section.
Registry Settings
You need to select your newly created my-ship-app cluster.
Registry Check Cluster

Personal access token

To upload Docker images to the Container Registry and pull them from the cluster afterwards, we need a Digital Ocean Personal Access Token. When you created the cluster, a token with the Read Only scope was generated automatically, but we need to create a new one with:
  • Name (e.g. my-ship-app-admin-deploy)
  • Full Access scope
  • No expiration
You cannot change the scope of an already generated token.
Digital Ocean Token We will need this token soon, so don't close this page yet.
Be very careful with the Personal Access Token: anyone who steals it gains access to all resources in your Digital Ocean account.
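With the token in hand, you can also authenticate a local Docker client against the registry, which is handy for debugging image pushes. This is a sketch: Digital Ocean's registry accepts the token as both username and password, and the guard is only there so the snippet degrades gracefully when Docker is absent.

```shell
# Assumes your Full Access token is exported as DO_PAT (placeholder below).
DO_PAT="${DO_PAT:-dop_v1_placeholder}"

# The DO registry accepts the token as both username and password.
if command -v docker >/dev/null 2>&1; then
  echo "$DO_PAT" | docker login registry.digitalocean.com \
    --username "$DO_PAT" --password-stdin \
    || echo "login failed; check that the token is valid"
else
  echo "docker is not installed; skipping login"
fi
```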

Accessing cluster from a local machine

1

Download cluster's kubeconfig

Download the cluster's kubeconfig; this file contains the information needed to access the cluster through kubectl.
Kubeconfig Download
Then replace the initial Read Only token with the new Full Access token from the Personal access token section.
my-ship-app-kubeconfig.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://...
  name: do-nyc3-my-ship-app
contexts:
- context:
    cluster: do-nyc3-my-ship-app
    user: do-nyc3-my-ship-app-admin
  name: do-nyc3-my-ship-app
current-context: do-nyc3-my-ship-app
kind: Config
preferences: {}
users:
- name: do-nyc3-my-ship-app-admin
  user:
    # replace this token with the Full Access token
    token: dop_v1_...
2

Add cluster, context and user to kubeconfig

A kubeconfig file can contain information about several clusters. You already have one on your local machine; it should have been created when you installed kubectl.
You need to add the new cluster's information to your kubeconfig. Find the .kube/config file on your machine and add the cluster, context, and user values from my-ship-app-kubeconfig.yaml.
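If you prefer not to edit the file by hand, kubectl can merge the two files for you. The sketch below assumes the downloaded file sits in the current directory, backs up your existing config first, and skips gracefully when kubectl or the file is missing:

```shell
# Merge the downloaded kubeconfig into ~/.kube/config.
if command -v kubectl >/dev/null 2>&1 && [ -f my-ship-app-kubeconfig.yaml ]; then
  cp ~/.kube/config ~/.kube/config.bak
  KUBECONFIG=~/.kube/config:my-ship-app-kubeconfig.yaml \
    kubectl config view --flatten > /tmp/merged-kubeconfig
  mv /tmp/merged-kubeconfig ~/.kube/config
  STATUS=merged
else
  STATUS=skipped
  echo "kubectl or my-ship-app-kubeconfig.yaml not found; merge skipped"
fi
```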
~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://...
  name: some-cluster
# your new cluster from my-ship-app-kubeconfig.yaml goes here
- cluster:
    certificate-authority-data: ...
    server: https://...
  name: do-nyc3-my-ship-app
contexts:
- context:
    cluster: some-cluster
    user: some-user
  name: some-cluster
# your new context from my-ship-app-kubeconfig.yaml goes here
- context:
    cluster: do-nyc3-my-ship-app
    user: do-nyc3-my-ship-app-admin
  name: do-nyc3-my-ship-app
current-context: some-cluster
kind: Config
preferences: {}
users:
- name: some-user
  user:
    token: dop_v1_...
# your new user from my-ship-app-kubeconfig.yaml goes here
- name: do-nyc3-my-ship-app-admin
  user:
    token: dop_v1_...
3

Switch to cluster context

Execute kubectx in your terminal:
kubectx
You will see the list of available clusters.
some-cluster
do-nyc3-my-ship-app
Select your cluster from the list:
kubectx do-nyc3-my-ship-app
4

Verify cluster access

Check the installed pods by running:
kubectl get pods -A
You should see a list of system pods in your cluster:
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   cilium-tb8td                    1/1     Running   0          18m
kube-system   cilium-x5w8n                    1/1     Running   0          19m
kube-system   coredns-5679ffb5c8-b7dzj        1/1     Running   0          17m
kube-system   coredns-5679ffb5c8-d465r        1/1     Running   0          17m
kube-system   cpc-bridge-proxy-ebpf-2gzfr     1/1     Running   0          17m
kube-system   cpc-bridge-proxy-ebpf-jknzh     1/1     Running   0          17m
kube-system   csi-do-node-jcqd2               2/2     Running   0          17m
kube-system   csi-do-node-rpx6q               2/2     Running   0          17m
kube-system   do-node-agent-ldhxq             1/1     Running   0          17m
kube-system   do-node-agent-pdksz             1/1     Running   0          17m
kube-system   hubble-relay-66f54dcd57-l7xjb   1/1     Running   0          21m
kube-system   hubble-ui-785bdbc45b-6xd57      2/2     Running   0          18m
kube-system   konnectivity-agent-h79mt        1/1     Running   0          17m
kube-system   konnectivity-agent-hvv67        1/1     Running   0          17m

Ingress NGINX Controller

ingress-nginx is an Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer.
Learn more about ingress-nginx functionality in the official documentation.
1

Navigate to dependencies directory

Change to the deploy/dependencies directory in your terminal.
2

Configure Helm Values (Optional)

This step is required only if you specified a custom node pool name in your Digital Ocean Kubernetes cluster. If you did, update the doks.digitalocean.com/node-pool value in values.yaml.gotmpl:
deploy/dependencies/ingress-nginx/values.yaml.gotmpl
controller:
  publishService:
    enabled: true
  nodeSelector:
    doks.digitalocean.com/node-pool: pool-app

rbac:
  create: true

defaultBackend:
  enabled: false
3

Install dependencies

Install helm dependencies using helmfile:
helmfile deps
4

Review and apply changes

Preview the changes first:
helmfile diff
If the preview looks correct, apply the configuration:
helmfile apply

DNS and SSL

1

Get Load Balancer Address

After deploying ingress-nginx, retrieve the Load Balancer's external IP:
kubectl get svc ingress-nginx-controller -n ingress-nginx
Copy the value from the EXTERNAL-IP column.
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.245.201.160   138.68.124.241   80:30186/TCP,443:32656/TCP   28m
It takes some time for ingress-nginx to configure everything and provide the EXTERNAL-IP.
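If you script this step, a jsonpath query pulls out just the address. The sketch below falls back to a placeholder while the address is still pending (or when kubectl is unavailable):

```shell
# Grab only the external IP of the ingress-nginx load balancer.
LB_IP="$(kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null || true)"
echo "${LB_IP:-<pending>}"
```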
2

Domain Naming Convention

You can follow this recommended naming pattern for different environments:

Environment   API Domain             Web Domain
Production    api.ship.com           app.ship.com
Staging       api.staging.ship.com   app.staging.ship.com
3

Configure DNS in Cloudflare

  1. First, ensure you have a domain in Cloudflare.
  2. In the Cloudflare DNS tab, create 2 A records:
  • One for the Web interface
  • One for the API endpoint
Both should point to your Load Balancer's external IP. Enable the Proxied option to:
  • Route traffic through Cloudflare
  • Generate SSL certificates automatically
CloudFlare API DNS Configuration

CloudFlare Web DNS Configuration
Cloudflare’s free Universal SSL certificates only cover the apex domain and one subdomain level. For multiple subdomain levels, you’ll need an Advanced Certificate.
4

Update Configuration Files

Update your domain settings in the appropriate environment configuration files.

For the API service:
service: api
port: 3001
domain: api.my-ship-app.paralect.com
For the Web service:
service: web
port: 3002
domain: my-ship-app.paralect.com

MongoDB Atlas

MongoDB Atlas is a fully managed cloud database service that provides automated backups, scaling, and security features. It offers 99.995% availability with global deployment options and integrates with all major cloud providers.

Cluster Creation

1

Access MongoDB Atlas

Sign in to your MongoDB Atlas account and create a new project if needed.
2

Deploy New Cluster

Click Create to start cluster deployment.

Cluster Tier Selection:
  • Staging: M0 (Free tier) - Suitable for development and testing
  • Production: M10 or higher - Includes automated backups and advanced features
Deploy MongoDB Atlas cluster
3

Configure Cluster Name

Enter a descriptive cluster name (e.g., ship-production-cluster or ship-staging-cluster).

Security Configuration

1

Create Database User

Navigate to Database Access and click Add New Database User:
  • Authentication Method: Password
  • Username: Use environment-specific names (e.g., api-production, api-staging)
  • Password: Generate a strong password
  • Database User Privileges: Read and write to any database
Add MongoDB database user
Password Requirements: Ensure the password starts with a letter and contains only alphanumeric characters and common symbols. Special characters at the beginning can cause URI parsing issues.
2

Configure Network Access

Navigate to Network Access and click Add IP Address:
  • Click Allow access from anywhere to allow connections from any IP with valid credentials
  • For production, consider restricting to specific IP ranges for enhanced security
Configure MongoDB network access

Get Connection String

1

Access Connection Details

Go to your cluster dashboard and click the Connect button.
MongoDB Atlas dashboard
2

Copy Connection String

  1. Select Drivers in the “Connect your application” section
  2. Choose Node.js driver and latest version
  3. Copy the connection string and replace <db_password> with your actual password
MongoDB connection string
Example Connection String:
mongodb+srv://api-production:[email protected]/?retryWrites=true&w=majority
3

Save Connection Details

Store the connection string securely; you'll need it for environment configuration later.
Before deploying to production, configure automated backups in the Atlas console to ensure data recovery capabilities.

Environment variables

Kubernetes applications require proper environment variable configuration for both API and Web components. This section covers how to set up and manage environment variables securely using Kubernetes secrets and configuration files.

API Environment Variables

For the API deployment, you need to set up environment variables using Kubernetes secrets to securely manage sensitive configuration data.
Secrets in Kubernetes are used to store sensitive information, such as passwords, API tokens, and keys. They are stored in Base64 format (note that Base64 is an encoding, not encryption). Secrets can be mounted into containers as data volumes or exposed as environment variables.
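To see what Kubernetes actually stores, you can reproduce the encoding locally (a minimal sketch with a made-up value):

```shell
# Base64 round-trip: the same encoding Kubernetes applies to Secret values.
SECRET_VALUE="s3cr3t-value"    # made-up example value
ENCODED="$(printf '%s' "$SECRET_VALUE" | base64)"
DECODED="$(printf '%s' "$ENCODED" | base64 -d)"
echo "encoded: $ENCODED"
echo "decoded: $DECODED"
```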
Before deploying the app, make sure all necessary variables from the API config exist. Here is the minimal set of required variables:
Name            Description                  Example value
APP_ENV         Application environment      production
MONGO_URI       Database connection string   mongodb://<username>:<password>@ship.mongodb.net
MONGO_DB_NAME   Database name                api-production
API_URL         API domain URL               https://api.my-ship-app.paralect.com
WEB_URL         Web app domain URL           https://my-ship-app.paralect.com


Setting up Kubernetes Secrets

1

Create namespaces and secret objects

Create Kubernetes namespaces and secret objects for staging and production environments:
kubectl create namespace staging
kubectl create secret generic api-staging-secret -n staging
kubectl create namespace production
kubectl create secret generic api-production-secret -n production
2

Initialize secret storage

First, create an APP_ENV variable to initialize secret storage for k8sec:
k8sec set api-production-secret APP_ENV=production -n production
3

Verify secret creation

Run the following command to check the created secret:
k8sec list api-production-secret -n production
4

Prepare environment file

Create a .env.production file with all required variables:
APP_ENV=production
MONGO_URI=mongodb://username:[email protected]
MONGO_DB_NAME=api-production
API_URL=https://api.my-ship-app.paralect.com
WEB_URL=https://my-ship-app.paralect.com
Replace all example values with your actual configuration. Never use production secrets in documentation or version control.
5

Import secrets to Kubernetes

Import secrets from the .env file to Kubernetes secret using k8sec:
k8sec load -f .env.production api-production-secret -n production
After updating environment variables, you must trigger a new deployment for the changes to take effect. Kubernetes pods capture environment variable values at startup, so a pod restart or rolling update is required to apply changes.
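A rolling update can be triggered without downtime. The sketch below assumes the API Deployment is named api (check yours with kubectl get deployments -n production) and skips gracefully when kubectl is unavailable:

```shell
# Restart the API deployment so pods pick up the updated Secret values.
if command -v kubectl >/dev/null 2>&1; then
  kubectl rollout restart deployment api -n production
  kubectl rollout status deployment api -n production
  STATUS=restarted
else
  STATUS=skipped
  echo "kubectl not found; run this on a machine with cluster access"
fi
```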

Web Environment Variables

The web application uses Next.js environment variables that are embedded at build time and made available in the browser. Unlike API secrets, these variables are stored directly in the GitHub repository.
Why Web Environment Variables Are Safe in Git: Web environment variables (prefixed with NEXT_PUBLIC_) contain only public configuration like URLs and API endpoints. They don’t include sensitive data like passwords or API keys, making them safe to store in version control. These values are already exposed to users in the browser, so repository storage doesn’t create additional security risks.
Security Notice: Never store sensitive information (passwords, API keys, secrets) in web environment files as they will be accessible on the client side. Only use public configuration values that are safe to expose to end users.

Configuration Files

Web environment variables are stored in separate files for each deployment environment:
NEXT_PUBLIC_API_URL=https://api.my-ship-app.paralect.com
NEXT_PUBLIC_WS_URL=https://api.my-ship-app.paralect.com
NEXT_PUBLIC_WEB_URL=https://my-ship-app.paralect.com

Environment Variables Reference

Variable              Description                                       Example
NEXT_PUBLIC_API_URL   Base URL for API requests                         https://api.my-ship-app.paralect.com
NEXT_PUBLIC_WS_URL    WebSocket server URL for real-time communication  https://api.my-ship-app.paralect.com
NEXT_PUBLIC_WEB_URL   App's own URL for redirects/metadata              https://my-ship-app.paralect.com
Best Practice: Keep web environment files in your repository and ensure all values are non-sensitive. If you need to reference sensitive data from the frontend, create a secure API endpoint that returns the necessary information after proper authentication.

Setting up GitHub Actions CI/CD

To automate deployment through GitHub Actions, you need to configure GitHub Secrets and Variables used in the workflow files.

Configuring GitHub Actions secrets and variables

Before starting, make sure you have created a GitHub repository for your project.
GitHub Secrets and variables allow you to manage reusable configuration data. Secrets are encrypted and are used for sensitive data. Learn more about encrypted secrets. Variables are shown as plain text and are used for non-sensitive data. Learn more about variables.
The deployment will be triggered on each commit:
  • Commits to main branch → deploy to staging environment
  • Commits to production branch → deploy to production environment
Configure the following secrets and variables in your GitHub repository:
Name                       Type       Description
DO_PERSONAL_ACCESS_TOKEN   secret     The access token created for CI/CD; allows GitHub Actions to authenticate with Digital Ocean services
CLUSTER_NAME_STAGING       variable   Name of the staging cluster (our case: my-ship-app)
CLUSTER_NAME_PRODUCTION    variable   Name of the production cluster (our case: my-ship-app, same as staging since we have only one cluster)
CLUSTER_NODE_POOL          variable   Name of the node pool (our case: pool-app)
REGISTRY_NAME              variable   Name of the Digital Ocean Container Registry (our case: my-ship-app)
Never commit sensitive credentials directly to your repository.
Always use GitHub Secrets for sensitive information like DO keys.
Variables (unlike secrets) are visible in logs and can be used for non-sensitive configuration values that may need to be referenced or modified.
We set up DO_PERSONAL_ACCESS_TOKEN to be universal for both production and staging environments with Full access scope.
Your KUBE_CONFIG_PRODUCTION and KUBE_CONFIG_STAGING will be the same if you have only one cluster for both environments.
GitHub Secrets
GitHub Variables
Now commit all changes to GitHub to trigger a deployment, or run the workflow manually.
CI start
Done! The application is deployed and can be accessed via the provided domain.
CI finish
Deployment finish
kubectl get pods -A

NAMESPACE       NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx   ingress-nginx-controller-6bdff8c8fd-kwxcn   1/1     Running     0          6h50m
kube-system     cilium-tb8td                                1/1     Running     0          8h
kube-system     cilium-x5w8n                                1/1     Running     0          8h
kube-system     coredns-5679ffb5c8-b7dzj                    1/1     Running     0          8h
kube-system     coredns-5679ffb5c8-d465r                    1/1     Running     0          8h
kube-system     cpc-bridge-proxy-ebpf-2gzfr                 1/1     Running     0          8h
kube-system     cpc-bridge-proxy-ebpf-jknzh                 1/1     Running     0          8h
kube-system     csi-do-node-jcqd2                           2/2     Running     0          8h
kube-system     csi-do-node-rpx6q                           2/2     Running     0          8h
kube-system     do-node-agent-ldhxq                         1/1     Running     0          8h
kube-system     do-node-agent-pdksz                         1/1     Running     0          8h
kube-system     hubble-relay-66f54dcd57-l7xjb               1/1     Running     0          8h
kube-system     hubble-ui-785bdbc45b-6xd57                  2/2     Running     0          8h
kube-system     konnectivity-agent-h79mt                    1/1     Running     0          8h
kube-system     konnectivity-agent-hvv67                    1/1     Running     0          8h
production      api-57d7787d98-cj75s                        1/1     Running     0          2m15s
production      migrator-286bq                              0/1     Completed   0          2m54s
production      scheduler-6c497dfbcc-n6b5l                  1/1     Running     0          2m6s
production      web-54c6674974-lv94b                        1/1     Running     0          71m
redis           redis-master-0                              1/1     Running     0          6h49m
staging         api-689b75c786-97c4l                        1/1     Running     0          71m
staging         scheduler-57b984f6c-zcc44                   1/1     Running     0          71m
staging         web-55bdd955b-chswp                         1/1     Running     0          70m
If something goes wrong, check the workflow logs on GitHub and use the kubectl logs and kubectl describe commands.

Upstash Redis Integration

Upstash Redis is a highly available, infinitely scalable Redis-compatible database that provides enterprise-grade features without the operational complexity.

How Ship Uses Redis

Ship leverages Redis for several critical functionalities:
Use Case                  Description                                         Implementation
Real-time Communication   Pub/Sub mechanism for WebSocket functionality       Socket.io Redis Adapter
Rate Limiting             API request throttling and abuse prevention         Redis counters with TTL
Caching                   Application data caching for improved performance   Key-value storage with expiration
Redis as a Message Broker: When scaling to multiple server instances, Redis acts as a message broker between Socket.io servers, ensuring real-time messages reach all connected clients regardless of which server they’re connected to.

Setting Up Upstash Redis

Create Your Database

1

Access Upstash Console

Log in to your Upstash account and navigate to the Redis section.
2

Create New Database

Click Create Database in the upper right corner to open the configuration dialog.
Create Upstash Redis Database
3

Configure Database Settings

  • Database Name: Choose a descriptive name for your database (e.g., my-ship-app-production)
  • Primary Region: Select the region closest to your main application deployment for optimal write performance
  • Read Regions: Choose additional regions where you expect high read traffic for better global performance
4

Select Plan & Deploy

Choose your pricing plan based on expected usage and click Create to deploy your database.
Once your database is created, you’ll need the connection string for your application:
1

Navigate to Connection Info

Go to your database dashboard and find the Connect to your database section.
Upstash Redis Connection Details
2

Copy Connection String

  1. Select the Node tab for the appropriate connection string format
  2. Click Reveal to show the hidden password
  3. Copy the complete Redis URI (format: rediss://username:password@host:port)
3

Add to Environment Variables through k8sec

Using k8sec, add the Redis connection string to your environment configuration:
k8sec set api-production-secret -n production REDIS_URI=$REDIS_URI
After updating environment variables, restart your API pod using:
kubectl delete pod <pod-name> -n <namespace>
This will trigger Kubernetes to create a new pod with the updated environment variables.

Verify Connection with Redis Insight

Redis Insight is a powerful GUI tool for managing and debugging Redis databases.
1

Install Redis Insight

Download and install Redis Insight on your local machine.
2

Add Database Connection

  1. Open Redis Insight
  2. Click Add Database
  3. Paste your Upstash Redis connection string in the Connection URL field
  4. Click Add Database
Redis Insight Connection Setup
3

Explore Your Database

Once connected, you can use Redis Insight to:
  • Browse keys and data structures
  • Execute Redis commands directly
  • Monitor real-time performance metrics
  • Debug application data storage
Upstash Redis Metrics Dashboard
Real-time Monitoring: Upstash Redis updates database metrics automatically every 10 seconds, giving you near real-time visibility into your Redis performance and usage.