This is a step-by-step Ship deployment guide. We will use Amazon Elastic Kubernetes Service (EKS), MongoDB Atlas, Amazon Elastic Container Registry (ECR), GitHub Actions for automated deployment, and Cloudflare for DNS and SSL configuration.

You need to create GitHub, AWS, MongoDB Atlas and Cloudflare accounts and install the following tools on your machine before starting:

  • kubectl - CLI tool for accessing Kubernetes clusters (we recommend installing it via Docker Desktop);
  • kubectx - CLI tool for easier switching between Kubernetes contexts;
  • helm - CLI tool for managing Kubernetes deployments;
  • aws-cli - CLI tool for managing AWS resources;
  • eksctl - CLI tool for managing EKS clusters;
  • jq - command-line JSON processor used to manipulate JSON data.
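If you're on macOS, one way to install most of these tools is via Homebrew (a sketch; assumes Homebrew is already installed, and that kubectl comes with Docker Desktop as recommended above):

brew install kubectx helm awscli eksctl jq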

Run the following commands to make sure that everything is installed correctly:

kubectl get pods -A

kubectx

helm list

aws sts get-caller-identity

eksctl

jq --version

You will also need git and Node.js, if you don't have them already.

Project setup

First, initialize your project: run npx create-ship-app init in the terminal, then choose the AWS EKS deployment type.

Init project

You will get the following project structure:

/my-app
  /.github
  /apps
    /api
    /web
  /deploy
  ...

Create a private GitHub repository and upload the source code.

Private repo

cd my-app
git remote add origin git@github.com:fruneen/my-app.git
git branch -M main
git push -u origin main

AWS Regions

AWS Regions are physical locations around the world where AWS clusters its data centers. Each group of logical data centers is called an Availability Zone (AZ). AZs allow you to run production applications and databases that are more highly available, fault-tolerant, and scalable.

Now you need to select an AWS region for future use of the services. You can read more about region selection for your workloads here: What to Consider when Selecting a Region for your Workloads.

For this deployment guide, we will use us-east-1.

Usually, you create all AWS resources in a single region. If you don't see resources you have created, you may need to switch to the appropriate AWS region in the console.

Container registry

You need to create private repositories for storing Docker images. The deployment script will upload images to the Container Registry during the build step, and Kubernetes will pull these images from the Container Registry to run a new version of a service during the deployment step.

Now we should create a repository for each service.

For Ship, we need to create repositories for the following services: api, web, scheduler, and migrator.

Container Registry creation

You should create a private repository for each service manually.
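If you prefer the terminal, the same repositories can be created with the AWS CLI (a sketch; the repository names match the four Ship services, and the region matches this guide):

aws ecr create-repository --repository-name api --region us-east-1
aws ecr create-repository --repository-name web --region us-east-1
aws ecr create-repository --repository-name scheduler --region us-east-1
aws ecr create-repository --repository-name migrator --region us-east-1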

After creation, you should have the following 4 repositories in ECR.

Container Registry creation

Docker images for each service are stored in a separate repository. During the deployment process, the script will automatically build the repository paths in the following format:

  • API - 402167441269.dkr.ecr.us-east-1.amazonaws.com/api;
  • Scheduler - 402167441269.dkr.ecr.us-east-1.amazonaws.com/scheduler;
  • Migrator - 402167441269.dkr.ecr.us-east-1.amazonaws.com/migrator;
  • Web - 402167441269.dkr.ecr.us-east-1.amazonaws.com/web;

The repository name 402167441269.dkr.ecr.us-east-1.amazonaws.com/api consists of 5 values:

  • 402167441269 - AWS account ID;
  • dkr.ecr - AWS service;
  • us-east-1 - AWS region;
  • amazonaws.com - AWS domain;
  • api - service name.

Images for all environments will be uploaded to the same repository for each service.

Kubernetes cluster

Now let's create an EKS cluster.

In the first step, we need to set the cluster name. A common practice is to use the project name. You can also add an environment suffix if you have separate clusters for each environment: my-app-staging and my-app-production. The other parameters can be left at their defaults.
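If you prefer the CLI to the AWS Console, a roughly equivalent cluster can be created with eksctl (a sketch; the node group is added separately in the next step, so it is skipped here):

eksctl create cluster --name my-app --region us-east-1 --without-nodegroup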

Cluster Creation

After creation, you need to wait a bit until the cluster status becomes Active.

Cluster Created

After cluster creation, you should attach EC2 instances to the cluster. You can do this by clicking the Add Node Group button on the Compute tab.

Add Node Group

Set the node group name and select the only available Node IAM role from the list.

Node Group Configuration

AWS recommends creating at least 2 nodes of the t3.medium instance type for the production environment.

Node Group Instance Configuration
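The same node group can also be created from the terminal (a sketch; the names and sizes match the values used above):

eksctl create nodegroup --cluster my-app --region us-east-1 --name pool-app --node-type t3.medium --nodes 2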

Now you need to configure the node group in the deployment script. Update the nodeGroup value to pool-app, the name we set when creating the node group.

deploy/script/src/config.js
const config = {
  nodeGroup: 'pool-app'
};

Accessing cluster from a local machine

Before working with the cluster, you need to configure the AWS CLI.
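If you haven't configured it yet, run aws configure and enter your credentials (the values below are placeholders):

aws configure
# AWS Access Key ID [None]: <your access key ID>
# AWS Secret Access Key [None]: <your secret access key>
# Default region name [None]: us-east-1
# Default output format [None]: json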

To access the cluster, run the following command:

aws eks update-kubeconfig --region us-east-1 --name my-app

Where us-east-1 is the cluster region and my-app is the cluster name.

If everything is OK, you will be able to switch to your cluster. Type kubectx in the terminal and select your cluster.

Kubectx

By default, the cluster name will be an ARN, like arn:aws:eks:us-east-1:402167441269:cluster/my-app.

To change it to a more convenient name, you can use the command kubectx my-app=arn:aws:eks:us-east-1:402167441269:cluster/my-app.

Then try to get information about installed pods in the cluster. Type kubectl get pods -A in the terminal.

If you did all the steps correctly, you will see something like the following in your terminal.

Pods List

Dependencies

Now we need to install our dependencies in the cluster.

  • ingress-nginx - Ingress controller for Kubernetes using Nginx as a reverse proxy and load balancer;
  • redis - Open source, advanced key-value store. Redis is needed for the API service.

You can read more about how ingress-nginx works here.

Configure the Helm values for ingress-nginx and redis: update the eks.amazonaws.com/nodegroup value to pool-app.

deploy/dependencies/ingress-nginx/values/values.yml
controller:
  publishService:
    enabled: true
  nodeSelector:
    eks.amazonaws.com/nodegroup: pool-app

rbac:
  create: true

defaultBackend:
  enabled: false
deploy/dependencies/redis/values/values.yml
redis:
  nodeSelector:
    eks.amazonaws.com/nodegroup: pool-app

auth:
  password: super-secured-password

architecture: standalone

Open the deploy/bin folder and run the bash script:

bash deploy-dependencies.sh
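The effect of this script is roughly equivalent to installing both charts with Helm directly (a sketch; the chart repositories are the public ingress-nginx and Bitnami repos, and the namespace names are assumptions):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace -f deploy/dependencies/ingress-nginx/values/values.yml
helm install redis bitnami/redis --namespace redis --create-namespace -f deploy/dependencies/redis/values/values.yml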

DNS and SSL

Once you deploy ingress-nginx, it will create a Load Balancer with an external IP. All incoming requests should be sent to the Load Balancer's external IP; ingress-nginx will then route them to the appropriate services based on the domains in the Ingress configuration.

To get the Load Balancer IP, type kubectl get services -n ingress-nginx in the terminal and copy the EXTERNAL-IP of ingress-nginx-controller.

~ % kubectl get services -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.100.220.127   a71ee1af9a5bc45e494df110e2b22fdb-633881399.us-east-1.elb.amazonaws.com   80:30799/TCP,443:31358/TCP   30m

It takes some time for ingress-nginx to configure everything and provide the EXTERNAL-IP.

We are using Cloudflare to manage DNS records. You can register a domain with Cloudflare or transfer it from another service.

Open the DNS tab in Cloudflare and create two CNAME records, one for Web and one for API, that point to the Load Balancer's external IP.

CloudFlare Web

CloudFlare API
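Using the EXTERNAL-IP hostname from the previous step, the two records look roughly like this (the names assume the my-app domains used later in this guide):

Type    Name         Target
CNAME   my-app       a71ee1af9a5bc45e494df110e2b22fdb-633881399.us-east-1.elb.amazonaws.com
CNAME   my-app-api   a71ee1af9a5bc45e494df110e2b22fdb-633881399.us-east-1.elb.amazonaws.com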

Select the Proxied option, which will proxy all traffic through Cloudflare. It does a lot of useful work; you can read more about it here. In our case, we use it for automatic SSL certificate generation.

If you are deploying to a staging/demo environment, add the corresponding suffix to the domain. Example: my-app-staging.

Now add your domains to the Helm templates and code. In this example we are deploying to the production environment; if you are deploying to staging, you will need to update the staging.yaml and staging.json files instead.

deploy/app/api/production.yaml
service: api
port: 3001
domain: my-app-api.paralect.com
deploy/app/web/production.yaml
service: web
port: 3002
domain: my-app.paralect.com

MongoDB Atlas

Navigate to MongoDB Atlas, sign in to your account and create a new database.

Database creation

  1. Select the appropriate type: Dedicated for a production environment, Shared for staging/demo.
  2. Select the provider and region. We recommend selecting the same or the closest region to the AWS EKS cluster.
  3. Select the cluster tier. The free M0 Sandbox should be enough for staging/demo environments. For a production environment, we recommend selecting an option that supports cloud backups, M2 or higher.
  4. Enter a cluster name.

Mongo cluster

Security and connection

After cluster creation, you’ll need to set up security. Select the authentication type (username and password) and create a user.

Please be aware that the initial character of the generated password should be a letter. If it isn't, you'll need to create a new password. Failing to do this may lead to the MONGO_URI variable being parsed incorrectly.

Mongo setup authentication

Add the list of IP addresses that should have access to your cluster. Add the 0.0.0.0/0 address to allow anyone with credentials to connect.

Mongo setup ip white list

After database creation, go to the dashboard page and get the URI connection string by pressing the Connect button.

Mongo dashboard

Select the Connect your application option. Choose the driver and MongoDB version, and copy the connection string. Don't forget to replace <password> with your credentials.

Mongo connect dialog

Now save this string; you will need it later.
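The connection string will look something like this (the cluster host is a placeholder):

mongodb+srv://<username>:<password>@cluster0.xxxxx.mongodb.net/?retryWrites=true&w=majority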

Before moving to production, it’s crucial to set up MongoDB backup methods.

This ensures that you can reliably restore your data in the event of unforeseen circumstances.

CI/CD Preparation

Before setting up CI/CD, you need to create a separate AWS IAM user with certain permissions. Let's create this user.

First of all, we need to create a policy for our user. Go to the IAM dashboard, open the Policies page in the sidebar, and click Create policy. Choose the JSON tab and insert the following config:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EKS",
            "Effect": "Allow",
            "Action": [
                "eks:DescribeCluster"
            ],
            "Resource": "*"
        },
        {
            "Sid": "ECR",
            "Effect": "Allow",
            "Action": [
                "ecr:CompleteLayerUpload",
                "ecr:GetAuthorizationToken",
                "ecr:UploadLayerPart",
                "ecr:InitiateLayerUpload",
                "ecr:BatchCheckLayerAvailability",
                "ecr:PutImage"
            ],
            "Resource": "*"
        }
    ]
}

Policy Configuration
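Alternatively, the same policy can be created from the terminal (a sketch; the policy name is an assumption, and policy.json contains the JSON config above):

aws iam create-policy --policy-name cicd-policy --policy-document file://policy.json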

In the second step, you can optionally add tags to your policy.

In the last step, you need to give your policy a name and review the summary.

Policy Review

Now we need to create a user. Open the Users page in the sidebar and click Add user.

User Creating

In the next step, you need to attach your policy to the user. Click Attach existing policies directly and select the policy we just created.

User Policy

In the next step, you can optionally add tags to your user.

The fourth step is to review your user and click Create user.

Once you’re done, you’ll see a list of users. Find yours and click Create access key.

User Access Key

Then, pick Application running on an AWS compute service as the use case. You can also add a tag if needed.

User Access Key

After that, you’ll get your Access Key ID and Secret Access Key. Remember to save them because you won’t see them again.

User Credentials

Now we need to give our user EKS permissions. Use the following command to add the user to the Kubernetes system:masters group:

eksctl create iamidentitymapping --cluster my-app --arn arn:aws:iam::402167441269:user/cicd --group system:masters --username cicd

In the --arn parameter, specify your user's ARN, which you can find in the IAM dashboard.
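You can verify that the mapping was created:

eksctl get iamidentitymapping --cluster my-app --region us-east-1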

Environment variables

API

For the API deployment, you need to set up environment variables using Kubernetes secrets and configMaps.

Secrets in Kubernetes are used to store sensitive information, such as passwords, API tokens, and keys. They are stored Base64-encoded (note that this is encoding, not encryption). They can be mounted into containers as data volumes or used as environment variables.

ConfigMaps in Kubernetes are used to store configuration data in key-value pairs, such as environment variables, command-line arguments, or configuration files. They help decouple configuration from containerized applications.
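You will create both with the script below; once created, you can inspect what ended up in the cluster (a sketch; assumes the production namespace used in this guide):

kubectl get secrets -n production
kubectl get configmaps -n production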

Before deploying the app, make sure all necessary variables from the API config are set. Here are the default required variables:

  • MONGO_URI (requires encoding)
  • MONGO_DB_NAME
  • API_URL
  • WEB_URL

Open the deploy/bin folder and run the bash script. Enter the stage name. Then you'll be asked for a variable name, its value, and whether it should be encoded. Repeat this process as needed to create all the required variables.

If you indicate that a variable should be encoded, it will be stored as a Secret in Kubernetes. Otherwise, it gets stored in a ConfigMap.

bash deploy-secrets.sh

The script works for initially creating secrets and configMaps, as well as for updates. When updating, you only need to input the variables that require changes, not all of them.

After updating variables, initiate a new deployment: pods read variable values at startup, so a restart is required for changes to apply.
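One way to trigger such a restart (a sketch; the deployment and namespace names are assumptions based on this guide's conventions):

kubectl rollout restart deployment api -n production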

WEB

To modify environment variables for web, edit the .env.staging or .env.production file:

apps/web/.env.production
NEXT_PUBLIC_API_URL=https://my-app-api.paralect.com
NEXT_PUBLIC_WS_URL=https://my-app-api.paralect.com
NEXT_PUBLIC_WEB_URL=https://my-app.paralect.com

Avoid storing sensitive information in web environment files as they are not secure.

Port

To configure the web application to use port 3002, add the line ENV PORT=3002 to the web Dockerfile:

apps/web/Dockerfile
ENV PORT=3002

CMD ["node", "apps/web/server.js"]

CI/CD

To automate deployment through GitHub Actions, you need to configure the GitHub Secrets used by the workflow files.

The deployment will be triggered on each commit. Committing to the main branch will trigger a deployment in the staging environment, and committing to the production branch will trigger a deployment in the production environment.

To check which Secrets are required, open the workflows in the .github folder at the root of your project.

To automate deployment to the production environment, you need to create the AWS_ACCESS_KEY, AWS_SECRET_ACCESS_KEY, AWS_ACCOUNT_ID, AWS_REGION and CLUSTER_NAME_PRODUCTION secrets for the api-production.yml and web-production.yml workflows.

You can get AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY from the credentials file you downloaded when creating the CI/CD user.

CI/CD user credentials

You can get AWS_ACCOUNT_ID from the user menu in the upper right corner of the AWS Management Console.

Account ID location

Set AWS_REGION and CLUSTER_NAME_PRODUCTION according to your project; for this guide we use us-east-1 and my-app respectively.

After adding all the secrets, you should have the same secrets as in the following screenshot.

GitHub secrets

Now commit all changes and push to GitHub; this will trigger a deployment.

CI start

Done! The application is deployed and can be accessed via the provided domain.

CI finish

Deployment finish

Deployed pods

If something goes wrong, you can check the workflow logs on GitHub and use the kubectl logs and kubectl describe commands.
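For example (the pod names are placeholders; list the pods first to find the real ones):

kubectl get pods -n production
kubectl logs <pod-name> -n production
kubectl describe pod <pod-name> -n production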

Manual deployment

To deploy services to the cluster manually, you need to set the cluster authorization credentials inside the config. Set the environment and namespace to production or staging, and add your AWS credentials to the config.

deploy/src/config.js
const config = {
  rootDir,

  service: ENV.SERVICE,

  environment: ENV.ENVIRONMENT || 'production',

  namespace: ENV.NAMESPACE || 'production',

  kubeConfig: ENV.KUBE_CONFIG,

  home: ENV.HOME,

  AWS: {
    clusterName: ENV.CLUSTER_NAME || 'my-app',
    accessKey: ENV.AWS_ACCESS_KEY || 'AKIAV...',
    secretAccessKey: ENV.AWS_SECRET_ACCESS_KEY || 'a+frRW...',
    region: ENV.AWS_REGION || 'us-east-1',
    accountId: ENV.AWS_ACCOUNT_ID || '40216...',
  },
};

Run the deployment script. It will do the same as the CI deployment, but you run it manually.

deploy/script/src
node index

? What service to deploy? (Use arrow keys)
  api
  web
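Alternatively, since the config reads its values from environment variables, you can pass the parameters inline instead of answering the prompt (a sketch; assumes the variable names shown in the config above):

SERVICE=api ENVIRONMENT=production NAMESPACE=production node index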