AWS
This is a step-by-step Ship deployment guide. We will use Amazon Elastic Kubernetes Service (EKS), MongoDB Atlas, Amazon Elastic Container Registry (ECR), GitHub Actions for automated deployment, and Cloudflare for DNS and SSL configuration.
Before starting, create GitHub, AWS, MongoDB Atlas, and Cloudflare accounts and install the following tools on your machine:
- kubectl - CLI tool for accessing a Kubernetes cluster (we recommend installing it via Docker Desktop);
- kubectx - CLI tool for easier switching between Kubernetes contexts;
- helm - CLI tool for managing Kubernetes deployments;
- aws-cli - CLI tool for managing AWS resources;
- eksctl - CLI tool for managing EKS clusters;
- jq - command-line JSON processor used to manipulate JSON data;
Run the following commands to make sure everything is installed correctly:
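```shell
# Each command should print a version (kubectx prints its help text instead).
kubectl version --client
kubectx --help
helm version
aws --version
eksctl version
jq --version
```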
You also need git and Node.js installed if you don't have them already.
Setup project
First, initialize your project. Run npx create-ship-app init in the terminal and choose the AWS EKS deployment type.
You will get the following project structure.
Create a private GitHub repository and upload the source code.
AWS Regions
AWS Regions are physical locations where AWS clusters its data centers. Each group of logical data centers is called an Availability Zone (AZ). AZs allow you to run production applications and databases that are more highly available, fault tolerant, and scalable.
Now you need to select an AWS region for future use of the services. You can read more about region selection for your workloads here: What to Consider when Selecting a Region for your Workloads.
For this deployment guide, we will use the us-east-1 region.
Usually, you create AWS resources in a single region. If you don't see resources you have created, you may need to switch to the appropriate AWS region in the console.
Container registry
You need to create private repositories for storing Docker images. The deployment script will upload images to Container Registry during the build step, and Kubernetes will automatically pull these images from Container Registry to run a new version of the service during the deployment step.
Now we need to create a repository for each service. For Ship, that means a private repository for each of the following services: api, scheduler, migrator, and web. Create each private repository manually.
After creation, you should have the following 4 repositories in ECR.
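If you prefer the command line over the ECR console, the same repositories can be created with the AWS CLI:

```shell
# Create a private ECR repository for each Ship service in us-east-1.
for service in api scheduler migrator web; do
  aws ecr create-repository --repository-name "$service" --region us-east-1
done
```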
Docker images for each service are stored in a separate repository. During the deployment process, the script will automatically build repository paths in the following format:
- API - 402167441269.dkr.ecr.us-east-1.amazonaws.com/api;
- Scheduler - 402167441269.dkr.ecr.us-east-1.amazonaws.com/scheduler;
- Migrator - 402167441269.dkr.ecr.us-east-1.amazonaws.com/migrator;
- Web - 402167441269.dkr.ecr.us-east-1.amazonaws.com/web;
The repository name 402167441269.dkr.ecr.us-east-1.amazonaws.com/api consists of 5 values:
- 402167441269 - AWS account ID;
- us-east-1 - AWS region;
- dkr.ecr - AWS service;
- amazonaws.com - AWS domain;
- api - service name.
Images for all environments will be uploaded to the same repository for each service.
Kubernetes cluster
Now let’s create an EKS cluster.
In the first step, we need to set the cluster name. A common practice is to use the project name. You can also add an environment suffix if you have separate clusters for each environment: my-app-staging and my-app-production.
We can leave the other parameters at their defaults.
After creation, you need to wait a bit until the cluster status becomes Active.
After cluster creation, you should attach EC2 instances to the cluster. You can do it by clicking on the Add Node Group button on the Compute tab.
Set the node group name and select the only Node IAM role from the list.
AWS recommends creating at least 2 nodes of the t3.medium instance type for the production environment.
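As an alternative to the console steps above, eksctl can create the cluster and a matching node group in one command. This is only a sketch using the names assumed throughout this guide (my-app, pool-app, us-east-1); adjust them to your project:

```shell
# Creates an EKS cluster with a managed node group of two t3.medium nodes.
eksctl create cluster \
  --name my-app \
  --region us-east-1 \
  --nodegroup-name pool-app \
  --node-type t3.medium \
  --nodes 2
```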
Now you need to reference the node group we created above in the deployment script: update the nodeGroup value to pool-app.
Accessing cluster from a local machine
Before working with the cluster, you need to configure the AWS CLI.
To access the cluster, run the following command:
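```shell
# Add the EKS cluster credentials to your local kubeconfig.
aws eks update-kubeconfig --region us-east-1 --name my-app
```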
Where us-east-1 is the cluster region and my-app is the cluster name.
If everything is OK, you will be able to switch to your cluster. Type kubectx in the terminal and select your cluster.
By default, the cluster name will be an ARN like arn:aws:eks:us-east-1:402167441269:cluster/my-app. To change it to a more convenient name, you can use the command kubectx my-app=arn:aws:eks:us-east-1:402167441269:cluster/my-app.
Then try to get information about the pods installed in the cluster: type kubectl get pods -A in the terminal. If you did all steps correctly, you will see the following output in the terminal.
Dependencies
Now we need to install our dependencies in the cluster.
Dependency | Description |
---|---|
ingress-nginx | Ingress controller for Kubernetes using Nginx as a reverse proxy and load balancer |
redis | Open source, advanced key-value store. Redis is needed for the API service |
You can read more about how ingress-nginx works here.
Configure the Helm values for ingress-nginx and redis: update the eks.amazonaws.com/nodegroup value to pool-app.
Open the deploy/bin folder and run the bash script.
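Roughly, the script does the equivalent of the following Helm commands; the chart repositories and release names shown here are assumptions, the actual values live in the Ship deploy templates:

```shell
# Rough equivalent of the dependency installation script (names are illustrative).
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace
helm install redis bitnami/redis -n redis --create-namespace
```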
DNS and SSL
Once you deploy ingress-nginx, it will create a Load Balancer with an external IP. All incoming requests to services should be sent to the Load Balancer's external IP; ingress-nginx will then route them to the appropriate services based on the domains in the Ingress configuration.
To get the Load Balancer IP, type kubectl get services -n ingress-nginx in the terminal and copy the EXTERNAL-IP of ingress-nginx-controller.
It takes some time for ingress-nginx to configure everything and provide the EXTERNAL-IP.
We are using Cloudflare for setting DNS records. You can register a domain in Cloudflare or transfer it from another service.
Open the DNS tab in Cloudflare and create two CNAME records for Web and API that point to the Load Balancer's external IP.
Select the Proxied option, which will proxy all traffic through Cloudflare. It does a lot of useful work; you can read more about it here. In our case, we use it for automatic SSL certificate generation.
If you are deploying to a staging/demo environment, add the corresponding suffix to the domain. Example: my-app-staging
Now add your domains to the Helm templates and code. In this example we are deploying to the production environment; if you are deploying to staging, you will need to update the staging.yaml and staging.json files instead.
MongoDB Atlas
Navigate to MongoDB Atlas, sign in to your account and create a new database.
Database creation
- Select the appropriate type. Dedicated for a production environment, shared for staging/demo.
- Select provider and region. We recommend selecting the same or closest region to the AWS EKS cluster.
- Select cluster tier. Free M0 Sandbox should be enough for staging/demo environments. For a production environment, we recommend selecting an option that supports cloud backups, M2 or higher.
- Enter the cluster name.
Security and connection
After cluster creation, you’ll need to set up security. Select the authentication type (username and password) and create a user.
Please be aware that the initial character of the generated password should be a letter. If it isn't, you'll need to create a new password.
Failing to do this may lead to the MONGO_URI variable being parsed incorrectly during deployment.
Add the list of IP addresses that should have access to your cluster. Add the 0.0.0.0/0 address to allow anyone with credentials to connect.
After database creation, go to the dashboard page and get the URI connection string by pressing the Connect button. Select the Connect your application option, choose the driver and MongoDB version, and copy the connection string. Don't forget to replace <password> with your credentials.
Now save this string; you will need it later.
Before moving to production, it’s crucial to set up MongoDB backup methods.
This ensures that you can reliably restore your data in the event of unforeseen circumstances.
CI/CD Preparation
Before setting up CI/CD, you need to create a separate user in AWS IAM with certain permissions. Let's create this user.
First of all, we need to create a policy for our user. Go to the IAM dashboard, open the Policies page in the sidebar, and click Create policy. After choosing the JSON tab, insert the policy config.
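As a rough sketch of the kind of permissions this pipeline needs (pushing images to ECR and describing the EKS cluster), an equivalent policy could also be created from the AWS CLI as shown below. The action list here is an assumption, so adjust it to what your workflows actually use:

```shell
# Assumption: minimal permissions for pushing images to ECR and reading cluster info.
aws iam create-policy \
  --policy-name my-app-ci-cd \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "ecr:GetAuthorizationToken",
          "ecr:BatchCheckLayerAvailability",
          "ecr:CompleteLayerUpload",
          "ecr:InitiateLayerUpload",
          "ecr:PutImage",
          "ecr:UploadLayerPart"
        ],
        "Resource": "*"
      },
      {
        "Effect": "Allow",
        "Action": ["eks:DescribeCluster"],
        "Resource": "*"
      }
    ]
  }'
```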
In the second step, you can optionally add tags to your policy.
At the last step, give your policy a name and review the summary.
Now we need to create a user. Open the Users page in the sidebar and click Add user.
In the next step, attach your policy to the user: click Attach existing policies directly and select the policy we created earlier.
At the next step, you can optionally add tags to your user.
The fourth step is to review your user and click Create user.
Once you’re done, you’ll see a list of users. Find yours and click Create access key.
Then, pick Application running on an AWS compute service as the use case. You can also add a tag if needed.
After that, you’ll get your Access Key ID and Secret Access Key. Remember to save them because you won’t see them again.
Now we need to give the user permissions on the EKS cluster. Use the following command to add the user to the Kubernetes system:masters group:
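```shell
# One way to do this is eksctl's iamidentitymapping, which maps the IAM user
# into the system:masters group. Replace <ACCOUNT_ID> and <USER_NAME> with your own values.
eksctl create iamidentitymapping \
  --cluster my-app \
  --region us-east-1 \
  --arn arn:aws:iam::<ACCOUNT_ID>:user/<USER_NAME> \
  --group system:masters \
  --username <USER_NAME>
```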
In the --arn parameter, specify your user's ARN, which you can find in the IAM dashboard.
Environment variables
API
For the API deployment, you need to set up environment variables using Kubernetes secrets and configMaps.
Secrets in Kubernetes are used to store sensitive information, such as passwords, API tokens, and keys. They are encoded in Base64 format to provide a level of security. These can be mounted into containers as data volumes or used as environment variables.
ConfigMaps in Kubernetes are used to store configuration data in key-value pairs, such as environment variables, command-line arguments, or configuration files. They help decouple configuration from containerized applications.
Before deploying the app, make sure all necessary variables from the API config are set. Here are the default required variables:
- MONGO_URI (requires encoding)
- MONGO_DB_NAME
- API_URL
- WEB_URL
Open the deploy/bin folder and run the bash script.
Enter the stage name. Then, you’ll be asked for variable name, its value, and whether it should be encoded.
Repeat this process as required to create all the essential variables.
If you choose to encode a variable, it will be stored as a secret in Kubernetes. Otherwise, it is stored in a configMap.
The script works for initially creating secrets and configMaps, as well as for updates. When updating, you only need to input the variables that require changes, not all of them.
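Under the hood, this is roughly equivalent to creating a Secret and a ConfigMap with kubectl. The resource names and values below are illustrative; the Ship script manages the actual resources:

```shell
# Illustrative only: encoded variables end up in a Secret, plain ones in a ConfigMap.
kubectl create secret generic api-secret \
  --namespace production \
  --from-literal=MONGO_URI='mongodb+srv://user:password@host/db'

kubectl create configmap api-config \
  --namespace production \
  --from-literal=API_URL='https://api.my-app.com' \
  --from-literal=WEB_URL='https://my-app.com'
```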
After updating variables, trigger a new deployment: pods read variable values at startup, so they must be restarted for the changes to apply.
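For example, assuming the API deployment is named api and lives in the production namespace, a restart can be triggered with:

```shell
# Restart the pods so they pick up the updated Secret/ConfigMap values.
kubectl rollout restart deployment api -n production
```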
WEB
To modify environment variables for the web app, edit the .env.staging or .env.production file.
Avoid storing sensitive information in web environment files as they are not secure.
Port
To configure the web application to use port 3002, add the line ENV PORT=3002 to the web Dockerfile.
CI/CD
To automate deployment through GitHub Actions, you need to configure the GitHub Secrets used inside the workflow files.
The deployment will be triggered on each commit. Committing to the main branch will trigger a deployment in the staging environment, and committing to the production branch will trigger a deployment in the production environment.
To check the required Secrets, open the workflows in the .github folder at the root of your project.
To automate deployment to the production environment, you need to create the AWS_ACCESS_KEY, AWS_SECRET_ACCESS_KEY, AWS_ACCOUNT_ID, AWS_REGION, and CLUSTER_NAME_PRODUCTION secrets for the api-production.yml and web-production.yml workflows.
- AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY - take these from the credentials file you downloaded when creating the user for CI/CD;
- AWS_ACCOUNT_ID - you can get it from the user menu in the upper right corner of the AWS Management Console;
- AWS_REGION and CLUSTER_NAME_PRODUCTION - set these according to your project; for this guide we use us-east-1 and my-app respectively.
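If you prefer the command line, the GitHub CLI can set these repository secrets; the values below are placeholders:

```shell
# Set the repository secrets with the GitHub CLI (values are placeholders).
gh secret set AWS_ACCESS_KEY --body "AKIA..."
gh secret set AWS_SECRET_ACCESS_KEY --body "<secret-access-key>"
gh secret set AWS_ACCOUNT_ID --body "402167441269"
gh secret set AWS_REGION --body "us-east-1"
gh secret set CLUSTER_NAME_PRODUCTION --body "my-app"
```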
After adding all the secrets, you should have the same secrets as in the following screenshot.
Now commit all changes to GitHub; this will trigger a deployment.
Done! The application is deployed and can be accessed via the configured domains.
If something goes wrong, you can check the workflow logs on GitHub and use the kubectl logs and kubectl describe commands.
Manual deployment
To deploy services to the cluster manually, you need to set the cluster authorization credentials inside the config.
Set environment and namespace to production or staging, and add your AWS credentials to the config.
Run the deployment script. It will do the same thing as the CI deployment, just triggered manually.
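For example, assuming the script lives in deploy/bin (the exact file name may differ in your project):

```shell
# Hypothetical invocation; check deploy/bin for the actual script name.
cd deploy/bin
bash deploy.sh
```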