First, initialize your project. Run npx create-ship-app@latest in your terminal and choose the AWS EKS deployment type. You will get the following project structure.
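For reference, initialization is a single interactive command; the deployment type is chosen from the prompt:
npx create-ship-app@latest
# When prompted, select AWS EKS as the deployment type.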
AWS Regions are physical locations of AWS data centers. Each group of logical data centers is called an Availability Zone (AZ). AZs allow you to run production applications and databases that are more highly available, fault-tolerant, and scalable. Now you need to select an AWS region for the services you will use. You can read more about region selection for your workloads here: What to Consider when Selecting a Region for your Workloads. For this deployment guide, we will use the us-east-1 region.
As a rule, you should create all AWS resources in a single region. If you don’t see resources you created, you may need to switch to the appropriate AWS region in the console.
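If you also use the AWS CLI locally, it can help to set the same default region (assuming the CLI is installed and configured with your credentials):
aws configure set region us-east-1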
You need to create private repositories for storing Docker images. The deployment script will upload images to the Container Registry during the build step, and Kubernetes will automatically pull these images from the Container Registry to run a new version of a service during the deployment step. Now we should create a repository for each service. For Ship, we need to create repositories for the following services:
You should create a private repository for each service manually.
After creation, you should have repositories for the following 4 services in ECR. Docker images for each service are stored in a separate repository.
During the deployment process, the script will automatically build repository paths in the following format:
API - 276472736030.dkr.ecr.us-east-1.amazonaws.com/api;
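If you prefer the CLI over the console, a private repository can also be created with the AWS CLI (shown for the api repository from the example above; repeat for each service):
aws ecr create-repository --repository-name api --region us-east-1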
Navigate to the cluster creation page and choose Custom configuration
Make sure to disable EKS Auto Mode
2. Name Your Cluster
Enter a name for your cluster. It’s recommended to use your project name.
For multi-environment setups, append the environment name to your cluster:
my-ship-app-staging
my-ship-app-production
3. Configure Cluster IAM Role
For the Cluster IAM role:
Click the Create recommended role button
AWS will automatically create IAM roles with necessary EKS cluster permissions
Return to the cluster creation page and select the newly created role
4. Set Authentication Mode
In the Cluster access section:
Set Cluster authentication mode to EKS API and ConfigMap
5. Configure Add-ons
Navigate to ‘Select add-ons’ and verify these required add-ons are selected:
CoreDNS
kube-proxy
Amazon VPC CNI
Node monitoring agent
6. Review and Create
Move to the review section and verify all configuration parameters are correct before creating the cluster.
Default values for other configuration parameters are suitable unless you have specific requirements.
After creation, you need to wait a few minutes until the cluster status becomes Active.
After the cluster is created, you should attach EC2 instances to it. You can do this by clicking the Add Node Group button on the Compute tab.
Set the node group name to pool-app and select the relevant Node IAM role from the list. If you don’t have any IAM roles here, click the Create recommended role button. You will be prompted to create properly configured IAM roles with all necessary permissions.
AWS recommends creating at least 2 nodes of the t3.medium instance type for the production environment.
Default values for other configuration parameters are suitable unless you have specific requirements.
Change to the deploy/dependencies directory in your terminal.
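From the root of the generated project:
cd deploy/dependencies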
2. Configure Helm Values (Optional)
This step is required only if you specified a custom node group name for your EKS cluster. If you did, update the eks.amazonaws.com/nodegroup value in values.yaml.gotmpl:
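As a rough sketch only (the exact structure depends on the values.yaml.gotmpl generated for your project; pool-app is the node group name used earlier in this guide), the value being updated is the node group label on the node selector:
# values.yaml.gotmpl (hypothetical excerpt)
nodeSelector:
  eks.amazonaws.com/nodegroup: pool-app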
In the Cloudflare DNS tab, create 2 CNAME records:
One for Web interface
One for API endpoint
Both should point to your Load Balancer’s external hostname (a lookup command is shown after this list). Enable the Proxied option to:
Route traffic through Cloudflare
Generate SSL certificates automatically
Cloudflare’s free Universal SSL certificates only cover the apex domain and one subdomain level. For multiple subdomain levels, you’ll need an Advanced Certificate.
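If you need to look up the Load Balancer’s external hostname, a generic way (assuming kubectl is already pointed at your cluster) is to list services of type LoadBalancer and read the EXTERNAL-IP column:
kubectl get svc --all-namespaces | grep LoadBalancer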
4. Update Configuration Files
Update your domain settings in the appropriate environment configuration files. For the API service:
MongoDB Atlas is a fully managed cloud database service that provides automated backups, scaling, and security features. It offers 99.995% availability with global deployment options and seamless integration with AWS infrastructure.
Navigate to Database Access → Add New Database User
Authentication Method: Password
Username: Use environment-specific names (e.g., api-production, api-staging)
Password: Generate a strong password
Database User Privileges: Read and write to any database
Password Requirements: Ensure the password starts with a letter and contains only alphanumeric characters and common symbols. Special characters at the beginning can cause URI parsing issues.
2. Configure Network Access
Navigate to Network Access → Add IP Address
Click Allow access from anywhere to allow connections from any IP with valid credentials
For production, consider restricting to specific IP ranges for enhanced security
Kubernetes applications require proper environment variable configuration for both API and Web components. This section covers how to set up and manage environment variables securely using Kubernetes secrets and configuration files.
For the API deployment, you need to set up environment variables using Kubernetes secrets to securely manage sensitive configuration data.
Secrets in Kubernetes are used to store sensitive information, such as passwords, API tokens, and keys.
They are stored Base64-encoded; note that Base64 is an encoding, not encryption, so access to secrets should still be restricted.
These can be mounted into containers as data volumes or used as environment variables.
Before deploying the app, make sure all necessary variables from the API config exist. Here is the minimal set of required variables (an example .env file is shown after this list):
Specifies the application environment (development, staging, production). This controls logging levels, debugging features, error reporting, and other environment-specific behaviors. The API uses this to determine which configuration settings to load.
MONGO_URI
MongoDB connection string including authentication credentials and cluster information. This is the primary database connection for the API. Format: mongodb+srv://username:[email protected]. Each environment should use a separate database cluster or at minimum separate credentials.
MONGO_DB_NAME
Name of the MongoDB database to use for this environment. Each environment (development, staging, production) should have its own database to prevent data conflicts and ensure proper isolation.
API_URL
The fully qualified domain name where the API will be accessible. This must be a valid HTTPS URL and should match your Kubernetes ingress configuration. Used for CORS settings and internal service communication.
WEB_URL
The fully qualified domain name where the web application will be accessible. Used for CORS configuration, redirect URLs, email templates, and social sharing metadata. Must be a valid HTTPS URL.
Replace all example values with your actual configuration. Never use production secrets in documentation or version control.
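For illustration only, a .env.production file with placeholder values might look like this (the APP_ENV name and all values below are assumptions; replace them with your actual configuration):
# .env.production (example placeholder values only)
APP_ENV=production
MONGO_URI=mongodb+srv://api-production:<password>@cluster0.example.mongodb.net
MONGO_DB_NAME=api-production
API_URL=https://api.my-ship-app.example.com
WEB_URL=https://my-ship-app.example.com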
5. Import Secrets to Kubernetes
Import secrets from the .env file into a Kubernetes secret using k8sec:
k8sec load -f .env.production api-production-secret -n production
After updating environment variables, you must initiate a new deployment for changes to take effect.
Kubernetes pods cache variable values during startup, requiring a pod restart or rolling update to apply changes.
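For example, assuming the API runs as a Deployment named api in the production namespace (adjust both names to your setup), you can verify the secret contents and trigger a rolling update:
k8sec list api-production-secret -n production
kubectl rollout restart deployment api -n production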
The web application uses Next.js environment variables that are embedded at build time and made available in the browser. Unlike API secrets, these variables are stored directly in the GitHub repository.
Why Web Environment Variables Are Safe in Git: Web environment variables (prefixed with NEXT_PUBLIC_) contain only public configuration like URLs and API endpoints. They don’t include sensitive data like passwords or API keys, making them safe to store in version control. These values are already exposed to users in the browser, so repository storage doesn’t create additional security risks.
Security Notice: Never store sensitive information (passwords, API keys, secrets) in web environment files as they will be accessible on the client side. Only use public configuration values that are safe to expose to end users.
Best Practice: Keep web environment files in your repository and ensure all values are non-sensitive. If you need to reference sensitive data from the frontend, create a secure API endpoint that returns the necessary information after proper authentication.
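As an illustration (the file path and variable names below are hypothetical; use whichever NEXT_PUBLIC_ variables your web app actually reads), a web environment file committed to the repository should contain only public values:
# web/.env.production (hypothetical example, public values only)
NEXT_PUBLIC_API_URL=https://api.my-ship-app.example.com
NEXT_PUBLIC_WEB_URL=https://my-ship-app.example.com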
To set up CI/CD with GitHub Actions securely, we need to create a dedicated IAM user in AWS with specific permissions. This separate user will be used exclusively for CI/CD operations, following the principle of least privilege and keeping deployment credentials isolated from other system users.
Before starting, make sure you have created a GitHub repository for your project.
GitHub Secrets and variables allow you to manage reusable configuration data. Secrets are encrypted and are used for sensitive data. Learn more about encrypted secrets. Variables are shown as plain text and are used for non-sensitive data. Learn more about variables.
The deployment will be triggered on each commit:
Commits to main branch → deploy to staging environment
Commits to production branch → deploy to production environment
Configure the following secrets and variables in your GitHub repository:
AWS_SECRET_ACCESS_KEY (secret): The secret access key from the AWS IAM user created for CI/CD. This allows GitHub Actions to authenticate with AWS services.
AWS_ACCESS_KEY_ID (variable): The access key ID from the AWS IAM user. Used in conjunction with the secret key for AWS authentication.
AWS_REGION (variable): The AWS region where your EKS cluster and ECR registry are located (e.g. us-east-1).
CLUSTER_NODE_GROUP (variable): The name of the EKS node group where your application pods will be scheduled (e.g. pool-app).
CLUSTER_NAME_PRODUCTION (variable): The name of your production EKS cluster. Required when deploying to the production environment.
CLUSTER_NAME_STAGING (variable): The name of your staging EKS cluster. Required when deploying to the staging environment.
Never commit sensitive credentials directly to your repository.
Always use GitHub Secrets for sensitive information like AWS keys.
Variables (unlike secrets) are visible in logs and can be used for non-sensitive configuration values that may need to be referenced or modified.
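If you prefer the GitHub CLI over the web UI (this assumes gh is installed and authenticated against your repository), the same secret and variables can be set from the terminal:
gh secret set AWS_SECRET_ACCESS_KEY            # prompts for the value
gh variable set AWS_ACCESS_KEY_ID --body "AKIA..."
gh variable set AWS_REGION --body "us-east-1"
gh variable set CLUSTER_NODE_GROUP --body "pool-app"
gh variable set CLUSTER_NAME_PRODUCTION --body "my-ship-app-production"
gh variable set CLUSTER_NAME_STAGING --body "my-ship-app-staging"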
Now commit all changes to GitHub to trigger a deployment, or run the workflow manually.
Done! The application is deployed and can be accessed at the configured domain.
If something goes wrong, you can check the workflow logs on GitHub and use the kubectl logs and kubectl describe commands.
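For example (namespace and pod names depend on your environment):
kubectl get pods -n production
kubectl logs <pod-name> -n production
kubectl describe pod <pod-name> -n production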
Upstash Redis is a highly available, infinitely scalable Redis-compatible database that provides enterprise-grade features without the operational complexity.
Redis as a Message Broker: When scaling to multiple server instances, Redis acts as a message broker between Socket.io servers, ensuring real-time messages reach all connected clients regardless of which server they’re connected to.
Log in to your Upstash account and navigate to the Redis section.
2. Create New Database
Click Create Database in the upper right corner to open the configuration dialog.
3. Configure Database Settings
Database Name: Choose a descriptive name for your database (e.g., my-ship-app-production).
Primary Region: Select the region closest to your main application deployment for optimal write performance.
Read Regions: Choose additional regions where you expect high read traffic for better global performance.
4. Select Plan & Deploy
Choose your pricing plan based on expected usage and click Create to deploy your database.
Once your database is created, you’ll need the connection string for your application:
1. Navigate to Connection Info
Go to your database dashboard and find the Connect to your database section.
2. Copy Connection String
Select the Node tab for the appropriate connection string format
Click Reveal to show the hidden password
Copy the complete Redis URI (format: rediss://username:password@host:port)
3. Add to Environment Variables through k8sec
Using k8sec, add the Redis connection string to your environment configuration:
k8sec set api-production-secret -n production REDIS_URI=$REDIS_URI
After updating environment variables, restart your API pod using:
kubectl delete pod <pod-name> -n <namespace>
This will trigger Kubernetes to create a new pod with the updated environment variables.
Redis Insight is a powerful GUI tool for managing and debugging Redis databases.
1. Install Redis Insight
Download and install Redis Insight on your local machine.
2. Add Database Connection
Open Redis Insight
Click Add Database
Paste your Upstash Redis connection string in the Connection URL field
Click Add Database
3. Explore Your Database
Once connected, you can use Redis Insight to:
Browse keys and data structures
Execute Redis commands directly
Monitor real-time performance metrics
Debug application data storage
Real-time Monitoring: Upstash Redis updates database metrics automatically every 10 seconds, giving you near real-time visibility into your Redis performance and usage.