# API action
Source: https://ship.paralect.com/docs/api-reference/api-action

## Overview

**API action** — an HTTP handler that performs database updates and other logic required by the business logic. Actions should reside in the `/actions` folder within a resource. Usually an action is a single file with a meaningful name, e.g. `list`, `get-by-id`, `update-email`. If an action has a lot of logic and requires multiple files, it should be placed into a folder named after the action and exposed using the module pattern (an index.ts file).

Direct database updates of the current resource entity are allowed within an action.

## Examples

```typescript theme={null}
import Router from '@koa/router';
import { z } from 'zod';

import { AppKoaContext } from 'types';

import { validateMiddleware } from 'middlewares';

// request schema (see API action validator)
const schema = z.object({
  userId: z.string(),
});

type GetCompanies = {
  userId: string;
};

async function handler(ctx: AppKoaContext<GetCompanies>) {
  const { userId } = ctx.validatedData; // validatedData is returned by the API validator

  ctx.body = {}; // action result sent to the client
}

export default (router: Router) => {
  // see Rest API validator
  router.get('/companies', validateMiddleware(schema), handler);
};
```

# API action validator
Source: https://ship.paralect.com/docs/api-reference/api-action-validator

## Overview

**API action validator** — an array of functions (think middlewares) used to make sure that the data sent by the client is valid.

## Examples

```typescript theme={null}
import { z } from 'zod';

import { AppKoaContext, AppRouter, Next } from 'types';
import { EMAIL_REGEX, PASSWORD_REGEX } from 'app-constants';
import { userService } from 'resources/user';
import { validateMiddleware } from 'middlewares';

const schema = z.object({
  firstName: z.string().min(1, 'Please enter first name.').max(100),
  lastName: z.string().min(1, 'Please enter last name.').max(100),
  email: z.string().regex(EMAIL_REGEX, 'Email format is incorrect.'),
  password: z.string().regex(PASSWORD_REGEX, 'The password format is incorrect'),
});

type ValidatedData = z.infer<typeof schema>;

async function validator(ctx: AppKoaContext<ValidatedData>, next: Next) {
  const { email } = ctx.validatedData;

  const isUserExists = await userService.exists({ email });

  ctx.assertClientError(!isUserExists, {
    email: 'User with this email is already registered',
  });

  await next();
}

async function handler(ctx: AppKoaContext<ValidatedData>) {
  // ...action code
}

export default (router: AppRouter) => {
  router.post('/sign-up', validateMiddleware(schema), validator, handler);
};
```

To pass data from the `validator` to the `handler`, utilize the `ctx.validatedData` object:

```typescript theme={null}
import { z } from 'zod';

import { AppKoaContext, AppRouter, Next, User } from 'types';
import { EMAIL_REGEX, PASSWORD_REGEX } from 'app-constants';
import { userService } from 'resources/user';
import { validateMiddleware } from 'middlewares';
import { securityUtil } from 'utils';

const schema = z.object({
  email: z.string().regex(EMAIL_REGEX, 'Email format is incorrect.'),
  password: z.string().regex(PASSWORD_REGEX, 'The password format is incorrect'),
});

interface ValidatedData extends z.infer<typeof schema> {
  user: User;
}

async function validator(ctx: AppKoaContext<ValidatedData>, next: Next) {
  const { email, password } = ctx.validatedData;

  const user = await userService.findOne({ email });

  ctx.assertClientError(user && user.passwordHash, {
    credentials: 'The email or password you have entered is invalid',
  });

  const isPasswordMatch = await securityUtil.compareTextWithHash(password, user.passwordHash);

  ctx.assertClientError(isPasswordMatch, {
    credentials: 'The email or password you have entered is invalid',
  });
  ctx.validatedData.user = user;

  await next();
}

async function handler(ctx: AppKoaContext<ValidatedData>) {
  const { user } = ctx.validatedData;

  // ...action code
}

export default (router: AppRouter) => {
  router.post('/sign-in', validateMiddleware(schema), validator, handler);
};
```

# API limitations
Source: https://ship.paralect.com/docs/api-reference/api-limitations

To keep things simple and maintainable, we enforce some limitations at the API level. These limitations are conventions and are not enforced in the code. If followed, they guarantee you can keep shipping things quickly even after years of development.

## Resource updates

### Rule

**Every** entity update should stay within a resource folder. Direct database updates are allowed in data services, handlers and actions.

### Explanation

This restriction makes sure that entity updates are not exposed outside the resource. This enables the discoverability of all updates and simplifies resource changes.

## Complex read operations

### Rule

Complex data read operations (e.g. aggregation, complex queries) must be defined in the data service.

### Explanation

Complex operations often require maintenance and need to be edited when the code (especially the data schema) changes. The goal of this rule is to keep all such operations in the data service or near it in a separate file, so it's easy to discover them.

## Predictable file location

### Rule

Put things as close as possible to the place where they are used.

### Explanation

We want to keep things together that belong together and keep them apart if they belong apart. With that rule in mind, every resource has clear boundaries.

## Two data services

### Rule

Two data services can not use each other directly. You may use two services together in actions or (better!) in workflows; see the sketch at the end of this page.

### Explanation

Use of two data services inside each other leads to circular dependencies and unnecessary complexity. We encourage the use of CUD events to protect the boundaries of different resources and services. You can use two services together in a workflow.

## Dependencies relative to the /src folder

### Rule

Files should be required from the current folder or from the root; `../` is not allowed. E.g.:

```typescript theme={null}
import service from 'resources/user/user.service';
```

### Explanation

This makes it easy to move files around without breaking the app, and it's also much simpler to understand where the actual file is located, compared to something like `../../user.service`.
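The "Two data services" rule above, for example, means composing services side by side instead of nesting them. Here is a minimal sketch of an action that reads through one service and writes through another; the route, the `companyName` field, and the `resources/company` import are illustrative assumptions, not code from the Ship template:

```typescript theme={null}
import { AppKoaContext, AppRouter } from 'types';

import { userService } from 'resources/user';
import { companyService } from 'resources/company';

// Both services are used side by side in the action;
// neither user.service nor company.service imports the other.
async function handler(ctx: AppKoaContext) {
  const { user } = ctx.state;

  const company = await companyService.findOne({ _id: user.companyId });

  ctx.assertClientError(company, { global: 'Company not found' });

  // Keep a hypothetical denormalized copy of the company name on the user.
  await userService.updateOne({ _id: user._id }, () => ({ companyName: company.name }));

  ctx.body = { companyName: company.name };
}

export default (router: AppRouter) => {
  router.post('/sync-company-name', handler);
};
```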
# Data service
Source: https://ship.paralect.com/docs/api-reference/data-service

## Overview

**Data Service** — a layer that has two functions: database updates and domain functions. Database updates encapsulate the logic of updating and reading data in a database (also known as the Repository Pattern in DDD). Domain functions use database updates to perform domain changes (e.g. `changeUserEmail`, `updateCredentials`, etc). For simplicity, we break the single responsibility pattern here.

A Data Service is usually named `entity.service` (e.g. `user.service`).

## Examples

```typescript theme={null}
import _ from 'lodash';

import db from 'db';
import constants from 'app.constants';

import schema from './user.schema';
import { User } from './user.types';

const service = db.createService<User>('users', { schema });

async function createInvitationToUser(email: string, companyId: string): Promise<User> {
  // the logic
}

export default Object.assign(service, {
  createInvitationToUser,
});
```

# Event handler
Source: https://ship.paralect.com/docs/api-reference/event-handler

## Overview

**Event handler** — a simple function that receives an event as an argument and performs the required logic. All event handlers should be stored in the `/handlers` folder within a resource. A handler's name should include the event name, e.g. `user.created.handler.ts`. That helps find all places where an event is used.

Direct database updates of the current resource entity are allowed within a handler.

## Examples

```ts theme={null}
import { eventBus, InMemoryEvent } from '@paralect/node-mongo';

import ioEmitter from 'io-emitter';
import logger from 'logger';

import { DATABASE_DOCUMENTS } from 'app-constants';
import { Document } from 'types';

const { DOCUMENTS } = DATABASE_DOCUMENTS;

eventBus.on(`${DOCUMENTS}.created`, (data: InMemoryEvent<Document>) => {
  try {
    const document = data.doc;

    ioEmitter.publishToUser(document.userId, 'document:created', document);
  } catch (error) {
    logger.error(`${DOCUMENTS}.created handler error: ${error}`);
  }
});
```

```ts theme={null}
import { eventBus, InMemoryEvent } from '@paralect/node-mongo';

import { DATABASE_DOCUMENTS } from 'app-constants';
import { User } from 'types';

const { USERS } = DATABASE_DOCUMENTS;

eventBus.onUpdated(USERS, ['firstName', 'lastName'], (data: InMemoryEvent<User>) => {
  const user = data.doc;

  const fullName = user.lastName ? `${user.firstName} ${user.lastName}` : user.firstName;

  console.log(`User fullName was updated: ${fullName}`);
});
```

# CUD events
Source: https://ship.paralect.com/docs/api-reference/events

## Overview

**CUD events (Create, Update, Delete)** — a set of events published by the `data layer` (the `@paralect/node-mongo` npm package). Events are the best way to solve three main problems:

* nicely update denormalized data (e.g. use user.created and user.deleted to maintain a usersCount field in the company; see the sketch at the end of this page)
* avoid tight coupling between your app entities (e.g. if you need to keep a user updates history, you can just subscribe to user updates in the history resource vs using history.service inside user.resource and marrying user and history)
* they're the best way to integrate with external systems (e.g. events can be published as webhooks, and webhooks can power real-time Zapier triggers)

## Examples

There are three types of events:

* entity.created event (e.g. user.created)

```typescript theme={null}
{
  _id: string,
  createdOn: Date,
  type: 'user.created',
  userId: string,
  companyId: string,
  data: {
    object: {}, // user object stored here
  },
};
```

* entity.updated event (e.g. user.updated). We use [diff](https://github.com/flitbit/diff) to calculate the raw difference between the previous and current versions of the updated entity. The diff object is too complex and should not be used directly. Instead, fields required by the business logic should be exposed via the change object, e.g. previousUserEmail.

```typescript theme={null}
{
  _id: string,
  createdOn: Date,
  type: 'user.updated',
  userId: string,
  companyId: string,
  data: {
    object: {}, // user object stored here
    diff: {},
    change: {}
  },
};
```

* entity.removed event (e.g. user.removed).

```typescript theme={null}
{
  _id: string,
  createdOn: Date,
  type: 'user.removed',
  userId: string,
  companyId: string,
  data: {
    object: {}, // user object stored here
  },
};
```
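As an illustration of the denormalization use case above, here is a minimal sketch of a handler on the company side that keeps a `usersCount` field in sync with `user.created` events. The `resources/company` import, the `usersCount` field, and the update-callback style are assumptions for illustration, not code from the Ship template:

```ts theme={null}
import { eventBus, InMemoryEvent } from '@paralect/node-mongo';

import { DATABASE_DOCUMENTS } from 'app-constants';
import { User } from 'types';

import { companyService } from 'resources/company';

const { USERS } = DATABASE_DOCUMENTS;

eventBus.on(`${USERS}.created`, async (data: InMemoryEvent<User>) => {
  const user = data.doc;

  // Increment the hypothetical denormalized counter on the company document.
  await companyService.updateOne({ _id: user.companyId }, (company) => ({
    usersCount: (company.usersCount ?? 0) + 1,
  }));
});
```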
# Middlewares
Source: https://ship.paralect.com/docs/api-reference/middlewares

## Overview

Ship provides two essential middleware utilities for API routes:

* **Rate Limit Middleware** — protects your API endpoints from excessive requests
* **Validate Middleware** — validates incoming request data using Zod schemas

Both middlewares are located in `/api/src/middlewares` and can be imported and applied to any route.

## Rate Limit Middleware

The rate limit middleware protects your API endpoints from abuse by limiting the number of requests a user can make within a specified time window. It automatically uses Redis when available, falling back to in-memory storage for development environments.

### Parameters

The `rateLimitMiddleware` function accepts an options object with the following parameters:

* `limitDuration` (optional) — Time window in seconds. Default: `60` seconds
* `requestsPerDuration` (optional) — Maximum number of requests allowed within the time window. Default: `10`
* `errorMessage` (optional) — Custom error message shown when the rate limit is exceeded. Default: `'Looks like you are moving too fast. Retry again in few minutes.'`

### Key Features

* **Automatic Storage Selection**: Uses Redis if `REDIS_URI` is configured, otherwise falls back to in-memory storage
* **User-Specific Limits**: Rate limits are applied per authenticated user (based on `user._id`) or per IP address for unauthenticated requests
* **Response Headers**: Includes rate limit headers in the response for client-side tracking

### Example

```typescript theme={null}
import Router from '@koa/router';
import { z } from 'zod';

import { AppKoaContext } from 'types';

import { rateLimitMiddleware, validateMiddleware } from 'middlewares';

// Minimal request schema so the example is self-contained
const schema = z.object({
  email: z.string().email(),
});

async function handler(ctx: AppKoaContext) {
  // Your handler logic here
  ctx.body = { success: true };
}

export default (router: Router) => {
  router.post(
    '/send-email',
    rateLimitMiddleware({
      limitDuration: 300, // 5 minutes
      requestsPerDuration: 5, // 5 requests per 5 minutes
      errorMessage: 'Too many emails sent. Please try again later.',
    }),
    validateMiddleware(schema),
    handler,
  );
};
```

### Common Use Cases

* Protecting email sending endpoints
* Rate limiting authentication attempts
* Preventing API abuse on expensive operations
* Throttling public API endpoints

## Validate Middleware

The validate middleware automatically validates incoming request data against a Zod schema. It combines data from the request body, files, query parameters, and route parameters into a single validated object.

### How It Works

The middleware validates the following request data:

* **Request body** (`ctx.request.body`)
* **Uploaded files** (`ctx.request.files`)
* **Query parameters** (`ctx.query`)
* **Route parameters** (`ctx.params`)

If validation fails, it automatically throws a `400` error with detailed field-level error messages. If validation succeeds, the validated data is available via `ctx.validatedData`.
### Example

```typescript theme={null}
import Router from '@koa/router';
import { z } from 'zod';

import { AppKoaContext } from 'types';

import { validateMiddleware } from 'middlewares';

// Define your schema
const schema = z.object({
  email: z.string().email('Email format is incorrect'),
  firstName: z.string().min(1, 'First name is required').max(100),
  lastName: z.string().min(1, 'Last name is required').max(100),
  age: z.number().int().positive().optional(),
});

type CreateUserData = z.infer<typeof schema>;

async function handler(ctx: AppKoaContext<CreateUserData>) {
  // Access validated data
  const { email, firstName, lastName, age } = ctx.validatedData;

  // Your handler logic here
  ctx.body = { email, firstName, lastName, age };
}

export default (router: Router) => {
  router.post('/users', validateMiddleware(schema), handler);
};
```

### Error Response Format

When validation fails, the middleware returns a structured error response with field-specific error messages:

```json theme={null}
{
  "clientErrors": {
    "email": ["Email format is incorrect"],
    "firstName": ["First name is required"]
  }
}
```

# Overview
Source: https://ship.paralect.com/docs/api-reference/overview

`/api` is a source folder for three independent services:

* API — REST API, entry file is `/api/src/app.ts`
* Migrator — small utility that runs MongoDB migrations, entry file is `/api/src/migrator.ts`
* Scheduler — standalone service that runs background jobs, entry file is `/api/src/scheduler.ts`

## What is a resource?

From the outside world, a resource is a set of REST API endpoints that allow managing a particular business entity (e.g. user). Inside, the resource includes methods to work with the database, business logic, database data, request validation, events and event handlers.

Most of the time, a resource has a 1-to-1 mapping to a database entity. The resource describes how you can work with any given entity. E.g. a User API needs to create new users, edit an existing user, and list all users.

## Resource components

A resource represents one REST endpoint and, most of the time, one database table or collection. A resource usually consists of the following parts:

* [API actions](./api-action). Each action consists of three things: the handler, validator, and route.
* [A data service](./data-service). Typically this service is used to access databases. For simplicity, we mix the data access layer with the domain operations associated with a given entity.
* [Event handlers](./event-handler). Event handlers include any updates that come as a reaction to what happens outside the boundaries of any given entity.
* [Workflows](./workflow). They are essentially updates that require multiple resources to work together. E.g. User Signup is usually a workflow, as it requires a) creating a user, b) creating a company, and c) creating an authentication token.
* [Data schema](/package-sharing/schemas). We use [Zod](https://zod.dev/) to define database schemas and validate them at runtime (we've used [Joi](https://joi.dev/) in the past). A minimal schema sketch appears below.
* Type definitions, if TypeScript is used for the project.

The data schemas and type definitions can be found in the `packages` folder. Check out the ["Package sharing" section](/package-sharing/overview) for more information on data sharing.
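Here is a minimal sketch of what such a Zod database schema and its inferred type might look like. The field names and file layout are illustrative assumptions; see the "Package sharing" section for the schemas actually shipped with the template:

```typescript theme={null}
import { z } from 'zod';

// Hypothetical user schema; fields are for illustration only.
export const userSchema = z.object({
  _id: z.string(),

  createdOn: z.date().optional(),
  updatedOn: z.date().optional(),
  deletedOn: z.date().optional().nullable(),

  firstName: z.string(),
  lastName: z.string(),
  email: z.string().email(),
  isEmailVerified: z.boolean().default(false),
});

// The matching type definition is usually inferred from the schema.
export type User = z.infer<typeof userSchema>;
```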
## Example structure

Here is a resource structure example:

```shell theme={null}
/resource
  /actions
    get.ts
    create.ts
    ...
  index.ts
  resource.service.ts
  resource.routes.ts
```

## Routing

Ship's API uses a structured routing system built on [Koa](https://koajs.com/) that organizes endpoints into three categories: **public**, **private**, and **admin** routes. All requests pass through global middlewares for authentication, error handling, and request processing.

Learn more about the routing architecture in the [Routing](/api-reference/routing/overview) section.

# Middlewares
Source: https://ship.paralect.com/docs/api-reference/routing/middlewares

## Overview

Ship uses middlewares for authentication, error handling, and request processing. Middlewares are applied in two ways:

* **Global Middlewares** — Run for all routes in a specific order
* **Route-Specific Middlewares** — Applied only to private or admin routes

## Execution Order

```mermaid theme={null}
sequenceDiagram
    participant Request
    participant Global Middlewares
    participant Auth Middleware
    participant Handler

    Request->>Global Middlewares: 1-5: Process all global middlewares
    Note over Global Middlewares: attachCustomErrors<br/>attachCustomProperties<br/>routeErrorHandler<br/>extractTokens<br/>tryToAttachUser
    Global Middlewares->>Auth Middleware: 6: auth or adminAuth (if needed)
    Auth Middleware->>Handler: 7: Execute route handler
    Handler-->>Request: Response
```

## Global Middlewares

### 1. attachCustomErrors

Adds custom error handling methods to the context (`ctx`).

**Available methods:**

```typescript theme={null}
// Throw simple error
ctx.throwError('User not found', 404);

// Conditional error
ctx.assertError(user.isActive, 'User account is disabled', 403);

// Field-level validation errors
ctx.throwClientError({
  email: 'Email is required',
  password: 'Password must be at least 8 characters'
});

// Conditional client error
ctx.assertClientError(!existingUser, {
  email: 'Email already exists'
});

// Redirect with error message
ctx.throwGlobalErrorWithRedirect('Authentication failed', 'https://example.com/login');
```

### 2. attachCustomProperties

Initializes `ctx.validatedData = {}`, which is later populated by the [validate middleware](/api-reference/middlewares#validate-middleware).

### 3. routeErrorHandler

Catches and formats errors from route handlers. Logs errors with request context and hides sensitive details in production.

**Error response format:**

```json theme={null}
{
  "errors": {
    "email": ["Email is required"],
    "password": ["Password must be at least 8 characters"]
  }
}
```

### 4. extractTokens

Extracts access tokens from requests and stores them in `ctx.state.accessToken`.

**Token sources (checked in order):**

1. `ACCESS_TOKEN` cookie
2. `Authorization: Bearer <token>` header

### 5. tryToAttachUser

Validates the access token and attaches the user to `ctx.state.user` if valid. Also updates the user's last request timestamp.

This middleware doesn't block requests if the token is invalid. Use the `auth` middleware to enforce authentication.

## Route-Specific Middlewares

Applied to specific route types for authentication and authorization.

### auth

Ensures the user is authenticated by checking if `ctx.state.user` exists. Returns `401` if not authenticated.

**Usage:**

```typescript theme={null}
app.use(mount('/account', compose([auth, accountRoutes.privateRoutes])));
```

### adminAuth

Validates admin access by checking the `x-admin-key` header against the `ADMIN_KEY` environment variable. Returns `401` if invalid.

**Usage:**

```typescript theme={null}
app.use(mount('/admin/users', compose([adminAuth, userRoutes.adminRoutes])));
```

**Making admin requests:**

```bash theme={null}
curl -X GET https://api.example.com/admin/users \
  -H "x-admin-key: your-admin-key-here"
```

## Summary

**Execution order:**

1. `attachCustomErrors` - Adds error methods
2. `attachCustomProperties` - Initializes properties
3. `routeErrorHandler` - Wraps in error handler
4. `extractTokens` - Extracts token from cookie/header
5. `tryToAttachUser` - Validates token, attaches user
6. `auth` or `adminAuth` (if applicable)
7. Route handler

## See Also

* [Routing](/api-reference/routing/overview) - Routing architecture overview
* [Middlewares](/api-reference/middlewares) - Validation and rate limiting
* [API Actions](/api-reference/api-action) - Creating resource endpoints

# Overview
Source: https://ship.paralect.com/docs/api-reference/routing/overview

Ship's API routing system is built on [Koa](https://koajs.com/) and organizes routes into three categories: **public**, **private**, and **admin** routes. All routes pass through global middlewares before reaching their handlers.
## Request Flow

```mermaid theme={null}
flowchart TD
    A[Incoming Request] --> B[Global Middlewares]
    B --> C{Route Type?}
    C -->|Public| D[Public Routes]
    C -->|Private| E[auth middleware]
    C -->|Admin| F[adminAuth middleware]
    E --> G[Private Routes]
    F --> H[Admin Routes]
    D --> I[Route Handler]
    G --> I
    H --> I
    I --> J[Response]
```

## Route Definition

Routes are defined in `/api/src/routes/index.ts`:

```typescript /api/src/routes/index.ts theme={null}
const defineRoutes = (app: AppKoa) => {
  // Global middlewares (applied to all routes)
  app.use(attachCustomErrors);
  app.use(attachCustomProperties);
  app.use(routeErrorHandler);
  app.use(extractTokens);
  app.use(tryToAttachUser);

  // Route registration
  publicRoutes(app);
  privateRoutes(app);
  adminRoutes(app);
};
```

Global middlewares run for every request. See [Routing Middlewares](/api-reference/routing/middlewares) for details.

## Route Types

### Public Routes

Accessible without authentication. Defined in `/api/src/routes/public.routes.ts`.

**Examples:**

* `GET /health` - Health check
* `POST /account/sign-up` - User registration
* `POST /account/sign-in` - User authentication

### Private Routes

Require user authentication via the `auth` middleware. Defined in `/api/src/routes/private.routes.ts`.

```typescript theme={null}
// Private routes use auth middleware
app.use(mount('/account', compose([auth, accountRoutes.privateRoutes])));
```

**Examples:**

* `GET /account` - Get current user account
* `PUT /account` - Update user profile
* `GET /users` - List users

### Admin Routes

Require admin authentication via the `adminAuth` middleware (validates the `x-admin-key` header). Defined in `/api/src/routes/admin.routes.ts`.

```typescript theme={null}
// Admin routes use adminAuth middleware
app.use(mount('/admin/users', compose([adminAuth, userRoutes.adminRoutes])));
```

**Examples:**

* `GET /admin/users` - Admin user management
* `PUT /admin/users/:id` - Admin user updates

## Route Mounting

Ship uses two Koa utilities:

* **koa-mount** - Mount routes at a specific path prefix
* **koa-compose** - Compose multiple middlewares together

```typescript theme={null}
// Mount routes with prefix
app.use(mount('/account', accountRoutes.publicRoutes));

// Compose auth middleware with routes
app.use(mount('/account', compose([auth, accountRoutes.privateRoutes])));
```

## Authentication Flow

```mermaid theme={null}
sequenceDiagram
    participant Client
    participant extractTokens
    participant tryToAttachUser
    participant auth
    participant Handler

    Client->>extractTokens: Request with token
    extractTokens->>tryToAttachUser: ctx.state.accessToken
    tryToAttachUser->>tryToAttachUser: Validate & find user
    tryToAttachUser->>auth: ctx.state.user (if valid)

    alt Private Route
        auth->>auth: Check ctx.state.user
        alt Authenticated
            auth->>Handler: Process request
            Handler->>Client: Response
        else Not Authenticated
            auth->>Client: 401 Unauthorized
        end
    else Public Route
        tryToAttachUser->>Handler: Process request
        Handler->>Client: Response
    end
```

## See Also

* [Routing Middlewares](/api-reference/routing/middlewares) - Global and route-specific middlewares
* [API Actions](/api-reference/api-action) - Creating resource endpoints
* [Middlewares](/api-reference/middlewares) - Validation and rate limiting

# Testing
Source: https://ship.paralect.com/docs/api-reference/testing

## Overview

Tests in Ship applications are **optional by default**. The template excludes testing dependencies to keep it lightweight and focused on rapid development.
However, as your project grows in complexity or requires higher reliability, testing becomes essential. Testing is implemented using the [Jest](https://jestjs.io/) framework and [MongoDB memory server](https://github.com/typegoose/mongodb-memory-server). The setup supports execution in CI/CD pipelines.

MongoDB memory server allows connecting to an in-memory MongoDB instance and running integration tests in isolation, ensuring reliable and fast test execution without affecting your production database.

Testing is worth the investment for:

* Complex and confusing business rules
* Critical calculations/algorithms
* Core flows that affect other features
* Logic reused across the applications
* Areas with recurring hard bugs

You can usually skip tests for:

* Simple CRUD operations
* Prototype/MVP development
* Short-term projects
* Basic UI components
* Static content pages

## Installation and Setup

### Installing Dependencies

Add the necessary testing packages to your project:

```shell theme={null}
pnpm add -D --filter=api \
  jest \
  @types/jest \
  ts-jest \
  @shelf/jest-mongodb \
  mongodb-memory-server \
  supertest \
  @types/supertest \
  dotenv
```

### Jest Configuration

Navigate to the root of `apps/api` and create a configuration file for Jest:

```javascript jest.config.js theme={null}
/** @type {import('jest').Config} */
const config = {
  preset: '@shelf/jest-mongodb',
  verbose: true,
  testEnvironment: 'node',
  testMatch: ['**/?(*.)+(spec.ts)'],
  transform: {
    '^.+\\.(ts|tsx)$': ['ts-jest', { useESM: true, diagnostics: false }],
  },
  extensionsToTreatAsEsm: ['.ts', '.tsx'],
  watchPathIgnorePatterns: ['globalConfig'],
  roots: [''],
  modulePaths: ['src'],
  moduleDirectories: ['node_modules'],
  testTimeout: 10000,
  forceExit: true,
  detectOpenHandles: true,
  setupFiles: ['dotenv/config'],
};

export default config;
```

After that, create a configuration file for jest-mongodb:

```javascript jest-mongodb-config.js theme={null}
module.exports = {
  mongoURLEnvName: 'MONGO_URI',
  mongodbMemoryServerOptions: {
    binary: {
      version: '8.0.0',
    },
    autoStart: false,
  },
};
```

Jest requires a set of environment variables to work properly. Create a `.env.test` file in the root of `apps/api` and set the following variables:

```shell .env.test theme={null}
APP_ENV=staging
API_URL=http://localhost:3001
WEB_URL=http://localhost:3002
MONGO_DB_NAME=api-tests
```

### package.json scripts

Add test scripts to your `apps/api/package.json`:

```json apps/api/package.json theme={null}
{
  "scripts": {
    "test": "NODE_OPTIONS=--experimental-vm-modules DOTENV_CONFIG_PATH=.env.test jest --runInBand",
    "test:watch": "NODE_OPTIONS=--experimental-vm-modules DOTENV_CONFIG_PATH=.env.test jest --watch",
    "test:coverage": "NODE_OPTIONS=--experimental-vm-modules DOTENV_CONFIG_PATH=.env.test jest --coverage"
  }
}
```

### Test Structure

Tests should be placed next to the code they are testing inside a `tests/` folder. Use `*.spec.ts` suffixes and standardize by unit type:

* `.action.spec.ts` — for action handler + validator (HTTP via supertest)
* `.service.spec.ts` — for data/service layer
* `.validator.spec.ts` — for standalone schema/validators

#### API resource example

```text theme={null}
apps/api/src/resources/user/
├── actions/
│   ├── create.ts
│   └── tests/
│       └── create.action.spec.ts
├── user.service.ts
├── user.routes.ts
└── tests/
    ├── user.service.spec.ts
    └── factories/
        └── user.factory.ts
```
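The `factories/` folder above holds small helpers that build test fixtures. Its contents are not shown in these docs, so the following `user.factory.ts` is an illustrative sketch; the `buildUser` helper and its field names are assumptions:

```typescript user.factory.ts theme={null}
import { generateId } from '@paralect/node-mongo';

import { User } from 'types';

// Builds a valid user document with sensible defaults,
// letting each test override only the fields it cares about.
export const buildUser = (overrides: Partial<User> = {}): Partial<User> => ({
  _id: generateId(),
  firstName: 'John',
  lastName: 'Doe',
  email: `john.doe+${Date.now()}@example.com`,
  isEmailVerified: false,
  ...overrides,
});
```

A test can then insert `buildUser({ isEmailVerified: true })` instead of repeating the full user shape in every spec.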
#### Utilities

Colocate tests for utility modules. Keep them small and pure (no DB). Create a `tests/` folder inside `utils`:

```text theme={null}
apps/api/src/utils/
├── cookie.util.ts
├── promise.util.ts
├── security.util.ts
└── tests/
    ├── cookie.util.spec.ts
    ├── promise.util.spec.ts
    └── security.util.spec.ts
```

## Testing Examples

### Service Integration Test Example

```typescript user.service.spec.ts theme={null}
import { generateId } from '@paralect/node-mongo';

import { userService } from 'resources/user';

describe('user service', () => {
  beforeEach(async () => await userService.deleteMany({}));

  it('should create user', async () => {
    const mockUser = {
      _id: generateId(),
      firstName: 'John',
      lastName: 'Doe',
      email: 'john.doe@example.com',
      isEmailVerified: false,
    };

    await userService.insertOne(mockUser, { publishEvents: false });

    const insertedUser = await userService.findOne({ _id: mockUser._id });

    expect(insertedUser).not.toBeNull();
    expect(insertedUser?.email).toBe(mockUser.email);
  });

  it('should update user', async () => {
    const user = await userService.insertOne(
      {
        _id: generateId(),
        firstName: 'John',
        lastName: 'Doe',
        email: 'john@example.com',
        isEmailVerified: false,
      },
      { publishEvents: false },
    );

    await userService.updateOne({ _id: user._id }, () => ({ isEmailVerified: true }), { publishEvents: false });

    const updatedUser = await userService.findOne({ _id: user._id });

    expect(updatedUser?.isEmailVerified).toBe(true);
  });
});
```

### API Action Test Example

Test your API endpoints:

```typescript sign-up.action.spec.ts theme={null}
import app from 'app';
import request from 'supertest';

import { tokenService } from 'resources/token';
import { userService } from 'resources/user';

describe('post /account/sign-up', () => {
  beforeEach(async () => await Promise.all([userService.deleteMany({}), tokenService.deleteMany({})]));

  it('should create user with valid data', async () => {
    const userData = {
      firstName: 'John',
      lastName: 'Doe',
      email: 'john.doe@example.com',
      password: 'Password123!',
    };

    await request(app.callback()).post('/account/sign-up').send(userData).expect(204);
  });

  it('should return validation error for invalid email', async () => {
    const userData = {
      firstName: 'John',
      lastName: 'Doe',
      email: 'invalid-email',
      password: 'Password123!',
    };

    await request(app.callback()).post('/account/sign-up').send(userData).expect(400);
  });
});
```

### Testing Utilities

```typescript security.util.spec.ts theme={null}
import { securityUtil } from 'utils';

describe('security utils', () => {
  describe('generateSecureToken', () => {
    it('should generate token of specified length', async () => {
      const token = await securityUtil.generateSecureToken(32);

      expect(token).toHaveLength(32);
      expect(typeof token).toBe('string');
    });
  });

  describe('hashPassword', () => {
    it('should hash password correctly', async () => {
      const password = 'test-password-123';
      const hash = await securityUtil.hashPassword(password);

      expect(hash).not.toBe(password);

      const isValid = await securityUtil.verifyPasswordHash(hash, password);
      expect(isValid).toBe(true);
    });
  });
});
```

## Best Practices

### Test Isolation

Each test should be isolated from the others. Use `beforeEach` to clean the database before each test.
```typescript theme={null}
describe('user service', () => {
  beforeEach(async () => {
    // Clean database before each test
    await userService.deleteMany({});
  });
});
```

### Test Naming

Use descriptive test names that explain the expected behavior:

```typescript theme={null}
// Good
it('should return user data when valid ID is provided', () => {});

// Bad
it('should work', () => {});
```

### Mock External Services

Mock external API calls and services:

```typescript theme={null}
jest.mock('@aws-sdk/client-s3', () => ({
  S3Client: jest.fn(),
  PutObjectCommand: jest.fn(),
}));
```

## GitHub Actions

By default, tests run for each pull request to the `main` branch through the `run-tests.yml` workflow.

```yaml .github/workflows/run-tests.yml theme={null}
name: run-tests

on:
  pull_request:
    branches:
      - main

jobs:
  test:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [ 16.x ]

    steps:
      - uses: actions/checkout@v2
      - name: Test api using jest
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - run: npm install
      - run: npm test
```

To reject pull requests when tests fail, visit the `Settings > Branches` tab in your repository, then add the branch [protection rule](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/managing-a-branch-protection-rule) "Require status checks to pass before merging".

# Using events
Source: https://ship.paralect.com/docs/api-reference/using-events

We are using events to keep our systems simple and use the [full power of denormalized data in MongoDB](https://www.mongodb.com/blog/post/6-rules-of-thumb-for-mongodb-schema-design-part-2).

Denormalization is a database optimization technique in which we add redundant data to one or more tables. It can help us avoid costly joins in a database. You can find a simple example of this technique below.
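For example, a company document can carry a redundant `usersCount` field so that reading a company never requires joining or counting the users collection. The field names below are illustrative:

```typescript theme={null}
// users collection: the source of truth
const user = {
  _id: 'u-1',
  companyId: 'c-1',
  email: 'john.doe@example.com',
};

// companies collection: carries redundant, denormalized data
const company = {
  _id: 'c-1',
  name: 'Acme',
  usersCount: 12, // maintained by user.created / user.removed event handlers
};
```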
The picture below shows the 'events part' of a sample API implementation:

Event handlers

In Ship, every resource produces events on create, update and delete database operations. As a result, we have all events in one place, and these events describe system behavior. Stripe has [an event for any change](https://stripe.com/docs/api/events/types) that happens in their system. We do pretty much the same.

# Workflow
Source: https://ship.paralect.com/docs/api-reference/workflow

## Overview

`Workflow` — a complex business operation that requires two or more data services to be used together. If a workflow is simple enough and used only in one place, it can be defined right in the REST API action. If not, it should be placed into a `workflowName.workflow` file.

The most common workflow example is a `signup.workflow` that exposes a `createUserAccount` function and is used when a new user signs up or receives an invite.

## Examples

```typescript theme={null}
import userService from 'resources/user/user.service';
import companyService from 'resources/company/company.service';

import { User } from 'types';

const signup = async ({
  firstName,
  surname,
  email,
}: {
  firstName: string;
  surname: string;
  email: string;
}): Promise<User | null> => {
  let signedUpUser: User | null = null;

  await companyService.withTransaction(async (session: any) => {
    const companyId = companyService.generateId();
    const userId = userService.generateId();

    await companyService.create({
      _id: companyId,
      name: '',
    }, { session });

    signedUpUser = await userService.create({
      _id: userId,
      companyId,
      email,
      firstName,
      surname,
    }, { session });
  });

  return signedUpUser;
};

export default {
  signup,
};
```

# Architecture
Source: https://ship.paralect.com/docs/architecture

Every technological decision is driven by simplicity. We believe that a product used by people is the only reason why technologies exist. Our goal is to help products stand up on their feet without investing too much at the early stages.

## Overview

Our technological choices are based on the following main tools: [Next.js](https://nextjs.org/), [Tanstack Query](https://tanstack.com/query/latest/), [React Hook Form](https://react-hook-form.com/), [Mantine UI](https://mantine.dev/), [Koa.js](https://koajs.com/), [Socket.IO](https://socket.io/), [MongoDB](https://www.mongodb.com/), [Turborepo](https://turbo.build/repo/docs), [Docker](https://www.docker.com/), [Kubernetes](https://kubernetes.io/), [GitHub Actions](https://github.com/features/actions) and [TypeScript](https://www.typescriptlang.org/).

On a high level, Ship consists of the following parts:

The image below illustrates the main components and the key relationships between them:

## Starting application with Turborepo

To run infra and all services, just run: `pnpm start` 🚀

### Turborepo: Running infra and services separately

1. Start base infra services in Docker containers:

```bash theme={null}
pnpm run infra
```

2. Run services with Turborepo:

```bash theme={null}
pnpm turbo-start
```

## Using Ship with Docker

To run infra and all services, just run: `pnpm run docker` 🚀

### Docker: Running infra and services separately

1. Start base infra services in Docker containers:

```bash theme={null}
pnpm run infra
```

2. Run the services you need:

```bash theme={null}
./bin/start.sh api web
```

You can also run infra services separately with the `./bin/start.sh` bash script.

# Contribution Guide
Source: https://ship.paralect.com/docs/contribution-guide

Thank you for your interest in contributing to Ship. Here you'll find all the necessary information to get started.
## Ways to contribute

* **Fix Ship docs** – fix issues like bad wording, incomplete information, or missing examples in the documentation.
* **Fix Ship template** – contribute to resolving bugs and enhancing the existing code in the current template.
* **Give feedback and suggest improvements** – report bugs through [issues](https://github.com/paralect/ship/issues) or suggest improvements.
* **Share Ship** – share the Ship link with everyone who might be interested.

## How to open a pull request

1. Fork the [project](https://github.com/paralect/ship/fork).
2. Set up the project locally.
3. Create a new branch from the `main` branch, using a name that describes your changes.
4. Implement your changes and create a commit.
5. Open a pull request.
6. Attach 3 labels:
   * "To Review" - indicates that the task is ready for review.
   * Functionality type - specify a type (Feature, Bug, or Improvement).
   * Edit location - specify the area of modification (docs, create-ship-app, node-mongo or template).

Your pull request will be checked by our team. If everything looks good, it will be merged into the `main` branch. If there are suggestions, we'll leave comments and label it "Changes Requested". You'll get an email to let you know. Once you've implemented the necessary changes, please reattach the "To Review" label.

## Documentation

We use [Mintlify](https://mintlify.com/) for documentation, located in the `/docs` folder. The docs are written in [MDX](https://mdxjs.com/) format. Check out the [GitHub Markdown Guide](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) and the [Mintlify documentation](https://mintlify.com/docs/quickstart) to become familiar with this syntax.

### How to run documentation

1. Install Mintlify globally. Ensure that your Node.js version is 19.0.0 or higher.

```shell pnpm theme={null}
pnpm add -g mint
```

```shell npm theme={null}
npm i -g mint
```

2. Execute the following command in the project's root directory:

```shell pnpm theme={null}
pnpm run docs
```

```shell npm theme={null}
npm run docs
```

Documentation will open at [http://localhost:4100](http://localhost:4100) if that port is not in use.

## Packages

Within the `/packages` folder, you'll find two essential packages: `create-ship-app` and `node-mongo`.

**create-ship-app** is a simple CLI tool for bootstrapping Ship applications. It downloads the actual template from the Ship monorepo and configures it to run. Learn more in the [documentation](/packages/create-ship-app.mdx).

**node-mongo** is a lightweight reactive extension to the official Node.js MongoDB [driver](https://mongodb.github.io/node-mongodb-native/4.10/). It's used in the Ship template, and you can find details in the [documentation](/packages/node-mongo.mdx).

## Ship template structure

The project template is in the `/template` folder and includes all necessary files and folders. Let's explore the structure:

* **.github** - contains scripts for GitHub Actions.
* **.husky** - contains a script that runs before committing and checks for ESLint errors.
* **.vscode** - VS Code settings.
* **apps** - contains the [API](/api-reference/overview.mdx) and [WEB](/web/overview.mdx).
* **bin** - contains scripts for setting up and starting the project.
* **packages** - contains packages with shared code. Find more details about package sharing in our [documentation](/package-sharing/overview.mdx).

### How to run Ship template

1. Navigate to the `/template` folder:

```shell theme={null}
cd template
```
2. In `/apps/api`, copy the **.env.example** file to a file named **.env**.

3. Install dependencies:

```shell theme={null}
pnpm i
```

4. Set up and start the project:

```shell theme={null}
pnpm run start
```

The **API** will run on port 3001, the **WEB** on port 3002, and the **Mailer** on port 4000.

### Deployment

Ship can be deployed on four platforms: **AWS**, **Digital Ocean Kubernetes**, **Digital Ocean Apps**, and **Render**. Deployment settings are in the `/deploy` folder within the respective folders. When installing a project via `npx create-ship-app`, users can choose the deployment type, and the appropriate settings from the respective folder will be added to their project in the `/deploy` folder.

To check deployment, create a new project via `npx create-ship-app` and attempt to deploy the app in your environment using the [deployment instructions](/deployment).

# Digital Ocean Apps
Source: https://ship.paralect.com/docs/deployment/digital-ocean-apps

This is a simplified deployment type without Kubernetes. It is **recommended** for most new applications because it allows you to set up infrastructure faster and doesn't require additional DevOps knowledge from the development team. You can switch to a more complex Kubernetes solution when your application reaches scale.

Explore our method of deploying Ship to DigitalOcean Apps using [Infrastructure as Code](https://www.pulumi.com/what-is/what-is-infrastructure-as-code/). For a detailed guide, check out our [documentation on this approach](/deployment/digital-ocean-apps-iac).

This is a step-by-step Ship deployment guide. We will use [Digital Ocean Apps](https://www.digitalocean.com/products/app-platform) and [GitHub Actions](https://github.com/features/actions) for automated deployment, [Mongo Atlas](https://www.mongodb.com/) and [Redis Cloud](https://redis.com/try-free/) for databases deployment, and [Cloudflare](https://www.cloudflare.com/) for DNS and SSL configuration.

You need to create [GitHub](https://github.com/), [Digital Ocean](https://www.digitalocean.com/), [CloudFlare](https://www.cloudflare.com/), [MongoDB Atlas](https://www.mongodb.com/cloud/atlas/register) and [Redis Cloud](https://redis.com/try-free/) accounts. You also need [git](https://git-scm.com/) and [Node.js](https://nodejs.org/en/) if you don't already have them.

[Migrator](/migrator) and [Scheduler](/scheduler) will run in a Docker container for the API, unlike [Kubernetes](https://github.com/docs/deployment/kubernetes/digital-ocean.md), where separate containers are used for them.

## Setup project

First, initialize your project. Type `npx create-ship-app init` in the terminal, then choose the desired build type and **Digital Ocean Apps** as the cloud service provider.

Init project

You will have the following project structure:

```shell theme={null}
/my-app
  /apps
    /web
    /api
  /.github
  ...
```

Create a GitHub private repository and upload the source code.

Private repo

```shell theme={null}
cd my-app
git remote add origin https://github.com/Oigen43/my-app.git
git branch -M main
git push -u origin main
```

## MongoDB Atlas

Navigate to [MongoDB Atlas](https://cloud.mongodb.com/), sign in to your account and create a new database.

### Database creation

1. Select the appropriate type: dedicated for a production environment, shared for staging/demo.
2. Select provider and region. We recommend selecting the same or the closest region to the DO application.
3. Select cluster tier. The free M0 Sandbox should be enough for staging/demo environments.
   For a production environment, we recommend selecting an option that supports cloud backups, M2 or higher.
4. Enter a cluster name.

Mongo cluster

### Security and connection

After cluster creation, you'll need to set up security. Select the authentication type (username and password) and create a user.

Please be aware that the initial character of the generated password should be a letter. If it isn't, you'll need to create a new password. Failing to do this may lead to DigitalOcean parsing the `MONGO_URI` variable incorrectly.

Mongo setup authentication

Add the list of IP addresses that should have access to your cluster. Add the 0.0.0.0/0 IP address to allow anyone with credentials to connect.

Mongo setup ip white list

After database creation, go to the dashboard page and get the URI connection string by pressing the `connect` button.

Mongo dashboard

Select the `Connect your application` option. Choose the driver and mongo version, and copy the connection string. Don't forget to replace `<password>` with your credentials.

Mongo connect dialog

Save this value. It will be needed later when creating the app in Digital Ocean.

Before moving to production, it's crucial to set up [MongoDB backup methods](https://www.mongodb.com/docs/manual/core/backups). This ensures that you can reliably restore your data in the event of unforeseen circumstances.

## Redis Cloud

Navigate to [Redis Cloud](https://redis.com/try-free/) and create an account. Select a cloud provider and region, then press `Let's start free` to finish database creation.

Redis create database

Open the database settings and get the database public endpoint and password.

Redis public endpoint

Redis password

Form the Redis connection string using the public endpoint and password: `redis://:<password>@<public-endpoint>`. Save this value. It will be needed later when creating the app in Digital Ocean.

## Digital Ocean

Navigate to the Digital Ocean Control Panel and select the **Apps** tab.

The `Full-Stack` build type requires 2 applications: the first for [Web](/web/overview), and the second for the [API](/api-reference/overview), [Migrator](https://github.com/docs/migrator.md), and [Scheduler](https://github.com/docs/scheduler.md) services.

### Initial step

1. Select GitHub as a service provider. You might need to grant Digital Ocean access to your GitHub account or organization.
2. Select the repository with the application.
3. Select a branch for deployment.
4. Select the source directory if the code is in a subfolder. It should be `apps/web` for the web application and `apps/api` for the API.
5. Turn off the Autodeploy option. Ship uses GitHub Actions for CI due to the poor support of monorepos in Digital Ocean Apps.

Create app screen

### Resources setup

1. Delete duplicated resources without a dockerfile if you have any.
2. Select the desired plan. For staging/demo environments, a basic plan for $5 is sufficient. For production, you might consider selecting a more expensive plan.

Create app resources

### Environment variables

The `APP_ENV` environment variable is typically set based on the environment in which the application is running. Its value corresponds to the specific environment, such as "development", "staging" or "production". This variable helps the application identify its current environment and load the corresponding configuration.
For the web application, by setting the environment variable `APP_ENV`, the application can determine the environment in which it is running and load the appropriate configuration file:

| APP\_ENV    | File             |
| ----------- | ---------------- |
| development | .env.development |
| staging     | .env.staging     |
| production  | .env.production  |

These files should contain the specific configuration variables required for each environment.

In contrast, the API utilizes a single `.env` file that houses its environment-specific configuration. This file typically contains variables like API keys, secrets, or other sensitive information. To ensure security, it's crucial to add the `.env` file to the `.gitignore` file, preventing it from being tracked and committed to the repository. So just specify the environment variables that will contain the values of your secrets. For example, if you have a secret named `API_KEY`, create an environment variable named `API_KEY` and set the value of the corresponding secret for it.

Variables added in the `Global` section will be available to all resources within the application, while ones added in the `ship` section will be available only for that resource. Adding `MONGO_CONNECTION` in the global section allows you to use it later if you decide to set up migrator/scheduler resources.

Create app environment variables

### Application name and hosted region

* \[**Optional**] Select the desired application name and/or region for your application.

Create app host

### Review

Verify everything is correct and create a new resource. After the application creation, you'll land on the application dashboard page. On the dashboard, you can see the application status, check runtime logs, track deployment status, and manage application settings.

### App Spec

Digital Ocean sets the path to Dockerfiles to the root by default. You will need to change it manually. Navigate to Settings, expand the App spec tab, and change `dockerfile_path` in the editor. To deploy your application in a monorepo, it's essential to modify the `source_dir` parameter to the root directory. This adjustment is necessary to ensure the correct configuration and operation of the applications within the monorepo.

Create app review

## Cloudflare

Before this step, you need a registered domain name. If you don't have one yet, see: [Register a new domain](https://developers.cloudflare.com/registrar/get-started/register-domain).

Navigate to your Digital Ocean application and open the `Settings` tab. Select the `Domains` row to open the domain settings and click the `Add domain` button.

Digital Ocean domains

Type your desired domain and select the option `You manage your domain`.

In the bottom section you'll be asked to copy the CNAME alias of your Digital Ocean application name to a record in your DNS provider. Copy that alias and leave the form (do not close or submit it).

Digital Ocean new domain

Navigate to [CloudFlare](https://dash.cloudflare.com/) and sign in to your account.

1. Go to the `DNS` tab and create a new record.
2. Click `Add record`. Select type `CNAME`, enter the domain name (it must be the same one you entered in the Digital Ocean settings) and paste the alias into the `target` field. Make sure the `Proxy status` toggle is enabled.
3. Save changes.

Cloudflare DNS

Now go back to Digital Ocean and submit the form. It usually takes about 5 minutes for Digital Ocean to confirm and start using your new domain. Once the domain is confirmed, the application can be accessed at the new address.
## GitHub Actions

You can find two GitHub actions in the `.github/workflows` folder, responsible for triggering deployment when you push changes to your repository. If you chose frontend or backend at the initialization step, you'll have one GitHub workflow for the selected application type.

These actions require a Digital Ocean access token and application ID. Respectively, these are `DO_ACCESS_TOKEN` and `DO_API_STAGING_APP_ID`/`DO_WEB_STAGING_APP_ID`/`DO_API_PRODUCTION_APP_ID`/`DO_WEB_PRODUCTION_APP_ID`.

Navigate to Digital Ocean and open the **API** tab on the left sidebar. Click **Generate new token**, select a name and set the expiration period. Also, pick both **read** and **write** permissions for the scope.

Do access token create

You'll see the generated token in the list. Do not forget to copy the value and store it in a safe place. You won't be able to copy the value after leaving the page.

Do access token copy

Next, navigate to the **Apps** tab in the left sidebar and open your Digital Ocean application. You can find your application's id in the browser address bar.

Do application id

Now you can add these keys to your GitHub repository's secrets. Navigate to the GitHub repository page, open the **Settings** tab and add these values. You have to be a repository **admin** or **owner** to open this tab.

Github secrets

Done! The application is deployed and can be accessed at the provided domain.

Deployed application

## Set up migrator and scheduler (Optional)

Digital Ocean Apps allows configuring additional resources within one application, which can serve as background workers and jobs, and a scheduler to run before/after the deployment process.

Navigate to your Digital Ocean application. **Make sure to select the application with the API server**, open the `Create` dropdown menu in the top-right corner, and select the `Create Resources From Source Code` option.

Do create resource

1. Select a project repository, add a path to the source directory, disable auto-deploy, and press `Next`.

Create resource screen

2. Delete the resource without a Dockerfile and edit the second one by clicking on the pencil icon.

Create app resources

3. In the edit resource form, select `Resource Type` - `Job`, `Before every deploy`, and change the name of the resource (not required, but might be useful later). Press save and go back to the resources screen.

Edit resource screen

4. Select the `Add Additional Resource from Source` option below the list of added resources, repeat steps 1-2, and navigate to the edit form for the new resource.

5. Select `Resource Type` - `Worker`, save changes and go back.

Edit resource screen

6. Proceed with the next steps, add environment variables if needed, verify the new monthly cost of the application, and create the resources. You can find the created resources in the `overview` tab.

Resources overview screen

7. Navigate to the Application Spec `(settings tab)`. Change the `dockerfile_path` variable to the files with the migrator and scheduler. The migrator is placed in the `jobs` section. You can also find it by the name of the resource. The scheduler is placed in the `workers` section. To deploy your application in a monorepo, it's essential to modify the `source_dir` parameter to the root directory. This adjustment is necessary to ensure the correct configuration and operation of the applications within the monorepo.

Migrator spec screen

Scheduler spec screen

## Logging (optional)

### Built-in

Digital Ocean has built-in logs in raw format. It gathers all the data your apps produce.
To view them, follow these steps:

1. Log in to your Digital Ocean account.
2. Click on the Apps tab in the left-hand navigation menu.
3. Click on the name of the app you want to view the logs for.
4. Click on the Runtime Logs tab in the app dashboard.
5. You will see a list of logs for different components of your app. Click on the component you want to view the logs for.
6. You can filter the logs by time, severity, and component. Use the drop-down menus provided to select your filter criteria.
7. You can also search for specific keywords in the logs by using the search bar at the top of the page.

Runtime built in logs screen

### Integrations

Currently, Digital Ocean Apps supports only 3 integrations: [PaperTrail](https://marketplace.digitalocean.com/add-ons/papertrail), [Logtail](https://marketplace.digitalocean.com/add-ons/logtail) and [Datadog](https://www.datadoghq.com/). You can find detailed instructions on how to set up these logs at this [link](https://docs.digitalocean.com/products/app-platform/how-to/forward-logs/).

### Example Integration: Logtail

To configure Logtail, follow these steps:

1. Create an account on Logtail.
2. Open Sources on the left sidebar.
3. Create a new source by clicking the "Connect source" button.

Logs Integrations logtail sources

4. Select the HTTP source and specify a name for this connection.

Logs Integrations Logtail connect

5. Copy the source token.

Logs Integrations Logtail token

6. Open Digital Ocean Apps.
7. Select the Settings tab for your application.

Logs Integrations Settings

8. Select Log Forwarding and then press "Add Destination".

Logs Forwarding

9. Fill in the information with the token that we retrieved from Logtail.

Logs Create Log Forward

10. That's it! In a couple of minutes your app will send the latest logs to Logtail.

Logs Logtail Final View

# Digital Ocean Apps (IaC)
Source: https://ship.paralect.com/docs/deployment/digital-ocean-apps-iac

This is a simplified deployment type without Kubernetes. It is **recommended** for most new applications because it allows you to set up infrastructure faster and doesn't require additional DevOps knowledge from the development team. You can switch to a more complex Kubernetes solution when your application reaches scale.

This is a step-by-step Ship deployment guide. We will use [Digital Ocean Apps](https://www.digitalocean.com/products/app-platform) and [GitHub Actions](https://github.com/features/actions) for automated deployment, [Mongo Atlas](https://www.mongodb.com/) and [Redis Cloud](https://redis.com/try-free/) for databases deployment, [Cloudflare](https://www.cloudflare.com/) for DNS and SSL configuration, and [Pulumi](https://www.pulumi.com/) for Infrastructure as Code.

You need to create [GitHub](https://github.com/), [Digital Ocean](https://www.digitalocean.com/), [CloudFlare](https://www.cloudflare.com/), [MongoDB Atlas](https://www.mongodb.com/cloud/atlas/register) and [Redis Cloud](https://redis.com/try-free/) accounts. You also need [git](https://git-scm.com/) and [Node.js](https://nodejs.org/en/) if you don't already have them.

## Setup project

First, initialize your project. Type `npx create-ship-app init` in the terminal, then choose the desired build type and **Digital Ocean Apps** as the cloud service provider.

Init project

You will have the following project structure:

```shell theme={null}
/my-app
  /deploy
  /apps
    /web
    /api
  /.github
  ...
```

Create a GitHub private repository and upload the source code.
Private repo ```shell theme={null} cd my-app git remote add origin https://github.com/Oigen43/my-app.git git branch -M main git push -u origin main ``` ## MongoDB Atlas Navigate to [MongoDB Atlas](https://cloud.mongodb.com/), sign in to your account and create a new database. ### Database creation 1. Select the appropriate type: dedicated for a production environment, shared for staging/demo. 2. Select the provider and region. We recommend selecting the same or the closest region to the DO application. 3. Select the cluster tier. The free M0 Sandbox should be enough for staging/demo environments. For the production environment, we recommend selecting an option that supports cloud backups, M10 or higher. 4. Enter the cluster name Mongo cluster ### Security and connection After cluster creation, you'll need to set up security. Select the authentication type (username and password) and create a user. Mongo setup authentication Add the list of IP addresses that should have access to your cluster. Add the 0.0.0.0/0 IP address to allow anyone with credentials to connect. Mongo setup ip white list After database creation, go to the dashboard page and get the URI connection string by pressing the `connect` button. Mongo dashboard Select the `Connect your application` option. Choose the driver and Mongo version, and copy the connection string. Don't forget to replace the `<username>` and `<password>` placeholders with your credentials. Mongo connect dialog Save this value. It will be needed later when creating the app in Digital Ocean. Before moving to production, it's crucial to set up [MongoDB backup methods](https://www.mongodb.com/docs/manual/core/backups). This ensures that you can reliably restore your data in the event of unforeseen circumstances. ## Redis Cloud Navigate to [Redis Cloud](https://redis.com/try-free/) and create an account. Select a cloud provider and region, then press `Let's start free` to finish database creation. Redis create database Open the database settings and get the database public endpoint and password. Redis public endpoint Redis password Form the Redis connection string using the public endpoint and password: `redis://:<password>@<public endpoint>`. Save this value. It will be needed later when creating the app in Digital Ocean. ## Environment variables The `APP_ENV` environment variable is typically set based on the environment in which the application is running. Its value corresponds to the specific environment, such as "development", "staging" or "production". This variable helps the application identify its current environment and load the corresponding configuration. For the web application, by setting the environment variable `APP_ENV`, the application can determine the environment in which it is running and load the appropriate configuration file: | APP\_ENV | File | | ----------- | ---------------- | | development | .env.development | | staging | .env.staging | | production | .env.production | These files should contain the specific configuration variables required for each environment. In contrast, the API utilizes a single `.env` file that houses its environment-specific configuration. This file typically contains variables like API keys, secrets, or other sensitive information. To ensure security, it's crucial to add the `.env` file to the `.gitignore` file, preventing it from being tracked and committed to the repository. So just specify the environment variables that will contain the values of your secrets. For example, if you have a secret named `API_KEY`, create an environment variable named `API_KEY` and set the value of the corresponding secret for it.
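For illustration, a minimal API `.env` might look like this (a sketch only: every name and value below is a placeholder, and `MONGO_URI`/`REDIS_URI` stand for the connection strings saved in the previous steps):

```shell theme={null}
# Example .env for the API. Do not commit this file; it belongs in .gitignore.
APP_ENV=staging
MONGO_URI=mongodb+srv://<username>:<password>@cluster0.xxxxx.mongodb.net
REDIS_URI=redis://:<password>@<public endpoint>
API_KEY=<your-secret-value>
```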
## Digital Ocean via Pulumi [Pulumi](https://www.pulumi.com/) is an open-source [Infrastructure as Code (IaC)](https://www.pulumi.com/what-is/what-is-infrastructure-as-code/) platform that allows developers to define and provision cloud infrastructure using familiar programming languages.
Instead of using domain-specific languages or YAML templates, Pulumi leverages existing languages like TypeScript, Python, Go, and C#. Go to the `/deploy` directory at the root of your project and proceed to the next steps: Ensure you have [Pulumi CLI](https://www.pulumi.com/docs/get-started/install/) installed.
After installing, verify everything is in working order by running the pulumi CLI: ```shell theme={null} pulumi version ```
[Log in to Pulumi](https://www.pulumi.com/docs/cli/commands/pulumi_login/) to manage your stacks: ```shell theme={null} pulumi login --local ``` Create your [Personal Access Token](https://docs.digitalocean.com/reference/api/create-personal-access-token/) and [Access Keys](https://docs.digitalocean.com/products/spaces/how-to/manage-access/#access-keys) for DigitalOcean. Add the DigitalOcean Personal Access Token and Access Keys to your configuration file: `.zshrc` or `.bashrc`. First, open the `.zshrc` or `.bashrc` file in editing mode: ```shell .zshrc theme={null} vi ~/.zshrc ``` ```shell .bashrc theme={null} vi ~/.bashrc ``` Insert the following variables at the end of the configuration file: ```shell theme={null} # DigitalOcean start export DIGITALOCEAN_TOKEN=dop_v1_... export SPACES_ACCESS_KEY_ID=DO... export SPACES_SECRET_ACCESS_KEY=... # DigitalOcean end ``` To reflect the changes in the shell, either exit and launch the terminal again, or use the command: ```shell .zshrc theme={null} source ~/.zshrc ``` ```shell .bashrc theme={null} source ~/.bashrc ``` Grant DigitalOcean access to your GitHub repository using [this link](https://cloud.digitalocean.com/apps/github/install). Initialize your stack using the command: ```shell theme={null} pulumi stack init organization/{project-name}/{environment} ``` Substitute `{project-name}` with the actual name of your project and make sure to update it in the `Pulumi.yaml` file.
Replace `{environment}` with the desired environment: `staging` or `production` values are allowed.
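For example, assuming a project named `my-app` (an illustrative name), initializing a staging stack looks like this:

```shell theme={null}
# Creates a new Pulumi stack for the staging environment of my-app.
pulumi stack init organization/my-app/staging
```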
Duplicate the `.env.example` file to create a new environment-specific file using the command: ```shell staging theme={null} cp .env.example .env.staging ``` ```shell production theme={null} cp .env.example .env.production ``` Populate the new file with the required environment variables. Ensure that you set the necessary variables in your web application: edit the `.env` files accordingly and remember to push these changes to your remote repository. Install the required dependencies using the command: ```shell theme={null} npm i ``` To create the resources in the initialized stack, execute the command: ```shell theme={null} pulumi up ```
Finally, you will observe the following output: Pulumi Preview Review the planned resource creation and proceed with the resource update. This process may take a few minutes to complete. ## Cloudflare Navigate to your Digital Ocean application and open the `Settings` tab. Navigate to the `Domains` row to open the domain settings and copy the starter domain of the application. Digital Ocean domains Navigate to [CloudFlare](https://dash.cloudflare.com/) and sign in to your account. 1. Go to the `DNS` tab and create a new record. 2. Click `Add record`. Select type `CNAME`, enter the domain name (it must be the same one you entered in the Digital Ocean settings) and paste the alias into the `target` field. Make sure the `Proxy status` toggle is enabled. 3. Save the changes Cloudflare DNS Now go back to Digital Ocean and submit the form. It usually takes about 5 minutes for Digital Ocean to confirm and start using your new domain. Once the domain is confirmed, the application can be accessed at the new address. ## GitHub Actions You can find two GitHub Actions workflows in the `.github/workflows` folder, responsible for triggering deployment when you push changes to your repository. If you chose frontend or backend at the initialization step, you'll have one GitHub workflow for the selected application type. These actions require a [Digital Ocean Personal Access Token](https://docs.digitalocean.com/reference/api/create-personal-access-token/) and application ID. Respectively, these are `DO_ACCESS_TOKEN` and `DO_API_STAGING_APP_ID`/`DO_WEB_STAGING_APP_ID`/`DO_API_PRODUCTION_APP_ID`/`DO_WEB_PRODUCTION_APP_ID`. Next, navigate to the **Apps** tab in the left sidebar and open your Digital Ocean application. You can find your application ID in the browser address bar. Do application id Now you can add these keys to your GitHub repository's secrets. Navigate to the GitHub repository page, open the **Settings** tab, and add these values. You have to be a repository **admin** or **owner** to open this tab. Github secrets Done! The application is deployed and can be accessed via the provided domain. Deployed application ## Logging (optional) ### Built-in Digital Ocean has built-in logs in raw format. It gathers all data that your apps produce. In order to view them, follow these steps: 1. Log in to your Digital Ocean account. 2. Click on the Apps tab in the left-hand navigation menu. 3. Click on the name of the app you want to view the logs for. 4. Click on the Runtime Logs tab in the app dashboard. 5. You will see a list of logs for different components of your app. Click on the component you want to view the logs for. 6. You can filter the logs by time, severity, and component. Use the drop-down menus provided to select your filter criteria. 7. You can also search for specific keywords in the logs by using the search bar at the top of the page. Runtime built in logs screen ### Integrations Currently, Digital Ocean Apps supports only 3 integrations: [PaperTrail](https://marketplace.digitalocean.com/add-ons/papertrail), [Logtail](https://marketplace.digitalocean.com/add-ons/logtail) and [Datadog](https://www.datadoghq.com/). You can find detailed instructions on how to set up these logs at this [link](https://docs.digitalocean.com/products/app-platform/how-to/forward-logs/). ### Example Integration Logtail To configure Logtail, follow these steps: 1. Create an account on Logtail 2. Open Sources on the left sidebar. 3. Create a new source by clicking the "Connect source" button Logs Integrations logtail sources 4.
Select the HTTP source and specify a name for this connection Logs Integrations Logtail connect 5. Copy the source token Logs Integrations Logtail token 6. Open Digital Ocean Apps 7. Select the Settings tab for your application Logs Integrations Settings 8. Select Log Forwarding and then press "Add Destination" Logs Forwarding 9. Fill in the information with the token that we retrieved from Logtail Logs Create Log Forward 10. That's it! In a couple of minutes your app will send the latest logs to Logtail Logs Logtail Final View # AWS Source: https://ship.paralect.com/docs/deployment/kubernetes/aws export const provider_1 = "aws" export const provider_0 = "aws" It's a step-by-step Ship deployment guide. We will use [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks), [Mongo Atlas](https://www.mongodb.com/), [Amazon Elastic Container Registry (ECR)](https://aws.amazon.com/ecr), [GitHub Actions](https://github.com/features/actions) for automated deployment, and [CloudFlare](https://www.cloudflare.com) for DNS and SSL configuration. You need to create [GitHub](https://github.com), [AWS](https://aws.amazon.com), [MongoDB Atlas](https://www.mongodb.com/cloud/atlas/register) and [CloudFlare](https://www.cloudflare.com/) accounts and install the following tools on your machine before starting: * [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) - CLI tool for accessing Kubernetes cluster; * [kubectx](https://github.com/ahmetb/kubectx) - CLI tool for easier switching between Kubernetes contexts; * [helm](https://helm.sh/docs/intro/install) - CLI tool for managing Kubernetes deployments; * [aws-cli](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) - CLI tool for managing AWS resources; * [eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) - CLI tool for managing EKS clusters; * [k8sec](https://github.com/dtan4/k8sec) - CLI tool for managing Kubernetes Secrets easily; Download the k8sec tar.gz, then run: ``` chmod +x k8sec ``` ``` sudo cp k8sec /usr/local/bin/ ``` ``` k8sec --help ``` Try the following commands to ensure that everything is installed correctly: ``` kubectl kubectx helm aws sts get-caller-identity eksctl k8sec ``` Also, you need [git](https://git-scm.com/) and [Node.js](https://nodejs.org/en/) if you don't have them already. ## Setup project First, initialize your project. Type `npx create-ship-app@latest` in the terminal, then choose the **AWS EKS** deployment type. Init project You will have the following project structure. ```shell theme={null} /my-ship-app /.github /apps /api /web /deploy ... ``` ## AWS Regions AWS Regions are physical locations of AWS cluster data centers. Each group of logical data centers is called an Availability Zone (AZ). AZs allow production applications and databases to be more highly available, fault-tolerant and scalable. Now you need to select an AWS region for future use of the services. You can read more about region selection for your workloads here: [What to Consider when Selecting a Region for your Workloads](https://aws.amazon.com/blogs/architecture/what-to-consider-when-selecting-a-region-for-your-workloads/). For this deployment guide, we will use the **us-east-1** region. Usually, you have to create AWS resources in a single region. If you don't see the created resources, you may need to switch to the appropriate AWS region. ## Container registry You need to create [private repositories](https://console.aws.amazon.com/ecr/private-registry/repositories/create) for storing Docker images.
The deployment script will upload images to the Container Registry during the build step, and Kubernetes will automatically pull these images from the Container Registry to run a new version of the service during the deployment step. Now we should create a repository for each service. For Ship, we need to create repositories for the following services: * [**API**](/api-reference/overview) - api * [**Migrator**](/migrator) - migrator * [**Scheduler**](/scheduler) - scheduler * [**Web**](/web/overview) - web Container Registry creation You should create a private repository for each service manually. After creation, you should have the following 4 services in ECR. Container Registry creation Docker images for each service are stored in a separate repository. During the deployment process, the script will automatically create paths to repositories in the following format: * [**API**](/api-reference/overview) - 276472736030.dkr.ecr.us-east-1.amazonaws.com/api; * [**Migrator**](/migrator) - 276472736030.dkr.ecr.us-east-1.amazonaws.com/migrator; * [**Scheduler**](/scheduler) - 276472736030.dkr.ecr.us-east-1.amazonaws.com/scheduler; * [**Web**](/web/overview) - 276472736030.dkr.ecr.us-east-1.amazonaws.com/web; The repository name `276472736030.dkr.ecr.us-east-1.amazonaws.com/api` consists of 5 values: * `276472736030` - AWS account ID; * `us-east-1` - AWS region; * `dkr.ecr` - AWS service; * `amazonaws.com` - AWS domain; * `api` - service name. Images for all environments will be uploaded to the same repository for each service. ## Kubernetes Cluster Now let's [create an EKS cluster](https://console.aws.amazon.com/eks/cluster-create). Navigate to the cluster creation page and choose `Custom configuration`
Make sure to disable `EKS Auto Mode` Cluster configuration
Enter a name for your cluster. It's recommended to use your project name. Cluster naming For multi-environment setups, append the environment name to your cluster: * `my-ship-app-staging` * `my-ship-app-production` For the `Cluster IAM role`: 1. Click the `Create recommended role` button 2. AWS will automatically create IAM roles with necessary EKS cluster permissions 3. Return to cluster creation page and select the created policy Cluster IAM role In the `Cluster access` section: * Set `Cluster authentication mode` to `EKS API and ConfigMap` Cluster access Navigate to 'Select add-ons' and verify these required add-ons are selected: * CoreDNS * kube-proxy * Amazon VPC CNI * Node monitoring agent Cluster addons Move to the review section and verify all configuration parameters are correct before creating the cluster. Default values for other configuration parameters are suitable unless you have specific requirements.
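If you prefer the terminal over the AWS console, an equivalent cluster can also be created with `eksctl` (a sketch, not the canonical flow of this guide; names and sizes mirror the values used above, and this command also provisions the node group described in the next step, so skip the manual **Add Node Group** step if you use it):

```shell theme={null}
# Creates an EKS cluster together with a managed node group in one command.
eksctl create cluster \
  --name my-ship-app \
  --region us-east-1 \
  --nodegroup-name pool-app \
  --node-type t3.medium \
  --nodes 2
```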
After creation, you need to wait a few minutes until the cluster status becomes **Active**. Cluster Created After cluster creation, you should attach [EC2](https://aws.amazon.com/ec2) instances to the cluster. You can do it by clicking on the **Add Node Group** button on the **Compute** tab. Add Node Group Set the node group name as `pool-app` and select the relevant Node IAM role from the list. If you don't have any IAM roles here, click the `Create recommended role` button. You will be prompted to create properly configured IAM roles with all necessary permissions. Node Group Configuration AWS recommends creating at least 2 nodes of the **t3.medium** instance type for the production environment. Node Group Instance Configuration Default values for other configuration parameters are suitable unless you have specific requirements. ## Accessing a cluster from a local machine Before proceeding, ensure you have [configured the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html). Run the following command to configure cluster access: ```bash theme={null} aws eks update-kubeconfig \ --region us-east-1 \ --name my-ship-app \ --alias my-ship-app ``` Replace `us-east-1` with your cluster's region and `my-ship-app` with your cluster name. Execute `kubectx` in your terminal and select your cluster from the list. ```bash theme={null} kubectx ``` Kubectx selection Check the installed pods by running: ```bash theme={null} kubectl get pods -A ``` You should see a list of system pods in your cluster: List of pods ## Ingress NGINX Controller [ingress-nginx](https://github.com/kubernetes/ingress-nginx) is an Ingress controller for Kubernetes using [NGINX](https://nginx.org) as a reverse proxy and load balancer. Learn more about ingress-nginx functionality in the **[official documentation](https://docs.nginx.com/nginx-ingress-controller/intro/how-nginx-ingress-controller-works/)**. Change to the `deploy/dependencies` directory in your terminal. This step is required **only if** you specified a custom node group name in your EKS cluster. If you did, update the `eks.amazonaws.com/nodegroup` value in `values.yaml.gotmpl`: ```yaml deploy/dependencies/ingress-nginx/values.yaml.gotmpl {5} theme={null} controller: publishService: enabled: true nodeSelector: eks.amazonaws.com/nodegroup: pool-app rbac: create: true defaultBackend: enabled: false ``` Install helm dependencies using helmfile: ```bash theme={null} helmfile deps ``` Preview the changes first: ```bash theme={null} helmfile diff ``` If the preview looks correct, apply the configuration: ```bash theme={null} helmfile apply ``` ## DNS and SSL After deploying ingress-nginx, retrieve the Load Balancer's external hostname: ```bash theme={null} kubectl get svc ingress-nginx-controller -n ingress-nginx -o json | jq -r '.status.loadBalancer.ingress[0].hostname' ``` If you have trouble running the above command, you can alternatively use: ```bash theme={null} kubectl get svc ingress-nginx-controller -n ingress-nginx ``` And copy the value from the `EXTERNAL-IP` column. You can follow this recommended naming pattern for different environments: | Environment | API Domain | Web Domain | | ----------- | -------------------- | -------------------- | | Production | api.ship.com | app.ship.com | | Staging | api.staging.ship.com | app.staging.ship.com | 1. First, ensure you have a domain in Cloudflare.
You can either: * [Register a new domain](https://developers.cloudflare.com/registrar/get-started/register-domain/) * [Transfer an existing domain](https://developers.cloudflare.com/registrar/get-started/transfer-domain-to-cloudflare/) 2. In the Cloudflare DNS tab, create 2 `CNAME` records: * One for the Web interface * One for the API endpoint Both should point to your Load Balancer's external hostname. Enable the **Proxied** option to: * Route traffic through Cloudflare * Generate SSL certificates automatically CloudFlare API DNS Configuration
CloudFlare Web DNS Configuration Cloudflare's free Universal SSL certificates only cover the apex domain and one subdomain level. For multiple subdomain levels, you'll need an [Advanced Certificate](https://developers.cloudflare.com/ssl/edge-certificates/advanced-certificate-manager/manage-certificates/).
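Once the records are saved, you can sanity-check them from your terminal with the standard `dig` utility (the domain below is illustrative):

```shell theme={null}
# Resolve the API domain; with the Proxied option enabled, the answer
# will be Cloudflare edge IPs rather than the load balancer hostname itself.
dig +short api.my-ship-app.paralect.com
```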
Update your domain settings in the appropriate environment configuration files: For API service: ```yaml deploy/app/api/production.yaml theme={null} service: api port: 3001 domain: api.my-ship-app.paralect.com ``` ```yaml deploy/app/api/staging.yaml theme={null} service: api port: 3001 domain: api.my-ship-app.staging.paralect.com ``` For Web service: ```yaml deploy/app/web/production.yaml theme={null} service: web port: 3002 domain: my-ship-app.paralect.com ``` ```yaml deploy/app/web/staging.yaml theme={null} service: web port: 3002 domain: my-ship-app.staging.paralect.com ```
## MongoDB Atlas [MongoDB Atlas](https://cloud.mongodb.com/) is a fully managed cloud database service that provides automated backups, scaling, and security features. It offers 99.995% availability with global deployment options and seamless integration with AWS infrastructure. ### Cluster Creation Sign in to your [MongoDB Atlas account](https://cloud.mongodb.com/) and create a new project if needed. Click **Create** to start cluster deployment. **Cluster Tier Selection:** * **Staging**: `M0` (Free tier) - Suitable for development and testing * **Production**: `M10` or higher - Includes automated backups and advanced features {provider_1 === 'aws' && (

Provider & Region:

  • Select AWS as your cloud provider
  • Choose the same region as your EKS cluster for optimal performance
)} {provider_1 === 'do' && (

Provider & Region:

  • Select AWS as your cloud provider
  • Choose the region closest to your Digital Ocean cluster for optimal performance
)} Deploy MongoDB Atlas cluster
Enter a descriptive cluster name (e.g., `ship-production-cluster`, `ship-staging-cluster`)
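If you prefer to script cluster creation, a similar setup can be done with the [Atlas CLI](https://www.mongodb.com/docs/atlas/cli/) (a sketch, assuming the Atlas CLI is installed and authenticated; the cluster name, tier, and region are illustrative):

```shell theme={null}
# Creates an M10 cluster on AWS in us-east-1 through the Atlas CLI.
atlas clusters create ship-production-cluster \
  --provider AWS \
  --region US_EAST_1 \
  --tier M10
```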
### Security Configuration Navigate to **Database Access** → **Add New Database User** * **Authentication Method**: Password * **Username**: Use environment-specific names (e.g., `api-production`, `api-staging`) * **Password**: Generate a strong password * **Database User Privileges**: **Read and write to any database** Add MongoDB database user **Password Requirements**: Ensure the password starts with a letter and contains only alphanumeric characters and common symbols. Special characters at the beginning can cause URI parsing issues. Navigate to **Network Access** → **Add IP Address** * Click **Allow access from anywhere** to allow connections from any IP with valid credentials * For production, consider restricting to specific IP ranges for enhanced security Configure MongoDB network access ### Get Connection String Go to your cluster dashboard and click the **Connect** button. MongoDB Atlas dashboard 1. Select **Drivers** in the "Connect your application" section 2. Choose the **Node.js** driver and latest version 3. Copy the connection string and replace `<password>` with your actual password MongoDB connection string **Example Connection String:** ```bash theme={null} mongodb+srv://api-production:your-password@cluster0.xxxxx.mongodb.net/?retryWrites=true&w=majority ``` Store the connection string securely - you'll need it for environment configuration later Before deploying to production, configure [automated backups](https://www.mongodb.com/docs/atlas/backup-restore-cluster/) in the Atlas console to ensure data recovery capabilities. ## Environment variables Kubernetes applications require proper environment variable configuration for both API and Web components. This section covers how to set up and manage environment variables securely using Kubernetes secrets and configuration files. ### API Environment Variables For the API deployment, you need to set up environment variables using Kubernetes secrets to securely manage sensitive configuration data. **Secrets** in Kubernetes are used to store sensitive information, such as passwords, API tokens, and keys. They are encoded in Base64 format to provide a level of security. These can be mounted into containers as data volumes or used as environment variables. Before deploying the app, make sure all necessary variables from the API config exist. Here is the minimal set of required variables: | Name | Description | Example value | | --------------- | -------------------------- | --------------------------------------------------- | | `APP_ENV` | Application environment | `production` | | `MONGO_URI` | Database connection string | `mongodb://<username>:<password>@ship.mongodb.net` | | `MONGO_DB_NAME` | Database name | `api-production` | | `API_URL` | API domain URL | `https://api.my-ship-app.paralect.com` | | `WEB_URL` | Web app domain URL | `https://my-ship-app.paralect.com` | #### Environment Variable Details Specifies the application environment (development, staging, production). This controls logging levels, debugging features, error reporting, and other environment-specific behaviors. The API uses this to determine which configuration settings to load. MongoDB connection string including authentication credentials and cluster information. This is the primary database connection for the API. Format: `mongodb+srv://username:password@cluster.mongodb.net`. Each environment should use a separate database cluster or at minimum separate credentials. Name of the MongoDB database to use for this environment.
Each environment (development, staging, production) should have its own database to prevent data conflicts and ensure proper isolation. The fully qualified domain name where the API will be accessible. This must be a valid HTTPS URL and should match your Kubernetes ingress configuration. Used for CORS settings and internal service communication. The fully qualified domain name where the web application will be accessible. Used for CORS configuration, redirect URLs, email templates, and social sharing metadata. Must be a valid HTTPS URL. #### Setting up Kubernetes Secrets Create Kubernetes namespaces and secret objects for staging and production environments: ```bash theme={null} kubectl create namespace staging kubectl create secret generic api-staging-secret -n staging kubectl create namespace production kubectl create secret generic api-production-secret -n production ``` First, create an `APP_ENV` variable to initialize secret storage for k8sec: ```bash production theme={null} k8sec set api-production-secret APP_ENV=production -n production ``` ```bash staging theme={null} k8sec set api-staging-secret APP_ENV=staging -n staging ``` Run the following command to check the created secret: ```bash production theme={null} k8sec list api-production-secret -n production ``` ```bash staging theme={null} k8sec list api-staging-secret -n staging ``` Create a `.env.production` file with all required variables: ```bash .env.production theme={null} APP_ENV=production MONGO_URI=mongodb://username:password@ship.mongodb.net MONGO_DB_NAME=api-production API_URL=https://api.my-ship-app.paralect.com WEB_URL=https://my-ship-app.paralect.com ``` ```bash .env.staging theme={null} APP_ENV=staging MONGO_URI=mongodb://username:password@ship.mongodb.net MONGO_DB_NAME=api-staging API_URL=https://api.my-ship-app.staging.paralect.com WEB_URL=https://my-ship-app.staging.paralect.com ``` Replace all example values with your actual configuration. Never use production secrets in documentation or version control. Import secrets from the .env file to Kubernetes secret using k8sec: ```bash production theme={null} k8sec load -f .env.production api-production-secret -n production ``` ```bash staging theme={null} k8sec load -f .env.staging api-staging-secret -n staging ``` After updating environment variables, you must initiate a new deployment for changes to take effect. Kubernetes pods cache variable values during startup, requiring a pod restart or rolling update to apply changes. ### Web Environment Variables The web application uses Next.js environment variables that are embedded at build time and made available in the browser. Unlike API secrets, these variables are stored directly in the GitHub repository. **Why Web Environment Variables Are Safe in Git**: Web environment variables (prefixed with `NEXT_PUBLIC_`) contain only public configuration like URLs and API endpoints. They don't include sensitive data like passwords or API keys, making them safe to store in version control. These values are already exposed to users in the browser, so repository storage doesn't create additional security risks. **Security Notice**: Never store sensitive information (passwords, API keys, secrets) in web environment files as they will be accessible on the client side. Only use public configuration values that are safe to expose to end users. 
#### Configuration Files Web environment variables are stored in separate files for each deployment environment: ```bash apps/web/.env.production theme={null} NEXT_PUBLIC_API_URL=https://api.my-ship-app.paralect.com NEXT_PUBLIC_WS_URL=https://api.my-ship-app.paralect.com NEXT_PUBLIC_WEB_URL=https://my-ship-app.paralect.com ``` ```bash apps/web/.env.staging theme={null} NEXT_PUBLIC_API_URL=https://api.my-ship-app.staging.paralect.com NEXT_PUBLIC_WS_URL=https://api.my-ship-app.staging.paralect.com NEXT_PUBLIC_WEB_URL=https://my-ship-app.staging.paralect.com ``` #### Environment Variables Reference | Variable | Description | Example | | --------------------- | ------------------------------------ | -------------------------------------- | | `NEXT_PUBLIC_API_URL` | Base URL for API requests | `https://api.my-ship-app.paralect.com` | | `NEXT_PUBLIC_WS_URL` | WebSocket server URL for real-time | `https://api.my-ship-app.paralect.com` | | `NEXT_PUBLIC_WEB_URL` | App's own URL for redirects/metadata | `https://my-ship-app.paralect.com` | **Best Practice**: Keep web environment files in your repository and ensure all values are non-sensitive. If you need to reference sensitive data from the frontend, create a secure API endpoint that returns the necessary information after proper authentication. # Setting up GitHub Actions CI/CD ### Creating IAM user in AWS To set up CI/CD with GitHub Actions securely, we need to create a dedicated IAM user in AWS with specific permissions. This separate user will be used exclusively for CI/CD operations, following the principle of least privilege and keeping deployment credentials isolated from other system users. 1. Go to [AWS IAM Policies](https://console.aws.amazon.com/iam/home#/policies) 2. Click **Create policy** 3. Select the **JSON** tab and add the policy: ```json theme={null} { "Version": "2012-10-17", "Statement": [ { "Sid": "ECR", "Effect": "Allow", "Action": [ "ecr:BatchGetImage", "ecr:CompleteLayerUpload", "ecr:GetAuthorizationToken", "ecr:UploadLayerPart", "ecr:InitiateLayerUpload", "ecr:BatchCheckLayerAvailability", "ecr:PutImage" ], "Resource": "*" }, { "Sid": "EKS", "Effect": "Allow", "Action": "eks:DescribeCluster", "Resource": "*" } ] } ``` Policy Configuration 4. (Optional) Add tags 5. Give the policy a name (e.g. `GitHubActionsDeployPolicy`) and create it Policy review create 1. Navigate to **Users** in IAM console 2. Click **Create user** 3. Give the user a name (e.g. `github-actions`) User creating 4. Attach the policy you created by selecting: * **Attach existing policies directly** * Choose the CI/CD policy created in the previous step User policy 5. (Optional) Add user tags 6. Review and create the user 1. Find your new user in the users list and open the user's page 2. Click **Create access key** User create access key 3. Select use case: **Third-party service** User access key use cases 4. Save the Access Key ID and Secret Access Key securely User credentials The Secret Access Key will only be shown once - make sure to save it immediately! 1. Copy your user's ARN from the IAM dashboard User ARN 2. Run the following command to grant Kubernetes access: ```bash theme={null} eksctl create iamidentitymapping \ --cluster my-ship-app \ --group system:masters \ --username github-actions \ --arn YOUR_USER_ARN ``` Replace `YOUR_USER_ARN` with the actual ARN copied earlier.
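You can then verify that the mapping exists (a quick check; replace the cluster name and region if yours differ):

```shell theme={null}
# Lists IAM identity mappings for the cluster; the github-actions
# user should appear in the output.
eksctl get iamidentitymapping --cluster my-ship-app --region us-east-1
```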
These permissions enable CI/CD workflows while following security best practices: * Minimal required permissions for ECR operations * Limited EKS access for cluster management * Dedicated CI/CD user separate from other IAM users ### Configuring GitHub Actions secrets and variables Before starting, make sure you have created a GitHub repository for your project. GitHub Secrets and variables allow you to manage reusable configuration data. Secrets are encrypted and are used for sensitive data. [Learn more about encrypted secrets](https://docs.github.com/actions/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets). Variables are shown as plain text and are used for non-sensitive data. [Learn more about variables](https://docs.github.com/actions/learn-github-actions/variables). The deployment will be triggered on each commit: * Commits to **main** branch → deploy to **staging** environment * Commits to **production** branch → deploy to **production** environment [Configure](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-a-repository) the following secrets and variables in your GitHub repository: | Name | Type | Description | | ------------------------- | -------- | --------------------------------------------------------------------------------------------------------------------------- | | AWS\_SECRET\_ACCESS\_KEY | secret | The secret access key from the AWS IAM user created for CI/CD. This allows GitHub Actions to authenticate with AWS services | | AWS\_ACCESS\_KEY\_ID | variable | The access key ID from the AWS IAM user. Used in conjunction with the secret key for AWS authentication | | AWS\_REGION | variable | The AWS region where your EKS cluster and ECR registry are located (e.g. `us-east-1`) | | CLUSTER\_NODE\_GROUP | variable | The name of the EKS node group where your application pods will be scheduled (e.g. `pool-app`) | | CLUSTER\_NAME\_PRODUCTION | variable | The name of your production EKS cluster. Required when deploying to the production environment | | CLUSTER\_NAME\_STAGING | variable | The name of your staging EKS cluster. Required when deploying to the staging environment | Never commit sensitive credentials directly to your repository.
Always use GitHub Secrets for sensitive information like AWS keys.
Variables (unlike secrets) are visible in logs and can be used for non-sensitive configuration values that may need to be referenced or modified. GitHub Secrets GitHub Variables Now commit all the changes to GitHub; this will trigger the deployment. Alternatively, you can [run a workflow manually](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow) CI start Done! The application is deployed and can be accessed via the provided domain. CI finish Deployment finish Deployed pods If something goes wrong, you can check the workflow logs on GitHub and use the [**kubectl logs**](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-running-pods) and [**kubectl describe**](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#viewing-finding-resources) commands. ## Setting up Upstash Redis database (recommended) # Upstash Redis Integration [Upstash Redis](https://upstash.com/) is a **highly available, infinitely scalable** Redis-compatible database that provides enterprise-grade features without the operational complexity. ## How Ship Uses Redis Ship leverages Redis for several critical functionalities: | Use Case | Description | Implementation | | --------------------------- | ------------------------------------------------- | ------------------------------------------------------------------- | | **Real-time Communication** | Pub/Sub mechanism for WebSocket functionality | [Socket.io Redis Adapter](https://socket.io/docs/v4/redis-adapter/) | | **Rate Limiting** | API request throttling and abuse prevention | Redis counters with TTL | | **Caching** | Application data caching for improved performance | Key-value storage with expiration | **Redis as a Message Broker**: When scaling to multiple server instances, Redis acts as a message broker between Socket.io servers, ensuring real-time messages reach all connected clients regardless of which server they're connected to. ## Setting Up Upstash Redis ### Create Your Database Log in to your [Upstash account](https://console.upstash.com/) and navigate to the Redis section. Click **Create Database** in the upper right corner to open the configuration dialog. Create Upstash Redis Database **Database Name:** Choose a descriptive name for your database (e.g., `my-ship-app-production`) **Primary Region:** Select the region closest to your main application deployment for optimal write performance. **Read Regions:** Choose additional regions where you expect high read traffic for better global performance. Choose your pricing plan based on expected usage and click **Create** to deploy your database. {provider_0 === 'aws' && (

Region Selection: For Kubernetes deployments on AWS, choose the same AWS region as your EKS cluster to minimize latency and data transfer costs.

)} {provider_0 === 'do' && (

Region Selection: For Kubernetes deployments on Digital Ocean, choose the same region as your cluster to minimize latency and data transfer costs.

)}
Once your database is created, you'll need the connection string for your application: Go to your database dashboard and find the **Connect to your database** section. Upstash Redis Connection Details 1. Select the **Node** tab for the appropriate connection string format 2. Click **Reveal** to show the hidden password 3. Copy the complete Redis URI (format: `rediss://username:password@host:port`) Using `k8sec`, add the Redis connection string to your environment configuration: ```bash production theme={null} k8sec set api-production-secret -n production REDIS_URI=$REDIS_URI ``` ```bash staging theme={null} k8sec set api-staging-secret -n staging REDIS_URI=$REDIS_URI ``` After updating environment variables, restart your API pod using: ```bash theme={null} kubectl delete pod <pod-name> -n <namespace> ``` This will trigger Kubernetes to create a new pod with the updated environment variables. ### Verify Connection with Redis Insight Redis Insight is a powerful GUI tool for managing and debugging Redis databases. Download and install [Redis Insight](https://redis.io/insight/) on your local machine. 1. Open Redis Insight 2. Click **Add Database** 3. Paste your Upstash Redis connection string in the **Connection URL** field 4. Click **Add Database** Redis Insight Connection Setup Once connected, you can use Redis Insight to: * Browse keys and data structures * Execute Redis commands directly * Monitor real-time performance metrics * Debug application data storage Upstash Redis Metrics Dashboard **Real-time Monitoring**: Upstash Redis updates database metrics automatically every 10 seconds, giving you near real-time visibility into your Redis performance and usage. # Digital Ocean Source: https://ship.paralect.com/docs/deployment/kubernetes/digital-ocean export const provider_1 = "do" export const provider_0 = "do" It's a step-by-step Ship deployment guide. We will use Digital Ocean Managed [Kubernetes](https://www.digitalocean.com/products/kubernetes), [Container Registry](https://www.digitalocean.com/products/container-registry), [Mongo Atlas](https://www.mongodb.com/), [GitHub Actions](https://github.com/features/actions) for automated deployment, and [CloudFlare](https://www.cloudflare.com/) for DNS and SSL configuration. You need to create [GitHub](https://github.com/), [CloudFlare](https://www.cloudflare.com/), [Digital Ocean](https://www.digitalocean.com/) and [MongoDB Atlas](https://www.mongodb.com/cloud/atlas/register) accounts and install the following tools on your machine before starting: * [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) - CLI tool for accessing Kubernetes cluster; * [kubectx](https://github.com/ahmetb/kubectx) - CLI tool for easier switching between Kubernetes contexts; * [helm](https://helm.sh/docs/intro/install) - CLI tool for managing Kubernetes deployments; * [k8sec](https://github.com/dtan4/k8sec) - CLI tool for managing Kubernetes Secrets easily; Download the k8sec tar.gz, then run: ``` chmod +x k8sec ``` ``` sudo cp k8sec /usr/local/bin/ ``` ``` k8sec --help ``` Try the following commands to ensure that everything is installed correctly: ``` kubectl kubectx helm k8sec ``` Also, you need [git](https://git-scm.com/) and [Node.js](https://nodejs.org/en/) if you don't have them already. ## Setup project First, initialize your project. Type `npx create-ship-app@latest` in the terminal, then choose the **Digital Ocean Managed Kubernetes** deployment type. Init project You will have the following project structure. ```shell theme={null} /my-ship-app /.github /apps /api /web /deploy ...
``` ## Container registry You need to create a [Container Registry](https://www.digitalocean.com/products/container-registry) for storing Docker images. The deployment script will upload images to the Container Registry during the build step, and Kubernetes will automatically pull these images from the Container Registry to run a new version of the service during the deployment step. Name the container registry after the organization, which usually equals the name of the project: `my-ship-app`. Container Registry creation After some time, you will get the registry endpoint. Container Registry creation `registry.digitalocean.com/my-ship-app` is the registry endpoint, where `my-ship-app` is the registry name. Docker images for each service are stored in a separate repository. In Digital Ocean, repositories are created automatically when something is uploaded to a specific path. During the deployment process, the script will automatically create paths to repositories in the following format: * [**API**](/api-reference/overview) - registry.digitalocean.com/my-ship-app/api; * [**Scheduler**](/scheduler) - registry.digitalocean.com/my-ship-app/scheduler; * [**Migrator**](/migrator) - registry.digitalocean.com/my-ship-app/migrator; * [**Web**](/web/overview) - registry.digitalocean.com/my-ship-app/web; Images for all environments will be uploaded to the same repository for each service. ## Kubernetes cluster Now let's create a [Managed Kubernetes](https://www.digitalocean.com/products/kubernetes) cluster. Navigate to the cluster creation page [here](https://cloud.digitalocean.com/kubernetes/clusters/new)
We recommend creating a cluster in the region where your end-users are located; this will reduce response time to incoming requests to all services. Also, if your cluster is located in the same region as the Container Registry, the deployment process will be faster. You can find more information about regions [here](https://docs.digitalocean.com/products/platform/availability-matrix/). Cluster Region
Set the Node pool name (e.g. `pool-app`) and configure the Nodes. Digital Ocean recommends creating at least 2 nodes for the production environment. These settings will have an impact on the price of the cluster. Cluster Capacity Set the cluster name (e.g. `my-ship-app`). A common practice is to use the project name for it. Cluster Name Click the `Create Kubernetes Cluster` button to create a cluster and wait for the cluster to be ready. After the cluster is created, go to the Container Registry's settings and find the `DigitalOcean Kubernetes integration` section. Registry Settings You need to select your newly created `my-ship-app` cluster. Registry Check Cluster
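The same cluster can also be created from the terminal with `doctl` (a sketch, assuming the [doctl](https://docs.digitalocean.com/reference/doctl/) CLI is installed and authenticated; the region and node size are illustrative):

```shell theme={null}
# Creates a DOKS cluster named my-ship-app with a two-node pool called pool-app.
doctl kubernetes cluster create my-ship-app \
  --region nyc3 \
  --node-pool "name=pool-app;size=s-2vcpu-4gb;count=2"
```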
## Personal access token To upload Docker images to the Container Registry and later pull them from the cluster, we need a Digital Ocean [Personal Access Token](https://cloud.digitalocean.com/account/api/tokens). When you created the cluster, a token with the **Read Only** scope was automatically created. But we need to [generate](https://cloud.digitalocean.com/account/api/tokens/new) a new one with: * A name (e.g. `my-ship-app-admin-deploy`) * **Full Access** scope * No expiration You cannot change the scope of an already generated token. Digital Ocean Token We will need this token soon, so don't close this page yet. Digital Ocean Token Be very careful with the Personal Access Token: anyone who obtains it gets access to all resources in your Digital Ocean account. ## Accessing cluster from a local machine Download the cluster's kubeconfig; this file includes information for accessing the cluster through `kubectl`. Kubeconfig Download ```yaml my-ship-app-kubeconfig.yaml theme={null} apiVersion: v1 clusters: - cluster: certificate-authority-data: ... server: https://... name: do-nyc3-my-ship-app contexts: - context: cluster: do-nyc3-my-ship-app user: do-nyc3-my-ship-app-admin name: do-nyc3-my-ship-app current-context: do-nyc3-my-ship-app kind: Config preferences: {} users: - name: do-nyc3-my-ship-app-admin user: token: dop_v1_... ``` Then replace the initial **Read Only** token with the new **Full Access** token from the [Personal access token](#personal-access-token) section. ```yaml my-ship-app-kubeconfig.yaml {18-19} theme={null} apiVersion: v1 clusters: - cluster: certificate-authority-data: ... server: https://... name: do-nyc3-my-ship-app contexts: - context: cluster: do-nyc3-my-ship-app user: do-nyc3-my-ship-app-admin name: do-nyc3-my-ship-app current-context: do-nyc3-my-ship-app kind: Config preferences: {} users: - name: do-nyc3-my-ship-app-admin user: # replace this token for full access token token: dop_v1_... ``` [Kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) files can contain information about several clusters. You have your own on the local machine; it should have been created after `kubectl` installation. You need to add information about the new cluster to your kubeconfig. Find the `.kube/config` file on your machine, and add the `cluster`, `context` and `user` values from `my-ship-app-kubeconfig.yaml`. ```yaml ~/.kube/config {7-11, 17-21, 29-33} theme={null} apiVersion: v1 clusters: - cluster: certificate-authority-data: ... server: https://... name: some-cluster # your new cluster from my-ship-app-kubeconfig.yaml goes here - cluster: certificate-authority-data: ... server: https://... name: do-nyc3-my-ship-app contexts: - context: cluster: some-cluster user: some-user name: some-cluster # your new context from my-ship-app-kubeconfig.yaml goes here - context: cluster: do-nyc3-my-ship-app user: do-nyc3-my-ship-app-admin name: do-nyc3-my-ship-app current-context: some-cluster kind: Config preferences: {} users: - name: some-user user: token: dop_v1_... # your new user from my-ship-app-kubeconfig.yaml goes here - name: do-nyc3-my-ship-app-admin user: token: dop_v1_... ``` Execute `kubectx` in your terminal and select your cluster from the list. ```shell theme={null} kubectx ``` You will see the list of available clusters.
```shell theme={null} some-cluster do-nyc3-my-ship-app ``` Select your cluster from the list: ```shell theme={null} kubectx do-nyc3-my-ship-app ``` Check the installed pods by running: ```shell theme={null} kubectl get pods -A ``` You should see a list of system pods in your cluster: ```shell theme={null} NAMESPACE NAME READY STATUS RESTARTS AGE kube-system cilium-tb8td 1/1 Running 0 18m kube-system cilium-x5w8n 1/1 Running 0 19m kube-system coredns-5679ffb5c8-b7dzj 1/1 Running 0 17m kube-system coredns-5679ffb5c8-d465r 1/1 Running 0 17m kube-system cpc-bridge-proxy-ebpf-2gzfr 1/1 Running 0 17m kube-system cpc-bridge-proxy-ebpf-jknzh 1/1 Running 0 17m kube-system csi-do-node-jcqd2 2/2 Running 0 17m kube-system csi-do-node-rpx6q 2/2 Running 0 17m kube-system do-node-agent-ldhxq 1/1 Running 0 17m kube-system do-node-agent-pdksz 1/1 Running 0 17m kube-system hubble-relay-66f54dcd57-l7xjb 1/1 Running 0 21m kube-system hubble-ui-785bdbc45b-6xd57 2/2 Running 0 18m kube-system konnectivity-agent-h79mt 1/1 Running 0 17m kube-system konnectivity-agent-hvv67 1/1 Running 0 17m ``` ## Ingress NGINX Controller [ingress-nginx](https://github.com/kubernetes/ingress-nginx) is an Ingress controller for Kubernetes using [NGINX](https://nginx.org) as a reverse proxy and load balancer. Learn more about ingress-nginx functionality in the **[official documentation](https://docs.nginx.com/nginx-ingress-controller/intro/how-nginx-ingress-controller-works/)**. Change to the `deploy/dependencies` directory in your terminal. This step is required **only if** you specified a custom node pool name in your Digital Ocean Kubernetes cluster. If you did, update the `doks.digitalocean.com/node-pool` value in `values.yaml.gotmpl`: ```yaml deploy/dependencies/ingress-nginx/values.yaml.gotmpl {5} theme={null} controller: publishService: enabled: true nodeSelector: doks.digitalocean.com/node-pool: pool-app rbac: create: true defaultBackend: enabled: false ``` Install helm dependencies using helmfile: ```bash theme={null} helmfile deps ``` Preview the changes first: ```bash theme={null} helmfile diff ``` If the preview looks correct, apply the configuration: ```bash theme={null} helmfile apply ``` ## DNS and SSL After deploying ingress-nginx, retrieve the Load Balancer's external IP: ```bash theme={null} kubectl get svc ingress-nginx-controller -n ingress-nginx ``` Copy the value from the `EXTERNAL-IP` column. ```shell theme={null} NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.245.201.160 138.68.124.241 80:30186/TCP,443:32656/TCP 28m ``` It takes some time for **ingress-nginx** to configure everything and provide the `EXTERNAL-IP`. You can follow this recommended naming pattern for different environments: | Environment | API Domain | Web Domain | | ----------- | -------------------- | -------------------- | | Production | api.ship.com | app.ship.com | | Staging | api.staging.ship.com | app.staging.ship.com | 1. First, ensure you have a domain in Cloudflare. You can either: * [Register a new domain](https://developers.cloudflare.com/registrar/get-started/register-domain/) * [Transfer an existing domain](https://developers.cloudflare.com/registrar/get-started/transfer-domain-to-cloudflare/) 2. In the Cloudflare DNS tab, create 2 `A` records: * One for the Web interface * One for the API endpoint Both should point to your Load Balancer's external IP address. Enable the **Proxied** option to: * Route traffic through Cloudflare * Generate SSL certificates automatically CloudFlare API DNS Configuration
CloudFlare Web DNS Configuration Cloudflare's free Universal SSL certificates only cover the apex domain and one subdomain level. For multiple subdomain levels, you'll need an [Advanced Certificate](https://developers.cloudflare.com/ssl/edge-certificates/advanced-certificate-manager/manage-certificates/).
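As with the API record, you can sanity-check the DNS from your terminal (the domain below is illustrative):

```shell theme={null}
# Resolve the web domain; with the Proxied option enabled, the answer
# will be Cloudflare edge IPs rather than the load balancer IP itself.
dig +short my-ship-app.paralect.com
```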
Update your domain settings in the appropriate environment configuration files: For API service: ```yaml deploy/app/api/production.yaml theme={null} service: api port: 3001 domain: api.my-ship-app.paralect.com ``` ```yaml deploy/app/api/staging.yaml theme={null} service: api port: 3001 domain: api.my-ship-app.staging.paralect.com ``` For Web service: ```yaml deploy/app/web/production.yaml theme={null} service: web port: 3002 domain: my-ship-app.paralect.com ``` ```yaml deploy/app/web/staging.yaml theme={null} service: web port: 3002 domain: my-ship-app.staging.paralect.com ```
## MongoDB Atlas [MongoDB Atlas](https://cloud.mongodb.com/) is a fully managed cloud database service that provides automated backups, scaling, and security features. It offers 99.995% availability with global deployment options and seamless integration with AWS infrastructure. ### Cluster Creation Sign in to your [MongoDB Atlas account](https://cloud.mongodb.com/) and create a new project if needed. Click **Create** to start cluster deployment. **Cluster Tier Selection:** * **Staging**: `M0` (Free tier) - Suitable for development and testing * **Production**: `M10` or higher - Includes automated backups and advanced features {provider_1 === 'aws' && (

Provider & Region:

  • Select AWS as your cloud provider
  • Choose the same region as your EKS cluster for optimal performance
)} {provider_1 === 'do' && (

Provider & Region:

  • Select AWS as your cloud provider
  • Choose the region closest to your Digital Ocean cluster for optimal performance
)} Deploy MongoDB Atlas cluster
Enter a descriptive cluster name (e.g., `ship-production-cluster`, `ship-staging-cluster`)
### Security Configuration Navigate to **Database Access** → **Add New Database User** * **Authentication Method**: Password * **Username**: Use environment-specific names (e.g., `api-production`, `api-staging`) * **Password**: Generate a strong password * **Database User Privileges**: **Read and write to any database** Add MongoDB database user **Password Requirements**: Ensure the password starts with a letter and contains only alphanumeric characters and common symbols. Special characters at the beginning can cause URI parsing issues. Navigate to **Network Access** → **Add IP Address** * Click **Allow access from anywhere** to allow connections from any IP with valid credentials * For production, consider restricting to specific IP ranges for enhanced security Configure MongoDB network access ### Get Connection String Go to your cluster dashboard and click the **Connect** button. MongoDB Atlas dashboard 1. Select **Drivers** in the "Connect your application" section 2. Choose the **Node.js** driver and latest version 3. Copy the connection string and replace `<password>` with your actual password MongoDB connection string **Example Connection String:** ```bash theme={null} mongodb+srv://api-production:your-password@cluster0.xxxxx.mongodb.net/?retryWrites=true&w=majority ``` Store the connection string securely - you'll need it for environment configuration later Before deploying to production, configure [automated backups](https://www.mongodb.com/docs/atlas/backup-restore-cluster/) in the Atlas console to ensure data recovery capabilities. ## Environment variables Kubernetes applications require proper environment variable configuration for both API and Web components. This section covers how to set up and manage environment variables securely using Kubernetes secrets and configuration files. ### API Environment Variables For the API deployment, you need to set up environment variables using Kubernetes secrets to securely manage sensitive configuration data. **Secrets** in Kubernetes are used to store sensitive information, such as passwords, API tokens, and keys. They are encoded in Base64 format to provide a level of security. These can be mounted into containers as data volumes or used as environment variables. Before deploying the app, make sure all necessary variables from the API config exist. Here is the minimal set of required variables: | Name | Description | Example value | | --------------- | -------------------------- | --------------------------------------------------- | | `APP_ENV` | Application environment | `production` | | `MONGO_URI` | Database connection string | `mongodb://<username>:<password>@ship.mongodb.net` | | `MONGO_DB_NAME` | Database name | `api-production` | | `API_URL` | API domain URL | `https://api.my-ship-app.paralect.com` | | `WEB_URL` | Web app domain URL | `https://my-ship-app.paralect.com` | #### Environment Variable Details Specifies the application environment (development, staging, production). This controls logging levels, debugging features, error reporting, and other environment-specific behaviors. The API uses this to determine which configuration settings to load. MongoDB connection string including authentication credentials and cluster information. This is the primary database connection for the API. Format: `mongodb+srv://username:password@cluster.mongodb.net`. Each environment should use a separate database cluster or at minimum separate credentials. Name of the MongoDB database to use for this environment.
Each environment (development, staging, production) should have its own database to prevent data conflicts and ensure proper isolation. The fully qualified domain name where the API will be accessible. This must be a valid HTTPS URL and should match your Kubernetes ingress configuration. Used for CORS settings and internal service communication. The fully qualified domain name where the web application will be accessible. Used for CORS configuration, redirect URLs, email templates, and social sharing metadata. Must be a valid HTTPS URL. #### Setting up Kubernetes Secrets Create Kubernetes namespaces and secret objects for staging and production environments: ```bash theme={null} kubectl create namespace staging kubectl create secret generic api-staging-secret -n staging kubectl create namespace production kubectl create secret generic api-production-secret -n production ``` First, create an `APP_ENV` variable to initialize secret storage for k8sec: ```bash production theme={null} k8sec set api-production-secret APP_ENV=production -n production ``` ```bash staging theme={null} k8sec set api-staging-secret APP_ENV=staging -n staging ``` Run the following command to check the created secret: ```bash production theme={null} k8sec list api-production-secret -n production ``` ```bash staging theme={null} k8sec list api-staging-secret -n staging ``` Create a `.env.production` file with all required variables: ```bash .env.production theme={null} APP_ENV=production MONGO_URI=mongodb://username:password@ship.mongodb.net MONGO_DB_NAME=api-production API_URL=https://api.my-ship-app.paralect.com WEB_URL=https://my-ship-app.paralect.com ``` ```bash .env.staging theme={null} APP_ENV=staging MONGO_URI=mongodb://username:password@ship.mongodb.net MONGO_DB_NAME=api-staging API_URL=https://api.my-ship-app.staging.paralect.com WEB_URL=https://my-ship-app.staging.paralect.com ``` Replace all example values with your actual configuration. Never use production secrets in documentation or version control. Import secrets from the .env file to Kubernetes secret using k8sec: ```bash production theme={null} k8sec load -f .env.production api-production-secret -n production ``` ```bash staging theme={null} k8sec load -f .env.staging api-staging-secret -n staging ``` After updating environment variables, you must initiate a new deployment for changes to take effect. Kubernetes pods cache variable values during startup, requiring a pod restart or rolling update to apply changes. ### Web Environment Variables The web application uses Next.js environment variables that are embedded at build time and made available in the browser. Unlike API secrets, these variables are stored directly in the GitHub repository. **Why Web Environment Variables Are Safe in Git**: Web environment variables (prefixed with `NEXT_PUBLIC_`) contain only public configuration like URLs and API endpoints. They don't include sensitive data like passwords or API keys, making them safe to store in version control. These values are already exposed to users in the browser, so repository storage doesn't create additional security risks. **Security Notice**: Never store sensitive information (passwords, API keys, secrets) in web environment files as they will be accessible on the client side. Only use public configuration values that are safe to expose to end users. 
#### Configuration Files Web environment variables are stored in separate files for each deployment environment: ```bash apps/web/.env.production theme={null} NEXT_PUBLIC_API_URL=https://api.my-ship-app.paralect.com NEXT_PUBLIC_WS_URL=https://api.my-ship-app.paralect.com NEXT_PUBLIC_WEB_URL=https://my-ship-app.paralect.com ``` ```bash apps/web/.env.staging theme={null} NEXT_PUBLIC_API_URL=https://api.my-ship-app.staging.paralect.com NEXT_PUBLIC_WS_URL=https://api.my-ship-app.staging.paralect.com NEXT_PUBLIC_WEB_URL=https://my-ship-app.staging.paralect.com ``` #### Environment Variables Reference | Variable | Description | Example | | --------------------- | -------------------------------------------- | -------------------------------------- | | `NEXT_PUBLIC_API_URL` | Base URL for API requests | `https://api.my-ship-app.paralect.com` | | `NEXT_PUBLIC_WS_URL` | WebSocket server URL for real-time features | `https://api.my-ship-app.paralect.com` | | `NEXT_PUBLIC_WEB_URL` | App's own URL for redirects/metadata | `https://my-ship-app.paralect.com` | **Best Practice**: Keep web environment files in your repository and ensure all values are non-sensitive. If you need to reference sensitive data from the frontend, create a secure API endpoint that returns the necessary information after proper authentication. ## Setting up GitHub Actions CI/CD To automate deployment through GitHub Actions, you need to configure [GitHub Secrets](https://docs.github.com/en/actions/security-guides/encrypted-secrets) used inside the workflow files. ### Configuring GitHub Actions secrets and variables Before starting, make sure you have created a GitHub repository for your project. GitHub Secrets and variables allow you to manage reusable configuration data. Secrets are encrypted and are used for sensitive data. [Learn more about encrypted secrets](https://docs.github.com/actions/automating-your-workflow-with-github-actions/creating-and-using-encrypted-secrets). Variables are shown as plain text and are used for non-sensitive data. [Learn more about variables](https://docs.github.com/actions/learn-github-actions/variables). The deployment will be triggered on each commit: * Commits to **main** branch → deploy to **staging** environment * Commits to **production** branch → deploy to **production** environment [Configure](https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-a-repository) the following secrets and variables in your GitHub repository: | Name | Type | Description | | --------------------------- | -------- | ----------------------------------------------------------------------------------------------------------------- | | DO\_PERSONAL\_ACCESS\_TOKEN | secret | The personal access token created for CI/CD. It allows GitHub Actions to authenticate with DO services | | CLUSTER\_NAME\_STAGING | variable | Name of the staging cluster. (our case: `my-ship-app`) | | CLUSTER\_NAME\_PRODUCTION | variable | Name of the production cluster. (our case: `my-ship-app`, same as staging cluster since we have only one cluster) | | CLUSTER\_NODE\_POOL | variable | Name of the node pool. (our case: `pool-app`) | | REGISTRY\_NAME | variable | Name of the Digital Ocean Container Registry. (our case: `my-ship-app`) | Never commit sensitive credentials directly to your repository.
Always use GitHub Secrets for sensitive information like DO keys.
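For illustration, a workflow step might consume that token like this (a hypothetical step; the Ship workflows already wire up registry authentication):

```yaml theme={null}
# Hypothetical step: authenticate doctl with the token stored in GitHub Secrets
- name: Log in to DigitalOcean Container Registry
  run: doctl registry login
  env:
    DIGITALOCEAN_ACCESS_TOKEN: ${{ secrets.DO_PERSONAL_ACCESS_TOKEN }}
```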
Variables (unlike secrets) are visible in logs and can be used for non-sensitive configuration values that may need to be referenced or modified. We set up **DO\_PERSONAL\_ACCESS\_TOKEN** to be universal for both production and staging environments with **Full access** scope. Your **KUBE\_CONFIG\_PRODUCTION** and **KUBE\_CONFIG\_STAGING** will be the same if you have only one cluster for both environments. GitHub Secrets GitHub Variables Now commit all changes to GitHub; this will trigger a deployment. Alternatively, you can [run a workflow manually](https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/manually-running-a-workflow) CI start Done! The application is deployed and can be accessed at the provided domain. CI finish Deployment finish ```shell theme={null} kubectl get pods -A NAMESPACE NAME READY STATUS RESTARTS AGE ingress-nginx ingress-nginx-controller-6bdff8c8fd-kwxcn 1/1 Running 0 6h50m kube-system cilium-tb8td 1/1 Running 0 8h kube-system cilium-x5w8n 1/1 Running 0 8h kube-system coredns-5679ffb5c8-b7dzj 1/1 Running 0 8h kube-system coredns-5679ffb5c8-d465r 1/1 Running 0 8h kube-system cpc-bridge-proxy-ebpf-2gzfr 1/1 Running 0 8h kube-system cpc-bridge-proxy-ebpf-jknzh 1/1 Running 0 8h kube-system csi-do-node-jcqd2 2/2 Running 0 8h kube-system csi-do-node-rpx6q 2/2 Running 0 8h kube-system do-node-agent-ldhxq 1/1 Running 0 8h kube-system do-node-agent-pdksz 1/1 Running 0 8h kube-system hubble-relay-66f54dcd57-l7xjb 1/1 Running 0 8h kube-system hubble-ui-785bdbc45b-6xd57 2/2 Running 0 8h kube-system konnectivity-agent-h79mt 1/1 Running 0 8h kube-system konnectivity-agent-hvv67 1/1 Running 0 8h production api-57d7787d98-cj75s 1/1 Running 0 2m15s production migrator-286bq 0/1 Completed 0 2m54s production scheduler-6c497dfbcc-n6b5l 1/1 Running 0 2m6s production web-54c6674974-lv94b 1/1 Running 0 71m redis redis-master-0 1/1 Running 0 6h49m staging api-689b75c786-97c4l 1/1 Running 0 71m staging scheduler-57b984f6c-zcc44 1/1 Running 0 71m staging web-55bdd955b-chswp 1/1 Running 0 70m ``` If something goes wrong, you can check the workflow logs on GitHub and use the [**kubectl logs**](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-running-pods) and [**kubectl describe**](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#viewing-finding-resources) commands. ## Setting up Upstash Redis database (recommended) # Upstash Redis Integration [Upstash Redis](https://upstash.com/) is a **highly available, infinitely scalable** Redis-compatible database that provides enterprise-grade features without the operational complexity. ## How Ship Uses Redis Ship leverages Redis for several critical functionalities: | Use Case | Description | Implementation | | --------------------------- | ------------------------------------------------- | ------------------------------------------------------------------- | | **Real-time Communication** | Pub/Sub mechanism for WebSocket functionality | [Socket.io Redis Adapter](https://socket.io/docs/v4/redis-adapter/) | | **Rate Limiting** | API request throttling and abuse prevention | Redis counters with TTL | | **Caching** | Application data caching for improved performance | Key-value storage with expiration | **Redis as a Message Broker**: When scaling to multiple server instances, Redis acts as a message broker between Socket.io servers, ensuring real-time messages reach all connected clients regardless of which server they're connected to.
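To make the message-broker role concrete, here is a minimal sketch of how a Socket.io server can attach the Redis adapter, assuming the `@socket.io/redis-adapter` and `redis` packages and a `REDIS_URI` variable (Ship's actual wiring lives in the API and may differ):

```typescript theme={null}
import { createServer } from 'node:http';

import { createAdapter } from '@socket.io/redis-adapter';
import { createClient } from 'redis';
import { Server } from 'socket.io';

const httpServer = createServer();

// Two Redis connections: one publishes events, the other subscribes to them
const pubClient = createClient({ url: process.env.REDIS_URI });
const subClient = pubClient.duplicate();

await Promise.all([pubClient.connect(), subClient.connect()]);

// With the adapter attached, an event emitted on one instance reaches
// clients connected to any other instance through Redis pub/sub
const io = new Server(httpServer, { adapter: createAdapter(pubClient, subClient) });

httpServer.listen(3002);
```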
## Setting Up Upstash Redis ### Create Your Database Log in to your [Upstash account](https://console.upstash.com/) and navigate to the Redis section. Click **Create Database** in the upper right corner to open the configuration dialog. Create Upstash Redis Database **Database Name:** Choose a descriptive name for your database (e.g., `my-ship-app-production`) **Primary Region:** Select the region closest to your main application deployment for optimal write performance. **Read Regions:** Choose additional regions where you expect high read traffic for better global performance. Choose your pricing plan based on expected usage and click **Create** to deploy your database. **Region Selection (AWS):** For Kubernetes deployments on AWS, choose the same AWS region as your EKS cluster to minimize latency and data transfer costs. **Region Selection (Digital Ocean):** For Kubernetes deployments on Digital Ocean, choose the same region as your cluster to minimize latency and data transfer costs.
Once your database is created, you'll need the connection string for your application: Go to your database dashboard and find the **Connect to your database** section. Upstash Redis Connection Details 1. Select the **Node** tab for the appropriate connection string format 2. Click **Reveal** to show the hidden password 3. Copy the complete Redis URI (format: `rediss://username:password@host:port`) Using `k8sec`, add the Redis connection string to your environment configuration: ```bash production theme={null} k8sec set api-production-secret -n production REDIS_URI=$REDIS_URI ``` ```bash staging theme={null} k8sec set api-staging-secret -n staging REDIS_URI=$REDIS_URI ``` After updating environment variables, restart your API pod using: ```bash theme={null} kubectl delete pod <pod-name> -n <namespace> ``` This will trigger Kubernetes to create a new pod with the updated environment variables. ### Verify Connection with Redis Insight Redis Insight is a powerful GUI tool for managing and debugging Redis databases. Download and install [Redis Insight](https://redis.io/insight/) on your local machine. 1. Open Redis Insight 2. Click **Add Database** 3. Paste your Upstash Redis connection string in the **Connection URL** field 4. Click **Add Database** Redis Insight Connection Setup Once connected, you can use Redis Insight to: * Browse keys and data structures * Execute Redis commands directly * Monitor real-time performance metrics * Debug application data storage Upstash Redis Metrics Dashboard **Real-time Monitoring**: Upstash Redis updates database metrics automatically every 10 seconds, giving you near real-time visibility into your Redis performance and usage. # Overview Source: https://ship.paralect.com/docs/deployment/kubernetes/overview We use the following primary technologies for deployment: * [Docker](https://www.docker.com/) for delivering applications inside containers; * [Kubernetes](https://kubernetes.io/) for container orchestration; * [Helm](https://helm.sh/) for managing Kubernetes applications; * [GitHub Actions](https://github.com/features/actions) for CI/CD deployment; To use this guide, we highly recommend checking their documentation and being familiar with the basics. The deployed application is a set of services wrapped in Docker containers and run inside a Kubernetes cluster. Ship consists of 4 services by default: [**Web**](/web/overview), [**API**](/api-reference/overview), [**Scheduler**](/scheduler) and [**Migrator**](/migrator). Deployment can be done manually from your local machine or via a CI/CD pipeline. We have templates of deployment scripts for Digital Ocean and AWS, but you can use other cloud providers with minor changes. ## External services you need * A Kubernetes cluster; you can create [DO Managed Kubernetes](https://www.digitalocean.com/products/kubernetes) or [AWS EKS](https://aws.amazon.com/eks/); * A container registry for your Docker images; most cloud providers have their own: [DO Container Registry](https://www.digitalocean.com/products/container-registry), [AWS ECR](https://aws.amazon.com/ecr/); * A DNS provider account. We recommend [CloudFlare](https://www.cloudflare.com/); for AWS you can use [Route 53](https://aws.amazon.com/route53/); * A managed MongoDB. We recommend [MongoDB Atlas](https://www.mongodb.com/atlas/database) or [DO Managed MongoDB](https://www.digitalocean.com/products/managed-databases-mongodb); ## Deployment schema Deployment schema ## Deployment flow The deployment flow is pretty simple.
Once you make changes in any service, the script builds a new Docker image for it, adds an **image tag** to it, and pushes it to the Container Registry. The script passes the image tag to the deployment command, so Kubernetes knows which image needs to be downloaded from the registry. The image tag consists of the repo branch and the commit SHA. ```javascript theme={null} imageTag = `${branch}.${commitSHA}`; ``` Then the script grabs all resource templates ([Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/), [Service](https://kubernetes.io/docs/concepts/services-networking/service/), etc.) from the [templates](https://github.com/paralect/ship/blob/main/deploy/digital-ocean/deploy/app/api/templates) folder for the services that are being deployed, packages them as a [Helm Chart](https://helm.sh/docs/topics/charts/), and creates a [Helm Release](https://helm.sh/docs/intro/using_helm/#three-big-concepts) that installs all those resources in the cluster. During the release, Kubernetes will download the new Docker image from the registry and use it to create a new version of the service in the cluster. We use [**Blue-Green**](https://martinfowler.com/bliki/BlueGreenDeployment.html) deployment through [**Rolling Update**](https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/). This is the main part of the deployment script. ```javascript deploy/script/src/index.js theme={null} const buildAndPushImage = async ({ dockerFilePath, dockerRepo, dockerContextDir, imageTag, environment }) => { await execCommand(`docker build \ --build-arg APP_ENV=${environment} \ -f ${dockerFilePath} \ -t ${dockerRepo} \ ${dockerContextDir}`); await execCommand(`docker tag ${dockerRepo} ${imageTag}`); await execCommand(`docker push ${imageTag}`); } const pushToKubernetes = async ({ imageTag, appName, deployConfig }) => { await execCommand(` helm upgrade --install apps-${config.environment}-${appName} ${deployDir} \ --namespace ${config.namespace} --create-namespace \ --set appname=${appName} \ --set imagesVersion=${imageTag} \ --set nodePool=${config.nodePool} \ --set containerRegistry=${config.dockerRegistry.name} \ -f ${deployDir}/${config.environment}.yaml \ --timeout 35m \ `); } // build web image and push it to registry await buildAndPushImage({ ...deployConfig, imageTag: `${deployConfig.dockerRepo}:${imageTag}`, environment: config.environment }); // deploy web to kubernetes await pushToKubernetes({ imageTag, appName: 'web', deployConfig }); ``` We have 2 separate GitHub Actions [workflows](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions) for services: * Deploy [**Web**](/web/overview) * Deploy [**API**](/api-reference/overview), [**Scheduler**](/scheduler), and [**Migrator**](/migrator). The Migrator deploys **before** the API and Scheduler. If the Migrator fails, the API and Scheduler will not be deployed. This approach guarantees that the API and Scheduler always work with the appropriate database schema. We have [**separate**](https://github.com/paralect/ship/tree/main/deploy/digital-ocean/.github/workflows) GitHub Actions workflows for different environments. ## Environment variables When deploying "Ship" to Kubernetes, it's essential to consider the configuration needs of different environments. By separating the environment-specific settings into dedicated files, you can easily manage and deploy the application across environments. The `APP_ENV` environment variable is typically set based on the environment in which the application is running.
Its value corresponds to the specific environment, such as "development", "staging" or "production". This variable helps the application identify its current environment and load the corresponding configuration. For the web application, by setting the environment variable `APP_ENV`, the application can determine the environment in which it is running and download the appropriate configuration file: | APP\_ENV | File | | ----------- | ---------------- | | development | .env.development | | staging | .env.staging | | production | .env.production | These files should contain specific configuration variables required for each environment. In contrast, the API utilizes a single `.env` file that houses its environment-specific configuration. This file typically contains variables like API keys, secrets, or other sensitive information. To ensure security, it's crucial to add the `.env` file to the `.gitignore` file, preventing it from being tracked and committed to the repository. When deploying to Kubernetes, you'll need to include the appropriate environment-specific configuration files in your deployment manifests. Kubernetes offers [**ConfigMaps**](https://kubernetes.io/docs/concepts/configuration/configmap/) and [**Secrets**](https://kubernetes.io/docs/concepts/configuration/secret/) for managing such configurations. **ConfigMaps** are suitable for non-sensitive data, while **Secrets** are recommended for sensitive information like API keys or database connection strings. Ensure that you create **ConfigMaps** or **Secrets** in your Kubernetes cluster corresponding to the environment-specific files mentioned earlier. ## Database setup We recommend avoiding self-managed database solutions and using a cloud service like [MongoDB Atlas](https://www.mongodb.com/atlas/database) that provides a managed database. It handles many quite complex things: database deployment, backups, scaling, and security. ## SSL To make your application work in modern browsers and be secure, you need to configure SSL certificates. The easiest way is to use [CloudFlare](https://www.cloudflare.com/); it allows you to set up SSL in the most simple way, by proxying all traffic through CloudFlare. Use this [guide](https://developers.cloudflare.com/fundamentals/get-started/setup/add-site/). Cloudflare can be used as DNS nameservers for your DNS registrar, such as [GoDaddy](https://www.godaddy.com/). Also, you can buy and register a domain in Cloudflare itself. If you are deploying in AWS, you can use [AWS Certificate Manager](https://aws.amazon.com/ru/certificate-manager/) for SSL. ## Services Services are parts of your application packaged as Helm Charts. | Service | Description | Kubernetes Resource | | :------------------------------------------------------------------------------------------ | :-------------------------------------------------------------------------------------------------------------------------------------- | :------------------ | | [**Web**](https://github.com/paralect/ship/blob/main/template/apps/web) | Next.js server that serves static files and API endpoints | Pod | | [**API**](https://github.com/paralect/ship/blob/main/template/apps/api) | Backend server | Pod | | [**Scheduler**](https://github.com/paralect/ship/blob/main/template/apps/api/src/scheduler) | Service that runs cron jobs | Pod | | [**Migrator**](https://github.com/paralect/ship/blob/main/template/apps/api/src/migrator) | Service that migrates database schema.
It deploys before the API through a Helm pre-upgrade [hook](https://helm.sh/docs/topics/charts_hooks/) | Job | To deploy services in the cluster manually, you need to set cluster authorization credentials inside the [config](https://github.com/paralect/ship/blob/main/examples/base/deploy/script/src/config.js) and run the deployment [script](https://github.com/paralect/ship/blob/main/examples/base/deploy/script/src/index.js). ```shell deploy/script/src theme={null} node index ? What service to deploy? (Use arrow keys) api web ``` Once you configure GitHub Secrets in your repo, GitHub Actions will automatically deploy your services on every push to the repo. You can check the required secrets inside the [workflow](https://github.com/paralect/ship/tree/main/deploy/digital-ocean/.github/workflows) files. If you are adding a new service, you need to configure it in the [**app**](https://github.com/paralect/ship/blob/main/examples/base/deploy/app) and [**script**](https://github.com/paralect/ship/blob/main/examples/base/deploy/script/src) folders. You can do it following the example from neighboring services. ## Dependencies Dependencies are third-party services packaged as Helm Charts and bash scripts that install configured resources in the cluster. | Dependency | Description | | :----------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | [ingress-nginx](https://github.com/kubernetes/ingress-nginx) | Ingress controller for Kubernetes using Nginx as a reverse proxy and load balancer | | [redis](https://github.com/bitnami/charts/tree/main/bitnami/redis) | Open source, advanced key-value store | | [regcred](https://github.com/paralect/ship/blob/main/deploy/digital-ocean/deploy/dependencies/regcred) | Bash script for creating a Kubernetes [Secret](https://kubernetes.io/docs/concepts/configuration/secret/). The Secret is needed to authorize with the Container Registry when pulling images from the cluster. Required only for Digital Ocean clusters | To deploy dependencies in the cluster, you need to run the [deploy-dependencies.sh](https://github.com/paralect/ship/blob/main/deploy/digital-ocean/deploy/bin/deploy-dependencies.sh) script. ```shell deploy/bin theme={null} bash deploy-dependencies.sh ``` If you are adding a new dependency, you need to create a separate folder inside the [**dependencies**](https://github.com/paralect/ship/blob/main/examples/base/deploy/dependencies) folder and configure a new Chart. Also, you need to add the new dependency to the [**deploy-dependencies.sh**](https://github.com/paralect/ship/blob/main/examples/base/deploy/bin/deploy-dependencies.sh) script. You can do it following the example from neighboring dependencies, or see the hypothetical sketch below.
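For illustration, adding a hypothetical `rabbitmq` dependency would mean creating a `dependencies/rabbitmq` folder with its chart configuration and appending one more install step to `deploy-dependencies.sh` (the chart, names, and values below are examples, not part of Ship):

```shell theme={null}
# appended to deploy/bin/deploy-dependencies.sh (hypothetical entry)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

helm upgrade --install rabbitmq bitnami/rabbitmq \
  --namespace rabbitmq --create-namespace \
  -f ../dependencies/rabbitmq/values.yaml
```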
## Deploy scripts structure | Folder | Description | | :-------------------------------------------------------------------------------------------------- | :---------------------------------------------------------------------- | | [.github](https://github.com/paralect/ship/blob/main/deploy/digital-ocean/.github/workflows) | GitHub Actions CI/CD pipelines for automated deployment on push in repo | | [app](https://github.com/paralect/ship/blob/main/deploy/digital-ocean/deploy/app) | Helm charts for services | | [bin](https://github.com/paralect/ship/blob/main/deploy/digital-ocean/deploy/bin) | Utils scripts | | [dependencies](https://github.com/paralect/ship/blob/main/deploy/digital-ocean/deploy/dependencies) | Helm charts for dependencies | | [script](https://github.com/paralect/ship/blob/main/deploy/digital-ocean/deploy/script) | Deployment script | # Render Source: https://ship.paralect.com/docs/deployment/render There is a simplified type of deployment where you can try to deploy Ship for free using [Render](https://render.com). This type allows you to set up infrastructure faster and doesn't require additional DevOps knowledge from the development team. You can switch to a more complex Kubernetes solution when your application is at scale. It's a step-by-step Ship deployment guide. We will use [Render](https://render.com), [MongoDB Atlas](https://www.mongodb.com/) and [Redis Cloud](https://redis.com/try-free/) for database deployment and [Cloudflare](https://www.cloudflare.com/) **(optional)** for DNS and SSL configuration. You need to create [GitHub](https://github.com/), [Render](https://render.com), [CloudFlare](https://www.cloudflare.com/), [MongoDB Atlas](https://www.mongodb.com/cloud/atlas/register) and [Redis Cloud](https://redis.com/try-free/) accounts. Also, you need [git](https://git-scm.com/) and [Node.js](https://nodejs.org/en/) if you don't have them already. ## Setup project First, initialize your project. Type `npx create-ship-app init` in the terminal, then choose the desired build type and **Render** as a cloud service provider. Init project You will have the following project structure. ```shell theme={null} /my-app /apps /web /api /.github ... ``` Create a private GitHub repository and upload the source code. Private repo ```shell theme={null} cd my-app git remote add origin https://github.com/Oigen43/my-app.git git branch -M main git push -u origin main ``` ## MongoDB Atlas Navigate to [MongoDB Atlas](https://cloud.mongodb.com/), sign in to your account and create a new database. ### Database creation 1. Select the appropriate type. Dedicated for a production environment, shared for staging/demo. 2. Select provider and region. We recommend selecting the same or the closest region to your application. 3. Select cluster tier. Free M0 Sandbox should be enough for staging/demo environments. For the production environment we recommend selecting an option that supports cloud backups, M2 or higher. 4. Enter a cluster name Mongo cluster ### Security and connection After cluster creation, you'll need to set up security. Select the authentication type (username and password) and create a user. Please be aware that the initial character of the generated password should be a letter. If it isn't, you'll need to create a new password. Failing to do this may lead to the `MONGO_URI` variable being parsed incorrectly. Mongo setup authentication Add the list of IP addresses that should have access to your cluster. Add the 0.0.0.0/0 IP address to allow anyone with credentials to connect.
Mongo setup ip white list After database creation, go to the dashboard page and get the URI connection string by pressing the `connect` button. Mongo dashboard Select the `Connect your application` option. Choose the driver and MongoDB version, and copy the connection string. Don't forget to replace `<password>` with your credentials. Mongo connect dialog Save this value. It will be needed later when creating the app in Render. Before moving to production, it's crucial to set up [MongoDB backup methods](https://www.mongodb.com/docs/manual/core/backups). This ensures that you can reliably restore your data in the event of unforeseen circumstances. ## Redis Cloud Navigate to [Redis Cloud](https://redis.com/try-free/) and create an account. Select a cloud provider and region, then press `Let's start free` to finish the database creation. Redis create database Open the database settings and get the database public endpoint and password. Redis public endpoint Redis password Form the Redis connection string using the public endpoint and password: `redis://:<password>@<public-endpoint>`. Save this value. It will be needed later when creating the app in Render. ## Render Navigate to the [Render Dashboard Panel](https://dashboard.render.com/) and select the **Blueprints** tab. The `Full-Stack` build type requires 2 applications. First for [Web](/web/overview) and second for [API](/api-reference/overview), [Migrator (TBD)](https://github.com/docs/migrator.md), and [Scheduler (TBD)](https://github.com/docs/scheduler.md) services. Render Blueprints Tab Ship provides an easy way to deploy your applications using [Infrastructure as Code (IaC)](https://render.com/docs/infrastructure-as-code). You can learn more about the [Blueprint Specification here](https://render.com/docs/blueprint-spec). Review your `render.yaml` file in the application root directory and make some corrections if necessary. Click on the **New Blueprint Instance** button and **connect** the appropriate repository with Ship. Create a Blueprint instance Specify a name for your Blueprint instance, select a branch, and review the changes to apply them. **Apply** changes if you are satisfied with everything. Review Blueprint Creation ### Environment variables The `APP_ENV` environment variable is typically set based on the environment in which the application is running. Its value corresponds to the specific environment, such as "development", "staging" or "production". This variable helps the application identify its current environment and load the corresponding configuration. For the web application, by setting the environment variable `APP_ENV`, the application can determine the environment in which it is running and download the appropriate configuration file: | APP\_ENV | File | | ----------- | ---------------- | | development | .env.development | | staging | .env.staging | | production | .env.production | These files should contain specific configuration variables required for each environment. In contrast, the API utilizes a single `.env` file that houses its environment-specific configuration. This file typically contains variables like API keys, secrets, or other sensitive information. To ensure security, it's crucial to add the `.env` file to the `.gitignore` file, preventing it from being tracked and committed to the repository. So just specify the environment variables that will contain the values of your secrets.
For example, if you have a secret named `API_KEY`, create an environment variable named `API_KEY` and set the value of the corresponding secret for it. Now navigate to **Dashboard**, select your instance of *Web Service* and select the **Environment** tab in the sidebar. Here you need to pass only one variable - `APP_ENV`. Make sure that your web application has up-to-date environment data in the repository. Configuration Web Environments In the same way, specify the necessary environment variables in the *API Service* instance. ## Cloudflare Render provides an initial URL of the form `*.onrender.com` to all deployed services. `onrender.com` is a [public suffix domain](https://publicsuffix.org/) as it’s a shared domain across all Render services - and is done so in order to protect users from being able to read each other’s cookies. Ship uses cookies to store tokens on the front-end side. Therefore, you need a different domain to successfully deploy the application. If you don't have a personal domain, you can use free solutions for educational purposes, such as the [free-domain](https://github.com/Olivr/free-domain) repository. Navigate to your Render application and open the `Settings` tab, scroll down to the `Custom Domains` section and click the `Add Custom Domain` button. Render Custom Domains Type your desired domain and click the `Save` button. After adding a new custom domain, you need to add a `CNAME` record in your DNS provider. Copy this alias for your app and move on. Render new domain Navigate to [CloudFlare](https://dash.cloudflare.com/) and sign in to your account. 1. Go to the `DNS` tab and create a new record. 2. Click `Add record`. Select type `CNAME`, enter the domain name (must be the same you entered in Render settings) and paste the alias into the `target` field. Make sure the `Proxy status` toggle is enabled. 3. Save changes Cloudflare DNS Now go back to the Custom Domains settings and click the `Verify` button. It usually takes about 5 minutes for Render to confirm your domain, issue a certificate, and start using your new domain. Once the domain is confirmed, the application can be accessed at the new address. Now make sure you have up-to-date environments in your API and Web applications. # How it Works Source: https://ship.paralect.com/docs/examples/api-public-docs/how-it-works When you call the `registerDocs` function, the config is added to the Registry. You can register docs in any part of the application. This config is written with this [standard](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.1.0.md#serverObject) in mind. This registry contains all actions gathered inside the API. To retrieve these docs in OpenAPI format, you can call the `docsService.getDocs` function. For advanced usage cases, you can refer to this [documentation](https://github.com/asteasolutions/zod-to-openapi) # Overview Source: https://ship.paralect.com/docs/examples/api-public-docs/overview Keeping your public API docs up to date can be a cumbersome task that requires manually updating files. Our solution eliminates this hassle by allowing you to document the code itself, so you can focus on development without worrying about API updates. The example demonstrates how to add public API docs using the components included in the Ship to build a web application that documents its API with the OpenAPI specification. The example includes documentation and sample code that will help the developer in the process of integrating with the Ship template.
[Example with code](https://github.com/paralect/ship/tree/main/examples/public-docs) # Usage Source: https://ship.paralect.com/docs/examples/api-public-docs/usage Just add `docsService.registerDocs` in your code. For example: ```typescript theme={null} // resources/account/actions/sign-up/doc.ts const config: RouteExtendedConfig = { private: false, tags: [resourceName], method: 'post', path: `/${resourceName}/sign-up`, summary: 'Sign up', request: {}, responses: {}, }; export default config; // resources/account/actions/sign-up/index.ts import docConfig from './doc'; export default (router: AppRouter) => { docsService.registerDocs(docConfig); router.post('/sign-up', validateMiddleware(schema), validator, handler); }; ``` Here we just added the `/account/sign-up` path to the OpenAPI specification. Later on, we will learn how to customize it. To look at the result, you can launch the application and make a call to the API endpoint at the `/docs/json` path, e.g. `http://localhost:3002/docs/json` if you're using a local server. It will return the JSON specification in the [OpenAPI standard](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.1.0.md#serverObject). You can use tools like [Swagger Editor](https://editor-next.swagger.io/) to see it in a pretty editor. In order to add a body, params, or query, you can add a zod schema inside the `request` property. E.g., for the body it will look like this: ```typescript theme={null} // schemas/empty.schema.ts export const EmptySchema = docsService.registerSchema('EmptyObject', z.object({})); // resources/account/actions/sign-up/schema.ts export const schema = z.object({ firstName: z.string().min(1, 'Please enter First name').max(100), lastName: z.string().min(1, 'Please enter Last name').max(100), email: z.string().min(1, 'Please enter email').email('Email format is incorrect.'), password: z.string().regex( PASSWORD_REGEXP, 'The password must contain 6 or more characters with at least one letter (a-z) and one number (0-9).', ), }); // resources/account/actions/sign-up/doc.ts import { resourceName } from 'resources/account/constants'; import { schema } from './schema'; import { EmptySchema } from 'schemas/empty.schema'; const config: RouteExtendedConfig = { private: false, tags: [resourceName], method: 'post', path: `/${resourceName}/sign-up`, summary: 'Sign up', request: { body: { content: { 'application/json': { schema } } }, }, responses: { 200: { description: 'Empty data.', content: { 'application/json': { schema: EmptySchema, }, }, }, }, }; export default config; ``` Here we also added details inside the `responses` property to let the API user know what to expect from there. For that, we used the `docsService.registerSchema` function. For more details, you can look at this [documentation](https://github.com/asteasolutions/zod-to-openapi) For a query, it will look like this: ```typescript theme={null} // resources/account/actions/verify-email/schema.ts export const schema = z.object({ token: z.string().min(1, 'Token is required'), }); // resources/account/actions/verify-email/doc.ts import { resourceName } from 'resources/account/constants'; import { schema } from './schema'; const config: RouteExtendedConfig = { private: false, tags: [resourceName], method: 'get', path: `/${resourceName}/verify-email`, summary: 'Verify email', request: { query: schema, }, responses: { 302: { description: 'Redirect to web app', }, }, }; export default config; ``` Also, there is an option to make the endpoint secure.
Just set the `private` property to `true`, and it will add JWT authorization to this endpoint. If your auth method is not JWT, there is a built-in `cookie-auth` strategy. To use it, set `authType` in the config. ```typescript theme={null} // resources/account/actions/verify-email/doc.ts import { resourceName } from 'resources/account/constants'; import { schema } from './schema'; const config: RouteExtendedConfig = { private: true, authType: 'cookieAuth', tags: [resourceName], method: 'get', path: `/${resourceName}/verify-email`, summary: 'Verify email', request: { query: schema, }, responses: { 302: { description: 'Redirect to web app', }, }, }; export default config; ``` If you need other strategies, please look at the `docs.service.ts` file. There is an option to extend the strategies like this: ```typescript theme={null} registry.registerComponent('securitySchemes', 'customAuth', { type: 'apiKey', in: 'cookie', name: 'JSESSIONID', }); ``` # Overview Source: https://ship.paralect.com/docs/examples/overview It's a draft version of the **Examples** documentation; the articles contain some deprecated details. This section will be refactored and actualized soon ✨. The example section in the Ship documentation provides developers with a set of working examples that demonstrate how to use the various services and components that are included in the Ship. The examples are designed to help developers get up and running with the boilerplate quickly and efficiently! ## List of examples * **[Stripe Subscriptions](/examples/stripe-subscriptions/overview)** * **[Public Docs](/examples/api-public-docs/overview)** # Stripe account Source: https://ship.paralect.com/docs/examples/stripe-subscriptions/account ## Api keys Navigate to the `Developers` tab [link](https://dashboard.stripe.com/test/developers) and select `API keys` in the left sidebar. Here you can find two keys - public and secret. The public key is required by the client-side Stripe library (for example, a payment form to purchase a product or add a card for later use). Ship uses the client-side Stripe library to display the add-card form. The secret key is used to interact with Stripe from the application's server. Copy them and store the public key in the web config and the secret key in the server config. Stripe create product form ## Webhooks Webhooks allow Stripe to communicate with your server by sending plain POST requests on every event that happens on Stripe. When adding a new webhook, you can select which events should be sent to the server. Start creating a new webhook by clicking on the `+ Add endpoint` button. Type the URL of the endpoint responsible for listening for Stripe events (the default Ship URL is `https://<api-url>/webhook/stripe`). Add events that you want to listen to. Ship listens for the following events - * `setup_intent.succeeded` Triggers when a customer adds a new card using the Stripe form on the web application. * `customer.subscription.created` Triggers when a customer's subscription is created * `payment_method.attached` Triggers when a customer adds a new card during the payment process on the Stripe checkout page * `customer.subscription.updated` Triggers when a customer's subscription changes * `customer.subscription.deleted` Triggers when a customer's subscription is deleted. Stripe create webhook form Navigate to the created webhook and reveal the webhook secret.
Copy this value to the API config. Stripe webhook details page For local development, you can use the Stripe CLI to forward events to your localhost - [link](https://stripe.com/docs/stripe-cli) ## Subscription products Navigate to the products tab [link](https://dashboard.stripe.com/test/products) and press the `+ Add product` button in the top right corner. Add a product name and, optionally, a description and image. Next, add price information. To create a basic subscription with recurring payments, select the pricing model `Standard pricing`, add a price and currency, select the `Recurring` option below the price, and select a billing period. Stripe allows you to set up several prices for a product. Ship uses subscriptions with two payment periods - monthly and yearly, therefore the Stripe product should have two prices. Click the `+ Add another price` button at the bottom and fill in the second form. Make sure the prices have `Monthly` and `Yearly` billing periods Stripe create product form Open the created product's detailed view. Here you can find information about prices, logs and events related to this product. The purchase operation requires the price ids of the product. Copy them and store them in the application's config Stripe product details page # Overview Source: https://ship.paralect.com/docs/examples/stripe-subscriptions/api/overview This section contains a list of the Stripe API calls used in the application, with a brief description of each. # Payments Source: https://ship.paralect.com/docs/examples/stripe-subscriptions/api/payments ## POST /payments/create-setup-intent Generates a one-time secret key to allow the web application to communicate with Stripe directly Returns a `clientSecret` token, used by `@stripe/react-stripe-js` components. Stripe call snippet - ```typescript theme={null} const setupIntent = await stripeService.setupIntents.create({ customer: user.stripeId, payment_method_types: ['card'], }); ``` Parameters description - | Parameter | type | Description | | ---------------------- | ------ | --------------------------------------------------------------------------- | | customer | string | id of the stripe customer | | payment\_method\_types | array | The list of payment method types that this SetupIntent is allowed to set up | More information about the stripe `SetupIntent` object - [link](https://stripe.com/docs/api/setup_intents/object) ## GET /payments/payment-information Returns the customer's billing details, balance on the stripe account and card information (last 4 digits, expiration date and brand) Stripe call snippet - ```typescript theme={null} const paymentInformation = await stripeService.customers.retrieve(user.stripeId, { expand: ['invoice_settings.default_payment_method'], }); ``` Parameters description - | Parameter | type | Description | | --------- | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | stripeId | string | id of the stripe customer | | expand | array | By default, stripe returns only the id of the related object (default\_payment\_method in this case). Those objects can be expanded inline with the expand request parameter. | More information about the stripe `Customer` object - [link](https://stripe.com/docs/api/customers/object) ## GET /payments/get-history Returns a list of the customer's charges. The charges are returned in sorted order, with the most recent charges appearing first.
Query params - | Parameter | type | Description | | --------- | ------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | cursorId | string | A cursor for use in pagination. starting\_after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj\_foo, your subsequent call can include starting\_after=obj\_foo in order to fetch the next page of the list. | | direction | string | A direction of pagination. Used to determine which parameter `starting_after` or `ending_before` should be passed to the stripe call | | perPage | string | A limit on the number of objects to be returned | Stripe call snippet - ```typescript theme={null} const charges = await stripeService.charges.list({ limit: perPage, customer: user.stripeId as string, starting_after: cursorId, }); ``` More information about the stripe `Charge` object - [link](https://stripe.com/docs/api/charges/object) More information about stripe pagination - [link](https://stripe.com/docs/api/pagination) # Subscriptions Source: https://ship.paralect.com/docs/examples/stripe-subscriptions/api/subscriptions ## POST /subscriptions/subscribe Generates a subscription for the customer and returns a link to a checkout session where the user can review payment details, provide payment information, and purchase a subscription. Body params - | Parameter | type | Description | | --------- | ------ | -------------------------------- | | priceId | string | Id of the price for subscription | Returns a `checkoutLink` url with the checkout form. Stripe call snippet - ```typescript theme={null} const session = await stripeService.checkout.sessions.create({ mode: 'subscription', customer: user.stripeId, line_items: [{ quantity: 1, price: priceId, }], success_url: `${config.webUrl}?subscriptionPlan=${priceId}`, cancel_url: config.webUrl, }); ``` Parameters description - | Parameter | type | Description | | ---------------------------------------- | --------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- | | mode | string | The mode of the Checkout Session. For subscription it should be set to `subscription` | | customer | string | id of the stripe customer. if `undefined`, stripe will ask for the user's email and create a new customer upon purchasing | | line\_items | array | Array with information about items that will be included in checkout session | |
• quantity<br />• price | • number<br />• string | • Amount of items (usually 1 for subscription)<br />• id of the specific price of the product
| | success\_url | string | The URL to which Stripe should send customers when payment or setup is complete | | cancel\_url | string | The URL the customer will be directed to if they decide to cancel payment and return to your website | More information about the stripe checkout object - [link](https://stripe.com/docs/api/checkout/sessions) ## POST /subscriptions/cancel-subscription Cancels prolongation of a customer’s subscription. The customer will not be charged again for the subscription. Stripe call snippet - ```typescript theme={null} stripeService.subscriptions.update(user.subscription?.subscriptionId as string, { cancel_at_period_end: true, }); ``` Parameters description - | Parameter | type | Description | | ----------------------- | ------- | ------------------------------------------------------------------------------------ | | customer | string | id of the stripe customer | | cancel\_at\_period\_end | boolean | Indicating whether this subscription should cancel at the end of the current period. | More information about subscription update - [link](https://stripe.com/docs/api/subscriptions/update) ## POST /subscriptions/upgrade Changes the customer's subscription plan (billing period or subscription plan) Body params - | Parameter | type | Description | | --------- | ------ | -------------------------------- | | priceId | string | Id of the price for subscription | Stripe call snippet - ```typescript theme={null} const subscriptionDetails = await stripeService.subscriptions.retrieve(subscriptionId); const items = [{ id: subscriptionDetails.items.data[0].id, price: priceId, }]; await stripeService.subscriptions.update(subscriptionId, { proration_behavior: 'always_invoice', cancel_at_period_end: false, items, }); ``` Parameters description - | Parameter | type | Description | | ---------------------------------- | --------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | subscriptionId | string | id of the customer's active subscription | | proration\_behavior | string | Determines how to handle prorations when the billing cycle changes (switching plan in this case). The `always_invoice` value is used to charge the customer immediately | | cancel\_at\_period\_end | boolean | Boolean indicating whether this subscription should cancel at the end of the current period. | | items | array | array with information about subscription items | |
• id<br />• price | • string<br />• string | • id of the customer's active subscription<br />• id of the specific price of the product
| More information about subscription update - [link](https://stripe.com/docs/api/subscriptions/update) ## GET /subscriptions/current Returns detailed subscription information along with the pending invoice Stripe call snippet - ```typescript theme={null} const product = await stripeService.products.retrieve(user.subscription?.productId); ``` | Parameter | type | Description | | --------- | ------ | -------------------------------- | | productId | string | id of the product (subscription) | More information about products - [link](https://stripe.com/docs/api/products) ```typescript theme={null} const pendingInvoice = await stripeService.invoices.retrieveUpcoming({ subscription: user.subscription?.subscriptionId, }); ``` | Parameter | type | Description | | ------------ | ------ | ---------------------------------------- | | subscription | string | id of the customer's active subscription | More information about invoices - [link](https://stripe.com/docs/api/invoices) ## GET /subscriptions/preview-upgrade Returns an invoice with billing information for a subscription upgrade/downgrade Query params - | Parameter | type | Description | | --------- | ------ | -------------------------------- | | priceId | string | Id of the price for subscription | Returns an invoice with payment details Code snippet - ```typescript theme={null} if (priceId === 'price_0') { items = [{ id: subscriptionDetails.items.data[0].id, price_data: { currency: 'USD', product: user.subscription?.productId, recurring: { interval: subscriptionDetails.items.data[0].price.recurring?.interval, interval_count: 1, }, unit_amount: 0, }, }]; } else { items = [{ id: subscriptionDetails.items.data[0].id, price: priceId, }]; } const invoice = await stripeService.invoices.retrieveUpcoming({ customer: user.stripeId || undefined, subscription: user.subscription?.subscriptionId, subscription_items: items, subscription_proration_behavior: 'always_invoice', }); ``` Parameters description - | Parameter | type | Description | | ---------------------------------- | --------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | customer | string | id of the stripe customer | | subscription | string | id of customer's active subscription | | subscription\_proration\_behavior | string | Determines how to handle prorations when the billing cycle changes (e.g., when switching plans, resetting billing\_cycle\_anchor=now, or starting a trial), or if an item’s quantity changes. | | subscription\_items | array | array with information about subscription items | |
• id<br />• price | • string<br />• string | • id of the customer's active subscription<br />• id of the specific price of the product
| In case the customer chooses the free plan, we need to send a custom price object to Stripe with unit\_amount set to 0 in order to receive an invoice with empty products and information about a refund for the canceled subscription Price data parameters - | Parameter | type | Description | | -------------------------------------------------- | --------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- | | currency | string | Three-letter ISO currency code, in lowercase. | | product | string | id of the subscription product | | recurring | object | The recurring components of a price such as interval and interval\_count | |
• interval<br />• interval\_count | • string<br />• number | • Specifies billing frequency. Either day, week, month or year.<br />• The number of intervals between subscription billings
| | unit\_amount | number | A positive integer in cents (or 0 for a free price) representing how much to charge. | More information about previewing upcoming invoices - [link](https://stripe.com/docs/api/invoices/upcoming) # Overview Source: https://ship.paralect.com/docs/examples/stripe-subscriptions/overview The example demonstrates how to use the Stripe API and the components included in the Ship to build a web application that supports subscription-based billing. The example includes documentation and sample code that will help the developer in the process of integrating Stripe subscriptions with the Ship template. [Example with code](https://github.com/paralect/ship/tree/main/examples/stripe-subscriptions) # Git Hooks Source: https://ship.paralect.com/docs/git-hooks Automated code quality checks with Husky and lint-staged Ship uses [Husky](https://typicode.github.io/husky/) and [lint-staged](https://github.com/lint-staged/lint-staged) to automatically validate and auto-fix the entire project before each commit, ensuring consistent code quality across the codebase. ## How It Works When you commit changes, Husky triggers a pre-commit hook that runs lint-staged. The configuration runs linters on the **entire project** to ensure project-wide code quality. The pre-commit hook is automatically installed when you run `pnpm install` via the `prepare` script. **Project-wide Validation**: Ship intentionally runs linters on ALL project files (using `.`), not just staged files. This defensive approach ensures that if someone bypassed hooks with `--no-verify`, the next commit will auto-fix those issues and prevent blocking other team members. ## Configuration ### API ```json TypeScript Files theme={null} "lint-staged": { "*.ts": [ "eslint . --fix", "bash -c 'tsc --noEmit'", "prettier . --write" ] } ``` ```json JSON & Markdown theme={null} "lint-staged": { "*.{json,md}": [ "prettier . --write" ] } ``` When any `.ts` file is staged, runs on **entire project**: 1. **ESLint** - Auto-fixes all `.ts` files (note the `.`) 2. **TypeScript** - Type checks all files 3. **Prettier** - Formats all files (note the `.`) ### Web ```json TypeScript/React Files theme={null} "lint-staged": { "*.{ts,tsx}": [ "eslint . --fix", "bash -c 'tsc --noEmit'", "prettier . --write" ] } ``` ```json CSS Files theme={null} "lint-staged": { "*.css": [ "stylelint . --fix", "prettier . --write" ] } ``` ```json JSON & Markdown theme={null} "lint-staged": { "*.{json,md}": [ "prettier . --write" ] } ``` ### Packages All shared packages (`schemas`, `mailer`, `app-types`, etc.) have similar lint-staged configurations tailored to their file types. ## Customization ### Modify Linters Edit `lint-staged` in your `package.json`: ```json theme={null} "lint-staged": { "*.ts": [ "eslint . --fix", // The '.' runs on entire project "prettier . --write" // The '.' formats all files // Add or remove tools as needed ] } ``` ### Run on Staged Files Only (Alternative) If you prefer to only lint staged files instead of the entire project: ```json theme={null} "lint-staged": { "*.ts": [ "eslint --fix", // No '.' - only staged files "prettier --write" // No '.' - only staged files ] } ``` Linting only staged files means errors in other parts of the codebase (e.g., from `--no-verify` commits) won't be caught or fixed until those files are modified. ### Skip Hooks Temporarily ```bash theme={null} # Bypass pre-commit hook (not recommended) git commit --no-verify -m "message" ``` Only skip hooks when absolutely necessary, as they help maintain code quality. 
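For reference, the pre-commit hook that Husky installs is typically a tiny shell script that delegates to lint-staged. A sketch of what `.husky/pre-commit` may contain (the generated hook in your project can differ):

```bash theme={null}
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

# Runs the "lint-staged" configuration from package.json
npx lint-staged
```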
## Troubleshooting ### Hook Not Running If pre-commit hooks don't run: 1. Ensure Husky is installed: ```bash theme={null} pnpm install ``` 2. Check if `.husky/pre-commit` exists in your project root 3. Verify Git hooks path: ```bash theme={null} git config core.hooksPath ``` ### Linter Errors Blocking Commits If linters fail: 1. Review the error output 2. Fix the issues manually or let auto-fix handle them 3. Stage the fixed files: `git add .` 4. Commit again ## Best Practices * **Never use `--no-verify`** - It bypasses quality checks and can break the build for teammates * **Fix issues early** - Don't accumulate linting errors across the codebase * **Keep configs in sync** - Ensure lint-staged matches your editor settings # GitHub Actions Source: https://ship.paralect.com/docs/github-actions Pre-configured CI/CD workflows for automated building and linting Ship includes GitHub Actions workflows in `.github/workflows/` that run automatically on pull requests to validate builds and enforce code quality. ## Workflows Overview Validate Docker builds for API and Web Run ESLint, TypeScript, and Prettier ## Build Workflows Workflows build Docker images to ensure they compile successfully before merging. They use BuildKit caching to speed up builds. ### Build API **File**: `.github/workflows/build-api.yml` **Triggers**: PRs to `main` with changes in `apps/api/**` or `packages/**` ```yaml Key Configuration theme={null} on: pull_request: branches: [main] paths: - apps/api/** - packages/** jobs: build: steps: - uses: docker/build-push-action@v5 with: file: ./apps/api/Dockerfile push: false cache-from: type=local,src=/tmp/.buildx-cache ``` ### Build Web **File**: `.github/workflows/build-web.yml` **Triggers**: PRs to `main` with changes in `apps/web/**` or `packages/**` ```yaml Key Configuration theme={null} on: pull_request: branches: [main] paths: - apps/web/** - packages/** jobs: build: steps: - uses: docker/build-push-action@v5 with: file: ./apps/web/Dockerfile push: false cache-from: type=local,src=/tmp/.buildx-cache ``` ## Lint Workflows Linting uses a reusable template workflow that runs ESLint, TypeScript, and Prettier checks. ### Linter Template **File**: `.github/workflows/linter.template.yml` Reusable workflow for running linters with pnpm and Node.js. ```yaml Environment theme={null} env: PNPM_VERSION: 9.5.0 NODE_VERSION: 22.13.0 ``` ```yaml Linters theme={null} - name: Run Linters uses: wearerequired/lint-action@v2 with: eslint: true eslint_dir: ${{ inputs.dir }} tsc: true tsc_dir: ${{ inputs.dir }} prettier: true prettier_dir: ${{ inputs.dir }} ``` The [lint-action](https://github.com/wearerequired/lint-action) automatically posts inline comments on PRs for linting issues. ### Lint API & Web **Files**: `.github/workflows/run-api-linter.yml`, `.github/workflows/run-web-linter.yml` **Triggers**: PRs to `main` with changes in respective apps or packages ```yaml API theme={null} jobs: lint: uses: ./.github/workflows/linter.template.yml with: component: api dir: apps/api ``` ```yaml Web theme={null} jobs: lint: uses: ./.github/workflows/linter.template.yml with: component: web dir: apps/web ``` ## Customization ### Update Node.js/pnpm Versions Edit `.github/workflows/linter.template.yml`: ```yaml theme={null} env: PNPM_VERSION: 9.5.0 # Update version NODE_VERSION: 22.13.0 # Update version ``` All workflows support manual triggering via `workflow_dispatch` for testing. 
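For example, enabling manual runs is a matter of adding the `workflow_dispatch` trigger next to the existing `pull_request` trigger (a sketch; the exact triggers in your workflow files may differ):

```yaml theme={null}
on:
  pull_request:
    branches: [main]
    paths:
      - apps/api/**
      - packages/**
  workflow_dispatch: # adds a "Run workflow" button in the Actions tab
```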
# Introduction

Source: https://ship.paralect.com/docs/introduction

[Ship](https://ship.paralect.com/) is a Full-Stack Node.js boilerplate that will help you build and launch products faster. You can focus on getting things done, not building infrastructure.

Ship uses simple tools and approaches and has built-in support for everything from the frontend to CI/CD automation. Ship was first created in 2015, and we have been testing it on real products at [Paralect](https://www.paralect.com/) ever since.

## Key principles

### 😊 Simplicity

We use the simplest solutions in every part of Ship. They are easier to understand, test, and maintain.

### 📈 Product comes first

Our jobs, from engineer to CEO, only exist because there are customers who use the products we create. We encourage developers to focus on the product more than on technology. Get things done as quickly as possible with Ship.

### 🚀 Production ready

You can use Ship to create production-ready products. We prefer to use well-tested technologies.

### 🥷 For developers

Ship is written for developers and is easy to use and understand. We write documentation to explain how things work.

## Getting started

The best way to get started with Ship is to use [Ship CLI](/packages/create-ship-app) to bootstrap your project.

```
npx create-ship-app@latest init
```

This command will create everything you need to develop, launch locally, and deploy your product.

## Next steps

### Launch your project

We use [Turborepo](https://turborepo.org/docs) for managing the monorepo. To run the infra and all services, just run `npm start` 🚀

### Learn key concepts

Learn about Ship [architecture](/architecture) and key components ([Web](/web/overview), [API](/api-reference/overview), [Deployment](/deployment/digital-ocean-apps), [Migrator](/migrator), [Scheduler](/scheduler)).

# LLMs.txt

Source: https://ship.paralect.com/docs/llms

Ship provides LLM-friendly documentation following the [llmstxt.org standard](https://llmstxt.org/), automatically generated by [Mintlify](https://mintlify.com/docs/ai/llmstxt).

## Available Files

### llms.txt

A structured list of all documentation pages with links and descriptions. Similar to how a sitemap helps search engines, this file helps AI tools understand your documentation structure.

* **View**: [/llms.txt](/llms.txt)
* **URL**: `https://ship.paralect.com/docs/llms.txt`

### llms-full.txt

The complete documentation site combined into a single file for full context.

* **View**: [/llms-full.txt](/llms-full.txt)
* **URL**: `https://ship.paralect.com/docs/llms-full.txt`

## Key Differences

| Feature      | llms.txt                      | llms-full.txt               |
| ------------ | ----------------------------- | --------------------------- |
| **Content**  | Page links with descriptions  | Complete documentation text |
| **Size**     | Lightweight                   | Comprehensive               |
| **Best for** | Quick navigation              | Full context                |

**Use `llms.txt`** when AI tools can fetch pages as needed (ChatGPT, Claude).\
**Use `llms-full.txt`** when you need all context upfront (Cursor).

## Usage with AI Tools

### Cursor

Use the `@Docs` feature:

1. Type `@Docs` in your prompt
2. Reference: `https://ship.paralect.com/docs/llms-full.txt`
3. Ask questions about Ship

### ChatGPT & Claude

1. Mention you're using the Ship framework
2. Reference: `https://ship.paralect.com/docs/llms.txt`
3. The AI will fetch the documentation

## Example Prompts

* "Using Ship framework, how do I create a new API endpoint with validation?"
* "Show me how to set up event handlers in Ship"
* "How to deploy Ship to Digital Ocean?"
## Learn More

* [llmstxt.org standard](https://llmstxt.org/)
* [Mintlify llms.txt documentation](https://mintlify.com/docs/ai/llmstxt)

# Mailer

Source: https://ship.paralect.com/docs/mailer

### Overview

The **Mailer** package is a utility designed to facilitate the process of generating email templates. It provides a way to style, template, and generate HTML for sending emails using the [`React Email`](https://react.email/) library.

### Template System

The template system is key to the functionality of the **Mailer** package. Each template corresponds to a different type of email that can be sent. The specific variables available for substitution in a template depend on the template itself.

The `Template` type should be used to reference the available templates. For example, to reference the verify-email template, you would use `Template.VERIFY_EMAIL`.

## Adding new Template

All development takes place in the `packages/mailer` package of the turborepo. The email preview is launched automatically when you start the application with turborepo or Docker, and it is available on port 4000.

1. Create a new template in the `emails` folder.
2. Export your component and props interface from the file.

```typescript my-app/packages/mailer/emails/verify-email.tsx theme={null}
import React, { FC } from 'react';
import { Text } from '@react-email/components';

import Layout from './_layout';
import Button from './components/button';

export interface VerifyEmailProps {
  firstName: string;
  href: string;
}

export const VerifyEmail: FC<VerifyEmailProps> = ({
  firstName = 'John',
  href = 'https://ship.paralect.com',
}) => (
  <Layout>
    <Text>Dear {firstName},</Text>

    <Text>
      Welcome to Ship! We are excited to have you on board. Before we get started, we just need to verify your
      email address. This is to ensure that you have access to all our features and so we can send you important
      account notifications.
    </Text>

    <Text>Please verify your account by clicking the button below:</Text>

    <Button href={href}>Verify email</Button>
  </Layout>
);

export default VerifyEmail;
```

3. Navigate to the `src/template.ts` file, import your component and props interface.
4. Add your template to the `Template` enum, `EmailComponent` object and `TemplateProps` type.

```typescript my-app/packages/mailer/src/template.ts theme={null}
import { ResetPassword, ResetPasswordProps } from 'emails/reset-password';
import { SignUpWelcome, SignUpWelcomeProps } from 'emails/sign-up-welcome';
import { VerifyEmail, VerifyEmailProps } from 'emails/verify-email';

export enum Template {
  RESET_PASSWORD,
  SIGN_UP_WELCOME,
  VERIFY_EMAIL,
}

export const EmailComponent = {
  [Template.RESET_PASSWORD]: ResetPassword,
  [Template.SIGN_UP_WELCOME]: SignUpWelcome,
  [Template.VERIFY_EMAIL]: VerifyEmail,
};

export type TemplateProps = {
  [Template.RESET_PASSWORD]: ResetPasswordProps;
  [Template.SIGN_UP_WELCOME]: SignUpWelcomeProps;
  [Template.VERIFY_EMAIL]: VerifyEmailProps;
};
```

When you change an email template, you need to rebuild the API to see the changes when sending emails.

## Usage in API

### Importing

To use the **Mailer** package within the API, the necessary services and types need to be imported:

```typescript theme={null}
import { Template } from 'types';

import { emailService } from 'services';
```

### Sending an Email

To send an email, the `sendTemplate` function from the `emailService` is used.
This function accepts an object with properties specifying the recipient, subject line, template to be used, and an object with parameters to be filled into the template:

```typescript theme={null}
await emailService.sendTemplate({
  to: user.email,
  subject: 'Please Confirm Your Email Address for Ship',
  template: Template.VERIFY_EMAIL,
  params: {
    firstName: user.firstName,
    href: `${config.API_URL}/account/verify-email?token=${signupToken}`,
  },
});
```

In this example, an email-verification message is dispatched. The `VERIFY_EMAIL` template is used, and the user's first name and the verification URL are passed as parameters to be substituted into the template.

# Migrator

Source: https://ship.paralect.com/docs/migrator

As the application grows, your database schema will also evolve. The problem is pretty common: when you add a feature that changes the database schema (adding, removing, or replacing fields), you have to update already existing documents to the latest schema version. Otherwise, working with old documents becomes impossible: you will run into errors whenever your code relies on an updated field. So you run migrations to resolve the mismatch between code and schema.

Migrator is a service that runs MongoDB migrations, handles versioning, and keeps logs for every migration. It applies changes/migrations to the current database data to match the new schema or new requirements. These changes are stored in the `migrator/migrations` folder.

Any changes to the project's database are very sensitive and must be consistent. Migrations also frequently contain operations that take a long time to apply, which can create downtime for both the database and API services. To reduce these effects, Migrator runs as a separate service.

## How it works in the Ship

Every time Migrator starts, it reads the current successful migration version from the `migrationVersion` collection and tries to apply every migration above this version, in sequence. Every `migration` from `migrator/migrations` is called one by one, and each run is logged to the `migrationLog` collection with either:

1. `completed` status, updating the current version
2. `failed` status, *without* updating the current version

**Warning:** The sequence of migrations stops on a `failed` migration and won't apply versions above it.

## How to add a new migration

1. Add a new `#.ts` file inside the `migrator/migrations` folder, named with the next version number (higher than the current version).
2. Create a new `Migration` with the `#.ts` migration number and a description.

### Example

We already have migration `1.ts`, so let's add another one. We have a collection of `users`, but we need to assign some of them special rights within our app. Let's add a new boolean field `isAdmin` to the user schema.

```typescript user.schema.ts theme={null}
const schema = z.object({
  _id: z.string(),
  createdOn: z.date().optional(),
  updatedOn: z.date().optional(),
  deletedOn: z.date().optional().nullable(),
  role: z.string(),
  isAdmin: z.boolean().default(false),
});
```

After adding this field to the schema, every new user will automatically get `isAdmin` upon creation, set to `true` or `false`. But old users don't have the `isAdmin` field at all. So let's add it.
```typescript migrator/migrations/2.ts theme={null}
// Import paths are illustrative and may differ in your project structure
import { Migration } from 'migrator/migration';
import { userService } from 'resources/user';
import { promiseUtil } from 'utils';

const migration = new Migration(2, 'Add isAdmin field to users');

migration.migrate = async () => {
  const userIds = await userService.distinct('_id', {
    role: 'admin',
  });

  const updateFn = (userId: string) => userService.atomic.updateOne(
    { _id: userId },
    { $set: { isAdmin: true } },
  );

  await promiseUtil.promiseLimit(userIds, 50, updateFn);
};

export default migration;
```

**Warning:** Don't use methods that emit **[events](packages/node-mongo#reactivity)** during the operation. Instead, use atomic operations like **[atomic.updateOne](packages/node-mongo#atomicupdateone)** or **[updateOne](packages/node-mongo#updateconfig)** with `publishEvents: false`.

## Promise Limit

Every migration should use `promiseLimit` when updating collections, to avoid running out of resources mid-operation:

```typescript theme={null}
promiseLimit(documents: unknown[], limit: number, operator: (doc: any) => any)
```

## How it deploys and runs

The main idea behind the Ship's Migrator is to run it **before** **[API](api-reference/overview)** and **[Scheduler](scheduler)** deployment. Therefore, if any migration fails, the API or Scheduler updates will not be applied, and they will always work with an up-to-date schema.

More on **[Kubernetes](deployment/kubernetes/overview#deployment-flow)** and **[DO Apps](deployment/digital-ocean-apps#set-up-migrator-and-scheduleroptional)** deployment.

## How to re-run failed migration

Migrator only runs migrations above the already applied version, so to re-run a failed migration, simply start the migration process again.

For development:

```
npm run migrate-dev
```

For production:

```
npm run migrate
```

## How to check failed migration logs

First, you can check `migrationLog` and find your migration with the status `failed`. It contains the `error` and `errorStack` fields.

For Kubernetes deployment, you can check the log inside its container. Use:

```
kubectl get pods -A
```

to find `migrator_container_name` and `namespace`, and then:

```
kubectl logs -f migrator_container_name -n namespace
```

# Overview

Source: https://ship.paralect.com/docs/package-sharing/overview

Ship is a monorepo, so it lets you share code across applications to minimize duplication and reduce errors. All shared code lives inside the **packages/** folder.

By default, packages include app-constants, app-types, enums, schemas and mailer. Learn more about the **mailer** package [here](/mailer).

```shell theme={null}
/packages
  /app-constants
  /app-types
  /enums
  /mailer
  /schemas
```

## Installation

We've included all essential packages in your apps. If you want to add more packages, head to the `package.json` file, and in the **dependencies** section add the package name with the value `"workspace:*"`.

```json apps/web/package.json theme={null}
"dependencies": {
  "app-constants": "workspace:*",
  "app-types": "workspace:*",
  "schemas": "workspace:*",
},
```

The **enums** package comes with **app-types**, so no separate import is needed.

You can read more about **package sharing** in the [Turborepo documentation](https://turbo.build/repo/docs/handbook/sharing-code/internal-packages).

# Schemas

Source: https://ship.paralect.com/docs/package-sharing/schemas

## Overview

The schemas package contains **data schemas** for the applications, including resource schemas.

**Data schema** — is a Zod schema that defines the shape of the entity. It must strictly define all fields. A resource schema is defined in an `entity.schema` file, e.g. `user.schema`.
```typescript theme={null}
import { z } from 'zod';

import dbSchema from './db.schema';

const schema = dbSchema.extend({
  firstName: z.string(),
  lastName: z.string(),
  fullName: z.string(),
  email: z.string(),
  passwordHash: z.string().nullable().optional(),
  isEmailVerified: z.boolean().default(false),
  signupToken: z.string().nullable().optional(),
  resetPasswordToken: z.string().nullable().optional(),
  avatarUrl: z.string().nullable().optional(),
  oauth: z.object({
    google: z.boolean().default(false),
  }).optional(),
  lastRequest: z.date().optional(),
}).strict();

export default schema;
```

## Validation

Zod schemas simplify form validation in react-hook-form:

```tsx theme={null}
import { z } from 'zod';
import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';

import { EMAIL_REGEX, PASSWORD_REGEX } from 'app-constants';

const schema = z.object({
  firstName: z.string().min(1, 'Please enter First name').max(100),
  lastName: z.string().min(1, 'Please enter Last name').max(100),
  email: z.string().regex(EMAIL_REGEX, 'Email format is incorrect.'),
  password: z.string().regex(
    PASSWORD_REGEX,
    'The password must contain 6 or more characters with at least one letter (a-z) and one number (0-9).',
  ),
});

type SignUpParams = z.infer<typeof schema>;

const SignUp = () => {
  const methods = useForm<SignUpParams>({
    resolver: zodResolver(schema),
  });

  return (
    // your code here
  );
};

export default SignUp;
```

Additionally, data can be validated using the `safeParse` method:

```typescript theme={null}
const parsed = zodSchema.safeParse(data);

if (!parsed.success) {
  throw new Error('Invalid data');
}
```

For more details on Zod, check the [documentation](https://zod.dev/).

# create-ship-app

Source: https://ship.paralect.com/docs/packages/create-ship-app

[![npm version](https://badge.fury.io/js/create-ship-app.svg)](https://badge.fury.io/js/create-ship-app)

Simple CLI tool for bootstrapping Ship applications. Downloads the actual template from the Ship [monorepo](https://github.com/paralect/ship) and configures it to run.

## Build options

### Deployment type

* [**Digital Ocean Apps**](/deployment/digital-ocean-apps)
* [**Render**](/deployment/render)
* [**Digital Ocean Managed Kubernetes**](/deployment/kubernetes/digital-ocean)
* [**AWS EKS**](/deployment/kubernetes/aws)

## Usage

```shell theme={null}
npx create-ship-app@latest init
```

or

```shell theme={null}
npx create-ship-app@latest my-project
```

# Events

Source: https://ship.paralect.com/docs/packages/node-mongo/api-reference/events

Events API reference

### `eventBus.on`

```typescript theme={null}
on: (
  eventName: string,
  handler: InMemoryEventHandler,
): void
```

```typescript theme={null}
import { eventBus, InMemoryEvent } from '@paralect/node-mongo';

const collectionName = 'users';

eventBus.on(`${collectionName}.created`, (data: InMemoryEvent<User>) => {
  try {
    const user = data.doc;

    console.log('user created', user);
  } catch (err) {
    logger.error(`${collectionName}.created handler error: ${err}`);
  }
});

eventBus.on(`${collectionName}.updated`, (data: InMemoryEvent<User>) => {});
eventBus.on(`${collectionName}.deleted`, (data: InMemoryEvent<User>) => {});
```

In-memory events handler that listens for CUD events.

**Parameters**

* eventName: `string`;\
  Event names to listen for.\
  Valid format: `${collectionName}.created`, `${collectionName}.updated`, `${collectionName}.deleted`.
* handler: [`InMemoryEventHandler`](#inmemoryeventhandler);

**Returns** `void`.
### `eventBus.once`

```typescript theme={null}
once: (
  eventName: string,
  handler: InMemoryEventHandler,
): void
```

```typescript theme={null}
eventBus.once(`${USERS}.updated`, (data: InMemoryEvent<User>) => {
  try {
    const user = data.doc;

    console.log('user updated', user);
  } catch (err) {
    logger.error(`${USERS}.updated handler error: ${err}`);
  }
});
```

In-memory events handler that listens for CUD events. **It will be called only once**.

**Parameters**

* eventName: `string`;\
  Event names to listen for.\
  Valid format: `${collectionName}.created`, `${collectionName}.updated`, `${collectionName}.deleted`.
* handler: [`InMemoryEventHandler`](#inmemoryeventhandler);

**Returns** `void`.

### `eventBus.onUpdated`

```typescript theme={null}
onUpdated: (
  entity: string,
  properties: OnUpdatedProperties,
  handler: InMemoryEventHandler,
): void
```

```typescript theme={null}
import { eventBus, InMemoryEvent } from '@paralect/node-mongo';

eventBus.onUpdated('users', ['firstName', 'lastName'], async (data: InMemoryEvent<User>) => {
  try {
    await userService.atomic.updateOne(
      { _id: data.doc._id },
      { $set: { fullName: `${data.doc.firstName} ${data.doc.lastName}` } },
    );
  } catch (err) {
    console.log(`users onUpdated ['firstName', 'lastName'] handler error: ${err}`);
  }
});

eventBus.onUpdated('users', [{ fullName: 'John Wake', firstName: 'John' }, 'lastName'], () => {});

eventBus.onUpdated('users', ['oauth.google'], () => {});
```

In-memory events handler that listens for updates of specific fields. It will be called when one of the provided `properties` updates.

**Parameters**

* entity: `string`;\
  Collection name for events listening.
* properties: [`OnUpdatedProperties`](#onupdatedproperties);\
  Properties whose update will trigger the event.
* handler: [`InMemoryEventHandler`](#inmemoryeventhandler);

**Returns** `void`.

# Service

Source: https://ship.paralect.com/docs/packages/node-mongo/api-reference/service

Service API reference

### `find`

```typescript theme={null}
find = async (
  filter: Filter<T>,
  readConfig: ReadConfig & { page?: number; perPage?: number } = {},
  findOptions: FindOptions = {},
): Promise<FindResult<T>>
```

```typescript theme={null}
const { results: users, count: usersCount } = await userService.find(
  { status: 'active' },
);
```

Fetches documents that match the filter. Returns an object with the following fields (`FindResult`):

| Field      | Description                                                                                          |
| ---------- | ---------------------------------------------------------------------------------------------------- |
| results    | documents that match the filter                                                                       |
| count      | total number of documents that match the filter                                                       |
| pagesCount | total number of pages: the number of matching documents divided by the number of documents per page   |

Pass `page` and `perPage` params to get a paginated result. Otherwise, all documents will be returned.

**Parameters**

* filter: [`Filter`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#Filter);
* readConfig: [`ReadConfig`](#readconfig) `& { page?: number; perPage?: number }`;
* findOptions: [`FindOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/FindOptions.html);

**Returns** `Promise<FindResult<T>>`.

### `findOne`

```typescript theme={null}
findOne = async (
  filter: Filter<T>,
  readConfig: ReadConfig = {},
  findOptions: FindOptions = {},
): Promise<T | null>
```

```typescript theme={null}
const user = await userService.findOne({ _id: u._id });
```

Fetches the first document that matches the filter. Returns `null` if the document was not found.
**Parameters**

* filter: [`Filter`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#Filter);
* readConfig: [`ReadConfig`](#readconfig);
* findOptions: [`FindOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/FindOptions.html);

**Returns** `Promise<T | null>`.

### `updateOne`

```typescript theme={null}
updateOne = async <U extends T = T>(
  filter: Filter<U>,
  updateFilterOrFn: ((doc: U) => Partial<U>) | UpdateFilter<U>,
  updateConfig: UpdateConfig = {},
  updateOptions: UpdateOptions = {},
): Promise<U | null>
```

```typescript theme={null}
const updatedUserWithEvent = await userService.updateOne(
  { _id: u._id },
  (doc) => ({ fullName: 'Updated fullname' }),
);

const updatedUser = await userService.updateOne(
  { _id: u._id },
  (doc) => ({ fullName: 'Updated fullname' }),
  { publishEvents: false },
);

const updatedUserWithUpdateFilter = await userService.updateOne(
  { _id: u._id },
  { $set: { fullName: 'Updated fullname' } },
);
```

Updates a single document and returns it. Returns `null` if the document was not found.

**Parameters**

* filter: [`Filter`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#Filter);
* updateFilterOrFn: `(doc: U) => Partial<U>` | [`UpdateFilter`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#UpdateFilter);\
  Function that accepts the current document and returns an object containing fields to update.
* updateConfig: [`UpdateConfig`](#updateconfig);
* updateOptions: [`UpdateOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/UpdateOptions.html);

**Returns** `Promise<U | null>`.

### `updateMany`

```typescript theme={null}
updateMany = async <U extends T = T>(
  filter: Filter<U>,
  updateFilterOrFn: ((doc: U) => Partial<U>) | UpdateFilter<U>,
  updateConfig: UpdateConfig = {},
  updateOptions: UpdateOptions = {},
): Promise<U[]>
```

```typescript theme={null}
const updatedUsers = await userService.updateMany(
  { status: 'active' },
  (doc) => ({ isEmailVerified: true }),
);

const updatedUsersWithUpdateFilter = await userService.updateMany(
  { status: 'active' },
  { $set: { isEmailVerified: true } },
);
```

Updates multiple documents that match the query. Returns an array with the updated documents.

**Parameters**

* filter: [`Filter`](https://mongodb.github.io/node-mongodb-native/4.7/modules.html#Filter);
* updateFilterOrFn: `(doc: U) => Partial<U>` | [`UpdateFilter`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#UpdateFilter);\
  Function that accepts the current document and returns an object containing fields to update.
* updateConfig: [`UpdateConfig`](#updateconfig);
* updateOptions: [`UpdateOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/UpdateOptions.html);

**Returns** `Promise<U[]>`.

### `insertOne`

```typescript theme={null}
insertOne = async (
  object: Partial<T>,
  createConfig: CreateConfig = {},
  insertOneOptions: InsertOneOptions = {},
): Promise<T>
```

```typescript theme={null}
const user = await userService.insertOne({
  fullName: 'John',
});
```

Inserts a single document into a collection and returns it.

**Parameters**

* object: `Partial<T>`;
* createConfig: [`CreateConfig`](#createconfig);
* insertOneOptions: [`InsertOneOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/InsertOneOptions.html);

**Returns** `Promise<T>`.
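As with updates, the `createConfig` argument can override service-level defaults per call. For example (illustrative sketch, using the `publishEvents` flag from [`CreateConfig`](#createconfig)), skipping event publishing when backfilling data:

```typescript theme={null}
// publishEvents: false skips CUD event publishing for this single insert
const user = await userService.insertOne(
  { fullName: 'John' },
  { publishEvents: false },
);
```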
### `insertMany`

```typescript theme={null}
insertMany = async (
  objects: Partial<T>[],
  createConfig: CreateConfig = {},
  bulkWriteOptions: BulkWriteOptions = {},
): Promise<T[]>
```

```typescript theme={null}
const users = await userService.insertMany([
  { fullName: 'John' },
  { fullName: 'Kobe' },
]);
```

Inserts multiple documents into a collection and returns them.

**Parameters**

* objects: `Partial<T>[]`;
* createConfig: [`CreateConfig`](#createconfig);
* bulkWriteOptions: [`BulkWriteOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/BulkWriteOptions.html);

**Returns** `Promise<T[]>`.

### `deleteSoft`

```typescript theme={null}
deleteSoft = async (
  filter: Filter<T>,
  deleteConfig: DeleteConfig = {},
  deleteOptions: DeleteOptions = {},
): Promise<T[]>
```

```typescript theme={null}
const deletedUsers = await userService.deleteSoft(
  { status: 'deactivated' },
);
```

Adds the `deletedOn` field to the documents that match the query and returns them.

**Parameters**

* filter: [`Filter`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#Filter);
* deleteConfig: [`DeleteConfig`](#deleteconfig);
* deleteOptions: [`DeleteOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/DeleteOptions.html);

**Returns** `Promise<T[]>`.

### `deleteOne`

```typescript theme={null}
deleteOne = async (
  filter: Filter<T>,
  deleteConfig: DeleteConfig = {},
  deleteOptions: DeleteOptions = {},
): Promise<T | null>
```

```typescript theme={null}
const deletedUser = await userService.deleteOne(
  { _id: u._id },
);
```

Deletes a single document and returns it. Returns `null` if the document was not found.

**Parameters**

* filter: [`Filter`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#Filter);
* deleteConfig: [`DeleteConfig`](#deleteconfig);
* deleteOptions: [`DeleteOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/DeleteOptions.html);

**Returns** `Promise<T | null>`.

### `deleteMany`

```typescript theme={null}
deleteMany = async (
  filter: Filter<T>,
  deleteConfig: DeleteConfig = {},
  deleteOptions: DeleteOptions = {},
): Promise<T[]>
```

```typescript theme={null}
const deletedUsers = await userService.deleteMany(
  { status: 'deactivated' },
);
```

Deletes multiple documents that match the query. Returns an array with the deleted documents.

**Parameters**

* filter: [`Filter`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#Filter);
* deleteConfig: [`DeleteConfig`](#deleteconfig);
* deleteOptions: [`DeleteOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/DeleteOptions.html);

**Returns** `Promise<T[]>`.

### `replaceOne`

```typescript theme={null}
replaceOne: (
  filter: Filter<T>,
  replacement: Partial<T>,
  readConfig: ReadConfig = {},
  replaceOptions: ReplaceOptions = {},
): Promise<UpdateResult | Document>
```

```typescript theme={null}
await usersService.replaceOne(
  { _id: u._id },
  { fullName: fullNameToUpdate },
);
```

Replaces a single document within the collection based on the filter. **Doesn't validate schema or publish events**.

**Parameters**

* filter: [`Filter`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#Filter);
* replacement: `Partial<T>`;
* readConfig: [`ReadConfig`](#readconfig);
* replaceOptions: [`ReplaceOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/ReplaceOptions.html);

**Returns** `Promise<`[UpdateResult](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/UpdateResult.html) `|` [Document](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/Document.html)`>`.
### `atomic.updateOne`

```typescript theme={null}
updateOne: (
  filter: Filter<T>,
  updateFilter: UpdateFilter<T>,
  readConfig: ReadConfig = {},
  updateOptions: UpdateOptions = {},
): Promise<UpdateResult>
```

```typescript theme={null}
await userService.atomic.updateOne(
  { _id: u._id },
  { $set: { fullName: `${u.firstName} ${u.lastName}` } },
);
```

Updates a single document. **Doesn't validate schema or publish events**.

**Parameters**

* filter: [`Filter`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#Filter);
* updateFilter: [`UpdateFilter`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#UpdateFilter);
* readConfig: [`ReadConfig`](#readconfig);
* updateOptions: [`UpdateOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/UpdateOptions.html);

**Returns** `Promise<`[UpdateResult](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/UpdateResult.html)`>`.

### `atomic.updateMany`

```typescript theme={null}
updateMany: (
  filter: Filter<T>,
  updateFilter: UpdateFilter<T>,
  readConfig: ReadConfig = {},
  updateOptions: UpdateOptions = {},
): Promise<UpdateResult | Document>
```

```typescript theme={null}
await userService.atomic.updateMany(
  { firstName: { $exists: true }, lastName: { $exists: true } },
  { $set: { fullName: `${u.firstName} ${u.lastName}` } },
);
```

Updates all documents that match the specified filter. **Doesn't validate schema or publish events**.

**Parameters**

* filter: [`Filter`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#Filter);
* updateFilter: [`UpdateFilter`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#UpdateFilter);
* readConfig: [`ReadConfig`](#readconfig);
* updateOptions: [`UpdateOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/UpdateOptions.html);

**Returns** `Promise<`[UpdateResult](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/UpdateResult.html) `|` [Document](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/Document.html)`>`.

### `exists`

```typescript theme={null}
exists(
  filter: Filter<T>,
  readConfig: ReadConfig = {},
  findOptions: FindOptions = {},
): Promise<boolean>
```

```typescript theme={null}
const isUserExists = await userService.exists(
  { email: 'example@gmail.com' },
);
```

Returns ***true*** if the document exists, otherwise ***false***.

**Parameters**

* filter: [`Filter`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#Filter);
* readConfig: [`ReadConfig`](#readconfig);
* findOptions: [`FindOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/FindOptions.html);

**Returns** `Promise<boolean>`.

### `countDocuments`

```typescript theme={null}
countDocuments(
  filter: Filter<T>,
  readConfig: ReadConfig = {},
  countDocumentOptions: CountDocumentsOptions = {},
): Promise<number>
```

```typescript theme={null}
const documentsCount = await userService.countDocuments(
  { status: 'active' },
);
```

Returns the number of documents that match the query.

**Parameters**

* filter: [`Filter`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#Filter);
* readConfig: [`ReadConfig`](#readconfig);
* countDocumentOptions: [`CountDocumentsOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/CountDocumentsOptions.html);

**Returns** `Promise<number>`.
### `distinct`

```typescript theme={null}
distinct(
  key: string,
  filter: Filter<T>,
  readConfig: ReadConfig = {},
  distinctOptions: DistinctOptions = {},
): Promise<any[]>
```

```typescript theme={null}
const statesList = await userService.distinct('states');
```

Returns the distinct values for a specified field across a single collection or view, as an array.

**Parameters**

* key: `string`;
* filter: [`Filter`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#Filter);
* readConfig: [`ReadConfig`](#readconfig);
* distinctOptions: [`DistinctOptions`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#DistinctOptions);

**Returns** `Promise<any[]>`.

### `aggregate`

```typescript theme={null}
aggregate: (
  pipeline: any[],
  options: AggregateOptions = {},
): Promise<any[]>
```

```typescript theme={null}
const sortedActiveUsers = await userService.aggregate([
  { $match: { status: 'active' } },
  { $sort: { firstName: -1, lastName: -1 } },
]);
```

Executes an aggregation framework pipeline and returns an array with the aggregation result.

**Parameters**

* pipeline: `any[]`;
* options: [`AggregateOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/AggregateOptions.html);

**Returns** `Promise<any[]>`.

### `watch`

```typescript theme={null}
watch: (
  pipeline: Document[] | undefined,
  options: ChangeStreamOptions = {},
): Promise<any>
```

```typescript theme={null}
const watchCursor = userService.watch();
```

Creates a new Change Stream, watching for new changes, and returns a cursor.

**Parameters**

* pipeline: `Document[] | undefined`;
* options: [`ChangeStreamOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/ChangeStreamOptions.html);

**Returns** `Promise<any>`.

### `drop`

```typescript theme={null}
drop: (
  recreate: boolean = false,
): Promise<void>
```

```typescript theme={null}
await userService.drop();
```

Removes a collection from the database. The method also removes any indexes associated with the dropped collection.

**Parameters**

* recreate: `boolean`;\
  Whether to recreate the collection after deletion.

**Returns** `Promise<void>`.

### `indexExists`

```typescript theme={null}
indexExists: (
  indexes: string | string[],
  indexInformationOptions: IndexInformationOptions = {},
): Promise<boolean>
```

```typescript theme={null}
const isIndexExists = await usersService.indexExists(index);
```

Checks if one or more indexes exist on the collection; fails on the first non-existing index.

**Parameters**

* indexes: `string | string[]`;
* indexInformationOptions: [`IndexInformationOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/IndexInformationOptions.html);

**Returns** `Promise<boolean>`.

### `createIndex`

```typescript theme={null}
createIndex: (
  indexSpec: IndexSpecification,
  options: CreateIndexesOptions = {},
): Promise<string | void>
```

```typescript theme={null}
await usersService.createIndex({ fullName: 1 });
```

Creates a collection index.

**Parameters**

* indexSpec: [`IndexSpecification`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#IndexSpecification);
* options: [`CreateIndexesOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/CreateIndexesOptions.html);

**Returns** `Promise<string | void>`.

### `createIndexes`

```typescript theme={null}
createIndexes: (
  indexSpecs: IndexDescription[],
  options: CreateIndexesOptions = {},
): Promise<string[] | void>
```

```typescript theme={null}
await usersService.createIndexes([
  { key: { fullName: 1 } },
  { key: { createdOn: 1 } },
]);
```

Creates one or more indexes on a collection.
**Parameters**

* indexSpecs: [`IndexDescription[]`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#IndexSpecification);
* options: [`CreateIndexesOptions`](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/CreateIndexesOptions.html);

**Returns** `Promise<string[] | void>`.

### `dropIndex`

```typescript theme={null}
dropIndex: (
  indexName: string,
  options: DropIndexesOptions = {},
): Promise<void | Document>
```

```typescript theme={null}
await userService.dropIndex('firstName_1_lastName_-1');
```

Removes the specified index from a collection.

**Parameters**

* indexName: `string`;
* options: [`DropIndexesOptions`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#DropIndexesOptions);

**Returns** `Promise<void | Document>`.

### `dropIndexes`

```typescript theme={null}
dropIndexes: (
  options: DropIndexesOptions = {},
): Promise<void | Document>
```

Removes all but the `_id` index from a collection.

```typescript theme={null}
await userService.dropIndexes();
```

**Parameters**

* options: [`DropIndexesOptions`](https://mongodb.github.io/node-mongodb-native/4.10/modules.html#DropIndexesOptions);

**Returns** `Promise<void | Document>`.

# Transactions

Source: https://ship.paralect.com/docs/packages/node-mongo/api-reference/transactions

Transactions API reference

### `withTransaction`

```typescript theme={null}
withTransaction: <TRes = any>(
  transactionFn: (session: ClientSession) => Promise<TRes>,
): Promise<TRes>
```

Runs the callback and automatically commits or rolls back the transaction.

```typescript theme={null}
import db from 'db';

const { user, company } = await db.withTransaction(async (session) => {
  const createdUser = await usersService.insertOne({ fullName: 'Bahrimchuk' }, {}, { session });

  const createdCompany = await companyService.insertOne(
    { users: [createdUser._id] },
    {},
    { session },
  );

  return { user: createdUser, company: createdCompany };
});
```

**Parameters**

* transactionFn: `(session: ClientSession) => Promise<TRes>`;\
  Function that accepts a client session and manages some business logic. Must return a `Promise`.

**Returns** `Promise<TRes>`.

# Changelog

Source: https://ship.paralect.com/docs/packages/node-mongo/changelog

Release history and updates for the node-mongo package.

* You can now pass TypeScript generics to all service methods for precise type inference and enhanced type safety. This helps you catch mistakes at compile time and improves your development experience.

```typescript theme={null}
const user = await userService.findOne({ _id: '...' });

// user.subscription is available if defined in schema
user.subscription;

// Pass a generic to override the provided type.
// TypeScript will enforce the structure you specify:
const admin = await userService.findOne<Admin>({ _id: '...' });

// TypeScript Error:
// Property 'subscription' doesn't exist on type 'Admin'
admin.subscription;
```

* Added `escapeRegExp` option to service methods, enabling automatic escaping of `$regex` filter values to prevent special characters from being interpreted as patterns.
Before:

```typescript theme={null}
const users = await userService.find({
  name: { $regex: 'John\\' },
});
// Throws error 'Regular expression is invalid: \ at end of pattern'.
// A string provided by the user needs to be escaped before querying.
```

After:

```typescript {3} theme={null}
const service = db.createService(DATABASE_DOCUMENTS.USERS, {
  schemaValidator: (obj) => userSchema.parseAsync(obj),
  escapeRegExp: true,
});

// No need to escape the string before querying
const users = await userService.find({
  name: { $regex: 'John\\' },
});
```

* Upgraded `mongodb` dependency to v6.17.0 and updated related dependencies (mocha, @types/mocha, zod, etc.).
* Fixed all known vulnerabilities.

* Upgraded `mongodb` dependency from v4.10.0 to v6.1.0 (requires Node.js >=16.16.0).
* Cleaned up legacy MongoDB connection options.
* Improved typings and compatibility for bulk operations and index management.
* Synchronized README.md with the documentation website for consistency and completeness.

* Fixed: `atomic.updateOne` and `atomic.updateMany` now use `readConfig` for query validation, ensuring consistent handling of soft-delete and query options.

* Refactored `Service` and `Database` classes for better type safety:
  * `IDocument` now extends MongoDB's `Document` and requires `_id: string`.
  * Many methods now use stricter generic constraints (`<T extends IDocument>`).
* Added support for custom schema validation via `schemaValidator` in `ServiceOptions`.
* Introduced new config types: `CreateConfig`, `ReadConfig`, `UpdateConfig`, `DeleteConfig` for more granular operation control.
* Improved transaction support: `database.withTransaction` now uses default transaction options and better error handling.
* Enhanced logging for event publishing and warnings in development mode.
* Internal: Cleaned up and unified method signatures, improved property type checks, and removed legacy/duplicate code.

* Changed all date fields (`createdOn`, `updatedOn`, `deletedOn`) to use native `Date` objects instead of ISO strings.
* `service.find()` options `page` and `perPage` are now optional; defaults are set internally if omitted.
* `eventBus.onUpdated` now supports generics for improved type safety.
* `service.aggregate()` now returns an array of results directly.
* In-memory event listeners now require an entity name (e.g., `onUpdated('users', ...)`).
* Added logging for published in-memory events.

The release includes a lot of changes to make sure that the package is compatible with the latest MongoDB version. Most notable changes:

* Rewritten in TypeScript
* Removed [monk](https://github.com/Automattic/monk) dependency.
* Added [mongodb native Node.JS sdk](https://www.mongodb.com/docs/drivers/node/current/) as a dependency.
* Added support for transactional events using the [transactional outbox pattern](https://microservices.io/patterns/data/transactional-outbox.html)
* Introduced a shared in-memory events bus. It should be used to listen for CUD updates.

### Features

#### Manager

[createService](API.md#createservice)

* Add [emitter](API.md#createservice) option.

* Update dependencies.

### Breaking Changes

#### [Manager](API.md#manager)

[createQueryService](API.md#createqueryservice)

* Rename `validateSchema` option to `validate`.
* Change `addCreatedOnField` default to `true`.
* Change `addUpdatedOnField` default to `true`.

[createService](API.md#createservice)

* Rename `validateSchema` option to `validate`.
* Change `addCreatedOnField` default to `true`.
* Change `addUpdatedOnField` default to `true`.
#### [Query Service](API.md#query-service)

* Remove `generateId` method.
* Remove `expectDocument` method.

#### [Service](API.md#service)

* Remove `update` method. Use [updateOne](API.md#updateone) or [updateMany](API.md#updatemany).
* Remove `ensureIndex`. Use [atomic.createIndex](API.md#atomiccreateindex).
* Remove `createOrUpdate`. Use [create](API.md#create) or [updateOne](API.md#updateone) or [updateMany](API.md#updatemany).
* Remove `findOneAndUpdate`. Use [findOne](API.md#findone) and [updateOne](API.md#updateone).

### Features

#### Manager

[createQueryService](API.md#createqueryservice)

* Add `useStringId` option.

[createService](API.md#createservice)

* Add `useStringId` option.

#### [Query Service](API.md#query-service)

* Add more monk's methods. [See full list](API.md#query-service)

#### [Service](API.md#service)

* Add [generateId](API.md#generateid) method.
* Add [updateOne](API.md#updateone) method.
* Add [updateMany](API.md#updatemany) method.
* Add [performTransaction](API.md#performtransaction) method.
* Add more monk's methods in `atomic` namespace. [See full list](API.md#service)

* Update dependencies.
* Fix required version of the Node.js.

### Breaking Changes

* Now the `update` function works via the [set](https://docs.mongodb.com/manual/reference/operator/update/set/) operator. It means the new doc will be the result of merging the old doc with the provided one.

* Update dependencies.
* Add tests.
* Fix required version of the Node.js.

### Breaking Changes

* Now, by default, we do not add the fields `createdOn` and `updatedOn` automatically to the model. If you want to keep the previous behavior, add the appropriate `addCreatedOnField` and `addUpdatedOnField` options to the service definitions.

* Stop using the deprecated `ensureIndex` method of `monk`.
* Add ability to create custom methods for service and query service.
* Add tests.
* Add support for [joi](https://github.com/hapijs/joi) for validating the data schema.
* Add tests for schema validation.

# Overview

Source: https://ship.paralect.com/docs/packages/node-mongo/overview

[![npm version](https://badge.fury.io/js/%40paralect%2Fnode-mongo.svg)](https://badge.fury.io/js/%40paralect%2Fnode-mongo)

Lightweight reactive extension to the official Node.js MongoDB [driver](https://mongodb.github.io/node-mongodb-native/4.10/).

## Features

* **ObjectId mapping**. Automatically converts the `_id` field from an `ObjectId` to a `string`.
* **Reactive**. Fires events when a document is created, updated, or deleted in the database;
* **CUD operations timestamps**. Automatically sets `createdOn`, `updatedOn`, and `deletedOn` timestamps for CUD operations;
* **Schema validation**. Validates your data before saving;
* **Paging**. Implements high-level paging API;
* **Soft delete**. By default, documents aren't removed from the collection, but are marked with the `deletedOn` field;
* **Extendable**. API is easily extendable, you can add new methods or override existing ones;
* **Outbox support**.
  node-mongo can create collections with an `_outbox` postfix that store all CUD events for implementing the [transactional outbox](https://microservices.io/patterns/data/transactional-outbox.html) pattern.

The following example shows some of these features:

```typescript theme={null}
import { eventBus, InMemoryEvent } from '@paralect/node-mongo';

await userService.updateOne(
  { _id: '62670b6204f1aab85e5033dc' },
  (doc) => ({ firstName: 'Mark' }),
);

eventBus.onUpdated('users', ['firstName', 'lastName'], async (data: InMemoryEvent<User>) => {
  await userService.atomic.updateOne(
    { _id: data.doc._id },
    { $set: { fullName: `${data.doc.firstName} ${data.doc.lastName}` } },
  );
});
```

## Installation

```
npm i @paralect/node-mongo
```

## Connect to Database

Usually, you need to define a file called `db` that does two things:

1. Creates a database instance and connects to the database;
2. Exposes a factory method `createService` to create different [Services](#services) to work with MongoDB;

```typescript db.ts theme={null}
import { Database, Service, ServiceOptions, IDocument } from '@paralect/node-mongo';

import config from 'config';

const database = new Database(config.mongo.connection, config.mongo.dbName);
database.connect();

class CustomService<T extends IDocument> extends Service<T> {
  // You can add new methods or override existing ones here
}

function createService<T extends IDocument>(collectionName: string, options: ServiceOptions = {}) {
  return new CustomService<T>(collectionName, database, options);
}

export default {
  database,
  createService,
};
```

## Services

Service is a collection wrapper that adds all node-mongo features. Under the hood it uses the Node.js MongoDB native methods.

The `createService` method returns the service instance. It accepts two parameters: the collection name and [ServiceOptions](#serviceoptions).

```typescript user.service.ts theme={null}
import { z } from 'zod';

import db from 'db';

const schema = z.object({
  _id: z.string(),
  createdOn: z.date().optional(),
  updatedOn: z.date().optional(),
  deletedOn: z.date().optional().nullable(),
  fullName: z.string(),
}).strict();

type User = z.infer<typeof schema>;

const service = db.createService<User>('users', {
  schemaValidator: (obj) => schema.parseAsync(obj),
});

export default service;
```

```typescript update-user.ts theme={null}
import userService from 'user.service';

await userService.insertOne({ fullName: 'Max' });
```

## Schema validation

Node-mongo supports any schema library, but we recommend [Zod](https://zod.dev/), due to its ability to generate TypeScript types from the schemas.

### Zod

```typescript theme={null}
const schema = z.object({
  _id: z.string(),
  createdOn: z.date().optional(),
  updatedOn: z.date().optional(),
  deletedOn: z.date().optional().nullable(),
  fullName: z.string(),
});

type User = z.infer<typeof schema>;

const service = createService<User>('users', {
  schemaValidator: (obj) => schema.parseAsync(obj),
});
```

### Joi

```typescript theme={null}
const schema = Joi.object({
  _id: Joi.string().required(),
  createdOn: Joi.date(),
  updatedOn: Joi.date(),
  deletedOn: Joi.date().allow(null),
  fullName: Joi.string().required(),
});

type User = {
  _id: string;
  createdOn?: Date;
  updatedOn?: Date;
  deletedOn?: Date | null;
  fullName: string;
};

const service = createService<User>('users', {
  schemaValidator: (obj) => schema.validateAsync(obj),
});
```

Node-mongo validates documents before saving them.

## Reactivity

The key feature of `node-mongo` is that each create, update, or delete operation publishes a CUD event:
* `${collectionName}.created`
* `${collectionName}.updated`
* `${collectionName}.deleted`

Events are used to easily update denormalized data and to implement complex business logic without tight coupling of different entities.

The SDK supports two types of events:

### In-memory events

* Enabled by default;
* Events can be lost on service failure;
* Events are stored in `eventBus` (a Node.js [EventEmitter](https://nodejs.org/api/events.html#events) instance);
* To handle these events, use the [Events API](#events);
* Designed for transferring events inside a single Node.js process. Event handlers listen to the node-mongo `eventBus`.

### Transactional events

* Can be enabled by setting `{ outbox: true }` when creating a service;
* Guarantee that every database write will produce an event;
* Events are stored in special collections with the `_outbox` postfix;
* To handle these events, use `watch` (a method for working with Change Streams) on the outbox collection;
* Designed for transferring events to a message broker like Kafka. Event handlers should listen to message broker events (you need to implement this layer yourself).

At the project start, we recommend using `in-memory` events. As your application's demands grow, you should migrate to `transactional` events.

## Options and Types

### `ServiceOptions`

```typescript theme={null}
interface ServiceOptions {
  skipDeletedOnDocs?: boolean,
  schemaValidator?: (obj: any) => Promise<any>,
  publishEvents?: boolean,
  addCreatedOnField?: boolean,
  addUpdatedOnField?: boolean,
  outbox?: boolean,
  escapeRegExp?: boolean,
  collectionOptions?: CollectionOptions;
  collectionCreateOptions?: CreateCollectionOptions;
}
```

| Option                    | Description                                                                                                                   | Default value |
| ------------------------- | ----------------------------------------------------------------------------------------------------------------------------- | ------------- |
| `skipDeletedOnDocs`       | Skip documents with the `deletedOn` field                                                                                      | `true`        |
| `schemaValidator`         | Validation function that will be called on data save                                                                           | -             |
| `publishEvents`           | Publish [CUD events](#reactivity) on save.                                                                                     | `true`        |
| `addCreatedOnField`       | Set the `createdOn` field to the current timestamp on document creation.                                                       | `true`        |
| `addUpdatedOnField`       | Set the `updatedOn` field to the current timestamp on document update.                                                         | `true`        |
| `outbox`                  | Use [transactional](#transactional-events) events instead of [in-memory events](#in-memory-events)                             | `false`       |
| `escapeRegExp`            | Escape `$regex` values to prevent special characters from being interpreted as patterns.                                       | `false`       |
| `collectionOptions`       | MongoDB [CollectionOptions](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/CollectionOptions.html)              | `{}`          |
| `collectionCreateOptions` | MongoDB [CreateCollectionOptions](https://mongodb.github.io/node-mongodb-native/4.10/interfaces/CreateCollectionOptions.html)  | `{}`          |

### `CreateConfig`

Overrides `ServiceOptions` parameters for create operations.

```typescript theme={null}
type CreateConfig = {
  validateSchema?: boolean,
  publishEvents?: boolean,
};
```

### `ReadConfig`

Overrides `ServiceOptions` parameters for read operations.

```typescript theme={null}
type ReadConfig = {
  skipDeletedOnDocs?: boolean,
};
```

### `UpdateConfig`

Overrides `ServiceOptions` parameters for update operations.

```typescript theme={null}
type UpdateConfig = {
  skipDeletedOnDocs?: boolean,
  validateSchema?: boolean,
  publishEvents?: boolean,
};
```

### `DeleteConfig`

Overrides `ServiceOptions` parameters for delete operations.
```typescript theme={null}
type DeleteConfig = {
  skipDeletedOnDocs?: boolean,
  publishEvents?: boolean,
};
```

### `InMemoryEvent`

```typescript theme={null}
type InMemoryEvent<T = any> = {
  doc: T,
  prevDoc?: T,
  name: string,
  createdOn: Date,
};
```

### `InMemoryEventHandler`

```typescript theme={null}
type InMemoryEventHandler = (evt: InMemoryEvent) => Promise<void> | void;
```

### `OnUpdatedProperties`

```typescript theme={null}
type OnUpdatedProperties = Array<Record<string, unknown> | string>;
```

## Extending API

Extending the API for a single service:

```typescript theme={null}
const service = db.createService<User>('users', {
  schemaValidator: (obj) => schema.parseAsync(obj),
});

const privateFields = [
  'passwordHash',
  'signupToken',
  'resetPasswordToken',
];

const getPublic = (user: User | null) => _.omit(user, privateFields);

export default Object.assign(service, {
  updateLastRequest,
  getPublic,
});
```

Extending the API for all services:

```typescript theme={null}
const database = new Database(config.mongo.connection, config.mongo.dbName);

class CustomService<T extends IDocument> extends Service<T> {
  createOrUpdate = async (query: any, updateCallback: (item?: T) => Partial<T>) => {
    const docExists = await this.exists(query);

    if (!docExists) {
      const newDoc = updateCallback();
      return this.insertOne(newDoc);
    }

    return this.updateOne(query, (doc) => updateCallback(doc));
  };
}

function createService<T extends IDocument>(collectionName: string, options: ServiceOptions = {}) {
  return new CustomService<T>(collectionName, database, options);
}

const userService = createService<User>('users', {
  schemaValidator: (obj) => schema.parseAsync(obj),
});

await userService.createOrUpdate(
  { _id: 'some-id' },
  () => ({ fullName: 'Max' }),
);
```

# Retool

Source: https://ship.paralect.com/docs/retool

## Installation guideline

1. Create an account at [Retool](https://login.retool.com/auth/signup) or request an invite to the existing project. At the moment, Retool doesn't allow users to have multiple projects, so keep in mind that it's impossible to use one account on different projects.
2. Download the following `JSON` [config for users management](https://ship-docs.fra1.cdn.digitaloceanspaces.com/retool/users-management.json)
3. Import prebuilt applications: open `Apps` > `Create New` > `From JSON` and choose the file downloaded above.