Introduction
Best practices for Node.js come down to a few things that matter in production: how you handle errors, how you structure your code, and how you avoid the security mistakes that cause incidents. This guide covers the practices we follow at Mobile Reality across our production Node.js deployments — from environment configuration and Docker multi-stage builds to MikroORM migrations, TypeScript setup, and vertical slice architecture. Every recommendation includes a concrete code example you can adapt for your project. If you're building Node.js APIs for production, this is the reference we wish we had when we started.
Environment Configuration
Managing configuration through environment variables is the foundation of a deployable Node.js application. Hardcoded values in source code cause incidents. Environment variables prevent them.
.env Files and Validation
Create a .env.example file documenting every required variable. New developers copy it to .env and fill in their values. No guessing, no Slack messages asking "what's the database URL?"
# .env.example
NODE_ENV=development
PORT=3000
DATABASE_URL=postgresql://user:password@localhost:5432/mydb
REDIS_URL=redis://localhost:6379
JWT_SECRET=your-secret-here
LOG_LEVEL=debug
Map and validate every variable at startup using Zod so the app fails fast with a clear error instead of crashing later with a cryptic message:
// src/config/env.ts
import { z } from 'zod';
const envSchema = z.object({
NODE_ENV: z.enum(['development', 'production', 'test']),
PORT: z.coerce.number().default(3000),
DATABASE_URL: z.string().url(),
REDIS_URL: z.string().url(),
JWT_SECRET: z.string().min(32),
LOG_LEVEL: z.enum(['debug', 'info', 'warn', 'error']).default('info'),
});
export const env = envSchema.parse(process.env);
If any variable is missing or malformed, Zod throws a readable error at startup — not 30 minutes into a production deployment when the database connection fails.
Setting Up the Development Environment
cp .env.example .env
npm install
npm run dev
Three commands from clone to running application. That's the standard we target for every project.
Docker Multi-Stage Builds
Docker provides consistent environments across development, testing, and production. At Mobile Reality, every Node.js project ships with a multi-stage Dockerfile that keeps production images small and secure.
Two-Stage Dockerfile
# Stage 1: Build
FROM node:22-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:22-alpine AS production
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force
COPY --from=builder /app/dist ./dist
USER node
EXPOSE 3000
CMD ["node", "dist/main.js"]
Key details: npm ci installs from package-lock.json for reproducible builds. The production stage excludes dev dependencies with --omit=dev. The USER node directive avoids running as root — a common security mistake. According to the Node.js Docker working group, setting NODE_ENV=production alone can improve Express performance by 3x.
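One more detail worth pairing with the Dockerfile: `COPY . .` in the build stage copies everything in the build context, so a `.dockerignore` keeps secrets and local junk out of the image. A minimal sketch (adjust the entries to your project):

```
# .dockerignore
node_modules
dist
.env
.git
coverage
*.md
```

Excluding `.env` here matters most: without it, a local secrets file can silently end up baked into a published image layer.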
Docker Compose for Local Services
Most Node.js apps depend on a database, cache, or message broker. Docker Compose manages them all:
# docker-compose.yml
services:
postgres:
image: postgres:16-alpine
environment:
POSTGRES_USER: ${POSTGRES_USER:-dev}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-dev}
POSTGRES_DB: ${POSTGRES_DB:-myapp}
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
redis:
image: redis:7-alpine
ports:
- "6379:6379"
volumes:
postgres_data:
Start everything with docker compose up -d. Docker Compose automatically loads the .env file, so your database credentials stay in one place.
Error Handling
Unhandled errors crash Node.js processes. Poorly handled errors leak information to attackers. Both are avoidable.
Express Error Middleware
Centralize error handling in a single middleware at the end of your Express chain:
// src/middleware/error-handler.ts
import { Request, Response, NextFunction } from 'express';
import { logger } from '../config/logger';
export class AppError extends Error {
constructor(
public statusCode: number,
public message: string,
public isOperational = true
) {
super(message);
}
}
// The four-parameter signature is what marks this as Express error middleware.
export function errorHandler(err: Error, req: Request, res: Response, _next: NextFunction) {
if (err instanceof AppError) {
logger.warn({ err, path: req.path }, 'Operational error');
return res.status(err.statusCode).json({ error: err.message });
}
logger.error({ err, path: req.path }, 'Unexpected error');
return res.status(500).json({ error: 'Internal server error' });
}
Operational errors (bad input, missing resource) get specific status codes and messages. Unexpected errors get logged with full details but return a generic 500 — never expose stack traces or internal details to clients.
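One gap worth closing: in Express 4, a rejected promise in an async route handler never reaches the error middleware unless you forward it yourself. A small wrapper (a common community pattern, not part of the middleware above; the loose types are so the sketch runs standalone) closes it:

```typescript
// Forward async errors to Express error middleware. In a real project,
// type req/res/next with express's Request/Response/NextFunction instead.
type Handler = (req: unknown, res: unknown, next: (err?: unknown) => void) => unknown;

export function asyncHandler(fn: Handler): Handler {
  return (req, res, next) => {
    // Promise.resolve covers both sync returns and returned promises,
    // so a thrown error or a rejection both end up in next(err).
    Promise.resolve(fn(req, res, next)).catch(next);
  };
}
```

Usage: `router.get('/users/:id', asyncHandler(async (req, res) => { ... }))`. Express 5 forwards async rejections automatically, making the wrapper unnecessary there.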
Process-Level Handlers
Catch unhandled rejections and uncaught exceptions to prevent silent crashes:
process.on('unhandledRejection', (reason) => {
logger.fatal({ reason }, 'Unhandled rejection — shutting down');
process.exit(1);
});
process.on('uncaughtException', (err) => {
logger.fatal({ err }, 'Uncaught exception — shutting down');
process.exit(1);
});
In production, your process manager (PM2, Docker, Kubernetes) restarts the process. The key is logging the error before exiting so you can diagnose the root cause.
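Alongside these crash handlers, a SIGTERM handler lets the process drain in-flight requests before the orchestrator kills it. A minimal sketch using Node's http module (the timeout value is illustrative):

```typescript
import http from 'node:http';

// On SIGTERM: stop accepting new connections, let in-flight requests finish,
// then force-exit after a deadline if connections refuse to drain.
export function onShutdown(server: http.Server, timeoutMs = 10_000): void {
  process.on('SIGTERM', () => {
    server.close(() => process.exit(0)); // clean exit once all connections close
    // Safety net: unref() so this timer alone can't keep the process alive.
    setTimeout(() => process.exit(1), timeoutMs).unref();
  });
}
```

Kubernetes and Docker both send SIGTERM before SIGKILL, so this window is exactly where graceful draining happens.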
Input Validation with Zod
Never trust incoming data. Validate request bodies, query parameters, and path parameters before processing. Zod provides runtime type checking that integrates with your TypeScript types:
// src/modules/users/user.schema.ts
import { z } from 'zod';
export const createUserSchema = z.object({
body: z.object({
email: z.string().email(),
name: z.string().min(2).max(100),
role: z.enum(['admin', 'user', 'viewer']),
}),
});
export type CreateUserInput = z.infer<typeof createUserSchema>['body'];
Apply validation as middleware:
// src/middleware/validate.ts
import { AnyZodObject, ZodError } from 'zod';
import { Request, Response, NextFunction } from 'express';
export function validate(schema: AnyZodObject) {
return (req: Request, res: Response, next: NextFunction) => {
try {
schema.parse({ body: req.body, query: req.query, params: req.params });
next();
} catch (err) {
if (err instanceof ZodError) {
return res.status(400).json({ errors: err.errors });
}
next(err);
}
};
}
// Usage in route
router.post('/users', validate(createUserSchema), userController.create);
This catches malformed input at the boundary — before it reaches your business logic or database. According to the Node.js best practices list, schema validation at the boundary is the single most effective defense against injection attacks.
Structured Logging with Pino
console.log is not logging. In production, you need structured JSON logs that can be queried, aggregated, and alerted on. Pino is one of the fastest Node.js loggers, significantly faster than Winston in benchmarks, with lower per-call overhead:
// src/config/logger.ts
import pino from 'pino';
import { env } from './env';
export const logger = pino({
level: env.LOG_LEVEL,
transport: env.NODE_ENV === 'development'
? { target: 'pino-pretty' }
: undefined,
});
Add correlation IDs so you can trace a request across your entire system:
// src/middleware/request-id.ts
import { randomUUID } from 'crypto';
import { Request, Response, NextFunction } from 'express';
import { logger } from '../config/logger';
// Note: augment Express's Request interface via declaration merging so `id` and `log` type-check.
export function requestId(req: Request, res: Response, next: NextFunction) {
req.id = req.headers['x-request-id'] as string || randomUUID();
req.log = logger.child({ requestId: req.id });
res.setHeader('x-request-id', req.id);
next();
}
Every log entry includes the request ID. When a user reports an issue, you search your log aggregator by that ID and see the complete request path — from HTTP ingress through service calls to database queries.
Database Management with MikroORM
MikroORM is a TypeScript ORM for Node.js based on Data Mapper, Unit of Work, and Identity Map patterns. We use it across our Node.js projects because it enforces type safety at the database layer.
Configuration
// src/mikro-orm.config.ts
import { defineConfig } from '@mikro-orm/postgresql';
import { env } from './config/env';
export default defineConfig({
clientUrl: env.DATABASE_URL,
entities: ['./dist/**/*.entity.js'],
entitiesTs: ['./src/**/*.entity.ts'],
migrations: {
path: './dist/migrations',
pathTs: './src/migrations',
},
debug: env.NODE_ENV === 'development',
});
Always specify both entities and entitiesTs — the first points to compiled JS for production, the second to TypeScript sources for development and migration generation. According to MikroORM docs, failing to specify entitiesTs when using folder-based discovery is a common cause of migration errors.
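For reference, here is the shape of a minimal decorator-based entity that the discovery globs above would pick up. The field names are illustrative, and decorator-based definitions require experimentalDecorators in tsconfig:

```typescript
// src/modules/users/user.entity.ts
import { randomUUID } from 'node:crypto';
import { Entity, PrimaryKey, Property } from '@mikro-orm/core';

@Entity({ tableName: 'users' })
export class User {
  @PrimaryKey({ type: 'uuid' })
  id = randomUUID();

  @Property({ unique: true })
  email!: string;

  @Property()
  name!: string;

  @Property()
  createdAt = new Date();
}
```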
SQL Naming Conventions
Keep your database schema consistent and readable:
- Tables: plural, snake_case — users, user_comments, order_items
- Columns: singular, snake_case — email, is_blocked, created_at
- Primary keys: id (UUID or auto-increment)
- Foreign keys: <entity>_id — user_id, order_id
Migration Commands
# Generate a migration from entity changes
npx mikro-orm migration:create
# Apply pending migrations
npx mikro-orm migration:up
# Revert the last migration
npx mikro-orm migration:down
# Check migration status
npx mikro-orm migration:pending
Run migrations in CI before deployment. Never run migration:create in production — generate migration files locally, review them, commit to git, and apply through your deployment pipeline.
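To make the review step concrete, this is roughly what a generated migration file looks like (the class name and SQL are illustrative; yours will reflect your entity diff):

```typescript
// src/migrations/Migration20260101000000.ts
import { Migration } from '@mikro-orm/migrations';

export class Migration20260101000000 extends Migration {
  async up(): Promise<void> {
    this.addSql('alter table "users" add column "is_blocked" boolean not null default false;');
  }

  async down(): Promise<void> {
    this.addSql('alter table "users" drop column "is_blocked";');
  }
}
```

Reviewing the raw SQL before committing is the point: generated migrations occasionally drop or rewrite columns in ways the entity diff doesn't make obvious.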
TypeScript Configuration
Every Node.js project at Mobile Reality uses TypeScript. The type system catches errors at build time that would otherwise surface in production.
tsconfig.json
{
"compilerOptions": {
"target": "ES2022",
"module": "NodeNext",
"moduleResolution": "NodeNext",
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"resolveJsonModule": true,
"declaration": true,
"declarationMap": true,
"sourceMap": true
},
"include": ["src"],
"exclude": ["node_modules", "dist"]
}
Key settings: "module": "NodeNext" enables native ESM support with .js extensions in imports. "strict": true enables all strict type-checking options — don't skip this. According to MikroORM docs, experimentalDecorators is required if you use decorator-based entity definitions.
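If you do use decorator-based entities, merge the legacy decorator flags into the compilerOptions shown above:

```json
{
  "compilerOptions": {
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  }
}
```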
Production vs Development Dependencies
Be explicit about which dependencies belong where:
# Production dependency — used at runtime
npm install express pino @mikro-orm/core zod
# Development dependency — used only during build/test
npm install -D typescript vitest @types/express eslint prettier
Your Dockerfile's npm ci --omit=dev strips everything installed with -D. If a runtime library ends up in devDependencies, your production container crashes. If a test framework ends up in dependencies, your production image gets unnecessarily large.
Code Quality: ESLint, Prettier, and Husky
Consistent code style eliminates bikeshedding in code reviews. Automated enforcement means no one has to remember the rules.
ESLint + Prettier Configuration
// .eslintrc.json
{
"extends": [
"eslint:recommended",
"plugin:@typescript-eslint/recommended",
"prettier"
],
"parser": "@typescript-eslint/parser",
"plugins": ["@typescript-eslint"],
"rules": {
"@typescript-eslint/no-unused-vars": "error",
"@typescript-eslint/explicit-function-return-type": "warn",
"no-console": "warn"
}
}
// .prettierrc
{
"semi": true,
"singleQuote": true,
"trailingComma": "all",
"printWidth": 100,
"tabWidth": 2
}
Automated Enforcement with Husky
Run linting and formatting on every commit:
// package.json (partial)
{
"scripts": {
"lint": "eslint 'src/**/*.ts'",
"lint:fix": "eslint 'src/**/*.ts' --fix",
"format": "prettier --write 'src/**/*.ts'",
"prepare": "husky"
},
"lint-staged": {
"*.ts": ["eslint --fix", "prettier --write"]
}
}
# .husky/pre-commit
npx lint-staged
Project Architecture: Vertical Slices
Organizing code by technical layers (controllers/, services/, repositories/) forces you to jump between directories for every feature. Vertical slice architecture groups everything by feature:
src/
├── config/ # Environment, logger, database config
├── middleware/ # Error handler, auth, validation, request-id
├── modules/
│ ├── users/
│ │ ├── user.entity.ts
│ │ ├── user.service.ts
│ │ ├── user.controller.ts
│ │ ├── user.schema.ts
│ │ └── user.test.ts
│ ├── orders/
│ │ ├── order.entity.ts
│ │ ├── order.service.ts
│ │ ├── order.controller.ts
│ │ ├── order.schema.ts
│ │ └── order.test.ts
│ └── auth/
│ ├── auth.service.ts
│ ├── auth.controller.ts
│ └── auth.test.ts
├── migrations/
└── main.ts
Each module is self-contained. You can understand, test, and deploy a feature without reading the entire codebase. According to the Node.js best practices list, structuring by feature rather than technical role is the most impactful architectural decision for maintainability.
GraphQL vs REST Module Structure
The internal structure of each module differs depending on your API style. At Mobile Reality, we use both patterns depending on the project.
REST modules:
src/modules/users/
├── user.entity.ts
├── user.service.ts
├── user.controller.ts # Express route handlers
├── user.schema.ts # Zod request/response schemas
├── user.router.ts # Express router definition
└── user.test.ts
GraphQL modules:
src/modules/users/
├── user.entity.ts
├── user.service.ts
├── user.resolver.ts # GraphQL resolvers
├── user.type.ts # GraphQL type definitions
├── user.input.ts # GraphQL input types
└── user.test.ts
The service layer is identical in both cases — only the transport layer (controller vs resolver) changes. This means you can switch between REST and GraphQL without rewriting business logic.
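The transport-agnostic service both layouts share can be sketched like this. The repository interface below is illustrative, not MikroORM's actual API; the idea is that the service depends on a narrow interface rather than a transport or ORM:

```typescript
import { randomUUID } from 'node:crypto';

// Illustrative shapes: a narrow repository interface lets the same service
// be driven by an Express controller or a GraphQL resolver alike.
export interface UserRecord { id: string; email: string; name: string; }

export interface UserRepository {
  findByEmail(email: string): Promise<UserRecord | null>;
  save(user: UserRecord): Promise<void>;
}

export class UserService {
  constructor(private readonly repo: UserRepository) {}

  async create(input: { email: string; name: string }): Promise<UserRecord> {
    // Business rules live here, independent of the transport layer.
    if (await this.repo.findByEmail(input.email)) {
      throw new Error('Email already registered');
    }
    const user: UserRecord = { id: randomUUID(), ...input };
    await this.repo.save(user);
    return user;
  }
}
```

A controller calls `service.create(req.body)` and maps the result to an HTTP response; a resolver calls the same method and returns the object. Neither knows how the other works.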
Naming conventions: Use kebab-case for directories and files (user-comments/, order-items.service.ts). This avoids issues with case sensitivity across operating systems — a common source of "works on my machine" bugs.
Testing
Unit Tests with Vitest
Vitest is the modern standard for Node.js testing — native ESM support, TypeScript out of the box, and fast parallel execution:
// src/modules/users/user.service.test.ts
import { describe, it, expect, vi } from 'vitest';
import { UserService } from './user.service';
describe('UserService', () => {
it('creates a user with valid input', async () => {
const mockRepo = {
create: vi.fn().mockReturnValue({ id: '1', email: 'test@test.com' }),
persistAndFlush: vi.fn(),
};
const service = new UserService(mockRepo as any);
const result = await service.create({ email: 'test@test.com', name: 'Test' });
expect(result.email).toBe('test@test.com');
expect(mockRepo.persistAndFlush).toHaveBeenCalled();
});
});
E2E Tests with Supertest
// test/users.e2e.test.ts
import { describe, it, expect } from 'vitest';
import request from 'supertest';
import { app } from '../src/main';
describe('POST /users', () => {
it('returns 400 for invalid email', async () => {
const res = await request(app)
.post('/users')
.send({ email: 'not-an-email', name: 'Test' });
expect(res.status).toBe(400);
expect(res.body.errors).toBeDefined();
});
});
Test Scripts
{
"scripts": {
"test": "vitest run",
"test:watch": "vitest",
"test:e2e": "vitest run --config vitest.e2e.config.ts",
"test:coverage": "vitest run --coverage"
}
}
Target a minimum of 80% coverage on business logic. 100% coverage on utility functions is easy; 100% on controllers is usually wasted effort.
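Coverage targets can be enforced rather than aspirational. Vitest supports thresholds in its config; a sketch, with the numbers mirroring the 80% target above:

```typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      thresholds: { lines: 80, functions: 80, branches: 80 },
    },
  },
});
```

With thresholds set, `npm run test:coverage` exits non-zero when coverage drops below target, so CI catches the regression.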
Dependency Management and Security
Vulnerable dependencies are the most common attack vector for Node.js applications. Managing them is not optional.
# Audit for known vulnerabilities
npm audit
# Update packages to latest compatible versions
npm update
# Check which packages are outdated
npm outdated
# Install with exact lockfile (CI/CD only)
npm ci
Run npm audit in your CI pipeline and fail the build on high-severity findings. Use npm ci (not npm install) in CI — it installs from package-lock.json exactly, ensuring reproducible builds.
For production Docker images, always run npm ci --omit=dev to exclude development dependencies. A production image doesn't need your test framework or linter.
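A CI job enforcing all of this might look like the following GitHub Actions sketch (the workflow and step layout are our convention; adapt to your pipeline):

```yaml
# .github/workflows/ci.yml
name: ci
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: npm
      - run: npm ci
      - run: npm audit --audit-level=high
      - run: npm run lint
      - run: npm test
```

The `--audit-level=high` flag makes npm audit exit non-zero only for high and critical findings, which keeps the build from failing on low-severity noise.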
Conclusion
Best practices for Node.js in 2026 come down to failing fast, validating early, and automating everything you can.
- Validate environment variables at startup with Zod — fail immediately, not 30 minutes into deployment
- Use Docker multi-stage builds — production images should contain only runtime dependencies
- Centralize error handling in Express middleware — never expose stack traces to clients
- Validate all input at the boundary with Zod schemas — the most effective defense against injection
- Use Pino for structured JSON logging with correlation IDs — console.log is not production logging
- Manage your database with MikroORM — type-safe migrations, consistent naming conventions
- Automate code quality with ESLint, Prettier, and Husky — no manual enforcement
- Organize code by feature (vertical slices) — not by technical layer
- Test with Vitest and Supertest — unit tests for logic, E2E tests for API contracts
- Audit dependencies in CI — npm audit and npm ci on every build
Frequently Asked Questions
Why should I validate environment variables at startup instead of when they're used?
Validating at startup with Zod ensures your application fails immediately with a readable error if any required variable is missing or malformed, rather than crashing 30 minutes into a production deployment when the database connection fails. This fail-fast approach prevents cryptic runtime errors and makes configuration issues obvious during the deployment phase rather than mid-operation.
What's the difference between vertical slice architecture and traditional layered architecture?
Traditional layered architecture organizes code by technical roles like controllers, services, and repositories, forcing you to jump between directories for every feature. Vertical slice architecture groups all files related to a feature — including entity, service, controller, schema, and tests — into a single module directory, making each feature self-contained and easier to understand without reading the entire codebase.
Why use Docker multi-stage builds for Node.js applications?
Multi-stage builds create smaller, more secure production images by separating the build environment from the runtime environment: TypeScript is compiled in one stage, and only the compiled output and production dependencies are copied into the final image. This excludes dev dependencies via npm ci --omit=dev, reduces the attack surface by running as the node user instead of root, and sets NODE_ENV=production, which alone can improve Express performance by up to 3x.
Should I use TypeScript strict mode in Node.js projects?
Yes, you should always enable strict mode in your tsconfig.json because it enables all strict type-checking options that catch errors at build time before they surface in production. Strict mode ensures type safety across your application and prevents runtime errors that would otherwise only appear during execution, complementing the validation provided by tools like Zod and MikroORM.
How do structured logs with correlation IDs help in production debugging?
Structured JSON logs with correlation IDs using Pino allow you to trace a complete request path across your entire system by attaching a unique ID to every log entry generated during that request. When a user reports an issue, you can search your log aggregator by that specific request ID to see the full execution flow from HTTP ingress through service calls to database queries, rather than sifting through unstructured console logs.
