What Are Containers?
This article is based on what I learned from a Frontend Masters course on containers and from working through its Docker container projects.
Containers leverage three core Linux features to isolate processes and manage resources (a rough shell sketch of each follows the list):
- chroot (“change root”): Restricts a process to a specific directory subtree (a “jail”), preventing access to the rest of the filesystem.
- Namespaces: Ensure that a containerized process can only “see” resources within its own namespaces (e.g., processes, network interfaces), isolating it from processes on the host and in other containers.
- Control Groups (cgroups): Allocates a defined amount of CPU, memory, and other resources to each container, preventing one container from monopolizing resources on a shared host.
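To make these primitives concrete, here is a minimal shell sketch, assuming a Linux host with root access, util-linux, and cgroups v2; the /my-new-root directory and the demo cgroup name are hypothetical, not from the course.

```bash
# chroot: confine a shell to a prepared directory subtree
# (/my-new-root must already contain a shell and the libraries it needs)
sudo chroot /my-new-root /bin/sh

# namespaces: give a shell its own PID, mount, network, and hostname (UTS) namespaces
sudo unshare --pid --fork --mount-proc --net --uts /bin/sh

# cgroups v2: cap a group of processes at half a CPU core and 256 MB of memory
echo "+cpu +memory" | sudo tee /sys/fs/cgroup/cgroup.subtree_control
sudo mkdir /sys/fs/cgroup/demo
echo "50000 100000" | sudo tee /sys/fs/cgroup/demo/cpu.max      # 50 ms of CPU per 100 ms period
echo "268435456"    | sudo tee /sys/fs/cgroup/demo/memory.max   # 256 MB, in bytes
echo $$             | sudo tee /sys/fs/cgroup/demo/cgroup.procs # move the current shell into the group
```

Container runtimes like Docker combine these same kernel primitives (plus images and overlay filesystems) rather than introducing new machinery.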
The Evolution of Servers
1. Bare Metal
Traditionally, organizations housed their own physical servers in dedicated rooms (“server rooms”). Developers would deploy software directly on these machines. While this offered full control over the hardware, it required significant maintenance, and scaling to meet fluctuations in traffic was difficult.
2. Virtual Machines
Virtual machines (VMs) introduced a layer of abstraction between physical servers and software by allowing multiple guest operating systems to run on a single host. This improved hardware utilization and eased server provisioning. However, each VM includes its own operating system kernel, which can be resource-intensive.
3. The Cloud
Cloud computing (e.g., AWS, Azure) removes the need to manage physical hardware directly. Developers rent compute resources on-demand, spinning up or down VMs as needed. While powerful, many cloud-based solutions still rely on traditional VMs, which may be heavier to run for certain workloads.
Why Containers?
Compared to traditional VMs, containers share the host machine’s operating system kernel but remain isolated through namespaces and cgroups. This approach offers several advantages:
- OS-Level Virtualization: All containers share the host OS kernel, reducing overhead (see the quick check after this list).
- Lightweight and Fast: Containers do not need to run a separate kernel, so they can start, stop, and scale very quickly.
- Consistent and Portable: Containers package the application and its dependencies in a standardized way, making it easier to move from development to production without “it works on my machine” issues.
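A quick way to see the shared kernel in practice, assuming Docker is installed (the alpine image is just a convenient example):

```bash
# Both commands report the same kernel version, because a container
# is just an isolated process running on the host's kernel.
uname -r
docker run --rm alpine uname -r
```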
The Basics of Docker
Docker is one of the most popular container platforms. There are two main steps to creating a Docker-based application:
- Define a Dockerfile: This file, placed in the project’s root directory, describes how to build the container (e.g., base image, dependencies, environment setup).
- Use Docker Compose: A docker-compose.yml file orchestrates multiple containers (e.g., databases, backend services) and defines how they interact.
Example Dockerfile
```dockerfile
# syntax=docker/dockerfile:1

# 1. Use an official Node.js runtime as a parent image
FROM node:18-alpine

# 2. Set the working directory inside the container
WORKDIR /app

# 3. Copy package.json and package-lock.json first
COPY package*.json ./

# 4. Install build tools needed for bcrypt (as an example)
RUN apk add --no-cache python3 make g++

# 5. Install dependencies
RUN npm install

# 6. Copy the rest of your app's source code
COPY . .

# 7. (Optional) Build step if using TypeScript or a build process
RUN npm run build

# 8. Expose the port your app runs on (for documentation only)
EXPOSE 3000

# 9. Run Prisma migrations (optional) or any setup commands
RUN npx prisma migrate deploy

# 10. Start the Node.js app
CMD ["npm", "run", "start"]
```
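Building and running an image from a Dockerfile like this typically looks as follows; the myapp tag and the port mapping are placeholders I am assuming, not part of the example above.

```bash
# Build the image from the Dockerfile in the current directory
docker build -t myapp .

# Run it, publishing container port 3000 on the host
# (EXPOSE alone does not publish ports)
docker run --rm -p 3000:3000 myapp
```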
Example Docker Compose File
```yaml
version: '3.8'

services:
  # 1. Postgres Service
  postgres:
    image: postgres:15-alpine
    container_name: my-postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: mydb
    ports:
      - "5433:5432"
    volumes:
      - dbdata:/var/lib/postgresql/data

  # 2. Application Service (Express + Prisma)
  # app:
  #   build: .
  #   container_name: my-express-app
  #   ports:
  #     - "3000:3000"
  #   environment:
  #     DATABASE_URL: "postgresql://postgres:postgres@postgres:5432/mydb"
  #   depends_on:
  #     - postgres
  #   restart: unless-stopped

volumes:
  dbdata:
```
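With the compose file in place, the day-to-day workflow is roughly this, assuming Docker Compose v2 (the docker compose subcommand):

```bash
# Start all services in the background
docker compose up -d

# Tail the logs, then tear everything down
docker compose logs -f
docker compose down        # add -v to also delete the dbdata volume
```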
Multi-Stage Builds
Sometimes, you need different environments for building and running your application. Multi-stage builds allow you to install and compile dependencies in one stage, then copy only the necessary files into a minimal runtime image.
```dockerfile
# Build stage
FROM node:20 AS node-builder
WORKDIR /build
COPY package-lock.json package.json ./
RUN npm ci
COPY . .
# Add build commands here if needed (e.g., npm run build)

# Runtime stage
FROM gcr.io/distroless/nodejs20
COPY --from=node-builder --chown=node:node /build /app
WORKDIR /app
CMD ["index.js"]
```
This approach keeps your final image small and reduces security risks by removing unnecessary build tools.
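One rough way to see the payoff is to compare image sizes after the build; the tag below is illustrative.

```bash
docker build -t myapp:multistage .
docker images   # the distroless runtime image is typically far smaller than a full node:20 image
```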