Dockerize your app and keep hot-reloading!

Your project is organized into different technical services, e.g. a back office, a front-end application, a back-end application, and a database.

While you could run all these services at once on one server, wouldn't it be nice to make them live separately? Then you could manage each of your technical services individually:

  • choose how they interact with one another;
  • isolate them and choose the best stack for each one;
  • watch the running parts of your application (your microservices) be deployed, shared, updated and scaled independently.

Still, it is really difficult to make them live separately on several physical servers in terms of security and portability. Docker may be the solution: it lets you split your architecture into several autonomous services. You can get more information about Docker Use Cases here. By the way, if you wonder what will happen to all your great Webpack plugins such as React Hot Loader, don't worry, we've got it covered ;)

  • This tutorial will guide you through Dockerizing a sample Express/React project which uses Webpack and its plugins in a development environment. The architecture I describe might not be the best for your project. However, the goal of this tutorial is to understand how to split your Express/React app into microservices, make it portable with Docker and keep using React Hot Loader :)

For this tutorial, you will need Docker and Docker Compose.

Kicking off our example project.

For this tutorial, let's assume we own a simple React web application supported by three services:

  • a PostgreSQL database which uses port ++code>5432++/code> on ++code>localhost++/code>;
  • a NodeJS backend which listens on port ++code>8080++/code> on ++code>localhost++/code>;
  • a React frontend served by the Webpack development server on port ++code>3000++/code> on ++code>localhost++/code>.

The following diagram represents the current stack.

Current stack
  • An example application is available here; you can clone it to follow along.

You might have experienced that using a lot of different ports is confusing while developing services locally: it often involves cross-origin resource sharing (CORS), which needs to be allowed. CORS lets another domain access your data: when you need to access your data from a different domain, you have to allow that domain to query the data.

  • Cross-origin resource sharing (CORS) is allowed in the example project, along the lines of the sketch below ;)
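For reference, allowing another origin in Express takes only a small middleware. A minimal sketch, assuming the frontend is served on port ++code>3000++/code> (the example project ships its own equivalent):

++pre>++code>const express = require('express');
const app = express();

// Let the Webpack dev server origin query this API (illustrative values)
app.use((req, res, next) => {
  res.header('Access-Control-Allow-Origin', 'http://localhost:3000');
  res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept');
  next();
});

app.listen(8080);
++/code>++/pre>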

Step 1: Containerize services.

First, let's containerize these three services. Create the ++code>docker++/code> directory, and one Dockerfile per development service:

App

We create our Dockerfile with these characteristics:

  • it should run node with a command (++code>yarn run start++/code>);
  • it should create a ++code>/usr/src/app++/code> working directory;
  • it should expose its ++code>3000++/code> port (Webpack Dev Server port).


++pre>++code>docker
│ ...
└ app
 └ Dockerfile.dev
++/code>++/pre>

++pre>FROM node:8.1.0-alpine

WORKDIR /usr/src/app

EXPOSE 3000
CMD ["yarn", "run", "start"]++/pre>

Here we use an Alpine image in which Node lives. In your project, you're free to pick another suitable Docker image, such as Ubuntu.
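If you want to try this image on its own before everything is wired together in Step 4, you can build and run it manually from the project root. A minimal sketch, assuming the ++code>myapp-app++/code> tag (illustrative) and dependencies already installed in ++code>app/++/code>:

++pre>++code># Build the image, using the repo root as build context
docker build -f docker/app/Dockerfile.dev -t myapp-app .

# Run it, mounting the app sources into the container's working directory
docker run --rm -p 3000:3000 -v "$(pwd)/app:/usr/src/app" myapp-app
++/code>++/pre>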

Api

We create our Dockerfile with characteristics very similar to those of our App service:

  • it should run node with a command (++code>yarn run serve++/code>);
  • it should create a ++code>/usr/src/api++/code> working directory;
  • it should expose its ++code>8080++/code> port (our Node Express server port).


++pre>++code>docker
│ ...
└ api
 └ Dockerfile.dev
++/code>++/pre>

++pre>FROM node:8.1.0-alpine

WORKDIR /usr/src/api

EXPOSE 8080
CMD ["yarn", "run", "serve"]++/pre>

Modify your api ++code>package.json++/code> scripts to add the following line; it will run migrations and seed our database on startup!

++code>"serve": "sequelize db:migrate && sequelize db:seed:all && nodemon index.js"++/code>

Db

We'll use the official ++code>postgres++/code> image.

++pre>++code>docker
│ ...
└ db
 ├ psql.env
 └ Dockerfile.dev
++/code>++/pre>

We first need to create a ++code>psql.env++/code> configuration file.

++pre>++code>POSTGRES_USER=myappuser
POSTGRES_PASSWORD=myapppassword
POSTGRES_DB=myappdb
PGDATA=/data
++/code>++/pre>

Finally, we create our Dockerfile with these characteristics:

  • it should run postgres (which by default exposes its ++code>5432++/code> port).


++pre>FROM postgres:9.5++/pre>
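As before, you can try this image alone before Compose wires it up in Step 4; a sketch reusing the env file above (the ++code>myapp-db++/code> tag is illustrative, and local port ++code>5431++/code> avoids clashing with a Postgres already running on your machine):

++pre>++code># Build and run the database image alone
docker build -f docker/db/Dockerfile.dev -t myapp-db .
docker run --rm --env-file docker/db/psql.env -p 5431:5432 myapp-db
++/code>++/pre>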

Step 2: Draw your target architecture.

Now let's think about how our services should run in our production environment:

  • the React application should be served statically by one server: this is our first service.
  • the backend should be accessible under the same root URL as our frontend: the API is our second service and will be reached behind a proxy of our first service. This way, browsers won't throw cross-origin resource sharing errors.
  • the database should be accessed with a URL and some credentials.

In order to achieve our goal (which is to make each of our services manageable), we use Docker to containerize them. The target container architecture is given here:

Production stack
  • Server: this service runs NGINX. The server is accessible from outside through port ++code>80++/code>, so we need to map our production server's port ++code>80++/code> to this service's port ++code>80++/code>. It serves our React application on the ++code>/++/code> route and redirects queries to our Api on the ++code>/api++/code> route.
  • Api: this service runs Node and its middlewares. It connects to our external DB (known host, port and credentials).

Step 3: Draw your development architecture.

One of our standards at BAM is to be as iso-production as possible when developing. On one hand, it ensures we share the same specification on our machines and on the staging/production servers, which reduces the regression risk when pushing to remote servers. On the other hand, we should not forget efficient development tools: running Docker services on our machine should not slow down feature development.

Instead of rebuilding the entire architecture each time we make a change in our code, we still want to use our Webpack development server. The following diagram shows our development architecture (differences with the target architecture are shown in green):

Development stack
  • App: instead of serving the static React application, our Server will redirect all requests on the ++code>/++/code> route to our App service, which runs Webpack Dev Server. Webpack Dev Server helps us a lot, automatically reloading our browser every time we make a change to our app (thanks to its Hot Module Replacement and the React Hot Loader plugin).
  • Db: instead of using our machine's database, we'll use a Db service which runs Postgres with the target version. We'll map port ++code>5431++/code> of our machine to the Db service's ++code>5432++/code> port so we can connect directly to our database from a terminal, as shown below.
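For example, once the stack is up, you can open a SQL shell from your machine using the credentials defined in ++code>psql.env++/code>:

++pre>++code>psql -h localhost -p 5431 -U myappuser -d myappdb
++/code>++/pre>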

Pros:

  • Fast and efficient development;
  • Fully iso between developers (the way the code is executed is the same);
  • See all services' logs in one place, organised by color;
  • Clear definition of services.

Cons:

  • One level of abstraction added;
  • Not iso-production (but a good effort is made).

Step 4: Connect everything.

Drawing our architecture was an important step towards our goal! You'll see it makes our Docker containers easier to write.

The following steps will lead you to run the development environment on your machine.

4.1: Modify our containers.

Server

We first create the ++code>nginx.dev.conf++/code> file.

++pre>++code>config
└ nginx
 └ nginx.dev.conf
++/code>++/pre>

Declare connections to other services:

++pre>++code>  upstream api {
     least_conn;
     server api:8080 max_fails=3 fail_timeout=30s;
 }

 upstream app {
     least_conn;
     server app:3000 max_fails=3 fail_timeout=30s;
 }
++/code>++/pre>

Declare proxies (the ++code>Upgrade++/code> and ++code>Connection++/code> headers below let WebSocket connections through, which Webpack's hot reloading relies on):

++pre>++code>  location / {
     proxy_pass http://app;
     proxy_http_version 1.1;
     proxy_set_header Upgrade $http_upgrade;
     proxy_set_header Connection 'upgrade';
     proxy_set_header Host $host;
     proxy_cache_bypass $http_upgrade;
     break;
 }

 location ~ /api/(?<url>.*) {
     proxy_pass http://api/$url;
     proxy_http_version 1.1;
     proxy_set_header Upgrade $http_upgrade;
     proxy_set_header Connection 'upgrade';
     proxy_set_header Host $host;
     proxy_cache_bypass $http_upgrade;
 }
++/code>++/pre>

Use the ++code>Access-Control-Allow-Origin++/code> header to allow pre-flight request checks :)

++pre>++code>        location ~* \.(eot|otf|ttf|woff|woff2)$ {
           add_header Access-Control-Allow-Origin *;
       }
++/code>++/pre>

The content of ++code>nginx.dev.conf++/code> will not be explained here, as NGINX is not the purpose of this tutorial. However, my full configuration file is given here.

Finally, we create our Dockerfile with these characteristics:

  • it should run nginx;
  • it should use our ++code>nginx.dev.conf++/code> configuration file;
  • it should expose its ++code>80++/code> port.


++pre>++code>docker
└ server
 └ Dockerfile.dev
++/code>++/pre>

++pre>FROM nginx

ADD /config/nginx/nginx.dev.conf /etc/nginx/nginx.conf

EXPOSE 80++/pre>

App

In our application example, you need to point the ++code>API_URL++/code> at the proxied ++code>/api++/code> route in ++code>app/src/App.js++/code>, line 6:

++code>const API_URL = 'http://localhost/api';++/code>
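All API calls now go through the NGINX proxy on the same origin. For instance, the app might fetch data like this (the ++code>/users++/code> endpoint is hypothetical):

++pre>++code>// Requests hit NGINX first, which forwards them to the api service
fetch(`${API_URL}/users`)
  .then(response => response.json())
  .then(users => console.log(users));
++/code>++/pre>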

Db

You will need to change the way your api connects to the database locally. Change your host from ++code>127.0.0.1++/code> to ++code>db++/code> - isn't it beautiful? Docker takes care of hostnames ;)

In our example, go to ++code>api/server/config/config.json++/code> and change line 6.
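The ++code>development++/code> block of that Sequelize configuration might then look like this (values taken from ++code>psql.env++/code>; the exact shape of your file may differ):

++pre>++code>"development": {
  "username": "myappuser",
  "password": "myapppassword",
  "database": "myappdb",
  "host": "db",
  "dialect": "postgres"
}
++/code>++/pre>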

4.3: Actually connect everything.

We now create the ++code>docker/docker-compose.dev.yml++/code> configuration file and connect our services.

++pre>++code>docker
│ ...
└ docker-compose.dev.yml
++/code>++/pre>

++pre>version: '3'
services:
 server:
   build:
     context: ../.
     dockerfile: docker/server/Dockerfile.dev
   image: myapp-server
   deploy:
      resources: # Set these values when you know what you're doing!
       limits:
         cpus: '0.001'
         memory: 50M
       reservations:
         cpus: '0.0001'
         memory: 20M
   ports:
     - '80:80' # Connect localhost 80 port to container 80 port
    links: # Link services so that http://app and http://api resolve inside the container
     - api:api
     - app:app
 app:
   build:
     context: ../.
     dockerfile: docker/app/Dockerfile.dev
   image: myapp-app
   environment:
     - NODE_ENV=development
   volumes: # For webpack dev server to use our local files
     - ./../app:/usr/src/app
   ports:
      - '3000:3000' # For Docker to know where to redirect HMR queries
 api:
   deploy:
      resources: # Set these values when you know what you're doing!
       limits:
         cpus: '0.001'
         memory: 50M
       reservations:
         cpus: '0.0001'
         memory: 20M
   build:
     context: ../.
     dockerfile: docker/api/Dockerfile.dev
   image: myapp-api
   environment:
     - DB_NAME=myappdb
     - DB_USER=myappuser
     - DB_PASSWORD=myapppassword
      - DB_HOST=db
     - DB_PORT=5432
     - NODE_ENV=development
   links:
     - db:db
   volumes:
     - ./../api:/usr/src/api
   ports:
     - '8080'
   depends_on:
     - "db"
 db:
   build:
     context: ../.
     dockerfile: docker/db/Dockerfile.dev
   env_file: db/psql.env
   image: myapp-db
   ports:
     - '5431:5432'++/pre>

You can see in this file that we set resource limits: the hosting server shares its resources among these containers, and limiting them prevents one container from draining everything and leaving the others to die (more info here). You can either do it this way or use Docker Compose file version 2.
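For the record, with Compose file version 3 the ++code>deploy.resources++/code> section is only enforced in swarm mode; version 2.x lets plain ++code>docker-compose++/code> apply the limits directly on the service. A sketch with illustrative values:

++pre>++code>version: '2.2'
services:
  api:
    # Same intent as deploy.resources.limits above
    cpus: 0.5
    mem_limit: 50m
++/code>++/pre>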

4.4: Create installation script.

++pre>++code>script
│ ...
└ 00-install-dev.sh
++/code>++/pre>

++pre>#!/usr/bin/env bash
set -e

# Build app and api containers
docker-compose -f docker/docker-compose.dev.yml build

# Launch the db alone once and give it time to create db user and database
# This is a quickfix to avoid waiting for database to startup on first execution (more details [here](https://docs.docker.com/compose/startup-order/))
docker-compose -f docker/docker-compose.dev.yml up -d db
sleep 5
docker-compose -f docker/docker-compose.dev.yml stop db
++/pre>

Make this script executable with the command ++code>sudo chmod 744 ./script/00-install-dev.sh++/code>.

In our root ++code>package.json++/code> file, add the following scripts:

++pre>...
 "scripts": {
   "dev:install": "./script/00-install-dev.sh",
   "dev:up": "docker-compose -f docker/docker-compose.dev.yml up",
   "dev:down": "docker-compose -f docker/docker-compose.dev.yml down",
   "dev:uninstall": "docker-compose -f docker/docker-compose.dev.yml down --rmi all",
   "dev:connect:api": "docker exec -it target_api_1 /bin/sh",
   "dev:connect:db": "psql -h localhost -p 5431 -U myappuser -d myappdb"
 }++/pre>

Now you can use any of the following commands:

  • ++code>yarn dev:install++/code>: Install the development environment (by building Docker images);
  • ++code>yarn dev:up++/code>: Execute all development services;
  • ++code>yarn dev:down++/code>: Stop all development services;
  • ++code>yarn dev:connect:api++/code>: Connect to the api (then you can run migrations, for example);
  • ++code>yarn dev:connect:db++/code>: Connect to the db.

Ready for the magic to happen?

Simply install your containers with ++code>yarn dev:install++/code>... and run ++code>yarn dev:up++/code> to gracefully launch all your services at once!

Visit localhost to see your app live!

Docker Compose launched every container according to our configuration file. Once launched, every service prints its logs in one single terminal. We made every service's resources manageable and portable. Furthermore, we still get to use our efficient development tools!

You can find the final result on the ++code>containerfull++/code> branch of this tutorial's repo.

What's next?

You can create all the scripts and derived Dockerfiles needed to release and deploy your containers in CI/CD!

You can first create the ++code>nginx.prod.conf++/code> file, then the ++code>docker/docker-compose.prod.yml++/code> configuration file. Take inspiration from your development configuration and create the scripts needed in your CI/CD pipelines.

Cheers :)
