Using Docker with a Node image to develop a Vue.js (Nuxt.js) app

The situation
I have to work on a Vue.js (Nuxt.js) SPA, so I'm trying to use Docker with a Node image to avoid installing Node on my PC, but I can't figure out how to make it work.
The project
The source code is in its own application folder, since it is versioned, and at the root level there is the docker-compose.yaml file.
The folder structure
my-project-folder
├ application
| └ ...
└ docker-compose.yaml
The docker-compose.yaml
version: "3.3"
services:
node:
# container_name: prova_node
restart: 'no'
image: node:lts-alpine
working_dir: /app
volumes:
- ./application:/app
The problem
The container starts but quits immediately with exit status 0 (so it executed correctly), but this way I can't use it to work on the project.
There is probably something I'm missing about the Node image or Docker in general; what I would like to do is connect to the Docker container to run npm commands like install, run start, etc., and then check the application in the browser on localhost:3000 or whatever it is.

I would suggest using a Dockerfile with node as the base image and then creating your own entrypoint that runs the application. That eliminates the need for volumes, which are used when we want to maintain some state for our containers.
Your Dockerfile may look something like this:
FROM node:lts-alpine
RUN mkdir /app
COPY application/ /app/
EXPOSE 3000
CMD npm start --prefix /app
You can then either run it directly with the docker run command or use docker-compose.yaml as follows:
version: "3.3"
services:
node:
# container_name: prova_node
restart: 'no'
build:
context: .
ports:
- 3000:3000
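With that in place you can build and start everything from the project root and open the app in the browser on localhost:3000 (a quick sketch; the my-nuxt-app tag in the plain docker variant is just an example name):
# Build the image and start the service defined above
docker-compose up --build
# Or, without Compose, build and run the image directly
docker build -t my-nuxt-app .
docker run --rm -p 3000:3000 my-nuxt-app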

Related

How to build a React/Vue application outside of a Docker container

I have several applications (Vue and React; each application is built on a different version of Node). I want to set up the project deployment so that a Docker container with the correct version of Node runs for each project. The build (npm i && npm run build) should happen inside the container, but I want to copy the result from the container to /var/www/project_name on the server itself.
Next, I will set up a container with nginx which, depending on the subdomain, will serve the desired build.
My question is: how do I get the folder with the built files out of the container and back onto the host?
My docker-compose file:
version: "3.1"
services:
redis:
restart: always
image: redis:alpine
container_name: redis
build-adminapp:
build: adminapp/
container_name: adminapp
working_dir: /var/www/adminapp
volumes:
- ./adminapp:/var/www/adminapp
build-clientapp:
build: clientapp/
container_name: clientapp
working_dir: /var/www/clientapp
volumes:
- ./clientapp:/var/www/clientapp`
My Dockerfiles:
FROM node:10-alpine as build
# Create app directory
WORKDIR /var/www/adminapp/
COPY . /var/www/adminapp/
RUN npm install
RUN npm run build
The second Dockerfile:
FROM node:12-alpine as build
# Create app directory
WORKDIR /var/www/clientapp/
COPY . /var/www/clientapp/
RUN npm install
RUN npm run build
If you already have a running container, you can use the docker cp command to move files between your local machine and a Docker container.
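For example (a sketch using the adminapp container and paths from the compose file above; adjust the dist path to wherever your build output actually lands):
# Copy the build output from the running container to the host
docker cp adminapp:/var/www/adminapp/dist /var/www/project_name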

Django/react Dockerfile stages not working together

I have a multistage Dockerfile for a Django/React app that is creating the following error upon running docker-compose up --build:
backend_1 | File "/code/myapp/manage.py", line 17
backend_1 | ) from exc
backend_1 | ^
backend_1 | SyntaxError: invalid syntax
backend_1 exited with code 1
As it stands now, only the frontend container can run with the two files below:
Dockerfile:
FROM python:3.7-alpine3.12
ENV PYTHONUNBUFFERED=1
RUN mkdir /code
WORKDIR /code
COPY . /code
RUN pip install -r ./myapp/requirements.txt
FROM node:10
RUN mkdir /app
WORKDIR /app
# Copy the package.json file into our app directory
# COPY /myapp/frontend/package.json /app
COPY /myapp/frontend /app
# Install any needed packages specified in package.json
RUN npm install
EXPOSE 3000
# COPY /myapp/frontend /app
# COPY /myapp/frontend/src /app
CMD npm start
docker-compose.yml:
version: "2.0"
services:
backend:
build: .
command: python /code/myapp/manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
networks:
- reactdrf
web:
build: .
depends_on:
- backend
restart: always
ports:
- "3000:3000"
stdin_open: true
networks:
- reactdrf
networks:
reactdrf:
Project structure (relevant parts):
project (top level directory)
├ api (the django backend)
├ frontend
| ├ public
| ├ src
| └ package.json
├ myapp
| ├ manage.py
| └ requirements.txt
├ docker-compose.yml
└ Dockerfile
The interesting thing is that when I comment out one of the services in the Dockerfile or docker-compose.yml, the remaining part runs fine, i.e. each service runs perfectly well on its own with the syntax shown. Only when combined do I get the error above.
I thought this might be a Docker network issue (as in, the containers couldn't see each other), but the error seems more like a location issue, yet each service runs fine on its own.
Not sure which direction to take this, any insight appreciated.
Update:
I had misconfigured my project; here's what I did to resolve it:
I separated the frontend and backend into their own Dockerfiles, so that the first part of my Dockerfile above (the backend) is the only part remaining in the Dockerfile in the root directory, and the rest of it lives in its own Dockerfile in /frontend. (The likely cause of the error: in a multi-stage Dockerfile only the final stage ends up in the image, so the backend service was really running inside the node:10 stage, where python is Python 2 and the Python 3-only ") from exc" line in manage.py is a syntax error.)
I updated the docker-compose.yml web service's build path to point to the new location:
...
  web:
    build: ./myapp/frontend
...
It builds now, with the backend sending data to the frontend.
What I learned: use multiple Dockerfiles for multiple services, and glue them together in docker-compose.yml instead of combining them into one Dockerfile.
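As a rough sketch of that split (paths taken from the original Dockerfile; exact contents may differ), the root Dockerfile keeps only the Python stage and myapp/frontend/Dockerfile keeps only the Node stage:
# ./Dockerfile - backend only
FROM python:3.7-alpine3.12
ENV PYTHONUNBUFFERED=1
RUN mkdir /code
WORKDIR /code
COPY . /code
RUN pip install -r ./myapp/requirements.txt

# ./myapp/frontend/Dockerfile - frontend only (the build context is now myapp/frontend)
FROM node:10
WORKDIR /app
COPY . /app
RUN npm install
EXPOSE 3000
CMD npm start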

How to copy build files from one container to another or host on docker

I am trying to dockerize an application with a PHP backend and a Vue.js frontend. The backend works as I expect; however, after running npm run build within the frontend container, I need to copy the build files from the dist folder to the nginx container, or to the host and then use a volume to bring those files into the nginx container.
I tried to use a named volume:
services:
  frontend:
    .....
    volumes:
      - static:/var/www/frontend/dist
  nginx:
    .....
    volumes:
      - static:/var/www/frontend/dist
volumes:
  static:
I also tried the following, as suggested here, to bring the dist folder back to the host:
services:
  frontend:
    .....
    volumes:
      - ./frontend/dist:/var/www/frontend/dist
However, none of the above options is working for me. Below are my docker-compose.yml file and the frontend Dockerfile.
version: "3"
services:
database:
image: mysql:5.7.22
.....
backend:
build:
context: ./docker/php
dockerfile: Dockerfile
.....
frontend:
build:
context: .
dockerfile: docker/node/Dockerfile
target: 'build-stage'
container_name: frontend
stdin_open: true
tty: true
volumes:
- ./frontend:/var/www/frontend
nginx:
build:
context: ./docker/nginx
dockerfile: Dockerfile
container_name: nginx
restart: unless-stopped
ports:
- 80:80
volumes:
- ./backend/public:/var/www/backend/public:ro
- ./frontend/dist:/var/www/frontend/dist:ro
depends_on:
- backend
- frontend
Frontend Dockerfile
# Develop stage
FROM node:lts-alpine as develop-stage
WORKDIR /var/www/frontend
COPY /frontend/package*.json ./
RUN npm install
COPY /frontend .
# Build stage
FROM develop-stage as build-stage
RUN npm run build
You can combine the frontend image and the Nginx image into a single multi-stage build. This basically just involves copying your docker/node/Dockerfile as-is into the start of docker/nginx/Dockerfile and then using COPY --from=build-stage in the final image. You will also need to adjust some paths, since the build context will need to be the root of your project.
# Essentially what you had in the question
FROM node:lts AS frontend
WORKDIR /frontend
COPY frontend/package*.json ./
RUN npm install
COPY frontend .
RUN npm run build
# And then assemble the Nginx image with content
FROM nginx
COPY --from=frontend /frontend/dist /var/www/html
Once you've done this, you can completely delete the frontend container from your docker-compose.yml file. Note that it never did anything – the image didn't declare a CMD, and the docker-compose.yml didn't provide a command: to run either – and so this shouldn't really change your application.
You can use a similar technique to copy the static files from your PHP application into the Nginx proxy. When all is said and done, this leaves you with a simpler docker-compose.yml file:
version: "3.8"
services:
database:
image: mysql:5.7.22
.....
backend:
build: ./docker/php # don't need to explicitly name Dockerfile
# note: will not need volumes: to export files
.....
# no frontend container any more
nginx:
build:
context: . # because you're COPYing files from other components
dockerfile: docker/nginx/Dockerfile
restart: unless-stopped
ports:
- 80:80
# no volumes:, everything is built into the image
depends_on:
- backend
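The "similar technique" for the PHP application's static files can be as simple as one more COPY in that same final stage, because the build context is now the project root (a sketch; /var/www/backend/public is just the path the original bind mount used, so adjust it to whatever your Nginx configuration actually serves):
# final stage of docker/nginx/Dockerfile, continuing the example above
FROM nginx
COPY --from=frontend /frontend/dist /var/www/html
# public files from the PHP application, copied straight from the build context
COPY backend/public /var/www/backend/public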
(There are two practical problems with trying to share content using Docker named volumes. The first is that volumes never update their content once they're created, and in fact hide the content in their original image, so using volumes here actually causes changes in your source code to be ignored in favor of arbitrarily old content. In environments like Kubernetes, the cluster doesn't even provide the copy-on-first-use that Docker has, and there are significant limitations on how files can be shared between containers. This works better if you build self-contained images and don't try to share files.)

Run multiple Docker containers at once using docker-compose

The Problem
Currently I've created a Dockerfile and a docker-compose.yml to run my rest-api and database using docker-compose up.
What I want to do now is add another container, namely the web application (built with React). I'm a little confused about how to do that, since I only started learning Docker two days ago.
Folder Structure
This is my current folder structure:
Folder: rest-api (NodeJS)
  Dockerfile
  dockercompose.yml
The Question
In the end I want to be able to run docker-compose up to fire up both the rest-api and the web-app.
Do I need to create a separate Dockerfile in every folder and a 'global' docker-compose.yml to link everything together?
New folder structure:
dockercompose.yml
Folder: rest-api (NodeJS)
  Dockerfile
Folder: web-app (React)
  Dockerfile
My current setup to run the rest-api and database
Dockerfile
FROM node:13.10
# The destination of the app in the container
WORKDIR /usr/src/app
# Moves the package.json, package-lock.json and tsconfig.json to the specified workdir
COPY package*.json ./
COPY tsconfig.json ./
# Create user and postgres
ENV POSTGRES_USER root
ENV POSTGRES_PASSWORD 12345
ENV POSTGRES_DB postgres
ENV POSTGRES_URI 'postgres://postgres:12345@postgres:5432/postgres'
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: '3'
services:
  node:
    container_name: rest-api
    restart: always
    build: .
    environment:
      PORT: 3000
    ports:
      - '80:3000'
    links:
      - postgres
  postgres:
    container_name: postgres-database
    image: postgres
    environment:
      POSTGRES_URI: 'postgres://postgres:12345@postgres-database:5432/postgres'
      POSTGRES_PASSWORD: 12345
    ports:
      - '5432:5432'
OK, so there are quite a few ways to approach this, and it mostly comes down to your preference.
If you want to go with your proposed folder structure (which is fine), then you can, for example, do it like so:
Have a Dockerfile in the root of each of your applications which builds that specific application (as you already suggested), place your docker-compose.yml file in the parent folder of both applications (exactly as you proposed), and then make some changes to your docker-compose.yml (I only kept the essential parts; note that links are no longer necessary, since the internal network will resolve the service names to the corresponding container IP addresses):
version: '3'
services:
  node:
    build:
      context: rest-api
    environment:
      PORT: 3000
    ports:
      - '3000:3000'
  web:
    image: web-app
    build:
      context: web-app
    ports:
      - 80:80
  postgres:
    image: postgres
    environment:
      POSTGRES_URI: 'postgres://postgres:12345@postgres-database:5432/postgres'
      POSTGRES_PASSWORD: 12345
    ports:
      - '5432:5432'
So the context is what tells Docker that what you are building is actually in a different directory, and all of the commands executed in the Dockerfile will be relative to that folder.
I also changed the port mappings, because you will probably want to access your web app via the HTTP port. Note that the web-app will be able to communicate with the rest-api container by using the node hostname, as long as the node service binds to 0.0.0.0:3000 (not 127.0.0.1:3000).
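The question doesn't show a Dockerfile for the web-app, but a minimal sketch of one that fits the 80:80 mapping above could build the React app and serve it with Nginx (the nginx base image and the output paths here are assumptions, not something from the original setup):
# web-app/Dockerfile (sketch)
FROM node:13.10 AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# serve the static build on port 80
FROM nginx:alpine
COPY --from=build /usr/src/app/build /usr/share/nginx/html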

Node.js docker container not updating to changes in volume

I am trying to host a development environment on my Windows machine, which runs a frontend and a backend container. So far I have only been working on the backend. All files are on the C drive, which is shared via Docker Desktop.
I have the following docker-compose file and Dockerfile; the latter is inside a directory called backend within the root directory.
Dockerfile:
FROM node:12.15.0-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
EXPOSE 5000
CMD [ "npm", "start" ]
docker-compose.yml:
version: "3"
services:
backend:
container_name: backend
build:
context: ./backend
dockerfile: Dockerfile
volumes:
- ./backend:/usr/app
environment:
- APP_PORT=80
ports:
- '5000:5000'
client:
container_name: client
build:
context: ./client
dockerfile: Dockerfile
volumes:
- ./client:/app
ports:
- '80:8080'
For some reason, when I make changes to my local files they are not reflected inside the container. I am testing this by slightly modifying the output of one of my files, but I have to rebuild the container each time to see the change take effect.
I have worked with Docker in PHP applications before and basically did the same thing, so I am unsure why this is not working with my Node.js app. I am wondering if I am just missing something glaringly obvious.
Any help would be appreciated.
The difference between Node and PHP here is that PHP automatically picks up file system changes between requests, but a Node server doesn't.
I think you'll see that the file changes get picked up if you restart Node by bouncing the container with docker-compose down then up (no need to rebuild anything!).
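In other words, from the directory containing your docker-compose.yml:
docker-compose down
docker-compose up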
If you want Node to pick up file system changes without needing to bounce the server, you can use some of the Node tooling. nodemon is one option: https://www.npmjs.com/package/nodemon. Follow the installation instructions for local installation and update your start script to use nodemon instead of node.
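A minimal sketch of what that could look like in package.json, assuming the entry point is server.js (use whatever file your current start script runs); the -L flag makes nodemon poll for changes, which is often needed when the code lives in a bind mount on Windows:
{
  "scripts": {
    "start": "nodemon -L server.js"
  },
  "devDependencies": {
    "nodemon": "^2.0.4"
  }
}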
Plus, I really do think you have a mistake in your Dockerfile: you need to copy the source code into your working directory. I'm assuming you got your initial recipe from here: https://dev.to/alex_barashkov/using-docker-for-nodejs-in-development-and-production-3cgp. The Dockerfile from that article is below. You missed a step (the COPY . . line)!
FROM node:10-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD [ "npm", "start" ]
