How to use a file from the home directory as a docker compose secret? - node.js

I am trying to build a docker container with private node packages in it. I have followed this guide to use secrets to reference an .npmrc file securely in order to install the dependencies. I can get this to work when building the image directly with a command like this: docker build --secret id=npm,src=$HOME/.npmrc . but I cannot get it working with docker compose. When running docker compose build, it acts as if there is no .npmrc file and gives me a 401 when trying to download the dependencies.
I provided a stripped down version of Dockerfile and docker-compose.yml below.
Dockerfile
# syntax = docker/dockerfile:1.2
FROM node:14.17.1
COPY . .
RUN --mount=type=secret,id=npm,target=/root/.npmrc yarn --frozen-lockfile --production
EXPOSE 3000
CMD [ "npm", "start" ]
docker-compose.yml
version: '3.7'
services:
  example:
    build: packages/example
    ports:
      - "3000:3000"
    secrets:
      - npm
secrets:
  npm:
    file: ${HOME}/.npmrc

The problem appears to be that my docker-compose.yml specifies the secret for the runtime of the container rather than for build time. Support for build secrets in docker compose has not been implemented yet. Here is the outstanding PR: https://github.com/docker/compose/pull/7046.
For now, I have to build the image using docker build ... and reference the named image locally in docker-compose.yml instead of building through docker compose.
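A minimal sketch of that interim workaround (the example:local tag and the image reference are my own choices, not from the question):

# Build the image outside of Compose, passing the npmrc secret to BuildKit
docker build --secret id=npm,src=$HOME/.npmrc -t example:local packages/example
# docker-compose.yml then references the pre-built image instead of a build context:
#   example:
#     image: example:local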

Since docker-compose v2.5.0 this is now possible.
Dockerfile:
# syntax=docker/dockerfile:1.2
# (base image added here so the snippet builds; any image works)
FROM node:14.17.1
RUN --mount=type=secret,id=mysecret,target=/root/mysecret cat /root/mysecret
docker-compose.yml
services:
  my-app:
    build:
      context: .
      secrets:
        - mysecret
secrets:
  mysecret:
    file: ~/.npmrc
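With the secret declared under build, the image can be built straight through Compose. A usage sketch, assuming the Compose v2 CLI plugin (v2.5.0 or newer), where BuildKit is the default builder:

# The file-backed secret is forwarded to the BuildKit build
docker compose build my-app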

Related

How to build a react/vue application outside of a docker container

I have several applications (Vue & React; each is built on a different version of Node). I want to set up project deployment so that I can run a docker container with the correct version of Node for each of the projects. The build (npm i & npm run build) should happen in the container, but I want to hand the result from the container over to /var/www/project_name on the server itself.
Next, I will set up a container with nginx which, depending on the subdomain, will serve the desired build.
My question is: how do I get the folder with the built files out of the container and into the host operating system?
my docker-compose file:
version: "3.1"
services:
  redis:
    restart: always
    image: redis:alpine
    container_name: redis
  build-adminapp:
    build: adminapp/
    container_name: adminapp
    working_dir: /var/www/adminapp
    volumes:
      - ./adminapp:/var/www/adminapp
  build-clientapp:
    build: clientapp/
    container_name: clientapp
    working_dir: /var/www/clientapp
    volumes:
      - ./clientapp:/var/www/clientapp
my docker files:
FROM node:10-alpine as build
# Create app directory
WORKDIR /var/www/adminapp/
COPY . /var/www/adminapp/
RUN npm install
RUN npm run build
second docker file:
FROM node:12-alpine as build
# Create app directory
WORKDIR /var/www/clientapp/
COPY . /var/www/clientapp/
RUN npm install
RUN npm run build
If you already have a running container, you can use the docker cp command to move files between the local machine and docker containers.
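For example, assuming the Vue build output ends up in /var/www/adminapp/dist inside the adminapp container (the dist folder name is an assumption; adjust it to whatever your build produces):

# Copy the build output from the running container onto the host
docker cp adminapp:/var/www/adminapp/dist /var/www/project_name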

Using Docker with Node image to develop a VuejS (NuxtJs) app

The situation
I have to work on a VueJS (NuxtJS) SPA, so I'm trying to use Docker with a Node image to avoid installing it on my PC, but I can't figure out how to make it work.
The project
The source code is in its own application folder, since it is versioned, and at the root level there is the docker-compose.yaml file.
The folder structure
my-project-folder
├ application
| └ ...
└ docker-compose.yaml
The docker-compose.yaml
version: "3.3"
services:
  node:
    # container_name: prova_node
    restart: 'no'
    image: node:lts-alpine
    working_dir: /app
    volumes:
      - ./application:/app
The problem
The container starts but quits immediately with exit status 0 (so it executed correctly), but this way I can't use it to work on the project.
There is probably something I'm missing about the Node image or Docker in general; what I would like to do is connect to the docker container to run npm commands like install, run start, etc., and then check the application in the browser on localhost:3000 or whatever it is.
I would suggest using a Dockerfile with node as the base image and then creating an entrypoint that runs the application. That eliminates the need for volumes, which are used when we want to maintain some state for our containers.
Your Dockerfile may look something like this:
FROM node:lts-alpine
RUN mkdir /app
COPY application/ /app/
EXPOSE 3000
CMD npm start --prefix /app
You can then either run it directly through the docker run command or use a docker-compose.yaml like the following:
version: "3.3"
services:
  node:
    # container_name: prova_node
    restart: 'no'
    build:
      context: .
    ports:
      - 3000:3000
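A usage sketch for either route (the my-nuxt-app tag is my own choice):

# docker run route: build the image, then publish port 3000
docker build -t my-nuxt-app .
docker run --rm -p 3000:3000 my-nuxt-app
# docker compose route: build and start the service defined above
docker compose up --build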

Node.js docker container not updating to changes in volume

I am trying to host a development environment on my Windows machine which hosts a frontend and backend container. So far I have only been working on the backend. All files are on the C Drive which is shared via Docker Desktop.
I have the following docker-compose file and Dockerfile, the latter is inside a directory called backend within the root directory.
Dockerfile:
FROM node:12.15.0-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
EXPOSE 5000
CMD [ "npm", "start" ]
docker-compose.yml:
version: "3"
services:
  backend:
    container_name: backend
    build:
      context: ./backend
      dockerfile: Dockerfile
    volumes:
      - ./backend:/usr/app
    environment:
      - APP_PORT=80
    ports:
      - '5000:5000'
  client:
    container_name: client
    build:
      context: ./client
      dockerfile: Dockerfile
    volumes:
      - ./client:/app
    ports:
      - '80:8080'
For some reason, when I make changes in my local files, they are not reflected inside the container. I am testing this by slightly modifying the output of one of my files, but I am having to rebuild the container each time to see the changes take effect.
I have worked with Docker in PHP applications before and have basically done the same thing, so I am unsure why this is not working with my Node.js app. I am wondering if I am just missing something glaringly obvious as to why this is not working.
Any help would be appreciated.
The difference between Node and PHP here is that PHP automatically picks up file system changes between requests, but a Node server doesn't.
I think you'll see that the file changes get picked up if you restart node by bouncing the container with docker-compose down then up (no need to rebuild things!).
If you want node to pick up file system changes without needing to bounce the server you can use some of the node tooling. nodemon is one: https://www.npmjs.com/package/nodemon. Follow the installation instructions for local installation and update your start script to use nodemon instead of node.
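A sketch of the nodemon route (server.js is a placeholder for whatever your start script currently runs):

# Install nodemon as a dev dependency of the backend project
npm install --save-dev nodemon
# Then change the start script in package.json to use nodemon instead of node, e.g.:
#   "start": "nodemon server.js"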
Plus, I really do think you have a mistake in your Dockerfile: you need to copy the source code into your working directory. I'm assuming you got your initial recipe from here: https://dev.to/alex_barashkov/using-docker-for-nodejs-in-development-and-production-3cgp. The Dockerfile from that article is below. You missed a step!
FROM node:10-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD [ "npm", "start" ]

1 way sync instead of 2 way sync for Docker Volume?

I am using Docker Compose for my local development environment for a Full Stack Javascript project.
Part of my Docker Compose file looks like this:
version: "3.5"
services:
  frontend:
    build:
      context: ./frontend/
      dockerfile: dev.Dockerfile
    env_file:
      - .env
    ports:
      - "${FRONTEND_PORT_NUMBER}:${FRONTEND_PORT_NUMBER}"
    container_name: frontend
    volumes:
      - ./frontend:/code
      - frontend_deps:/code/node_modules
      - ../my-shared-module:/code/node_modules/my-shared-module
I am trying to develop a custom Node module called my-shared-module, which is why I added - ../my-shared-module:/code/node_modules/my-shared-module to the Docker Compose file. The node module is hosted in a private Git repo, and is defined like this in package.json:
"dependencies": {
"my-shared-module": "http://gitlab+deploy-token....#gitlab.com/.....git",
My problem is:
When I update my node modules in the docker container using npm install, it downloads my-shared-module from my private Git repo into /code/node_modules/my-shared-module, and that overwrites the files in the host ../my-shared-module, because they are synced.
So my question is: is it possible to have 1-way volume sync in Docker?
when the host changes, update the container
when the container changes, don't update the host?
Unfortunately, I don't think this is possible in Docker. Mounting a host volume is always two-way, unless you consider a read-only mount to be one-way, but that prevents you from being able to modify the file system with things like npm install.
Your best options here would either be to rebuild the image with the new files each time, or bake into your CMD a step to copy the mounted files into a new folder outside of the mounted volume. That way any file changes won't be persisted back to the host machine.
You can script something to do this. Mount your host node_modules to another directory inside the container, and in the entrypoint, copy the directory:
version: "3.5"
services:
  frontend:
    build:
      context: ./frontend/
      dockerfile: dev.Dockerfile
    env_file:
      - .env
    ports:
      - "${FRONTEND_PORT_NUMBER}:${FRONTEND_PORT_NUMBER}"
    container_name: frontend
    volumes:
      - ./frontend:/code
      - frontend_deps:/code/node_modules
      - /code/node_modules/my-shared-module
      - ../my-shared-module:/host/node_modules/my-shared-module:ro
Then add an entrypoint script to your Dockerfile with something like:
#!/bin/sh
if [ -d /host/node_modules/my-shared-module ]; then
  cp -r /host/node_modules/my-shared-module/. /code/node_modules/my-shared-module/.
fi
exec "$@"
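To wire the script in, it also needs to be executable and registered in the Dockerfile; the answer above does not show that part, so the following is an assumption (including the entrypoint.sh file name):

# On the host, make the script executable before building the image
chmod +x entrypoint.sh
# In dev.Dockerfile, copy it in and register it (Dockerfile lines shown here as comments):
#   COPY entrypoint.sh /entrypoint.sh
#   ENTRYPOINT ["/entrypoint.sh"]
#   CMD ["npm", "start"]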

Integration Tests with Docker and Bitbucket pipelines

I would like to run my integration tests as a part of the Bitbucket pipelines CI. My integration tests test a NodeJS backend that runs against an empty MongoDB database. To enable this I want to create a Docker Image that Bitbucket pipelines can pull from a docker image repository.
My bitbucket-pipelines.yml will be something like:
image: <my image with nodejs and a mongodb linked to it>
pipelines:
  default:
    - step:
        script:
          - npm test
Now I only need to create a docker image with nodejs and mongodb configured properly. I am able to build an environment by creating the following docker-compose.yml file:
version: "2"
services:
  web:
    build: .
    volumes:
      - ./:/app
    ports:
      - "3000:3000"
      - "9090:8080"
    links:
      - mongo
  mongo:
    image: mongo
    ports:
      - "27018:27017"
My Dockerfile:
FROM node:7.9
RUN mkdir /app
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 3000
CMD ["npm", "run", "dev"]
Problem - Question
I can run my environment locally with docker compose, but how can I make a single image instead of using docker compose, so that I can publish that image publicly and use it in my Bitbucket CI? I am still new to Docker, but I already understood from the documentation that trying to install MongoDB on top of my Node.js image is a red flag.
Bitbucket Pipelines doesn't have native support for docker compose yet.
However, you can define up to 3 services in bitbucket-pipelines.yml. Documentation is available at: https://confluence.atlassian.com/bitbucket/service-containers-for-bitbucket-pipelines-874786688.html
