Docker with node bcrypt — invalid ELF header - node.js

I've tried every solution from this post and this post.
I'm still not finding a way to get rid of the following error when running docker-compose up:
module.js:598
return process.dlopen(module, path._makeLong(filename));
^
Error: /code/node_modules/bcrypt/lib/binding/bcrypt_lib.node: invalid ELF header
Here's my latest attempt docker-compose.yml
version: "2"
services:
  app:
    build: ./client
    ports:
      - "3000:3000"
    links:
      - auth
    volumes:
      - ./client:/code
  auth:
    build: ./auth-service
    ports:
      - "3002:3002"
    links:
      - db
    volumes:
      - ./auth-service:/code
  db:
    ...
And my auth service Dockerfile:
FROM node:7.7.1
EXPOSE 3002
WORKDIR /code
COPY package.json /code
RUN npm install
COPY . /code
CMD npm start
After trying each of the solutions from the two links above, I rebuilt the containers, and it always results in the same error.
Also worth noting: the service runs fine locally, when I don't use Docker.
How do I get docker to work with bcrypt?
Update
I was able to get it working by doing the following:
1. Find the id of the container: docker ps
2. Access the container: docker exec -t -i containerId /bin/bash
3. Install bcrypt: npm install bcrypt
This isn't ideal for portability.
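A more portable sketch is to bake the fix into the image itself by recompiling bcrypt's native binding during the build. The lines below extend the Dockerfile from the question; the npm rebuild step is my addition, not from the original post:

```dockerfile
FROM node:7.7.1
EXPOSE 3002
WORKDIR /code
COPY package.json /code
RUN npm install
COPY . /code
# Recompile the native addon for the container's OS, so a binding built
# on the host (and leaking in via a bind mount) can't cause the ELF error
RUN npm rebuild bcrypt --build-from-source
CMD npm start
```

Combined with adding node_modules to .dockerignore, this avoids having to exec into the container by hand.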

I spent a few hours trying to solve this, and in the end I came up with the following solution.
My compose file looks like this:
version: "3"
services:
  server:
    build:
      context: ./server
    volumes:
      - ./server:/usr/src/app
      - /usr/src/app/node_modules/
    ports:
      - 3050:3050
    depends_on:
      - db
    command: ["nodemon", "./bin/www"]
The second volume mount there is the important one, as it gets around the local node_modules issue.
Just for reference, my Dockerfile looks like this:
FROM node
RUN npm install -g nodemon
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
EXPOSE 3050
CMD ["nodemon", "./bin/www"]

I struggled with this one a few times. Sadly, the ".dockerignore" solution won't work if you use volumes, since it only affects the COPY command, and only when you build the image.
The only solution I found that I think makes the most sense is to mount as a volume only what you need. What does that mean? Divide your source code from the "configuration" files (such as package.json):
- src
-- morecode
-- morecode2
-- server.js
- index.js
- package.json
- jslint.json
- tsconfig.json
- .env
- .dockerignore
- ... etc
and put the volume only on the "src" folder. That way your builds will also be much faster, plus your node modules will be built and installed for the correct operating system. Don't forget to add node_modules to .dockerignore to prevent builds from taking unnecessarily long.
Do note that doing so will require a rebuild of the application every time you add a new package; but if you follow npm best practice and split the npm installation in your Dockerfile so it can be cached, it will be faster.
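The cached-install split mentioned above can be sketched like this (base image and paths are illustrative, matching the layout shown earlier):

```dockerfile
FROM node:lts-alpine
WORKDIR /app
# Copy only the manifest first: this layer (and the npm install below)
# stays cached until package.json changes
COPY package.json ./
RUN npm install
# Source changes only invalidate layers from here down,
# so adding code doesn't re-run npm install
COPY ./src ./src
COPY ./index.js ./index.js
CMD ["node", "index.js"]
```

With the volume mounted only on ./src, edits to source files reach the container while node_modules stays the one built inside the image.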

The reason this error occurs is that the node module bcrypt is first compiled on your original machine (specifically for its OS), and when the image is built on Docker it cannot run, since the OS is no longer the same. Solution: create a .dockerignore file in your root folder and add node_modules to it.
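A minimal .dockerignore for this fix is a single line (the extra entry is just a common addition, not required):

```
node_modules
npm-debug.log
```

With this in place, COPY . /code no longer drags the host-compiled bcrypt binding into the image, and npm install inside the build compiles it for the container's OS.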

Related

Docker - volumes explanation

As far as I know, a volume in Docker is persistent data for the container, which can map a local folder to a container folder.
Early on, I faced the Error: Cannot find module 'winston' issue in Docker, which is mentioned in:
docker - Error: Cannot find module 'winston'
Someone told me in this post:
Remove volumes: - ./:/server from your docker-compose.yml. It overrides the whole directory that contains node_modules in the container.
After I removed volumes: - ./:/server, the above problem was solved.
However, another problem occurs.
[solved but want explanation]nodemon --legacy-watch src/ not working in Docker
I solved the above issue by adding back volumes: - ./:/server, but I don't know the reason why.
Question
What are the cause and explanation of the above 2 issues?
What happens between build and volumes, and what is the relationship between build: and volumes: in docker-compose.yml?
Dockerfile
FROM node:lts-alpine
RUN npm install --global sequelize-cli nodemon
WORKDIR /server
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3030
CMD ["npm", "run", "dev"]
docker-compose.yml
version: '2.1'
services:
  test-db:
    image: mysql:5.7
    ...
  test-web:
    environment:
      - NODE_ENV=local
      - PORT=3030
    build: .          # <-- it takes the Dockerfile in the current directory
    command: >
      ./wait-for-db-redis.sh test-db npm run dev
    volumes:
      - ./:/server    # <-- how and when does this line work?
    ports:
      - "3030:3030"
    depends_on:
      - test-db
When you don't have any volumes:, your container runs the code that's built into the image. This is good! But, the container filesystem is completely separate from the host filesystem, and the image contains a fixed copy of your application. When you change your application, after building and testing it in a non-Docker environment, you need to rebuild the image.
If you bind-mount a volume over the application directory (.:/server) then the contents of the host directory replace the image contents; any work you do in the Dockerfile gets completely ignored. This also means /server/node_modules in the container is ./node_modules on the host. If the host and container environments don't agree (MacOS host/Linux container; Ubuntu host/Alpine container; ...) there can be compatibility issues that cause this to break.
If you also mount an anonymous volume over the node_modules directory (/server/node_modules) then only the first time you run the container the node_modules directory from the image gets copied into the volume, and then the volume content gets mounted into the container. If you update the image, the old volume contents take precedence (changes to package.json get ignored).
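A practical consequence of that first-run-only copy: after adding a dependency, the stale anonymous volume has to be discarded before the new node_modules appears. One way to do that with the standard Compose CLI (shown as a sketch):

```shell
docker-compose build      # bake the new package.json into the image
docker-compose down -v    # remove containers and their anonymous volumes
docker-compose up         # the fresh volume is re-seeded from the new image
```

The -v flag is what drops the old volume; without it, up reattaches the previous node_modules content.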
When the image is built only the contents of the build: block have an effect. There are no volumes: mounted, environment: variables aren't set, and the build environment isn't attached to networks:.
The upshot of this is that if you don't have volumes at all:
version: '3.8'
services:
  app:
    build: .
    ports: ['3000:3000']
It is completely disconnected from the host environment. You need to docker-compose build the image again if your code changes. On the other hand, you can docker push the built image to a registry and run it somewhere else, without needing a separate copy of Node or the application source code.
If you have a volume mount replacing the application directory then everything in the image build is ignored. I've seen some questions that take this to its logical extent and skip the image build entirely, just bind-mounting the host directory over an unmodified node image. There's not really any benefit to using Docker here, especially for a front-end application; install Node instead of installing Docker and use ordinary development tools.

Node.js docker container not updating to changes in volume

I am trying to host a development environment on my Windows machine which hosts a frontend and backend container. So far I have only been working on the backend. All files are on the C Drive which is shared via Docker Desktop.
I have the following docker-compose file and Dockerfile, the latter is inside a directory called backend within the root directory.
Dockerfile:
FROM node:12.15.0-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
EXPOSE 5000
CMD [ "npm", "start" ]
docker-compose.yml:
version: "3"
services:
  backend:
    container_name: backend
    build:
      context: ./backend
      dockerfile: Dockerfile
    volumes:
      - ./backend:/usr/app
    environment:
      - APP_PORT=80
    ports:
      - '5000:5000'
  client:
    container_name: client
    build:
      context: ./client
      dockerfile: Dockerfile
    volumes:
      - ./client:/app
    ports:
      - '80:8080'
For some reason, when I make changes to my local files they are not reflected inside the container. I am testing this by slightly modifying the output of one of my files, but I am having to rebuild the container each time to see the changes take effect.
I have worked with Docker in PHP applications before, and have basically done the same thing there. So I am unsure why this is not working with my Node.js app. I am wondering if I am just missing something glaringly obvious as to why this is not working.
Any help would be appreciated.
The difference between Node and PHP here is that PHP automatically picks up file system changes between requests, but a Node server doesn't.
I think you'll see that the file changes get picked up if you restart node by bouncing the container with docker-compose down then up (no need to rebuild things!).
If you want node to pick up file system changes without needing to bounce the server you can use some of the node tooling. nodemon is one: https://www.npmjs.com/package/nodemon. Follow the installation instructions for local installation and update your start script to use nodemon instead of node.
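For example, the scripts section of package.json might end up like this (assuming nodemon is installed as a dev dependency and the entrypoint is server.js; both names are illustrative):

```json
{
  "scripts": {
    "start": "node server.js",
    "dev": "nodemon --legacy-watch server.js"
  }
}
```

The --legacy-watch flag makes nodemon poll for changes, which tends to be necessary when the files live on a Windows bind mount.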
Plus, I really do think you have a mistake in your Dockerfile: you need to copy the source code into your working directory. I'm assuming you got your initial recipe from here: https://dev.to/alex_barashkov/using-docker-for-nodejs-in-development-and-production-3cgp. That Dockerfile is below. You missed a step!
FROM node:10-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD [ "npm", "start" ]

Handling node modules with docker-compose

I am building a set of connected node services using docker-compose and can't figure out the best way to handle node modules. Here's what should happen in a perfect world:
Full install of node_modules in each container happens on initial build via each service's Dockerfile
Node modules are cached after the initial load -- i.e. functionality so that npm only installs when package.json has changed
There is a clear method for installing npm modules -- whether it needs to be rebuilt or there is an easier way
Right now, whenever I npm install --save some-module and subsequently run docker-compose build or docker-compose up --build, I end up with the module not actually being installed.
Here is one of the Dockerfiles
FROM node:latest
# Create app directory
WORKDIR /home/app/api-gateway
# Intall app dependencies (and cache if package.json is unchanged)
COPY package.json .
RUN npm install
# Bundle app source
COPY . .
# Run the start command
CMD [ "npm", "dev" ]
and here is the docker-compose.yml
version: '3'
services:
  users-db:
    container_name: users-db
    build: ./users-db
    ports:
      - '27018:27017'
    healthcheck:
      test: exit 0
  api-gateway:
    container_name: api-gateway
    build: ./api-gateway
    command: npm run dev
    volumes:
      - './api-gateway:/home/app/api-gateway'
      - /home/app/api-gateway/node_modules
    ports:
      - '3000:3000'
    depends_on:
      - users-db
    links:
      - users-db
It looks like this line might be overwriting your node_modules directory:
# Bundle app source
COPY . .
If you ran npm install on your host machine before running docker build to create the image, you have a node_modules directory on your host machine that is being copied into your container.
What I like to do to address this problem is copy the individual code directories and files only, eg:
# Copy each directory and file
COPY ./src ./src
COPY ./index.js ./index.js
If you have a lot of files and directories this can get cumbersome, so another method would be to add node_modules to your .dockerignore file. This way it gets ignored by Docker during the build.
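A .dockerignore along those lines would be (the extra entries are common additions, not required for the fix):

```
node_modules
npm-debug.log
.git
```

With node_modules excluded from the build context, COPY . . can no longer clobber the freshly installed modules inside the image, so docker-compose up --build picks up newly saved dependencies.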

Docker not propagating file changes from host to container

I am aiming to configure docker so that when I modify a file on the host the change is propagated inside the container file system.
You can think of this as hot reloading for server side node code.
The nodemon file watcher should restart the server in response to file changes.
However, these file changes on the host volume don't seem to be reflected inside the container. When I inspect the container using docker exec pokerspace_express_1 bash and inspect a modified file, the changes are not propagated inside the container from the host.
Dockerfile
FROM node:8
MAINTAINER therewillbecode
# Create app directory
WORKDIR src/app
RUN npm install nodemon -g
# Install app dependencies
COPY package.json .
# For npm#5 or later, copy package-lock.json as well
# COPY package.json package-lock.json ./
RUN npm install
CMD [ "npm", "start" ]
docker-compose.yml
version: '2'
services:
  express:
    build: .
    depends_on:
      - mongo
    environment:
      - MONGO_URL=mongo:27017/test
      - SERVER_PORT=3000
    volumes:
      - ./:/src/app
    ports:
      - '3000:3000'
    links:
      - mongo
  mongo:
    image: mongo
    ports:
      - '27017:27017'
  mongo-seed:
    build: ./mongo-seed
    links:
      - mongo
.dockerignore
.git
.gitignore
README.md
docker-compose.yml
How can I ensure that host volume file changes are reflected in the container?
Try something like this in your Dockerfile:
CMD ["nodemon", "-L"]
Some people had a similar issue and were able to resolve it by passing -L (which means "legacy watch") to nodemon.
References:
https://github.com/remy/nodemon/issues/419
http://fostertheweb.com/2016/02/nodemon-inside-docker-container/#why-isnt-nodemon-reloading
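Putting that together with the Dockerfile from the question, the CMD might become the following (the server.js entry point is an assumption here, standing in for whatever npm start runs):

```dockerfile
# -L forces polling ("legacy watch"), which detects changes
# that inotify events don't deliver across bind mounts
CMD ["nodemon", "-L", "server.js"]
```

Alternatively, keep CMD [ "npm", "start" ] and add the -L flag to the nodemon invocation inside the package.json start script.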
Right, so with Docker we need to re-build the image or figure out some clever solution.
You probably do not want to rebuild the image every time you make a change to your source code.
Let's figure out a clever solution. Let's generalize the Dockerfile a bit to solve your problem and also help others.
So this is the boilerplate Dockerfile:
FROM node:alpine
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
Remember, during the image-building process we are creating a temporary container. When we make the copies we are essentially taking a snapshot of the contents of /src and /public. It's a snapshot that is locked in time and, by default, will not be updated when you change the code.
So in order to pick up changes to the files in /src and /public, we need to abandon doing a straight copy and instead adjust the docker run command that we use to start up our container.
We are going to make use of a feature called a volume.
With a Docker volume we set up a placeholder inside our Docker container, so instead of copying over our entire /src directory we put a reference to those files, which gives us access to the files and folders on the local machine.
We are setting up a mapping from a folder inside the container to a folder outside the container. The command to use is a bit painful, but once it's documented here you can bookmark this answer.
docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app <image_id>
-v $(pwd):/app sets up a volume from the present working directory. This is a shortcut: we are saying take the present working directory, with everything inside of it, and map it into our running container. It's long-winded, I know.
To implement this you will have to first rebuild your docker image by running:
docker build -f Dockerfile.dev .
Then run:
docker run -p 3000:3000 -v $(pwd):/app <image_id>
Then you are going to very quickly get an error message: the react-scripts not found error. You see that message because I skipped the -v /app/node_modules.
So what's up with that?
The volume command sets up a mapping, and when we do that we are saying: take everything inside our present working directory and map it up to our /app folder. The issue is that there is no node_modules folder in the present working directory, which is where all our dependencies exist.
So the /node_modules folder got overwritten.
We are essentially pointing at nothing, and that's why we need that -v /app/node_modules with no colon. The colon is for mapping a folder outside the container to a folder inside the container; without the colon we are saying: treat it as a placeholder, don't map it against anything.
Now, go ahead and run: docker run -p 3000:3000 -v $(pwd):/app <image_id>
Once done, you can make all the changes you want to your project and see them "hot reload" in your browser. No need to figure out how to implement nodemon.
So what's happening there is that any changes made to your local file system get propagated into your container; the server inside your container sees the change and updates.
Now, I know it's hard and annoying to remember such a long command; enter Docker Compose.
We can make use of Docker Compose to dramatically simplify the command we have to run to start up the container.
So to implement that you create a Docker Compose file and inside of it you will include the port setting and the two volumes that you need.
Inside your root project, make a new file called docker-compose.yml.
Inside there you will add this:
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
Then run: docker-compose up
Daniel's answer partially worked for me, but hot reloading still doesn't work. I'm using a Windows host and had to change his docker-compose.yml to:
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - /App/node_modules
      - .:/App
(I changed the volume arguments from /app/node_modules to /App/node_modules and from .:/app to .:/App. This enables changes to be passed to the container; however, hot reloading still doesn't work. I have to use docker-compose up --build each time I want to refresh the app.)
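On Windows hosts, file-change notifications often don't cross the bind mount, so the watcher inside the container never fires; switching the watcher to polling is a common workaround. For chokidar-based watchers (which react-scripts uses), that can be done with an environment variable in the compose file. A sketch, assuming the same service layout as above:

```yaml
services:
  web:
    environment:
      - CHOKIDAR_USEPOLLING=true
```

For nodemon-based servers, the equivalent is the -L / --legacy-watch flag.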

How do I populate a volume in a docker-compose.yaml

I am starting to write my first docker-compose.yml file to set up a combination of services that make up my application (all Node.js). One of the services (web-server; bespoke, not Express) has both a large set of modules it needs and an even larger set of bower_components.
In order to provide separation of concerns, and so I can control the versioning more closely, I want to create two named volumes which hold the node_modules and bower_components, and mount those volumes onto the relevant directories of the web-server service.
The question that is confusing me is how to get these two volumes populated on service startup. There are two reasons for my confusion:
The behaviour of docker-compose with the -d flag versus the docker run command with the -d flag: the web service obviously needs to keep running (and indeed needs to be restarted if it fails), whereas a container that might populate one or other of the volumes runs once as the whole application is brought up with the docker-compose up command. Can I control this?
A running service versus the build commands of that service: could I actually use Dockerfiles to run npm install and bower install? In particular, if I change the source code of the web application but the modules and bower_components don't change, will this build step be near-instantaneous because of a cached result?
I have been unable to find examples of this sort of behaviour, so I am puzzled as to how to go about doing it. Can someone help?
I did something like that, without bower but with Node.js tools like Sass, Hall, live reload, jasmine...
I used npm for all installations, inside the npm project (not global installs).
For that, the official node image works quite well; I only had to add the app/node_modules/.bin directory to the PATH. So my Dockerfile looks like this (very simple):
FROM node:7.5
ENV PATH /usr/src/app/node_modules/.bin/:$PATH
My docker-compose.yml file is :
version: '2'
services:
  mydata:
    image: busybox
    stdin_open: true
    volumes:
      - .:/usr/src/app
  node:
    build: .
    image: mynodecanvassvg
    working_dir: /usr/src/app
    stdin_open: true
    volumes_from:
      - mydata
  sass:
    depends_on:
      - node
    image: mynodecanvassvg
    working_dir: /usr/src/app
    volumes_from:
      - mydata
    #entrypoint: "node-sass -w -r -o public/css src/scss"
    stdin_open: true
  jasmine:
    depends_on:
      - node
    image: mynodecanvassvg
    working_dir: /usr/src/app
    volumes_from:
      - mydata
    #entrypoint: "jasmine-node --coffee --autoTest tests/coffee"
    stdin_open: true
  live:
    depends_on:
      - node
    image: mynodecanvassvg
    working_dir: /usr/src/app
    volumes_from:
      - mydata
    ports:
      - 35729:35729
    stdin_open: true
My only trouble is with the entrypoints, which all need a terminal to display results while running. So I use stdin_open: true to keep each container active, and then docker exec -it on each container to run each watch service.
And of course I launch docker-compose with -d to keep it running as a daemon.
Next, put your npm package.json in your app folder (next to the Dockerfile and docker-compose.yml) and run npm update to load and install the modules.
I'll start with the standard way first
2. Dockerfile
Using a Dockerfile avoids trying to work out how to setup docker-compose service dependencies or external build scripts to get volumes populated and working before a docker-compose up.
A Dockerfile can be setup so only changes to the bower.json and package.json will trigger a reinstall of node_modules or bower_components.
The command that installs first will, at some point, have to invalidate the second command's cache, so the order you put them in matters. Whichever updates least often, or is significantly slower, should go first. You may need to install bower globally if you want to run the bower command first.
If you are worried about NPM versioning, look at using yarn and a yarn.lock file. Yarn will speed things up a little bit too. Bower can just pin specific versions, as it doesn't have the same sub-module versioning issues NPM does.
File Dockerfile
FROM mhart/alpine-node:6.9.5
RUN npm install bower -g
WORKDIR /app
COPY package.json /app/
RUN npm install --production
COPY bower.json /app/
RUN bower install
COPY / /app/
CMD ["node", "server.js"]
File .dockerignore
node_modules/
bower_components/
This is all supported in a docker-compose build: stanza
1. Docker Compose + Volumes
The easiest/quickest way to populate a volume is by defining a VOLUME in the Dockerfile after the directory has been populated in the image. This works via Compose too. I'd question the point of using a volume when the image already has the required content, though...
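A sketch of that Dockerfile approach (file names are illustrative): the key detail is that VOLUME is declared after the install step, so when a container first attaches an empty volume there, Docker seeds it from the image's populated directory.

```dockerfile
FROM mhart/alpine-node:6.9.5
WORKDIR /app
COPY package.json /app/
RUN npm install --production
# Declared AFTER npm install: on first use, the image's /app/node_modules
# content is copied into the (empty) volume mounted here
VOLUME /app/node_modules
COPY . /app/
CMD ["node", "server.js"]
```

Note the seeding only happens when the volume is empty; updating package.json later requires removing the old volume as well as rebuilding the image.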
Any other methods of population will require some custom build scripts outside of compose. One option would be to docker run a container with the required volume attached and populate it with npm/bower install.
docker run \
  --volume myapp_bower_components:/bower_components \
  --volume "$(pwd)/bower.json":/bower.json \
  mhart/alpine-node:6.9.5 \
  sh -c 'npm install bower -g && bower install'
and
docker run \
  --volume myapp_node_modules:/node_modules \
  --volume "$(pwd)/package.json":/package.json \
  mhart/alpine-node:6.9.5 \
  npm install --production
Then you will be able to mount the populated volumes on your app container
docker run \
  --volume myapp_bower_components:/bower_components \
  --volume myapp_node_modules:/node_modules \
  --publish 3000:3000 \
  my/app
You'd probably need to come up with some sort of versioning scheme for the volume name as well so you could roll back. Sounds like a lot of effort for something an image already does for you.
Or possibly look at rocker, which provides an alternate docker build system and lets you do all the things Docker devs rail against, like mounting a directory during a build. Again this is stepping outside of what Docker Compose supports.
