Docker compose not finding my index.js - node.js

I'm following this guide, using my limited Docker knowledge to get a dev environment up and running. I've hit a wall I cannot solve. This is my docker-compose.yml:
version: '2'
services:
  redis:
    image: redis:3.2
  mongo:
    image: mongo:3.2
  app:
    build: .
    ports:
      - '3000:3000'
    command: './node_modules/.bin/nodemon ./index.js'
    environment:
      NODE_ENV: development
    volumes:
      - .:/home/app/cardcreator
      - /home/app/cardcreator/node_modules
    depends_on:
      - redis
      - mongo
    links:
      - redis
      - mongo
and this is my Dockerfile:
FROM node:6.3.1
RUN useradd --user-group --create-home --shell /bin/false app
ENV HOME=/home/app
COPY package.json npm-shrinkwrap.json $HOME/cardcreator/
RUN chown -R app:app $HOME/*
USER app
WORKDIR $HOME/cardcreator
RUN npm install
USER root
COPY . $HOME/cardcreator/
RUN chown -R app:app $HOME/*
USER app
CMD ["node", "index.js"]
When I try to start the app via docker-compose up, I get the error
app_1 | Usage: nodemon [nodemon options] [script.js] [args]
app_1 | See "nodemon --help" for more.
I then removed the command line from my docker-compose.yml, leaving only node index.js (the Dockerfile's CMD) to start the app. I get an error saying index.js cannot be found.
The file is in my project folder; it is there and it has content. I can't figure out why this setup doesn't work; I did similar setups for Rails and they worked fine.
Can anyone tell me what I'm doing wrong here?

Whatever you are mounting in your compose file here:
- .:/home/app/cardcreator
is going to mount on top of whatever you built into $HOME/cardcreator/ in your Dockerfile.
So basically you have conflicting volumes. It's an order-of-operations issue: the image build happens first, and the volume mount happens later, when the container runs, so your container no longer sees the files built in the Dockerfile.

You could try to use
docker exec -it app_1 bash
to go into the container and run the
node index.js
command manually to see what's going on. Not 100% sure if the node Docker images have bash installed, though.

Related

Docker container can't find file

Github repo here.
I have a runshortcuts.sh bash script I set up to deploy different parts of my app for dev or prod. I can run it from the project's root directory with ./runshortcuts.sh $args. I mapped the root of my project directory to /usr/src/app and verified with ls that the project root directories look the same on my machine and in the container.
For whatever reason I can't execute runshortcuts.sh from within the Docker container; I get "OCI runtime exec failed: exec failed: unable to start container process: exec ./runshortcuts.sh: no such file or directory: unknown". Its permissions are -rwxrwxr-x according to ls -l. Wrapping it in sh also fails, as sh can't find the file. I'm clueless as to why this is. Any ideas?
I'm using the node 14-alpine base image. My Docker setup is quite minimal:
Dockerfile:
FROM node:14-alpine
WORKDIR /usr/src/app
VOLUME /usr/src/app
docker-compose.yaml:
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
      - "5000:5000"
    volumes:
      - .:/usr/src/app
    tty: true
When I docker-compose up -d and docker ps -a I see:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bdaac49268d0 fiction-forge_app "docker-entrypoint.s…" About a minute ago Up 5 seconds 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp, 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp fiction-forge_app_1
The problem comes from your script: it uses a #!/bin/bash shebang, and bash is not available in the default packages of Alpine.
Using #!/bin/sh should fix the problem.
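Alternatively, if you would rather keep the bash shebang, you can install bash into the Alpine image (a sketch based on the Dockerfile above):
FROM node:14-alpine
# bash is not part of the Alpine base image; apk can add it
RUN apk add --no-cache bash
WORKDIR /usr/src/app
VOLUME /usr/src/app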

Docker container running but not being listed and cannot be stopped

I'm relatively new to docker and I've been having a really strange problem.
The Docker setup I have below runs perfectly; however, there seems to be an instance that is always running, even after stopping and removing all containers and quitting the Docker application.
When I access localhost in my browser, My app is always live and running.
I've tried running docker-compose stop ; docker-compose rm to stop and remove all containers.
docker-compose ps and docker ps both show no containers running at all. But whenever I access localhost, my app is there, live and running.
Like I said, I have tried quitting the Docker application (I'm running on a Mac). I tried restarting the machine, and the app was still running.
The weird thing is that when I check which processes, if any, are using port 80 (and thus making my app accessible via localhost) by running sudo lsof -i tcp:80, the list is empty.
I'm new to docker and I know there must be something I'm overlooking.
Thanks in advance, any help and ideas are welcomed.
Here is my folder structure: screenshot
The Dockerfile for my app:
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=3000
CMD [ "npm", "start" ]
docker-compose.yml
version: '3'
services:
  nuxt:
    build: ./app/
    container_name: nuxt
    restart: always
    ports:
      - '1880:1880'
    command: 'npm run start'
  nginx:
    image: nginx:1.13
    container_name: nginx
    ports:
      - '80:80'
    volumes:
      - ./nginx:/etc/nginx/conf.d
    depends_on:
      - nuxt

Docker port forwarding for nodejs app

I'm having problems configuring Docker for my Node.js app.
I have previously set up containers for both PHP and Rails with port forwarding working flawlessly, but in this instance I can't seem to get it to work.
Running docker ps, I get the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a60f9c82d600 29c7d94a8c58 "/bin/sh -c 'npm s..." 5 seconds ago Up 3 seconds 3000/tcp romantic_albattani
As you can see, I'm not getting the 0.0.0.0:3000->3000/tcp mapping that I'm expecting.
docker-compose ps gives:
Name Command State Ports
------------------------------
My docker-compose.yml:
web:
  build: .
  volumes:
    - .:/app
  volumes_from:
    - box
  ports:
    - "3000:3000"
box:
  image: busybox
  volumes:
    - /node_modules
My Dockerfile:
FROM node:8.7.0
# The base node image sets a very verbose log level.
ENV NPM_CONFIG_LOGLEVEL warn
WORKDIR /tmp
COPY package.json /tmp/
RUN npm install
WORKDIR /app
ADD . /app
RUN cp -a /tmp/node_modules /app/
#ENV PORT=3000
EXPOSE 3000
CMD npm start
I'm running the command: docker-compose up --build
Any help at this point is appreciated.
I don't know if a docker inspect would be useful, but if so, tell me and I will also post it.
Edit: Changed my Dockerfile to follow the answer.
Your docker-compose.yml file has bad formatting. Since you are not getting any errors, I will assume you pasted it here wrong; here is the version with the fixed indentation:
web:
  build: .
  volumes:
    - .:/app
  volumes_from:
    - box
  ports:
    - "3000:3000"
box:
  image: busybox
  volumes:
    - /node_modules
Your Dockerfile has a bug: you are missing the ENTRYPOINT and/or CMD stanzas, and instead you are using the RUN stanza with the wrong intent. Here is a working Dockerfile with the fix applied:
FROM node:8.7.0
# The base node image sets a very verbose log level.
ENV NPM_CONFIG_LOGLEVEL warn
WORKDIR /tmp
COPY package.json /tmp/
RUN npm install
WORKDIR /app
ADD . /app
RUN cp -a /tmp/node_modules /app/
#ENV PORT=3000
EXPOSE 3000
CMD npm start
Your Dockerfile halted docker-compose at the image-building stage because of RUN npm start: npm start launches a process that listens until stopped (you want it to start your Node app and accept connections), so docker-compose could never finish building the image, let alone move on to the later steps such as creating the needed containers and completing the entire docker-compose run.
In short:
When you use RUN, it is meant to run a command, do some work, and return so the building process can continue. It should return an exit code of 0, after which the build moves on to the next Dockerfile stanza; any other exit code makes the build fail with an error.
When you use CMD, you tell the Docker image what the starting command is for every container started from this image (it can also be overridden at run time with docker run). It is tightly related to the ENTRYPOINT stanza, but for basic usage you are safe with the default.
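A minimal sketch of the difference (illustrative only, not the asker's full Dockerfile):
FROM node:8.7.0
COPY package.json .
# build time: this must do its work and exit 0 before the build continues
RUN npm install
# container start time: the long-lived foreground process
CMD ["npm", "start"]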
Further reading: ENTRYPOINT, CMD and RUN

Docker not propagating file changes from host to container

I am aiming to configure docker so that when I modify a file on the host the change is propagated inside the container file system.
You can think of this as hot reloading for server side node code.
The nodemon file watcher should restart the server in response to file changes.
However, these file changes on the host volume don't seem to be reflected inside the container: when I enter the container using docker exec pokerspace_express_1 bash and inspect a modified file, the changes have not been propagated from the host.
Dockerfile
FROM node:8
MAINTAINER therewillbecode
# Create app directory
WORKDIR src/app
RUN npm install nodemon -g
# Install app dependencies
COPY package.json .
# For npm#5 or later, copy package-lock.json as well
# COPY package.json package-lock.json ./
RUN npm install
CMD [ "npm", "start" ]
docker-compose.yml
version: '2'
services:
  express:
    build: .
    depends_on:
      - mongo
    environment:
      - MONGO_URL=mongo:27017/test
      - SERVER_PORT=3000
    volumes:
      - ./:/src/app
    ports:
      - '3000:3000'
    links:
      - mongo
  mongo:
    image: mongo
    ports:
      - '27017:27017'
  mongo-seed:
    build: ./mongo-seed
    links:
      - mongo
.dockerignore
.git
.gitignore
README.md
docker-compose.yml
How can I ensure that host volume file changes are reflected in the container?
Try something like this in your Dockerfile:
CMD ["nodemon", "-L"]
Some people had a similar issue and were able to resolve it by passing -L (which means "legacy watch") to nodemon.
References:
https://github.com/remy/nodemon/issues/419
http://fostertheweb.com/2016/02/nodemon-inside-docker-container/#why-isnt-nodemon-reloading
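Applied to the Dockerfile above, that would look something like this (a sketch; nodemon is already installed globally there):
FROM node:8
WORKDIR src/app
RUN npm install nodemon -g
COPY package.json .
RUN npm install
# -L makes nodemon poll for changes, which helps when file-change events
# don't propagate across the host/container volume boundary
CMD ["nodemon", "-L"]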
Right, so with Docker we need to re-build the image or figure out some clever solution.
You probably do not want to rebuild the image every time you make a change to your source code.
Let's figure out a clever solution. Let's generalize the Dockerfile a bit to solve your problem and also help others.
So this is the boilerplate Dockerfile:
FROM node:alpine
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
Remember, during the image-building process we are creating a temporary container. When we make the copies, we are essentially taking a snapshot of the contents of /src and /public. It's a snapshot that is locked in time and, by default, will not be updated when the code changes.
So in order to pick up changes to the files in /src and /public, we need to abandon doing a straight copy; instead, we are going to adjust the docker run command that we use to start up our container.
We are going to make use of a feature called volumes.
With a Docker volume we set up a placeholder inside our Docker container, so instead of copying over our entire /src directory we put a reference to those files, giving the container access to the files and folders on the local machine.
We are setting up a mapping from a folder inside the container to a folder outside the container. The command to use is a bit painful, but once it's documented here you can bookmark this answer.
docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app <image_id>
-v $(pwd):/app sets up a volume for the present working directory. This is a shortcut: we are saying take the present working directory, everything inside of it, and map it into our running container. It's long-winded, I know.
To implement this you will have to first rebuild your docker image by running:
docker build -f Dockerfile.dev .
Then run:
docker run -p 3000:3000 -v $(pwd):/app <image_id>
Then you are going to very quickly get an error message: the react-scripts not found error. You will see that message because I skipped the -v /app/node_modules part.
So what's up with that?
The volume command sets up a mapping, and when we do that, we are saying take everything inside of our present working directory and map it to our /app folder. But the issue is that there is no node_modules folder in the present working directory, and node_modules is where all our dependencies exist.
So the container's /app/node_modules folder got overwritten.
We are essentially pointing at nothing, and that's why we need that -v /app/node_modules with no colon: the colon maps a folder inside the container to a folder outside the container. Without the colon we are saying make this a placeholder, don't map it against anything.
Now, go ahead and run: docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app <image_id>
Once done, you can make all the changes you want to your project and see them "hot reload" in your browser. No need to figure out how to implement Nodemon.
What's happening there is that any change made to your local file system gets propagated into your container; the server inside the container sees the change and updates.
Now, I know it's hard and annoying to remember such a long command; enter Docker Compose.
We can make use of Docker Compose to dramatically simplify the command we have to run to start up the container.
So to implement that you create a Docker Compose file and inside of it you will include the port setting and the two volumes that you need.
Inside your root project, make a new file called docker-compose.yml.
Inside there you will add this:
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
Then run: docker-compose up
Daniel's answer partially worked for me, but the hot reloading still doesn't work. I'm using a Windows host and had to change his docker-compose.yml to
version: '3'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - /App/node_modules
      - .:/App
(I changed the volumes arguments from /app/node_modules to /App/node_modules and from .:/app to .:/App. This lets changes pass through to the container; however, hot reloading still doesn't work, and I have to run docker-compose up --build each time I want to refresh the app.)
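A possible lead for the remaining hot-reload problem, offered as an assumption rather than a confirmed fix: with a react-scripts dev server, file watching is chokidar-based, and on Windows hosts forcing polling is a commonly cited workaround:
version: '3'
services:
  web:
    build: .
    environment:
      # assumption: the dev server honors CHOKIDAR_USEPOLLING (react-scripts does);
      # polling replaces file-system events, which don't cross Windows volume mounts
      - CHOKIDAR_USEPOLLING=true
    ports:
      - "3000:3000"
    volumes:
      - /App/node_modules
      - .:/App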

Docker with node bcrypt — invalid ELF header

I've tried every solution from this post and this post, but I'm not finding a way to get rid of the following error when running docker-compose up:
module.js:598
return process.dlopen(module, path._makeLong(filename));
^
Error: /code/node_modules/bcrypt/lib/binding/bcrypt_lib.node: invalid ELF header
Here's my latest attempt at a docker-compose.yml:
version: "2"
services:
  app:
    build: ./client
    ports:
      - "3000:3000"
    links:
      - auth
    volumes:
      - ./client:/code
  auth:
    build: ./auth-service
    ports:
      - "3002:3002"
    links:
      - db
    volumes:
      - ./auth-service:/code
  db:
    ...
And my auth service Dockerfile:
FROM node:7.7.1
EXPOSE 3002
WORKDIR /code
COPY package.json /code
RUN npm install
COPY . /code
CMD npm start
After trying each of the solutions from the above two links, I rebuilt the containers, and it always results in the same error.
Also worth noting: the service runs fine locally, when I don't use Docker.
How do I get Docker to work with bcrypt?
Update
I was able to get it working by doing the following:
1. Find the id of the container: docker ps
2. Access the container: docker exec -t -i containerId /bin/bash
3. Install bcrypt: npm install bcrypt
This isn't ideal for portability.
I spent a few hours trying to solve this and in the end I came up with the following solution.
My compose file looks like this:
version: "3"
services:
  server:
    build:
      context: ./server
    volumes:
      - ./server:/usr/src/app
      - /usr/src/app/node_modules/
    ports:
      - 3050:3050
    depends_on:
      - db
    command: ["nodemon", "./bin/www"]
The second volume mount there is the important one, as it gets around the local node_modules issue.
Just for reference, my Dockerfile looks like this:
FROM node
RUN npm install -g nodemon
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
EXPOSE 3050
CMD ["nodemon", "./bin/www"]
I was struggling with this one a few times. The .dockerignore solution sadly won't work if you use volumes, since it only affects the COPY command, and only when you build the image.
The only solution I found that I think makes the most sense is to mount as a volume only what you need. What does that mean? Divide your source code from the "configuration" files (such as package.json):
- src
-- morecode
-- morecode2
-- server.js
- index.js
- package.json
- jslint.json
- tsconfig.json
- .env
- .dockerignore
- ... etc
Put the volume only on the src folder. That way your builds will also be much faster, and your node modules will be built and installed on the correct operating system. Don't forget to add node_modules to .dockerignore to keep builds from taking unnecessarily long.
Do note that doing so will require a rebuild of the image every time you add a new package, but if you follow npm best practice and split the npm install step in your Dockerfile so it can be cached, it will be faster.
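In Compose terms that might look like the following sketch (the container path /code is an assumption, not from the question):
services:
  app:
    build: .
    volumes:
      # mount only the source tree; package.json and node_modules
      # stay exactly as they were built into the image
      - ./src:/code/src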
The reason this error occurs is that the node module bcrypt is first compiled on your original machine (specifically for your OS), and when the image is built on Docker it cannot run, since the OS is no longer the same. The solution: create a .dockerignore file in your root folder and add node_modules to it.
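That file is a plain text list of paths to exclude from the build context; a minimal version would be:
# .dockerignore: keep host-compiled artifacts out of the image build context
node_modules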
