File not found in Docker volume - Linux

I'm using docker-compose to create and run several services. One of the services is a Node.js project to which I added a Makefile to run some custom build tasks.
This is how I define the service in the docker-compose.yml file:
web:
  container_name: web
  image: myApp/web
  build:
    context: .
    dockerfile: operations/.docker/web/${APP_ENV}.dockerfile
  ports:
    - "8080"
  volumes:
    - ./src/js/web:/var/www/myApp/
  working_dir: /var/www/myApp
  env_file:
    - operations/env/web/${APP_ENV}.env
  networks:
    - myApp_network
As you can see, there is a volume mapping my local filesystem into the container. The Makefile is under the ./src/js/web directory.
And this is the Dockerfile:
FROM node:latest
WORKDIR /var/www/myApp
RUN npm i
RUN make -f /var/www/myApp/Makefile build
RUN mkdir -p /var/log/pm2
EXPOSE 8080
ENTRYPOINT ["pm2", "start", "dist/server/bin/www.js","--name","web","--log","/var/log/pm2/pm2.log","--watch","--no-daemon"]
I had the same setup on my MacBook and I managed to build and run the services.
However, I recently tried to run docker-compose on Linux Mint and I get the following error:
make: /var/www/myApp/Makefile: No such file or directory
make: *** No rule to make target '/var/www/myApp/Makefile'. Stop.
ERROR: Service 'web' failed to build: The command '/bin/sh -c make -f /var/www/myApp/Makefile build' returned a non-zero code: 2
I also tried to build the image after removing RUN make -f /var/www/myApp/Makefile build and it worked fine. When I entered the container, I could see the Makefile sitting in /var/www/myApp.
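A hedged note on the likely cause: volumes are mounted when a container is run, not while docker build executes, which would explain why the Makefile shows up in the running container but not during the build. A minimal sketch of a Dockerfile that copies the sources into the image instead, assuming the build context is the project root as in the compose file above:

FROM node:latest
WORKDIR /var/www/myApp
# copy the sources into the image; bind mounts are not available at build time
COPY ./src/js/web /var/www/myApp
RUN npm i
RUN make -f /var/www/myApp/Makefile build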

Related

How to build a react/vue application outside of a docker container

I have several applications (Vue & React; each application is built on a different version of Node). I want to set up project deployment so that I can run a Docker container with the correct version of Node for each project. The build (npm i & npm run build) should happen in the container, but I want to hand the result from the container over to /var/www/project_name on the server itself.
Next, I will set up a container with nginx which, depending on the subdomain, will serve the desired build.
My question is: how do I get the folder of built files out of the container and onto the host filesystem?
My docker-compose file:
version: "3.1"
services:
  redis:
    restart: always
    image: redis:alpine
    container_name: redis
  build-adminapp:
    build: adminapp/
    container_name: adminapp
    working_dir: /var/www/adminapp
    volumes:
      - ./adminapp:/var/www/adminapp
  build-clientapp:
    build: clientapp/
    container_name: clientapp
    working_dir: /var/www/clientapp
    volumes:
      - ./clientapp:/var/www/clientapp
My Dockerfiles:
FROM node:10-alpine as build
# Create app directory
WORKDIR /var/www/adminapp/
COPY . /var/www/adminapp/
RUN npm install
RUN npm run build
Second Dockerfile:
FROM node:12-alpine as build
# Create app directory
WORKDIR /var/www/clientapp/
COPY . /var/www/clientapp/
RUN npm install
RUN npm run build
If you already have a running container, you can use the docker cp command to move files between the local machine and Docker containers.
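For instance, a minimal sketch using the container name from the compose file above (the dist path is an assumption; adjust it to wherever npm run build writes its output):

# copy the build output from the adminapp container to the host
docker cp adminapp:/var/www/adminapp/dist /var/www/project_name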

Docker container can't find file

Github repo here.
I have a runshortcuts.sh bash script I set up to deploy different parts of my app for dev or prod. I can run it from the project's root directory with ./runshortcuts.sh $args. I mapped the root of my project directory to /usr/src/app and verified with ls that the project root directories look the same on my machine and in the container.
For whatever reason, I can't execute runshortcuts.sh from within the Docker container; I get "OCI runtime exec failed: exec failed: unable to start container process: exec ./runshortcuts.sh: no such file or directory: unknown". Its permissions are -rwxrwxr-x according to ls -l. Wrapping it in sh also fails, as sh can't find the file. I am clueless as to why this is; any ideas?
I'm using the node 14-alpine base image. My Docker setup is quite minimal:
Dockerfile:
FROM node:14-alpine
WORKDIR /usr/src/app
VOLUME /usr/src/app
docker-compose.yaml:
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
      - "5000:5000"
    volumes:
      - .:/usr/src/app
    tty: true
When I run docker-compose up -d and then docker ps -a, I see:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bdaac49268d0 fiction-forge_app "docker-entrypoint.s…" About a minute ago Up 5 seconds 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp, 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp fiction-forge_app_1
The problem comes from your script: it uses a #!/bin/bash shebang, and bash is not available in the default packages of Alpine.
Using #!/bin/sh should fix the problem.
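Two ways to apply that, sketched below (the apk package name is the standard one for Alpine):

# option 1: change the script's shebang to the shell Alpine ships with
#!/bin/sh

# option 2: keep #!/bin/bash and install bash in the image
FROM node:14-alpine
RUN apk add --no-cache bash
WORKDIR /usr/src/app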

Bitbucket Pipeline with docker-compose: Container ID 166535 cannot be mapped to a host ID

I'm trying to use docker-compose inside a Bitbucket pipeline in order to build several microservices and run tests against them. However, I'm getting the following error:
Step 19/19 : COPY . .
Service 'app' failed to build: failed to copy files: failed to copy directory: Error processing tar file(exit status 1): Container ID 166535 cannot be mapped to a host ID
As of now, my docker-compose.yml looks like this:
version: '2.3'
services:
  app:
    build:
      context: .
      target: dev
    ports:
      - "3030:3030"
    image: myapp:dev
    entrypoint: "/docker-entrypoint-dev.sh"
    command: [ "npm", "run", "watch" ]
    volumes:
      - .:/app/
      - /app/node_modules
    environment:
      NODE_ENV: development
      PORT: 3030
      DATABASE_URL: postgres://postgres:#postgres/mydb
and my Dockerfile is as follows:
# ---- Base ----
#
FROM node:10-slim AS base
ENV PORT 80
ENV HOST 0.0.0.0
EXPOSE 80
WORKDIR /app
COPY ./scripts/docker-entrypoint-dev.sh /
RUN chmod +x /docker-entrypoint-dev.sh
COPY ./scripts/docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
COPY package.json package-lock.json ./
# ---- Dependencies ----
#
FROM base as dependencies
RUN npm cache verify
RUN npm install --production=true
RUN cp -R node_modules node_modules_prod
RUN npm install --production=false
# ---- Development ----
#
FROM dependencies AS dev
ENV NODE_ENV development
COPY . .
# ---- Release ----
#
FROM dependencies AS release
ENV NODE_ENV production
COPY --from=dependencies /app/node_modules_prod ./node_modules
COPY . .
CMD ["npm", "start"]
And in my bitbucket-pipelines.yml I define my pipeline as:
image: node:10.15.3
pipelines:
  default:
    - step:
        name: 'install docker-compose, and run tests'
        script:
          - curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
          - chmod +x /usr/local/bin/docker-compose
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker
However, this example works when I try to use docker without docker-compose, defining my pipeline as:
pipelines:
  default:
    - step:
        name: 'install and run tests'
        script:
          - docker build -t myapp .
          - docker run --entrypoint="" myapp npm run test
          - echo 'done!'
        services:
          - postgres
          - docker
I found this issue (https://jira.atlassian.com/browse/BCLOUD-17319) in the Atlassian community; however, I could not find a solution for my broken use case. Any suggestions?
I would try to use an image with docker-compose already installed, instead of installing it during the pipeline.
image: node:10.15.3
pipelines:
  default:
    - step:
        name: 'run tests'
        script:
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker
definitions:
  services:
    docker:
      image: docker/compose:1.25.4
Try adding this to your bitbucket-pipelines.yml.
If it doesn't work, rename docker to customDocker in both the definitions and the services sections.
If that doesn't work either, then, since you don't need Node.js in the pipeline directly, try this approach:
image: docker/compose:1.25.4
pipelines:
  default:
    - step:
        name: 'run tests'
        script:
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker
TL;DR: Start from your base image and check for the ID that is causing the problem using commands in your Dockerfile. Use problem_id = error_message_id - 100000 - 65536 to find the uid or gid that is not supported. Note that chown makes a copy of every file it modifies, inflating your Docker image.
The details:
We were using the base image tensorflow/tensorflow:2.2.0-gpu, and though we tried to find the problem ourselves, we were looking too late in our Dockerfile and making assumptions that were wrong. With help from Atlassian support we found that /usr/local/lib/python3.6 contained many files belonging to group staff (gid = 50).
Assumption 1: Bitbucket pipelines have definitions for the standard "linux" user ids and group ids.
Reality: Bitbucket pipelines only define a subset of the standard users and groups. Specifically, they do not define group "staff" with gid 50. Your Dockerfile base image may define group staff (in /etc/group), but the Bitbucket pipeline runs in a Docker container without that gid. DO NOT USE
RUN cat /etc/group && cat /etc/passwd
in your Dockerfile to check for ids; execute these commands as Bitbucket pipeline commands in your script.
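A sketch of what that looks like in the pipeline file (the step layout here is illustrative):

pipelines:
  default:
    - step:
        script:
          # print the groups and users the pipeline container actually defines
          - cat /etc/group
          - cat /etc/passwd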
Assumption 2: It was something we were installing that was breaking the build.
Reality: Although we could "move the build failure around" by adjusting which packages we installed, this was likely just a case of some packages overwriting the ownership of pre-existing files.
We were able to find the files by using the relationship between the id in the error message and the id inside the docker build:
problem_id = error_message_id - 100000 - 65536
We then used the computed id value (50) to find the files early in our Dockerfile:
RUN find / -uid 50 -ls
RUN find / -gid 50 -ls
For example:
Error processing tar file(exit status 1): Container ID 165586 cannot be mapped to a host ID
50 = 165586 - 100000 - 65536
Final solution (for us):
Adding this command early to our Dockerfile:
RUN chown -R root:root /usr/local/lib/python*
fixed the Bitbucket pipeline build problem, but it also increased the size of our Docker image, because Docker makes a copy of every file that is modified (contents or filesystem flags). We will look again at multi-stage builds to reduce the size of our Docker images.

Docker port forwarding for nodejs app

I'm having problems configuring Docker for my Node.js app.
I have previously set up containers for both PHP and Rails with port forwarding working flawlessly, but in this instance I can't seem to get it to work.
Running docker ps, I get the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a60f9c82d600 29c7d94a8c58 "/bin/sh -c 'npm s..." 5 seconds ago Up 3 seconds 3000/tcp romantic_albattani
As you can see, I'm not getting the 0.0.0.0:3000->3000/tcp mapping that I am expecting.
docker-compose ps gives:
Name Command State Ports
------------------------------
My docker-compose.yml:
web:
  build: .
  volumes:
    - .:/app
  volumes_from:
    - box
  ports:
    - "3000:3000"
box:
  image: busybox
  volumes:
    - /node_modules
My Dockerfile:
FROM node:8.7.0
# The base node image sets a very verbose log level.
ENV NPM_CONFIG_LOGLEVEL warn
WORKDIR /tmp
COPY package.json /tmp/
RUN npm install
WORKDIR /app
ADD . /app
RUN cp -a /tmp/node_modules /app/
#ENV PORT=3000
EXPOSE 3000
CMD npm start
I'm running the command: docker-compose up --build
Any help at this point is appreciated.
I don't know if a docker inspect would be useful, but if so, tell me and I will also post it.
Edit: Changed my Dockerfile to follow the answer.
Your docker-compose.yml file has bad formatting. Since you are not getting any errors, I will assume you pasted it here wrong; here is the version with the indenting fixed:
web:
  build: .
  volumes:
    - .:/app
  volumes_from:
    - box
  ports:
    - "3000:3000"
box:
  image: busybox
  volumes:
    - /node_modules
Your Dockerfile has a bug: you are missing the ENTRYPOINT and/or CMD stanzas, and instead you are using the RUN stanza with the wrong intent. Here is a working Dockerfile with the fix applied:
FROM node:8.7.0
# The base node image sets a very verbose log level.
ENV NPM_CONFIG_LOGLEVEL warn
WORKDIR /tmp
COPY package.json /tmp/
RUN npm install
WORKDIR /app
ADD . /app
RUN cp -a /tmp/node_modules /app/
#ENV PORT=3000
EXPOSE 3000
CMD npm start
Your Dockerfile halted the execution of docker-compose at the image-building stage because of the RUN npm start line: npm start launches a process that listens until stopped (you want it to start your node app and listen for connections), so docker-compose never finished the image-building step, let alone the later steps such as creating the needed containers and completing the whole docker-compose startup.
In short:
When you use RUN, it is meant to run a command, do some work, and return so the building process can continue; it should return an exit code of 0, and the process will move on to the next Dockerfile stanza, or return another exit code, and the building process will fail with an error.
When you use CMD, you tell the Docker image what the starting command of every container started from this image is (it can also be overridden at run time with docker run). It is tightly related to the ENTRYPOINT stanza, but for basic usage you are safe with the default.
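A two-line sketch of the distinction (the commands are illustrative):

RUN npm install   # executes while the image is being built; must do its work and exit
CMD npm start     # only recorded at build time; executed when a container starts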
Further reading: ENTRYPOINT, CMD and RUN

docker compose run with command error

From this documentation, it seems that I can execute a single command in a service like this:
docker-compose run SERVICE CMD
But when I run
docker-compose up pwa npm test
I get the error
ERROR: No such service: npm
From my configurations, it will execute npm start, but I'd like to know how to execute other commands.
Files
Dockerfile:
FROM node:8
WORKDIR /app
COPY package.json /app/
RUN npm install --quiet
CMD npm start
docker-compose.yml:
version: '3'
services:
  pwa:
    build: .
    ports:
      - '3000:3000'
    volumes:
      - ./src:/app/src
      - ./public:/app/public
Versions
Docker version: 17.03
Docker compose version: 1.11.2
As the docs say, the command is docker-compose run, not docker-compose up. The latter expects all of its arguments to be service names.
Do this instead:
docker-compose run pwa npm test
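As a side note, each docker-compose run creates a new container; adding the --rm flag removes the container once the command exits:

docker-compose run --rm pwa npm test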
