Docker daemon unable to find the dockerfile - node.js

I am trying to create a node-js base image by using the following docker file
Dockerfile:
FROM node:0.10-onbuild
# replace this with your application's default port
EXPOSE 8888
I then run the command "sudo docker build -t nodejs-base-image ."
This keeps failing with the error
FATA[0000] The Dockerfile (Dockerfile) must be within the build context (.)
I am running the above command from the same directory where the 'Dockerfile' is located. What might be going on?
I am on Docker version 1.6.2, build 7c8fca2

This was happening because I did not have the appropriate permissions on the Dockerfile in question.
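For reference, a quick way to check and fix this, assuming the file is simply unreadable by the user running the build (the chown target is just an example):
ls -l Dockerfile                 # check the current owner and permissions
chmod +r Dockerfile              # make the file readable
sudo chown "$USER" Dockerfile    # or take ownership of it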

Related

Running docker compose inside Docker Container

I have a Dockerfile I am building. It will use LocalStack to spin up a mock AWS environment; at the minute I do this locally with my Docker Compose file. So I was thinking I could copy my docker-compose.yml over when building my Dockerfile, run docker-compose up from the Dockerfile, and then run my application from the container it builds.
Here is the docker compose file
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
Here is my Dockerfile:
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
npm install -g serverless-localstack;
WORKDIR /app
COPY serverless.yml ./
COPY localstack_endpoints.json ./
COPY docker-compose.yml ./
COPY --from=library/docker:latest /usr/local/bin/docker /usr/bin/docker
COPY --from=docker/compose:latest /usr/local/bin/docker-compose /usr/bin/docker-compose
EXPOSE 3000
RUN docker-compose up
CMD ["sls","deploy" ]
But the error I am receiving is
#17 0.710 Couldn't connect to Docker daemon at http+docker://localhost - is it running?
#17 0.710
#17 0.710 If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
I'm new to Docker. When I researched the error online, I saw people saying it needs to be run with sudo, although I think in this case it is something to do with my volumes linking to the host running the container; I'm really not sure.
The process inside the Docker container is trying to reach the Docker socket but cannot. So when you run your container, mount the host's socket into it:
-v /var/run/docker.sock:/var/run/docker.sock
That should fix the problem.
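For example, a run command along these lines (the image name here is hypothetical):
docker run -v /var/run/docker.sock:/var/run/docker.sock my-serverless-image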
As a general rule, you can't do things in your Dockerfile that affect persistent state or processes running outside the container. Imagine docker building your image, docker pushing it to a registry, and docker pulling it on a new system; if the build step was able to start other running containers, they wouldn't be running with the same image on a different system.
At a more mechanical level, the build sequence doesn't have access to bind-mounted host directories or a variety of other runtime settings. That's why you get the "couldn't connect to Docker daemon" message: the build container isn't running a Docker daemon and it doesn't have access to the host's daemon.
Rather than try to have a container embed the Compose tool and Compose setup, you might find it easier to just distribute a docker-compose.yml file, and make the standard way to run your composite application be running docker-compose up on the host. Access to the Docker socket is incredibly powerful -- you can almost trivially use it to root the host -- and I wouldn't require it to avoid needing a fairly standard tool on the host.
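As a sketch of that approach, assuming your application builds from the Dockerfile above (with the RUN docker-compose up line removed), you could add it as a second service next to localstack and start everything from the host:
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    # ... environment, ports, and volumes as in the compose file above ...
  app:
    build: .
    depends_on:
      - localstack
Then a single docker-compose up --build on the host brings up both containers.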

Docker buildx with node app on Apple M1 Silicon - standard_init_linux.go:211: exec user process caused "exec format error"

Please help!
I am trying to deploy a Docker image to a Kubernetes cluster. No problem until I switched to a new MacBook Pro with M1.
Once I build the image on the M1 machine and deploy it, I get the following error from the Kubernetes pod:
standard_init_linux.go:211: exec user process caused "exec format error"
After doing some research, I followed this medium post on getting docker buildx added and set up.
Once I build a new image using the new buildx and run it locally using the docker desktop (the m1 compatible preview version), it runs without issue. However the kubernetes pod still shows the same error.
standard_init_linux.go:211: exec user process caused "exec format error"
My build command
docker buildx use m1_builder && docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6 -f Dockerfile -t ${myDockerRepo} --push .
During the build I see each platform logging out that it is running the commands from my Dockerfile.
My push command
docker push ${myDockerRepo}
One odd thing to note is the sha256 digest in the docker push command response does not change.
Here is my docker file:
# Use an official Node runtime as a parent image
FROM node:10-alpine
# Copy the current directory contents into the container at /app
COPY dist /app
# Set the working directory to /app
WORKDIR /app
# Make port 8000 available to the world outside this container
EXPOSE 8000
# Run npm run serve:dynamic when the container launches
CMD ["node", "server"]
I am no docker expert, clearly.
Started with a full head of hair. Down to 3 strands. Please save those 3 strands.
I appreciate all help and advice!
Update
I have pulled the image built by the M1 MacBook down to my other MacBook and could run it locally via Docker Desktop. I am not sure what this means. Could it be just a Kubernetes setting?
Try adding --platform=linux/amd64 to the FROM line in your Dockerfile:
FROM --platform=linux/amd64 node:10-alpine
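If you only need an amd64 image for the cluster, a single-platform build (a variation on the buildx command above) also avoids any chance of the wrong variant being picked from the multi-arch manifest:
docker buildx build --platform linux/amd64 -f Dockerfile -t ${myDockerRepo} --push .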

Docker- parent image for Node.js based Images

I'm trying to create a Node.js based Docker image. For that, I'm looking at options for the parent image. Security is one of the main considerations, and we want to harden the image by not allowing a shell (sh or bash) in the container.
Google Distroless does provide this option, but Distroless-NodeJS is in the experimental stage and not recommended for production.
Possible options I could think of are (compromising Distroless feature):
Official Node Image (https://hub.docker.com/_/node/) / Alpine / CentOS based image (but all would have a shell I believe).
With that being said,
Is there any alternative for Distroless?
What are the best options for the parent image for Node.js based docker image?
Any pointers would be helpful.
One option would be to start with a Node image that meets your requirements, then delete anything that you don't want (sh, bash, etc.).
At the extreme end you could add the following to your Dockerfile:
RUN /bin/rm -R /bin/*
Although I am not certain that this wouldn't interfere with the running of node.
On the official Node image (excluding Alpine) you have /bin/bash, /bin/dash and /bin/sh (a symlink to /bin/dash). Just deleting these 3 files would be sufficient to prevent shell access.
The Alpine version has a symlink /bin/sh -> /bin/busybox. You could delete this symlink, but the image may not work without busybox.
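A minimal sketch of that idea on a Debian-based official Node image (the tag is just an example); the exec form of RUN is used so the command doesn't depend on the shell it is removing:
FROM node:16
# exec form: runs rm directly, without invoking /bin/sh -c
RUN ["rm", "/bin/bash", "/bin/dash", "/bin/sh"]
Note that any later shell-form RUN instruction would fail after this, so it has to be the last build step.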
I think you can build an image from scratch which contains only your Node application and its required dependencies, and nothing more: not even ls or pwd, etc.
FROM node as builder
WORKDIR /app
COPY . ./
RUN npm install --prod
FROM astefanutti/scratch-node
COPY --from=builder /app /app
WORKDIR /app
ENTRYPOINT ["node", "bin/www"]
scratch-node
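To try this out, building and running with the same image name as in the attempt below:
docker build -t my_node_scratch .
docker run my_node_scratch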
So if someone tries to get a shell, like:
docker run --entrypoint bash -it my_node_scratch
they will get an error:
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:348: starting container process caused "exec:
\"bash\": executable file not found in $PATH": unknown.
I am referencing this from the official Node.js Docker image documentation.
Create a Dockerfile in your project.
Then build and run the Docker image:
docker build -t test-nodejs-app .
docker run -it --rm --name running-app test-nodejs-app
If you prefer docker compose:
Version: "2"
Services:
node:
image: "node:8"
user: "node"
working_dir: /home/node/app
environment:
- NODE_ENV=production
volumes:
- ./:/home/node/app
expose:
- "8081"
command: "npm start"
Run the compose file:
docker-compose up -d

Why is my Docker container not running my Nodejs app?

End goal: To spin up a docker container running my expressjs application on port 3000 (as if I am using npm start).
Details:
I am using Windows 10 Enterprise.
This is a very basic, front-end Express.js application.
It runs fine using npm start, with no errors.
Dockerfile I am using:
FROM node:8.11.2
WORKDIR /app
COPY package.json .
RUN npm install
COPY src .
CMD node src/index.js
EXPOSE 3000
Steps:
I am able to create an image, using the basic docker build command:
docker build -t portfolio-img .
Running the image (I am using this command from a tutorial www.katacoda.com/courses/docker/deploying-first-container):
docker run -d --name portfolio-container -p 3000:3000 portfolio-img
The container is not running. It is created, since I can inspect it, but it exited right after starting. I am guessing I did something wrong with the last command, or I am not giving the correct instructions in the Dockerfile.
If anyone can point me in the right direction, I'd greatly appreciate it.
Already have searched a lot on the docker documentation and on here.
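To see why the container exited, check its logs and exit code (using the container name from the commands above):
docker logs portfolio-container
docker inspect --format '{{.State.ExitCode}}' portfolio-container
One thing worth checking in the Dockerfile: COPY src . copies the contents of src directly into /app, so CMD node src/index.js may be pointing at a path that does not exist inside the image.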

Docker automatically binds port

Even though I am not exposing any ports in my Dockerfile, nor binding any ports when running docker run, I am still able to interact with applications running inside the container. Why?
I am writing my Dockerfile for my Node application. It's pretty simple and looks like this:
FROM node:8
COPY . .
RUN yarn
RUN yarn run build
ARG PORT=80
EXPOSE $PORT
CMD yarn run serve
Using this Dockerfile, I was able to build the image using docker build
$ cd ~/project/dir/
$ docker build . --build-arg PORT=8080
And run it using docker run
$ docker run -p 8080 <image-id>
I then accessed the application, running inside the Docker container, on an IP address like http://172.17.0.12:8080/ and it works.
However, when I removed the EXPOSE instruction from the Dockerfile and removed the -p option from docker run, the application still works! It's like Docker is automatically binding my ports.
Additional Notes:
It appears that another user has experienced the same issue
I have tried rebuilding my image using --no-cache after I removed the EXPOSE instruction, but the problem still exists.
Using docker inspect, I see no entries for Config.ExposedPorts
The EXPOSE instruction in a Dockerfile really doesn't do much by itself; it is mostly documentation for people reading the Dockerfile, so they know what ports/services run inside the container. However, EXPOSE is useful when you start the container with the capital -P argument (-P, --publish-all: publish all exposed ports to random ports):
docker run -P my_image
But if you are using the lowercase -p, you have to specify the source:destination port. See this thread.
If you don't write EXPOSE in the Dockerfile, it doesn't have any influence on the app inside the container; it only matters for the capital -P argument. That is also why your app still works: you are reaching the container directly at its bridge IP (http://172.17.0.12:8080/), which works from the Docker host regardless of EXPOSE or -p, since those only control publishing container ports on the host's own interfaces.
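For example (my_image stands for the image built above, and the port numbers are illustrative):
docker run -P my_image             # every EXPOSEd port is published to a random host port
docker port <container-id>         # shows the resulting host:container mappings
docker run -p 8080:8080 my_image   # lowercase -p needs an explicit host:container pair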
