Docker - parent image for Node.js based images

I'm trying to create a Node.js based Docker image. For that, I'm looking at options for the parent image. Security is one of the main considerations, and we want to harden the image by not allowing a shell (sh or bash) in the container.
Google Distroless does provide this option, but Distroless-NodeJS is in the experimental stage and not recommended for production.
Possible options I could think of (compromising on the Distroless feature) are:
Official Node Image (https://hub.docker.com/_/node/) / Alpine / CentOS based image (but all would have a shell I believe).
With that being said,
Is there any alternative for Distroless?
What are the best options for the parent image for Node.js based docker image?
Any pointers would be helpful.

One option would be to start with a Node image that meets your requirements, then delete anything that you don't want (sh, bash, etc.).
At the extreme end you could add the following to your Dockerfile:
RUN /bin/rm -R /bin/*
Although I am not certain that this wouldn't interfere with the running of node.
On the official Node image (excluding Alpine) you have /bin/bash, /bin/dash and /bin/sh (a symlink to /bin/dash). Just deleting these 3 files would be sufficient to prevent shell access.
The Alpine version has a symlink /bin/sh -> /bin/busybox. You could delete this symlink, but the rest of the image may not work without busybox.
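A minimal sketch of that approach on the Debian-based official image (the entry point server.js is just a placeholder; verify that nothing in your app shells out at runtime):
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
# Remove the shells in the last build step; earlier RUN instructions still need /bin/sh,
# and this RUN keeps working because the shell executing it is already loaded in memory
RUN rm -f /bin/bash /bin/dash /bin/sh
# Exec-form CMD does not require a shell
CMD ["node", "server.js"]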

I think you can build an image from scratch which contains only your Node application and its required dependencies, nothing more, not even ls or pwd.
FROM node as builder
WORKDIR /app
COPY . ./
RUN npm install --prod
FROM astefanutti/scratch-node
COPY --from=builder /app /app
WORKDIR /app
ENTRYPOINT ["node", "bin/www"]
scratch-node
So if someone tries to get a shell, like:
docker run --entrypoint bash -it my_node_scratch
they will get an error:
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:348: starting container process caused "exec:
\"bash\": executable file not found in $PATH": unknown.

I am referencing this from the official Node.js Docker image documentation.
Create a Dockerfile in your project.
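A minimal Dockerfile along the lines of the official Node.js guide might look like this (the entry point server.js and port 8080 are assumptions):
FROM node:8
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]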
Then build and run the Docker image:
docker build -t test-nodejs-app .
docker run -it --rm --name running-app test-nodejs-app
If you prefer docker compose:
Version: "2"
Services:
node:
image: "node:8"
user: "node"
working_dir: /home/node/app
environment:
- NODE_ENV=production
volumes:
- ./:/home/node/app
expose:
- "8081"
command: "npm start"
Run the compose file:
docker-compose up -d

Related

Running docker compose inside Docker Container

I have a Dockerfile I am building. It will use LocalStack to spin up a mock AWS environment; at the minute I do this locally with my docker-compose file. So I was thinking I could just copy my docker-compose.yml over when building my Dockerfile, run docker-compose up from the Dockerfile, and then be able to run my application from the container created from the Dockerfile.
Here is the docker compose file
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
Here is my Dockerfile:
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
npm install -g serverless-localstack;
WORKDIR /app
COPY serverless.yml ./
COPY localstack_endpoints.json ./
COPY docker-compose.yml ./
COPY --from=library/docker:latest /usr/local/bin/docker /usr/bin/docker
COPY --from=docker/compose:latest /usr/local/bin/docker-compose /usr/bin/docker-compose
EXPOSE 3000
RUN docker-compose up
CMD ["sls","deploy" ]
But the error I am receiving is
#17 0.710 Couldn't connect to Docker daemon at http+docker://localhost - is it running?
#17 0.710
#17 0.710 If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
I'm new to Docker. When I researched the error online I saw people saying it needs to be run with sudo, although I think in this case it is something to do with my volumes linking to the host running the container, but I'm really not sure.
The Docker client inside the container tries to reach the Docker socket but cannot. So when you run your container, add
-v /var/run/docker.sock:/var/run/docker.sock
and it should fix the problem.
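For example, assuming the image built from the Dockerfile above is tagged sls-deployer (a placeholder name):
docker run -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  sls-deployer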
As a general rule, you can't do things in your Dockerfile that affect persistent state or processes running outside the container. Imagine docker building your image, docker pushing it to a registry, and docker pulling it on a new system; if the build step was able to start other running containers, they wouldn't be running with the same image on a different system.
At a more mechanical level, the build sequence doesn't have access to bind-mounted host directories or a variety of other runtime settings. That's why you get the "couldn't connect to Docker daemon" message: the build container isn't running a Docker daemon and it doesn't have access to the host's daemon.
Rather than try to have a container embed the Compose tool and Compose setup, you might find it easier to just distribute a docker-compose.yml file, and make the standard way to run your composite application be running docker-compose up on the host. Access to the Docker socket is incredibly powerful -- you can almost trivially use it to root the host -- and I wouldn't require it to avoid needing a fairly standard tool on the host.
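A rough sketch of that split, with the Compose step moved out of the image build and onto the host (the image tag is illustrative, and you may also need to attach the deploy container to the Compose network so it can reach LocalStack):
# On the host: start LocalStack via Compose, then build and run the deploy container
docker-compose up -d
docker build -t sls-deployer .
docker run --rm sls-deployer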

Docker is not writing to the defined volumes

I am new to Docker and created the following files in a large Node project folder:
Dockerfile
# syntax=docker/dockerfile:1
FROM node:16
# Update npm
RUN npm install --global npm
# WORKDIR automatically creates missing folders
WORKDIR /opt/app
# https://stackoverflow.com/a/42019654/15443125
VOLUME /opt/app
RUN useradd --create-home --shell /bin/bash app
COPY . .
RUN chown -R app /opt/app
USER app
ENV NODE_ENV=production
RUN npm install
# RUN npx webpack
CMD [ "sleep", "180" ]
docker-compose.yml
version: "3.9"
services:
app:
build:
context: .
ports:
- "3000:3000"
volumes:
- ./dist/dockerVolume/app:/opt/app
And I run this command:
docker compose up --force-recreate --build
It builds the image, starts a container and I added a sleep to make sure the container stays up for at least 3 minutes. When I open a console for that container and run cd /opt/app && ls, I can verify that there are a lot of files. project/dist/dockerVolume/app gets created by Docker, but nothing is written to it at any point.
There are no errors or warnings or other indications that something isn't set up correctly.
What am I missing?
First you should move the VOLUME declaration to the end of the Dockerfile, because:
If any build steps change the data within the volume after it has been declared, those changes will be discarded. (Documentation)
After this you will face the issue of how bind mounts and docker volumes work. Unfortunately if you use a bind mount, the contents of the host directory will always replace the files that are already in the container. Files will only appear in the host directory, if they were created during runtime by the container.
Also see:
Docker docs: bind mounts
Docker docs: volumes
To solve the issue, you could use any of these workarounds, depending on your use case:
Use volumes in your docker-compose.yml file instead of bind mounts (Documentation)
Create the files you want to run on the host instead of in the image, and bind mount them into the container.
Use a shell script in the container that creates the necessary files (if they are missing) when the container is starting, so the bind mount gets initialized and the changes persist, and after that starts your processes (see the sketch below).
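A minimal sketch of that last workaround, assuming the image keeps a pristine copy of the app under /opt/app-dist (a hypothetical path) and the bind mount sits at /opt/app:
#!/bin/sh
# entrypoint.sh (illustrative): seed the bind-mounted directory on first start,
# then hand control to the container's main command
if [ ! -f /opt/app/package.json ]; then
  cp -R /opt/app-dist/. /opt/app/
fi
exec "$@"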

Docker buildx with node app on Apple M1 Silicon - standard_init_linux.go:211: exec user process caused "exec format error"

Please help!
I am trying to deploy a Docker image to a Kubernetes cluster. No problem until I switched to a new MacBook Pro with M1.
Once I build the image on the M1 machine and deploy, I get the following error from the Kubernetes pod:
standard_init_linux.go:211: exec user process caused "exec format error"
After doing some research, I followed this medium post on getting docker buildx added and set up.
Once I build a new image using the new buildx and run it locally using the docker desktop (the m1 compatible preview version), it runs without issue. However the kubernetes pod still shows the same error.
standard_init_linux.go:211: exec user process caused "exec format error"
My build command
docker buildx use m1_builder && docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6 -f Dockerfile -t ${myDockerRepo} --push .
During the build I see each platform logging out that it is running the commands from my Dockerfile.
My push command
docker push ${myDockerRepo}
One odd thing to note is the sha256 digest in the docker push command response does not change.
Here is my docker file:
# Use an official Node runtime as a parent image
FROM node:10-alpine
# Copy the current directory contents into the container at /app
COPY dist /app
# Set the working directory to /app
WORKDIR /app
# Make port 8000 available to the world outside this container
EXPOSE 8000
# Run npm run serve:dynamic when the container launches
CMD ["node", "server"]
I am no docker expert, clearly.
Started with a full head of hair. Down to 3 strands. Please save those 3 strands.
I appreciate all help and advice!
Update
I have pulled the image built by the M1 MacBook down to my other MacBook and could run it locally via Docker Desktop. I am not sure what this means. Could it be just a Kubernetes setting?
Try adding --platform=linux/amd64 to your Dockerfile:
FROM --platform=linux/amd64 node:10-alpine
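Alternatively, you could build and push only the amd64 variant so the cluster pulls an image matching its nodes (the repository name here is a placeholder):
docker buildx build --platform linux/amd64 -t myrepo/myapp:latest --push .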

How do I populate a volume in a docker-compose.yaml

I am starting to write my first docker-compose.yml file to set a a combination of services that make up my application (all node-js). One of the services (web-server - bespoke, not express) has both a large set of modules it needs and an even larger set of bower_components.
In order to provide separation of concerns, and so I can control the versioning more closely I want to create two named volumes which hold the node_modules and bower_components, and mount those volumes on to the relevant directories of the web-server service.
The question that is confusing me is how do I get these two volumes populated on service startup. There are two reasons for my confusion:-
The behaviour of docker-compose with the -d flag versus the docker run command with the -d flag: the web service obviously needs to keep running (and indeed needs to be restarted if it fails), whereas the container that might populate one or other of the volumes is run once as the whole application is brought up with the docker-compose up command. Can I control this?
A running service and the build commands of that service. Could I actually use a Dockerfile to run npm install and bower install? In particular, if I change the source code of the web application, but the modules and bower_components don't change, will this build step be instantaneous because of a cached result?
I have been unable to find examples of this sort of behaviour, so I am puzzled as to how to go about doing it. Can someone help?
I did something like that, without Bower, but with Node.js tools like Sass, Hall, live reload, Jasmine...
I used npm for all installations inside the npm project (no global installs).
For that, the official node image works quite well; I only had to add app/node_modules/.bin to the PATH. So my Dockerfile looks like this (very simple):
FROM node:7.5
ENV PATH /usr/src/app/node_modules/.bin/:$PATH
My docker-compose.yml file is:
version: '2'
services:
  mydata:
    image: busybox
    stdin_open: true
    volumes:
      - .:/usr/src/app
  node:
    build: .
    image: mynodecanvassvg
    working_dir: /usr/src/app
    stdin_open: true
    volumes_from:
      - mydata
  sass:
    depends_on:
      - node
    image: mynodecanvassvg
    working_dir: /usr/src/app
    volumes_from:
      - mydata
    #entrypoint: "node-sass -w -r -o public/css src/scss"
    stdin_open: true
  jasmine:
    depends_on:
      - node
    image: mynodecanvassvg
    working_dir: /usr/src/app
    volumes_from:
      - mydata
    #entrypoint: "jasmine-node --coffee --autoTest tests/coffee"
    stdin_open: true
  live:
    depends_on:
      - node
    image: mynodecanvassvg
    working_dir: /usr/src/app
    volumes_from:
      - mydata
    ports:
      - 35729:35729
    stdin_open: true
My only trouble is with the entrypoints, which all need a terminal to display their output while running. So I use stdin_open: true to keep each container active, and then I use docker exec -it on each container to get each watch service running.
And of course I launch docker-compose with -d to keep it alive as a daemon.
Next you have to put your npm package.json in your app folder (next to the Dockerfile and docker-compose.yml) and run npm update to download and install the modules.
I'll start with the standard way first
2. Dockerfile
Using a Dockerfile avoids trying to work out how to set up docker-compose service dependencies or external build scripts to get volumes populated and working before a docker-compose up.
A Dockerfile can be setup so only changes to the bower.json and package.json will trigger a reinstall of node_modules or bower_components.
The command that installs first will, at some point, have to invalidate the second command's cache though, so the order you put them in matters. Whichever updates the least, or is significantly slower, should go first. You may need to manually install bower globally if you want to run the bower command first.
If you are worried about NPM versioning, look at using yarn and a yarn.lock file. Yarn will speed things up a little bit too. Bower can just set specific versions as it doesn't have the same sub module versioning issues NPM does.
File Dockerfile
FROM mhart/alpine-node:6.9.5
RUN npm install bower -g
WORKDIR /app
COPY package.json /app/
RUN npm install --production
COPY bower.json /app/
RUN bower install
COPY / /app/
CMD ["node", "server.js"]
File .dockerignore
node_modules/
bower_components/
This is all supported by a docker-compose build: stanza.
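A minimal compose file using that stanza might look like this (service name, image tag, and port are illustrative):
version: '2'
services:
  web:
    build: .
    image: my/app
    ports:
      - "3000:3000"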
1. Docker Compose + Volumes
The easiest/quickest way to populate a volume is by defining a VOLUME in the Dockerfile after the directory has been populated in the image. This will work via compose. I'd question the point of using a volume when the image already has the required content though...
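For example, a sketch building on the Dockerfile above, with the volume declared only after the install step:
FROM mhart/alpine-node:6.9.5
WORKDIR /app
COPY package.json /app/
RUN npm install --production
# Declare the volume after node_modules exists, so a new volume is seeded
# from the image content the first time it is mounted here
VOLUME /app/node_modules
CMD ["node", "server.js"]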
Any other methods of population will require some custom build scripts outside of compose. One option would be to docker run a container with the required volume attached and populate it with npm/bower install.
docker run \
  --volume myapp_bower_components:/bower_components \
  --volume "$(pwd)/bower.json:/bower.json" \
  mhart/alpine-node:6.9.5 \
  sh -c 'npm install bower -g && bower install'
and
docker run \
  --volume myapp_node_modules:/node_modules \
  --volume "$(pwd)/package.json:/package.json" \
  mhart/alpine-node:6.9.5 \
  npm install --production
Then you will be able to mount the populated volumes on your app container:
docker run \
  --volume myapp_bower_components:/bower_components \
  --volume myapp_node_modules:/node_modules \
  --publish 3000:3000 \
  my/app
You'd probably need to come up with some sort of versioning scheme for the volume name as well so you could roll back. Sounds like a lot of effort for something an image already does for you.
Or possibly look at rocker, which provides an alternate docker build system and lets you do all the things Docker devs rail against, like mounting a directory during a build. Again this is stepping outside of what Docker Compose supports.

Docker daemon unable to find the dockerfile

I am trying to create a Node.js base image using the following Dockerfile.
Dockerfile:
FROM node:0.10-onbuild
# replace this with your application's default port
EXPOSE 8888
I then run the command "sudo docker build -t nodejs-base-image ."
This keeps failing with the error
FATA[0000] The Dockerfile (Dockerfile) must be within the build context (.)
I am running the above command from the same directory where the 'Dockerfile' is located. What might be going on?
I am on Docker version 1.6.2, build 7c8fca2
This was happening because I did not have appropriate permissions to the Dockerfile in question.
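If you hit the same thing, a typical check and fix (assuming the file is simply unreadable by the build user) would be:
ls -l Dockerfile
chmod 644 Dockerfile
sudo docker build -t nodejs-base-image .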
