Below are the docker-compose settings and the Dockerfile for the Linux image. I get a file path access denied error when running in an Ubuntu VM:
environment:
  - ASPNETCORE_ENVIRONMENT=Production
  - ASPNETCORE_Kestrel__Certificates__Default__Password=password123
  - ASPNETCORE_Kestrel__Certificates__Default__Path=/abc/https/localhost.pfx
volumes:
  - ./devops/https/localhost.pfx:/abc/https/localhost.pfx:ro
The certificate path in the compose file and in the image is the same. I attempted to run this on Ubuntu; the ubuntu user has been added to the docker group.
The Dockerfile content is provided for reference:
FROM mcr.microsoft.com/dotnet/aspnet:6.0
COPY . App/
WORKDIR /App
ENV ASPNETCORE_ENVIRONMENT="Production"
ENV ASPNETCORE_URLS="https://+:44123;"
ENV ABC_WORKDIR /App
ENV ABC_FILE_STORE /abc/source
EXPOSE 44123
RUN mkdir -p $ABC_FILE_STORE
RUN mkdir -p /abc/https
RUN chown abcuser /abc/https
RUN chown abcuser $ABC_FILE_STORE
RUN chown abcuser /App
USER abcuser
VOLUME /abc/https
VOLUME $ABC_FILE_STORE
WORKDIR $ABC_FILE_STORE
# sanity check: try to write a file
RUN echo "Hello from ABC" > hello.txt
WORKDIR $ABC_WORKDIR
ENTRYPOINT ["dotnet", "ABCService.dll"]
Do all the volume paths used by the app need to be declared in the Dockerfile, with their permissions changed there?
Does the Dockerfile need to create a user and then run as that user?
From the above it seems that Windows and Linux need separate Dockerfiles for image creation.
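One quick way to narrow this down is to check, from the Ubuntu host, what ownership and permissions the mounted certificate ends up with inside the container, since a bind mount keeps the host file's owner and mode (a diagnostic sketch; abc-service is a placeholder for the actual compose service name):
ls -l ./devops/https/localhost.pfx
docker compose run --rm abc-service ls -l /abc/https/localhost.pfx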
Currently I have a Dockerfile that runs a non-optimized React app (it prints 'Note that the development build is not optimized. To create a production build, use npm run build.'). The Dockerfile is:
FROM node:16
# A directory within the virtualized Docker environment
# Becomes more relevant when using Docker Compose later
WORKDIR /usr/src/app
# Copies package.json and package-lock.json to Docker environment
COPY package*.json ./
# Installs all node packages
RUN npm install
# Copies everything over to Docker environment
COPY . .
# Uses port which is used by the actual application
EXPOSE 3000
# Finally runs the application
CMD [ "npm", "start" ]
With the above I can hit my service at http://localhost:3000/ .
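(For reference, the image is built and run with something along the lines of the following; my-ui-app is just a placeholder tag.)
docker build -t my-ui-app .
docker run -p 3000:3000 my-ui-app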
I tried the following (from https://medium.com/geekculture/dockerizing-a-react-application-with-multi-stage-docker-build-4a5c6ca68166) but I could not access my service:
The Dockerfile I tried is:
# pull official base image
FROM node:16 AS builder
# set working directory
WORKDIR /app
# install app dependencies
#copies package.json and package-lock.json to Docker environment
COPY package.json ./
# Installs all node packages
EXPOSE 3000
RUN npm install
# Copies everything over to Docker environment
COPY . ./
RUN npm run build
#Stage 2
#######################################
#pull the official nginx:1.19.0 base image
FROM nginx:1.19.0
#copies React to the container directory
# Set working directory to nginx resources directory
WORKDIR /usr/share/nginx/html
# Remove default nginx static resources
RUN rm -rf ./*
# Copies static resources from builder stage
COPY --from=builder /app/build .
EXPOSE 3000
# Containers run nginx with global directives and daemon off
ENTRYPOINT ["nginx", "-g", "daemon off;"]
Does anyone know what to do to fix this (or how to create an optimized build)?
The root issue was that I was not aware that nginx was serving on port 80. The following Dockerfile works and is run as follows: docker run -p 80:80 my-ui-app
# pull official base image
FROM node:16 AS builder
# set working directory
WORKDIR /app
# install app dependencies
#copies package.json and package-lock.json to Docker environment
COPY package.json ./
# Installs all node packages
RUN npm install
# Copies everything over to Docker environment
COPY . ./
RUN npm run build
#Stage 2
#######################################
#pull the official nginx:1.19.0 base image
FROM nginx:1.19.0
#copies React to the container directory
# Set working directory to nginx resources directory
WORKDIR /usr/share/nginx/html
# Remove default nginx static resources
RUN rm -rf ./*
# Copies static resources from builder stage
COPY --from=builder /app/build .
EXPOSE 80
# Containers run nginx with global directives and daemon off
ENTRYPOINT ["nginx", "-g", "daemon off;"]
I am running a Node.js application in a Docker container. The application is hosted on a Bluehost CentOS VPS to which I connect using SSH. I use the following command to run the app in the container: sudo docker run -p 80:8080 -d skepticalbonobo/dandakou-nodeapp. Then I check that the container is running using sudo docker ps, and sure enough it is. But when I try to access the app from Chrome using the domain name or IP address I get: "This site can’t be reached". I have noticed, however, that in the output of sudo docker ps, under COMMAND, I get docker-entrypoint... as opposed to node app.js, and I do not know how to fix it. You can pull the image using docker pull skepticalbonobo/dandakou-nodeapp. Here is the content of my Dockerfile:
FROM node:16   # base image assumed; the FROM line is not shown in the posted Dockerfile
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY . .
USER root
RUN chown -R node:node . .
EXPOSE 8080
CMD [ "node", "app.js" ]
Thank you!
The default port for a Node.js app is 3000.
Run the following command and check which port the Node app is actually listening on:
sudo docker run -ti skepticalbonobo/dandakou-nodeapp /bin/sh
EXPOSE in a Dockerfile is for documentation purposes only; it does not publish the port.
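If the shell shows that the app listens on, say, 3000 instead of 8080, the host mapping needs to target that port, for example:
sudo docker run -p 80:3000 -d skepticalbonobo/dandakou-nodeapp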
Good morning. I am trying to use a Dockerfile to start my mock API and my UI.
When I run them in individual terminals, I can see the UI up and running, but when I run them inside a Docker container the API doesn't start for some reason.
Can you help me with this?
# My Docker file.
FROM node:11
# Set working directory for API
RUN mkdir /usr/src/api
WORKDIR /usr/src/api
COPY ./YYY/. /usr/src/api/.
RUN npm install
RUN npm start &
# set working directory for UI
RUN mkdir /usr/src/app/
WORKDIR /usr/src/app/
COPY ./ZZZ/. /usr/src/app/.
ENV PATH /usr/src/app/node_modules/.bin:$PATH
EXPOSE 3000
RUN npm install
RUN npm start
Thanks,
Ranjith
The command npm start starts a web server that only listens on the loopback interface of the container. To fix this, in package.json, under the start script, add --host 0.0.0.0. This will allow you to access the app in your browser using the container IP.
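A minimal sketch of what that could look like in package.json, assuming the dev server is webpack-dev-server (create-react-app instead reads a HOST=0.0.0.0 environment variable):
"scripts": {
  "start": "webpack serve --host 0.0.0.0 --port 3000"
}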
I have a container with Node.js and pm2 as the start command, and on OpenShift I get this error on startup:
Error: EACCES: permission denied, mkdir '/.pm2'
I tried the same image on a Marathon host and it worked fine.
Do I need to change something with user IDs?
The Dockerfile:
FROM node:7.4-alpine
RUN npm install --global yarn pm2
RUN mkdir /src
COPY . /src
WORKDIR /src
RUN yarn install --production
EXPOSE 8100
CMD ["pm2-docker", "start", "--auto-exit", "--env", "production", "process.yml"]
Update
The node image already creates a user "node" with UID 1000 so that the image does not have to run as root.
I also tried to fix permissions and added the user "node" to the root group.
Further, I told pm2 which directory to use with an ENV var:
PM2_HOME=/home/node/app/.pm2
But I still get the error:
Error: EACCES: permission denied, mkdir '/home/node/app/.pm2'
Updated Dockerfile:
FROM node:7.4-alpine
RUN npm install --global yarn pm2
RUN adduser node root
COPY . /home/node/app
WORKDIR /home/node/app
RUN chmod -R 755 /home/node/app
RUN chown -R node:node /home/node/app
RUN yarn install --production
EXPOSE 8100
USER 1000
CMD ["pm2-docker", "start", "--auto-exit", "--env", "production", "process.yml"]
Update 2
Thanks to Graham Dumpleton, I got it working:
FROM node:7.4-alpine
RUN npm install --global yarn pm2
RUN adduser node root
COPY . /home/node/app
WORKDIR /home/node/app
RUN yarn install --production
RUN chmod -R 775 /home/node/app
RUN chown -R node:root /home/node/app
EXPOSE 8100
USER 1000
CMD ["pm2-docker", "start", "--auto-exit", "--env", "production", "process.yml"]
OpenShift will by default run containers as a non-root user. As a result, your application can fail if it requires running as root. Whether you can configure your container to run as root depends on the permissions you have in the cluster.
It is better to design your container and application so that it doesn't have to run as root.
A few suggestions.
Create a special UNIX user to run the application as, and set that user (using its uid) in the USER statement of the Dockerfile. Make the group for the user be the root group.
Fix up permissions on the /src directory and everything under it so it is owned by the special user. Ensure that everything is group root, and that anything that needs to be writable is writable by group root.
Ensure you set HOME to /src in the Dockerfile.
With that done, when OpenShift runs your container as an assigned uid whose group is root, then by virtue of everything being group writable, the application can still update files under /src. Setting the HOME variable ensures that anything the code writes to the home directory goes into the writable /src area.
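As a rough Dockerfile sketch of those suggestions (the user name app and uid 1001 are only illustrative; OpenShift assigns its own uid at runtime):
RUN adduser -S -u 1001 -G root app
ENV HOME=/src
RUN chown -R 1001:0 /src && chmod -R g+w /src
USER 1001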
You can also run the command below, which allows workloads in your current project to run as any user, including root:
oc adm policy add-scc-to-user anyuid -z default
Graham Dumpleton's solution works but is not recommended.
OpenShift will use random UIDs when running containers.
You can see that in the generated YAML of your Pod:
spec:
- resources:
  securityContext:
    runAsUser: 1005120000
You should instead apply Docker security best practices when writing your Dockerfile:
Do not bind the execution of your application to a specific UID: make resources world-readable (e.g., 0644 instead of 0640) and executable when needed.
Make executables owned by root and not writable.
For a full list of recommendations, see: https://sysdig.com/blog/dockerfile-best-practices/
In your case, there is no need for:
RUN adduser node root
...
RUN chown -R node:node /home/node/app
USER 1000
In the original question, the application files are already owned by root.
The following chmod is enough to make them readable and executable to the world.
RUN chmod -R 775 /home/node/app
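Putting that together, a sketch of the Dockerfile from the question with those lines dropped (PM2_HOME, from the question's first update, still needs to point at a directory the chmod makes group-writable):
FROM node:7.4-alpine
RUN npm install --global yarn pm2
COPY . /home/node/app
WORKDIR /home/node/app
RUN yarn install --production
RUN chmod -R 775 /home/node/app
ENV PM2_HOME=/home/node/app/.pm2
EXPOSE 8100
CMD ["pm2-docker", "start", "--auto-exit", "--env", "production", "process.yml"]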
What kind of OpenShift are you using?
You can edit the "restricted" Security Context Constraints.
From the OpenShift CLI:
oc edit scc restricted
And change:
runAsUser:
  type: MustRunAsRange
to
runAsUser:
  type: RunAsAny
Note that Graham Dumpleton's answer is the proper one.