I have this Dockerfile ...
FROM keymetrics/pm2:latest-alpine
RUN apk update && \
apk upgrade && \
apk add \
bash
COPY . ./
EXPOSE 1886 80 443
CMD pm2-docker start --auto-exit --env ${NODE_ENV} ecosystem.config.js
How can I execute the CMD command using sudo?
I need to do this because binding to port 443 is only allowed for a privileged (root) user.
su-exec can be used in Alpine.
To add the package, if it's not already available, add the following to your Dockerfile:
RUN apk add --no-cache su-exec
Inside the scripts you run within Docker, you can use the following to become another user:
exec su-exec <my-user> <my command>
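For example, a minimal entrypoint sketch (assuming a non-root node user exists in the image, and reusing the pm2-docker command from the question):
#!/bin/sh
# do any root-only setup here, then drop privileges before starting the app
exec su-exec node pm2-docker start --auto-exit --env ${NODE_ENV} ecosystem.config.js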
Alternatively, you could add the more familiar sudo package while building your image.
Add the following to a Dockerfile that's FROM alpine:
RUN set -ex && apk --no-cache add sudo
After that you can use sudo:
sudo -u <my-user> <my command>
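Putting that together in a Dockerfile sketch (my-user is just a placeholder; the NOPASSWD rule is only needed if commands must run without prompting for a password):
RUN set -ex && apk --no-cache add sudo
RUN adduser -D my-user && \
    echo 'my-user ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/my-user && \
    chmod 0440 /etc/sudoers.d/my-user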
Sudo isn't normally shipped with Alpine images, and it rarely makes sense to include it inside any container. What you need isn't sudo to bind to a low-numbered port, but the root user itself; sudo is just a common way to get root access in multi-user environments. If a container included sudo, you would need to either set up the user with a password or allow commands to run without a password. Either way, you now have a privilege escalation inside the container, defeating the purpose of running the container as a normal user, so you may as well run the container as root at that point.
If the upstream image is configured to run as a non-root user (unlikely since you run apk commands during the build), you can specify USER root in your Dockerfile, and all following steps will run as root by default, including the container entrypoint/cmd.
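A minimal sketch of that, based on the Dockerfile from the question:
# make sure the entrypoint/cmd runs as root
USER root
CMD pm2-docker start --auto-exit --env ${NODE_ENV} ecosystem.config.js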
If you start your container as a different user, e.g. docker run -u 1000 your_image, then to run your command as root, you'd remove the -u 1000 option. This may be an issue if you run your container in higher security environments that restrict containers to run as non-root users.
If your application itself drops root privileges, then including sudo is unlikely to help, unless the application itself calls sudo internally. If that's the case, update the application to drop root privileges after binding to the ports.
Most importantly, if the only reason for root inside your container is to bind to low numbered ports, then configure your application inside the container to bind to a high numbered port, e.g. 8080 and 8443. You can map this container port to any port on the host, including 80 and 443, so the outside world does not see any impact. E.g. docker run -p 80:8080 -p 443:8443 your_image. This simplifies your image (removing tools like sudo) and increases your security at the same time.
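For example, a sketch of the Dockerfile from the question adjusted to unprivileged ports (this assumes ecosystem.config.js can be changed so the app listens on 8080 and 8443):
FROM keymetrics/pm2:latest-alpine
RUN apk update && apk upgrade && apk add bash
COPY . ./
# the app itself now binds to high ports inside the container
EXPOSE 1886 8080 8443
CMD pm2-docker start --auto-exit --env ${NODE_ENV} ecosystem.config.js
and on the host:
docker run -p 80:8080 -p 443:8443 your_image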
I have an application written in PHP and I need to work with the host machine: create a Linux user, copy some files, etc.
Run this program as root outside a container on the host system.
One of the core features of Docker is that a container isn't normally allowed to, for example, reset the host root user's password by writing /etc/shadow, or more generally read or write the host filesystem at all. A container is similarly normally forbidden from other system-management tasks like changing the host network configuration. This filesystem isolation, keeping a container process from corrupting key system files, is a primary reason to use Docker at all, and it can't be trivially disabled.
So in particular, "create a user" is precisely the class of dangerous-to-the-host operations that a container process is by design forbidden from doing. "Copy files" is merely hard rather than impossible, but a task whose main purpose is reading and writing host files will generally be much easier to run outside a container.
In theory you can accomplish some of this using bind mounts. For the "copy files" part of the task, in principle you can run something like
docker run --rm \
-v "$PWD/here:/input" \
-v "$PWD/there:/output" \
your-image
and in the container, /input and /output will be the host's ./here and ./there directories.
It's possible to mount the entire host filesystem, -v /:/host for example. You could in theory use this to edit /host/etc/passwd, or possibly even to chroot(8) back into the host system and effectively escape the container. But at this point you're not really getting much benefit from Docker, and it'll be much easier to run the task outside a container.
I did it via SSH & host.docker.internal:
Dockerfile:
RUN apt-get update && apt-get upgrade -y && apt-get install -y ssh
# ...
COPY ./.ssh /root/.ssh
RUN chmod 700 /root/.ssh && chmod 644 /root/.ssh/* && chmod 600 /root/.ssh/id_rsa
docker-compose.yml:
extra_hosts:
- 'host.docker.internal:host-gateway'
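With the key in place and host.docker.internal resolvable, the PHP application can then run commands on the host over SSH, for example (a sketch; the user name, file paths and command are placeholders):
# from inside the container: create a Linux user on the host
ssh -o StrictHostKeyChecking=no root@host.docker.internal 'useradd -m newuser'
# copy a file from the container to the host
scp -o StrictHostKeyChecking=no /some/file root@host.docker.internal:/some/path/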
I'm posting for a friend. He asked for my help and we couldn't figure out what's going on.
My situation is: my application works perfectly on Ubuntu 18.04 when it's not inside a container, but the customer required the use of containers, so I created a Dockerfile so it could be started by a Docker container.
Here's the content of my Dockerfile:
FROM node:8.9.4
ENV HOME=/home/backend
RUN apt-get update
RUN apt-get install -y build-essential libssl-dev
RUN apt-get install -y npm
COPY . $HOME/
WORKDIR $HOME/
RUN npm rebuild node-sass
RUN npm install --global babel-cli
USER root
EXPOSE 6543
CMD ["babel-node", "index.js"]
After building the image, I execute the following Docker run command:
sudo docker run --name backend-api -p 6543:6543 -d backend/backendapi1.0
Taking a look at the log output, I can conclude that the application works properly.
I've created a rule in my nginx config to redirect from port 90 to 6543 (this used to work before using containers):
server {
listen 90;
listen [::]:90;
access_log /var/log/nginx/reverse-access.log;
error_log /var/log/nginx/reverse-error.log;
location / {
proxy_pass http://localhost:6543;
}
}
P.S.: I've tried changing from localhost to the container's IP and it doesn't work either.
The funny thing is that when I try an internal telnet on 6543 it accepts the connection and closes it immediately.
P.S.: all ports are open on the firewall.
The application works normally outside the container (using port 6543 and redirecting in nginx).
I’d appreciate if someone could help us to find out the reason why it’s happening. We don't have much experience creating containers.
Thanks a lot!
Edit: it's an AWS VM, but this is what we get when we run curl:
We found the solution!!
It was an internal container routing problem...
The following Docker run command solved the problem:
sudo docker run --name my_container_name --network="host" -e MONGODB=my_container_ip -p 6543:6543 my_dockerhub_image_name
Thanks a lot!!
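Note that with --network="host" the container shares the host's network namespace, so the -p 6543:6543 publish option is ignored (Docker warns that published ports are discarded in host network mode) and nginx's proxy_pass http://localhost:6543 reaches the application directly. The command could therefore be reduced to something like:
sudo docker run --name my_container_name --network="host" -e MONGODB=my_container_ip my_dockerhub_image_name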
I am new to OpenShift.
I have deployed an application in OpenShift. When I checked the logs, there is a permission denied error for some files.
Now, I want to change the permissions on the container that is already deployed in OpenShift, but I am getting an "Operation not permitted" warning. How do I fix this?
This is for Linux running the latest version of MongoDB. I have already tried executing RUN chmod 777 /path/to/directory in my Dockerfile, created the image, and pulled the same image in the YAML file that I am deploying to OpenShift. However, when I check my Docker container, it shows that the permissions are changed for that directory, but when I deploy, I get a "permission denied" warning in my logs.
FROM node:10.16.3
RUN apt update && apt install -y openjdk-8-jdk
RUN useradd -ms /bin/bash admin
# Set the workdir /var/www/myapp
WORKDIR /var/www/myapp
# Copy the package.json to workdir
COPY package.json .
# Run npm install - install the npm dependencies
RUN npm install
RUN npm install sqlite3
# Copy application source
COPY . .
RUN chown -R admin:admin /var/www/myapp
RUN chmod 775 /var/www/myapp
USER admin
# Copy .env.docker to workdir/.env - use the docker env
#COPY .env.docker ./.env
# Expose application ports - (4300 - for API and 4301 - for front end)
# EXPOSE 4300 4301
EXPOSE 52000
CMD [ "npm", "start" ]
Although the permissions have changed when I run my Dockerfile, when I try to deploy to OpenShift, I get permission denied for some files in that directory.
By default any container started in OpenShift gets a random user ID. Therefore images not designed to handle such a random UID will fail with permission errors.
In order to get your image working I recommend you read the following article from the latest OpenShift documentation: https://docs.openshift.com/container-platform/4.2/openshift_images/create-images.html
Here is the short version that relates to your issue:
SUPPORT ARBITRARY USER IDS
By default, OpenShift Container Platform
runs containers using an arbitrarily assigned user ID. This provides
additional security against processes escaping the container due to a
container engine vulnerability and thereby achieving escalated
permissions on the host node.
For an image to support running as an arbitrary user, directories and
files that may be written to by processes in the image should be owned
by the root group and be read/writable by that group. Files to be
executed should also have group execute permissions.
Adding the following to your Dockerfile sets the directory and file
permissions to allow users in the root group to access them in the
built image:
RUN chgrp -R 0 /some/directory && \
chmod -R g=u /some/directory
Because the container user is always a member of the root group, the container user can read and write
these files. The root group does not have any special permissions
(unlike the root user) so there are no security concerns with this
arrangement. In addition, the processes running in the container must
not listen on privileged ports (ports below 1024), since they are not
running as a privileged user.
Adding this to my Dockerfile worked:
RUN chgrp -R 0 /some/directory && \
chmod -R g=u /some/directory
The following changes in the Dockerfile solved the issue for me
#Add user
RUN useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo -u 1001 ubuntu
#Add permissions
RUN chown -R ubuntu:root /home/ubuntu && \
chgrp -R 0 /home/ubuntu && \
chmod -R 775 /home/ubuntu
##Specify the user with UID
USER 1001
OpenShift runs the Pod with an arbitrary UID by default. You can check what UID is used in the pod using oc rsh <pod name> id. Your pod might not be running as the admin user you specified.
The container you built requires admin permission, so you should configure the anyuid SCC for the default service account to avoid the permission error.
Further information about SCC is here: Enable Container Images that Require Root.
e.g. when the pod runs under the default service account (the default):
$ oc adm policy add-scc-to-user anyuid -z default
$ oc delete pod <your pod name>
FYI, before building your image, checking here: General Container Image Guidelines is helpful for building images to be used on OpenShift.
I have a simple shell script which generates delay, jitter, and packet loss using the tc-netem package. You can check the script here. My idea is that I want to put it into a container and execute it on my host using an alias, something like echo 'alias netsim="sudo docker run -v `pwd`:/home/netsim/ docker-image"' >> ~/.bashrc, and then when I run netsim on my host it should ask me for all the user input. Also, the container should control the host network interfaces, and I am not even sure if this is possible. My Dockerfile so far is the following:
FROM alpine:3.7
WORKDIR /home/netsim
RUN apk update && apk upgrade
COPY netsim.sh random_netsim.sh /home/netsim/
CMD chmod +x /home/netsim/netsim.sh && chmod +x /home/netsim/random_netsim.sh
The reason why I want to do it is that my host is running Ubuntu 18.04, and the tc-netem version it ships with has a bug in the jitter generation. I was thinking it would be pretty cool if I could use containers to do the same, without re-compiling/downgrading the tc-netem version.
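For the container to shape the host's interfaces with tc, it would typically need to share the host network namespace and have the NET_ADMIN capability (and have iproute2 installed in the image); a sketch of what the alias could run, with these flags added as an assumption:
# share the host network stack and allow tc to modify qdiscs on host interfaces
sudo docker run --rm -it --network host --cap-add NET_ADMIN \
    -v "$(pwd)":/home/netsim/ docker-image sh /home/netsim/netsim.sh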
I'm attempting to run a node.js application in debug mode in one Docker container, and attach a debugger from another container onto the application running in the first container.
As such, I'm trying to open up port 5858 to the outside world. However, when I --link another container to the first container (with alias firstContainer), and run nmap -p 5858 firstContainer, I find that port 5858 is closed. The first container has told me that the node.js application is listening on port 5858, I've exposed the port in the Dockerfile, and I've also bound the ports to the corresponding port on my machine (although, I'm not certain that's necessary). When I run nmap on port 8080, all is successful.
How can I open up port 5858 on a Docker container such that I can attach a debugger to this port?
The Dockerfile is:
FROM openshift/base-centos7
# This image provides a Node.JS environment you can use to run your Node.JS
# applications.
MAINTAINER SoftwareCollections.org <sclorg@redhat.com>
EXPOSE 8080 5858
ENV NODEJS_VERSION 0.10
LABEL io.k8s.description="Platform for building and running Node.js 0.10 applications" \
io.k8s.display-name="Node.js 0.10" \
io.openshift.expose-services="8080:http" \
io.openshift.tags="builder,nodejs,nodejs010"
RUN yum install -y \
https://www.softwarecollections.org/en/scls/rhscl/v8314/epel-7-x86_64/download/rhscl-v8314-epel-7-x86_64.noarch.rpm \
https://www.softwarecollections.org/en/scls/rhscl/nodejs010/epel-7-x86_64/download/rhscl-nodejs010-epel-7-x86_64.noarch.rpm && \
yum install -y --setopt=tsflags=nodocs nodejs010 && \
yum clean all -y
# Copy the S2I scripts from the specific language image to $STI_SCRIPTS_PATH
COPY ./s2i/bin/ $STI_SCRIPTS_PATH
# Each language image can have 'contrib' a directory with extra files needed to
# run and build the applications.
COPY ./contrib/ /opt/app-root
# Drop the root user and make the content of /opt/app-root owned by user 1001
RUN chown -R 1001:0 /opt/app-root
USER 1001
# Set the default CMD to print the usage of the language image
CMD $STI_SCRIPTS_PATH/usage
Run with:
docker run -P -p 5858:5858 -p 8080:8080 --name=firstContainer nodejs-sample-app
Taken from/built with instructions from here.
Thanks.
-P automagically maps any exposed ports within a container to a random port on the host machine, while -p allows explicit mapping of ports. Using the --link flag allows two Docker containers to communicate with each other, but does nothing to expose the ports to the outside world (outside the Docker private network).
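For example (a sketch; debugger-image is just a placeholder for whatever image runs the debugger):
# either: -P publishes every EXPOSEd port (8080, 5858) on a random host port
docker run -d -P --name=firstContainer nodejs-sample-app
docker port firstContainer   # shows which host ports were assigned
# or: -p publishes specific container ports on specific host ports
docker run -d -p 5858:5858 -p 8080:8080 --name=firstContainer nodejs-sample-app
# --link only makes firstContainer reachable by alias from the linked container
docker run --rm --link firstContainer:firstContainer debugger-image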