How to prevent other users from accessing APIs hosted in a docker container? (linux)

I have a docker container that hosts REST APIs.
As the root user I am able to access them via the container's internal IP from the host machine, like below:
#docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mycontainer
172.20.0.7
#curl -X GET http://172.20.0.7:8080/v1/api
Welcome!!
I can do the same as a non-privileged user too! (I get "Permission denied" if I try to execute docker commands as this user.)
Is there a way, I can prevent non-privileged users from accessing the container APIs?

I guess you could create a network interface whose usage you restrict to your root user by using iptables, then bind your service only to this network interface (-p <your_new_ip>:port:port).
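A minimal iptables sketch, assuming the container IP and port from the question (the owner match only applies to locally generated traffic, so the rule goes in the OUTPUT chain):
# reject traffic to the container's API unless it was generated by root (UID 0)
iptables -A OUTPUT -d 172.20.0.7 -p tcp --dport 8080 -m owner ! --uid-owner 0 -j REJECT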

Related

How to pull a docker image from Azure Registry in an Azure Ubuntu virtual machine

I have created an Azure Registry where I deploy some docker containers from the CI/CD pipeline in Azure DevOps.
Following the Microsoft documentation, I have created a service principal, so I have a username and password to use to pull images from the Azure Container Registry. I tried to pull the images locally and it works. To connect to the container registry I use this command:
docker login myazureregistry.azurecr.io --username --password
Now, I want to create a virtual machine in Azure to publicly access the application in the container.
I created an Ubuntu virtual machine and installed Docker. I ran the same command as before on the Ubuntu machine, but I got an error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.39/auth: dial unix /var/run/docker.sock: connect: permission denied
What is the problem? How have I to configure Ubuntu to connect to the Azure Container?
Maybe you don't have permission to use docker.
Add your user to the docker group so you can use the docker command without sudo:
sudo groupadd docker
sudo usermod -aG docker $USER
And after this, run your login command. (If you use a virtual machine, it may be necessary to restart the virtual machine for the changes to take effect.)
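A rough sketch of the retry (the registry name is the one from the question; the credential placeholders stand for your service principal ID and password):
newgrp docker        # start a shell with the new docker group membership active
docker login myazureregistry.azurecr.io --username <service-principal-id> --password <service-principal-password>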

Restrict access of other linux users to a docker container

I have two linux users, named ubuntu and my_user.
Now I build a simple Docker image and run the Docker container.
In my docker-compose.yml, I volume mount some files from the local machine into the container; these files were created by the 'ubuntu' user.
Now if I log in as 'my_user' and access the docker container created by the 'ubuntu' user using the docker exec command, I am able to access any files present in the container.
My requirement is to restrict 'my_user' from accessing the content of the Docker container that was created by the 'ubuntu' user.
This is not possible to achieve currently. If your user can execute Docker commands, it means effectively that the user has root privileges, therefore it's impossible to prevent this user from accessing any files.
You can add "ro", meaning read-only, after the data volume, like this:
HOST:CONTAINER:ro
Or you can set the read_only property in your docker-compose.yml.
Here is an example of how to specify a read-only container in docker-compose:
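A minimal docker-compose.yml sketch (the service name, image and paths are placeholders):
version: "3"
services:
  api:
    image: alpine            # any image works the same way
    command: sleep 3600
    read_only: true          # the container's root filesystem becomes read-only
    volumes:
      - ./data:/data:ro      # the bind mount is also read-only via the :ro suffix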
@surabhi, the only option to restrict file access is by adding these fields in the docker-compose file (see the sketch below):
read_only: flag to set the volume as read-only
nocopy: flag to disable copying of data from a container when a volume is created
You can find more information here
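For instance, with the long volume syntax (a sketch; the service and volume names are made up):
version: "3.4"
services:
  api:
    image: alpine
    command: sleep 3600
    volumes:
      - type: volume
        source: appdata
        target: /data
        read_only: true      # mount the named volume read-only
        volume:
          nocopy: true       # do not copy existing container data into the new volume
volumes:
  appdata: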
You could install and run an sshd in that container, map port 22 to an available host port and manage user accessibility via ssh keys.
This would not allow the user to manage things via docker commands but would give that user access to that container.
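A rough sketch of that approach, assuming openssh-server is already installed in the image (image, container and user names are hypothetical):
docker run -d --name mycontainer -p 2222:22 myimage
docker exec mycontainer ssh-keygen -A        # generate host keys if the image has none
docker exec mycontainer /usr/sbin/sshd       # start sshd inside the container
ssh -p 2222 appuser@localhost                # access is then controlled by the keys you install for appuser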

How to logon as non-root user in Kubernetes pod/container

I am trying to log into a kubernetes pod using the kubectl exec command. I am successful but it logs me in as the root user. I have created some other users too as part of the system build.
The command being used is "kubectl exec -it /bin/bash". I guess this means: run /bin/bash on the pod, which results in a shell session inside the container.
Can someone please guide me on the following -
How to logon using a non-root user?
Is there a way to disable root user login?
How can I bind our organization's ldap into the container?
Please let me know if more information is needed from my end to answer this?
Thanks,
Anurag
You can use su - <USERNAME> to login as a non-root user.
Run cat /etc/passwd to get a list of all available users, then identify a user with a valid login shell, e.g.
/bin/bash or /bin/sh
Users with /bin/nologin or /bin/false as their shell are used by system processes, and as such you can't log in as them.
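Put together, a short sketch (the pod and user names are placeholders):
kubectl exec -it mypod -- /bin/bash    # enters the container as its default user (often root)
cat /etc/passwd                        # look for accounts whose shell is /bin/bash or /bin/sh
su - appuser                           # switch to one of those non-root users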
I think it's because the container user is root; that is why, when you kubectl exec into it, the default user is root. If you run your container or pod as a non-root user, then kubectl exec will not run as root.
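A minimal pod spec sketch that runs everything as a non-root UID (the pod name, image and IDs are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-pod
spec:
  securityContext:
    runAsUser: 1000        # every container in the pod runs as UID 1000
    runAsGroup: 1000
  containers:
    - name: app
      image: myimage:latest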
In most cases, there is only one process running in a Docker container inside a Kubernetes Pod. There are no other processes that could provide authentication or authorization features. You can try to run a wrapper with several nested processes in one container, but this way you spoil the idea of containerization: running immutable application code with minimum overhead.
kubectl exec runs another process in the same container environment as the main process, and there is no option to set the user ID for this process.
However, you can do it by using docker exec with the additional option:
--user , -u Username or UID (format: <name|uid>[:<group|gid>])
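For example, on the node where the container runs (the container name, UID and user name are placeholders):
docker exec -u 1000:1000 -it my-container /bin/bash
# or by user name, if that user exists inside the container
docker exec -u appuser -it my-container /bin/bash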
In any case, these two articles might be helpful for you to run IBM MQ in a Kubernetes cluster:
Availability and scalability of IBM MQ in containers
Administering Kubernetes

Access service running in docker container from inside another docker container

At the moment I'm running a node.js application inside a docker container which needs to connect to camunda, which runs in another container.
I start the containers with the following commands:
docker run -d --restart=always --name camunda -p 8000:8080 camunda/camunda-bpm-platform:tomcat-7.4.0
docker run -d --name app -p 3000:3000 app
Both applications are now running and I can access camunda by navigating to my host's IP on port 8000; running wget http://localhost:8000 -q -O - also returns the camunda page. When I log in to my app container with docker exec -it app sh and type wget http://localhost:8000 -q -O -, I cannot access camunda. Instead I get the following error:
wget: can't connect to remote host (127.0.0.1): Connection refused
When I link my app container to the camunda container with --link camunda:camunda, and type wget http://camunda:8000 -q -O - in my app container, I get the following error:
wget: can't connect to remote host (172.17.0.4): Connection refused
I've seen this option, so I started my app container with --add-host camunda:my_hosts_ip and tried wget again, resulting in:
wget: can't connect to remote host (149.210.227.191): Operation timed out
When running wget http://149.210.227.191:5001 -q -O - on my host machine however, I get a correct response immediately.
Ideally I would like to just start my app container without the need to supply the external IP in any way, and let the app container use the camunda service via localhost or by linking the camunda container to my app container. What would be the easiest way to achieve this?
Why does it not work?
Containers and the host do not share their local IP stack. Thus, when you are inside a container and run anything against localhost:port, that command will try to connect to the container's own local IP stack, not the other container's nor the host's.
How to make it work?
Hard way: you need to know the IP address of the other container and connect to that IP address.
Easier and cleaner way: link your containers.
--link=[]
Add link to another container in the form of <name or id>:alias or just <name or id> in which case the alias will match the name
So you'll need to perform, assuming the camunda container is named camunda:
docker run -d --name app -p 3000:3000 --link camunda app
Then, once you have docker exec-ed into the app container, you will be able to execute wget http://camunda:8080 -q -O - without error.
Note that while the linked-containers graph cannot loop (e.g., camunda cannot in turn be linked to app, since a container must already be started before you can link to it), you can actually do whatever you want/need by playing with IP addresses.
Note also that you can specify the IP address of a container using the --ip option (though it can only be used in conjunction with --net for user-defined networks).
Original answer below. Note that link has been deprecated and the recommended replacement is network. That is explained in the answer to this question: docker-compose: difference between network and link
--
Use the --link camunda:camunda option for your app container. Then you can access camunda via http://camunda:8080/.... The link option adds an entry to the /etc/hosts file of the app container with the IP address of the camunda container. This also means you have to restart your app container if you restart the camunda container.
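A sketch of the user-defined network approach mentioned above, reusing the images from the question (the network name is made up):
docker network create mynet
docker run -d --restart=always --name camunda --network mynet -p 8000:8080 camunda/camunda-bpm-platform:tomcat-7.4.0
docker run -d --name app --network mynet -p 3000:3000 app
# inside the app container, camunda is now resolvable by name on its container port:
docker exec -it app sh -c 'wget http://camunda:8080 -q -O -'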

Mount SMB/CIFS share within a Docker container

I have a web application running in a Docker container. This application needs to access some files on our corporate file server (Windows Server with an Active Directory domain controller). The files I'm trying to access are image files created for our clients and the web application displays them as part of the client's portfolio.
On my development machine I have the appropriate folders mounted via entries in /etc/fstab and the host mount points are mounted in the Docker container via the --volume argument. This works perfectly.
Now I'm trying to put together a production container which will be run on a different server and which doesn't rely on the CIFS share being mounted on the host. So I tried to add the appropriate entries to the /etc/fstab file in the container and mount them with mount -a, but I get mount error(13): Permission denied.
A little research online led me to this article about Docker security. If I'm reading this correctly, it appears that Docker explicitly denies the ability to mount filesystems within a container. I tried mounting the shares read-only, but this (unsurprisingly) also failed.
So, I have two questions:
Am I correct in understanding that Docker prevents any use of mount inside containers?
Can anyone think of another way to accomplish this without mounting a CIFS share on the host and then mounting the host folder in the Docker container?
Yes, Docker is preventing you from mounting a remote volume inside the container as a security measure. If you trust your images and the people who run them, then you can use the --privileged flag with docker run to disable these security measures.
Further, you can combine --cap-add and --cap-drop to give the container only the capabilities that it actually needs. (See documentation) The SYS_ADMIN capability is the one that grants mount privileges.
Yes.
There is a closed issue, "mount.cifs within a container":
https://github.com/docker/docker/issues/22197
according to which adding
--cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH
to the run options will make mount -t cifs operational.
I tried it out and:
mount -t cifs //<host>/<path> /<localpath> -o user=<user>,password=<password>
within the container then works
You could use the smbclient command (part of the Samba package) to access the SMB/CIFS server from within the Docker container without mounting it, in the same way that you might use curl to download or upload a file.
There is a question on StackExchange Unix that deals with this, but in short:
smbclient //server/share -c 'cd /path/to/file; put myfile'
For multiple files there is the -T option, which can create or extract .tar archives; however, this looks like it would be a two-step process (one to create the .tar and another to extract it locally). I'm not sure whether you could use a pipe to do it in one step.
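A hedged sketch of the piped variant (server, share, user and path are placeholders; the '-' asks smbclient to write the tar archive to stdout so tar can read it directly):
smbclient //server/share -U myuser -Tc - path/to/dir | tar xf -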
You can use the Netshare docker volume plugin, which allows you to mount remote CIFS/Samba shares as volumes.
Do not make your containers less secure by exposing many ports just to mount a share, or by running them as --privileged.
Here is how I solved this issue:
First mount the volume on the server that runs docker.
sudo mount -t cifs -o username=YourUserName,uid=$(id -u),gid=$(id -g) //SERVER/share ~/WinShare
Change the username, SERVER and WinShare here. This will ask for your sudo password, then for the password of the remote share.
Let's assume you created the WinShare folder inside your home folder. After running this command you should be able to see all the shared folders and files in the WinShare folder. In addition, since you use the uid and gid options, you will have write access without using sudo all the time.
Now you can run your container using the -v option and share a volume between the server and the container.
Let's say you ran it like the following.
docker run -d --name mycontainer -v /home/WinShare:/home 2d244422164
You should be able to access the windows share and modify it from your container now.
To test it just do:
docker exec -it yourRunningContainer /bin/bash
cd /home
touch testdocfromcontainer.txt
You should see testdocfromcontainer.txt in the windows share.
