Restricting access to mounted /var/run/docker.sock - security

I am currently developing a webapp using docker-compose and Docker. Currently, there is a front-end Nginx reverse proxy-server in one container and a Rails app in another container.
Sometimes, the Rails app needs to make changes to the Nginx configuration files. I've implemented this by mounting the configuration directory as a shared volume in both containers.
However, to make Nginx reload its configuration files after the Rails app modifies them, something needs to send a HUP signal to the Nginx process. At the moment I do this by mounting the host's /var/run/docker.sock into the Rails app container and using a gem to ask the host's Docker daemon to send the signal to the right container.
This works fine, but now I'm worried about security: if the Rails container is compromised, the attacker effectively has root access to the host.
I thought about creating another container whose sole job is to broker access to the socket, exposing a limited API to the main Rails app. But then we run into the same problem when the broker itself is compromised. Besides, surely there's an easier way?
I searched for ways to limit which API calls are allowed on /var/run/docker.sock, but I wasn't able to find any.
Does anyone have any ideas? Perhaps there is some other way I can reload the Nginx configuration files without having to go through the Docker API?
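For reference, a minimal docker-compose sketch of the setup described above might look like this (service names and paths are my assumptions, not the actual project):

    # Hypothetical sketch of the setup in the question; names are assumptions.
    version: "3"
    services:
      nginx:
        image: nginx
        volumes:
          - ./nginx-conf:/etc/nginx/conf.d        # shared config volume
      rails:
        build: .
        volumes:
          - ./nginx-conf:/etc/nginx/conf.d        # Rails rewrites config here
          # This mount is the security concern: the Docker socket is
          # root-equivalent on the host.
          - /var/run/docker.sock:/var/run/docker.sock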

Related

How to package MEAN stack application as docker container with nginx?

I've made an app using Angular, NestJS (Node.js) and MongoDB, and I'm wondering how to easily turn it into a simple docker container containing an nginx server that serves the frontend app and reverse-proxies the backend, plus the MongoDB. It would also be cool if there were an option for automatic Let's Encrypt cert renewal, etc.
Is there some pre-made package/template I could just clone, replace the app files in, and immediately run? If not, I'd appreciate at least a link to a guide on how to build such a container myself. It's probably not super hard, but I've never really built anything with Docker (I do have some basic knowledge of how it works), and my nginx experience is also very limited...
The expected result should be an all-in-one docker app that I can provide to anyone, so they can just run it with something like docker run -p 80:80 image.
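There is no single official template, but as a rough sketch, a docker-compose file for such a stack might look like the following (all image, service, and path names are assumptions, and this runs as a few cooperating containers rather than literally one image):

    # Hypothetical sketch only; names, paths, and ports are assumptions.
    version: "3"
    services:
      nginx:
        image: nginx
        ports:
          - "80:80"
        volumes:
          - ./dist:/usr/share/nginx/html:ro                 # built Angular app
          - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro  # proxies /api to the backend
      api:
        build: ./server        # NestJS backend
        expose:
          - "3000"
      mongo:
        image: mongo
        volumes:
          - mongo-data:/data/db
    volumes:
      mongo-data:

For the automatic Let's Encrypt part, people commonly add something like nginx-proxy with its ACME companion, or swap nginx for Caddy or Traefik, which handle certificate renewal themselves.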

Best Practice for docker intercontainer communication

I have two docker containers A and B. On container A a Django application is running. On container B a WebDAV source is mounted.
Now I want to check from container A if a folder exists in container B (in the WebDAV mount destination).
What is the best solution for doing something like that? Currently I have solved it by mounting the docker socket into container A to execute commands from A inside B. I am aware that mounting the docker socket into a container is a security risk for the host and the whole application stack.
Other possible solutions would be to use SSH, or to share and mount the directory that should be checked. Of course there are further possibilities, such as doing it with HTTP requests.
Because there are so many ways to solve a problem like this, I want to know if there is a best practice (considering security, implementation effort, and performance) for executing commands from container A in container B.
Thanks in advance
WebDAV provides a file-system-like interface on top of HTTP, so I'd just use it directly. This requires almost no setup beyond providing the other container's name in configuration (and, if you're using plain docker run, putting both containers on the same network), and the setup is the same in basically all container environments (Docker Swarm, Kubernetes, Nomad, AWS ECS, ...) as well as in a non-Docker development environment.
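For instance, checking whether a folder exists can be a single WebDAV request; a sketch using curl (the container name and path are hypothetical):

    # Hypothetical: ask the WebDAV server in container B whether a folder exists.
    # A 207 response means the folder exists; 404 means it does not.
    curl -s -o /dev/null -w "%{http_code}" \
         -X PROPFIND -H "Depth: 0" \
         http://container-b/webdav/some/folder/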
Of the other options you suggest:
Sharing a filesystem is possible. It leads to potential permission problems which can be tricky to iron out. There are potential security issues if the client container isn't supposed to be able to write the files. It may not work well in clustered environments like Kubernetes.
ssh is very hard to set up securely in a Docker environment. You don't want to hard-code a plain-text password that can be easily recovered from docker history; a best-practice setup would require generating host and user keys outside of Docker and bind-mounting them into both containers (I've never seen a setup like this in an SO question). This also brings the complexity of running multiple processes inside a container.
Mounting the Docker socket is complicated, non-portable across environments, and a massive security risk (you can very easily use the Docker socket to root the entire host). You'd need to rewrite that code for each different container environment you might run in. This should be a last resort; I'd consider it only if creating and destroying containers would need to be a key part of this one container's operation.
Is there a best practice for executing commands from container A in container B?
"Don't." Rearchitect your application to have some other way to communicate between the two containers, often over HTTP or using a message queue like RabbitMQ.
One solution would be to mount one filesystem read-only in one container and read-write in the other container.
See this answer: Docker, mount volumes as readonly
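A minimal compose sketch of that idea (service and volume names are hypothetical):

    # Hypothetical: one named volume, writable in one service, read-only in the other.
    version: "3"
    services:
      webdav:
        image: example/webdav          # assumed image
        volumes:
          - shared-data:/data          # read-write side
      django:
        image: example/django-app      # assumed image
        volumes:
          - shared-data:/data:ro       # read-only side
    volumes:
      shared-data: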

Running NodeJS server in production

I have a React + Node app that I need to deploy. I am using nginx to serve my front end, but I am not sure what to use to keep my Node.js server running in production.
The project is hosted on a Windows VM. I cannot use PM2 due to license issues. I have no idea whether running the server using nodemon in production is good or not. I have never deployed an app to production, hence I have no idea about appropriate methods.
You may consider forever or supervisor.
Check this blog post on the topic.
You can also use Docker. You can create multiple docker containers that run your node server. Then, at the nginx level on your host machine, you can configure load balancing to route traffic equally to the different node containers; this improves both availability and scalability. Under heavy traffic you just increase the number of node containers as required. Initially, 2 containers should be enough to handle the traffic (depending on your use case).
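As a rough illustration, the nginx side of such a setup might look like this (upstream names and ports are assumptions):

    # Hypothetical nginx load-balancing config; names and ports are assumptions.
    upstream node_app {
        server node1:3000;             # first node container
        server node2:3000;             # second node container
    }

    server {
        listen 80;
        location / {
            proxy_pass http://node_app;
            proxy_set_header Host $host;
        }
    }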
Note: You can also use forever or supervisor, as suggested by @Rajesh Gupta, inside your docker containers to run the node server. We use PM2 for that.
If you have a database, you can create a separate docker container for it and map it to a volume on your host machine.
You can learn about docker from here.
Also you can read about load balancing in nginx from here.
Furthermore, to improve availability you can add a caching layer between nginx and the docker containers. Varnish is the best caching service I have used to date.
PS: We use a similar but more advanced architecture to run our e-commerce application, which generates 5-10k orders daily. So this is a tested approach with zero downtime.
Try to dockerize the whole app, including the db, caching server (if any), etc.
Here are some reasons why:
You can launch a fully capable development environment on any computer supporting Docker; you don't have to install libraries or dependencies, download packages, mess with config files, etc.
The working environment of the application remains consistent across the whole workflow. This means the app runs exactly the same for developer, tester, and client, be it on a development, staging or production server. In short, Docker is the counter-measure for the age-old response in software development: "Strange, it works for me!"
Every application requires a specific working environment: pre-installed applications, dependencies, databases, everything in a specific version. Docker containers allow you to create such environments. Contrary to a VM, however, a container doesn't hold a whole operating system, just the applications, dependencies, and configuration. This makes Docker containers much lighter and faster than regular VMs.
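As a concrete sketch, a fully dockerized stack of the kind suggested above might be described in docker-compose like this (the database and cache choices are assumptions for illustration):

    # Hypothetical sketch: app, database, and caching server, all dockerized.
    version: "3"
    services:
      app:
        build: .
        ports:
          - "80:3000"
        depends_on:
          - db
          - cache
      db:
        image: postgres                  # assumed database
        volumes:
          - db-data:/var/lib/postgresql/data
      cache:
        image: redis                     # assumed caching server
    volumes:
      db-data: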

docker-compose / nginx / SPA

I want to use docker-compose with two containers: an nginx container and another one with a node.js application. The node.js application is a Single Page Application plus an Express API server.
I want nginx to serve the static files of the SPA. The problem is that my app container compiles the SPA when it starts, so at that point the nginx container does not have the files.
I do not want to create a data volume for it, as I want the "composed" environment to not depend on external state.
I was thinking of something like a transient volume (a volume that is created by docker-compose up and then removed), but this feature does not seem to exist.
Another way would be to serve the static files over NFS from the app container and let nginx read them, but I'm not sure how good or bad that would be.
What's the best practice to run this environment?
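One common pattern, sketched below, is to compile the SPA at image build time with a multi-stage Dockerfile, so the nginx image already contains the static files and no shared or transient volume is needed (base images and paths are assumptions):

    # Hypothetical multi-stage build: compile the SPA, then serve it from nginx.
    FROM node:18 AS build
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci
    COPY . .
    RUN npm run build                 # assumed to emit static files into /app/dist

    FROM nginx:alpine
    COPY --from=build /app/dist /usr/share/nginx/html

The Express API server would then run as its own container, with nginx proxying API requests to it.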

How to set up a nodejs webapp as a service?

I have nodejs installed, npm installed, modules installed, and my app code. On my dev machine I simply type node app.js in my app folder to start the dev server, but now that it's time to deploy to a real server, I have a few questions:
Where is the conventional folder to place my app code?
Which user should be used to run my nodejs app, and how do I give that user permission only to execute the app code and bind to ports 80, 443, and 843?
How do I write the service script, and how do I stop the server by killing its PID?
Ports are determined by which port your app listens on. If you have access via ssh to a server and have root privileges etc., then you can just treat it as a dev server.
I would recommend forever for keeping it running, and maybe writing a balancer to handle multiple node apps at once.
Permission handling has to be done based on connectivity: a user connects to your service and you authenticate them for their permission level. This is done by hand.
The folder you place it in is not very relevant.
If you have, say, a no.de server, you can learn how to use their SmartMachines. There are similar guides for, say, Amazon EC2.
I recommend Monit with Upstart. You can read about that solution here and here. You can also make a simple load balancer in nginx.
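For illustration, a minimal Upstart job for a node app might look like this (the file name, user, and paths are assumptions):

    # Hypothetical /etc/init/myapp.conf; paths and the user are assumptions.
    description "node.js app"

    start on runlevel [2345]
    stop on runlevel [016]

    respawn                            # restart the process if it dies
    setuid nodeuser                    # run as an unprivileged user

    exec /usr/bin/node /srv/myapp/app.js >> /var/log/myapp.log 2>&1

Monit would then watch the process and restart or alert if it stops responding.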
