Google Cloud Run and system capabilities - linux

I have a docker image which I am running on Google's Cloud Run.
When I want to run the image locally, I have to give my container additional capabilities like the following:
docker run -p 8080:8080 --cap-add=SYS_ADMIN gcr.io/my-project/my-docker-image
Is there a way of configuring Docker's capabilities in Cloud Run?
I stumbled upon this piece of API documentation from Google, but I don't know how to configure my container. I am not even sure that it is relevant to my situation.
Any help would be really appreciated.

Expanding the POSIX capabilities is not an option on Cloud Run or Cloud Run on GKE, as doing so would widen the security exposure of the underlying host.
Adding capabilities is often the easiest way to make software with special system demands work. More complex, but frequently doable, is modifying the container environment or the package configuration so that the capability is no longer needed.
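For example, a frequent reason for needing SYS_ADMIN locally is headless Chrome's sandbox. If that is what is happening here (an assumption about the workload), the usual workaround is a launch flag rather than a capability:
# Assumes SYS_ADMIN was only needed for Chrome's sandbox; --no-sandbox
# drops that requirement (and the sandbox's protection) inside the container.
google-chrome --headless --disable-gpu --no-sandbox https://example.com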
If what you're trying to do absolutely requires cap-add, it might be worth a feature request to the software package... or it may be a novel use case that Cloud Run cannot support today but may in the future, with your feedback.

Related

Docker plugin - passing and storing passwords in a standalone docker setup

I'm working on a docker plugin that needs to access an external service using a password. The password should be configured at plugin install time and be available during the plugin's lifetime.
Currently I'm using env variables, optionally reading the password from a file via VAR=$(cat password_file). This approach is convenient, but doesn't seem like a very good solution as the password can be looked up using docker plugin inspect.
I wonder what the best way would be to pass and store passwords in a plugin on a standalone Docker setup. Swarm and Kubernetes (and probably other orchestration solutions) support secrets. Unfortunately, standalone Docker doesn't seem to support secrets, and the customer's Docker setup is not under my control :-(
I did look through the documentation and spent time googling for an answer, but came up empty. In fact, I found a few generic threads about storing passwords in containers with no satisfactory answers, but those were from a few years ago, and I was hoping that by 2018 such a basic issue would have a decent solution.
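(A partial mitigation, sketched under the assumption that the plugin can read files at runtime: pass only a file path through the plugin's settings, so that docker plugin inspect reveals the path rather than the password itself.)
# Install time: only the path ends up in the plugin's settings
# (my-org/my-plugin and the path are placeholders).
docker plugin install my-org/my-plugin PASSWORD_FILE=/etc/my-plugin/password
# Runtime, inside the plugin: resolve the secret lazily.
PASSWORD=$(cat "$PASSWORD_FILE")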
P.S. This is my first question - please be gentle with me.

Forbid npm update in Docker environment

Hi guys,
For various projects, I'm creating single Docker environments. Each Docker container consists of Debian, Nginx, Node.js, etc. and will be used by developers as well as in production via Google Cloud's Kubernetes. Since the Node.js and module versions should be the same everywhere, I would like to restrict access to certain npm commands (somehow). Developers often work with different Node.js and project module versions, which has caused a lot of trouble in the past. With the Docker containers, I can provide environments with everything you need for a project. To finish this step, I would like to restrict npm command execution and only allow arguments like install, test, etc.
Please drop me a comment if you know how to resolve this :)
Cheers
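For illustration, the kind of restriction being asked about could take the form of a wrapper shim placed ahead of npm on the PATH (a sketch only; it assumes the real binary has been renamed to npm-real):
#!/bin/sh
# Hypothetical /usr/local/bin/npm shim: only whitelisted subcommands
# reach the renamed real binary.
case "$1" in
  install|ci|test|run) exec /usr/local/bin/npm-real "$@" ;;
  *) echo "npm $1 is not permitted in this environment" >&2; exit 1 ;;
esac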
It is almost impossible to prevent your developers from running certain commands in the container if they have access to the Dockerfiles and can somehow change the build flow.
But because containers provide isolation, and you can build a custom container for each application based on your base image, it is not a big problem if a package version changes for one application (during a build step, for example): it will not affect the other apps, since they have different containers.
So you will not have the compatibility problems you get when many applications share one server and one environment.
The only thing you need to do is make sure that nobody changes the container you use as a base image.
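One way to make that tamper-evident (a sketch; the image name and digest are placeholders) is to pin the base image by digest instead of by tag in every project's Dockerfile:
# A digest-pinned FROM cannot be silently repointed the way a tag can.
FROM my-registry/node-base@sha256:0f3a...   # placeholder digest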

How to create a docker image of current file and OS system?

I wonder if one can take the current environment (variable settings, OS, applications) and create a simple Docker layer on top of it all, so that a user of the Docker container cannot damage the host system even if he removes all files, yet still has the ability to access all installed applications and system settings inside his Docker layer?
Technically you might be able to hack together a solution that does this by copying in all data and apps, installing dependencies, re-configuring the applications, and providing a bash shell for a user to attach to and play around with. But this is not what Docker is designed for at all, and I would not recommend anyone attempt it.
I always explain Docker's use case as processes running in isolated containers with defined interfaces that may be exposed. Ideally you would run one application inside a container, with one interface exposed for communication.
What you are looking for is essentially a VM with snapshots which you can provide to different users.
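For example, with VirtualBox (one hypervisor among many; the VM and snapshot names are placeholders), a clean state can be captured once and rolled back after each user session:
# Capture a known-good state, then restore it later.
VBoxManage snapshot "base-vm" take "clean-state"
VBoxManage snapshot "base-vm" restore "clean-state"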

Docker For Development Only

I am an IT Supervisor head and have very little development background, so I apologize for this naive question.
Currently, we are using Weblogic, running in Linux VMs, created by Oracle VM (OVM) to host our application for production.
The development environment also uses the same configuration.
Our developers are suggesting we use docker in the development environment and utilize DevOps to increase the agility of development.
This sounds like a good idea to me, but I still want our production to run on the same configuration running today (Weblogic in Linux VMs over Oracle VM Hypervisor); I do not want to use docker for production.
I have been searching to find out if that is possible with no luck.
I would really appreciate it if you can help.
I have three questions:
Is that possible?
Is it normal practice to run Docker for development only, while using a traditional non-Docker setup for production?
If it is possible, what are the best ways to achieve that?
Thank You
Docker is Linux distro-agnostic. Java development is JEE container-agnostic (if you follow the official Java specs defined in the JSRs).
So these are two reasons why you should get the same behaviour in your developer environment as in your production environment. Of course, a pre-production environment would be welcome to make sure this holds, and do look into memory and performance issues before committing. Moreover, depending on why you are using Weblogic, ask yourself which JVM and JEE container you would run in your Docker containers.
Is that possible?
Yes. We do that in my organization for some applications, using Tomcat (other applications use WebSphere).
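As a sketch of what that looks like (the image tag and WAR name are illustrative), the official Tomcat image only needs the application dropped into its webapps directory:
# Mount the application into Tomcat's deployment directory.
docker run -d -p 8080:8080 \
  -v "$PWD/myapp.war":/usr/local/tomcat/webapps/myapp.war \
  tomcat:9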
Is it normal practice to run Docker for development only, while using a traditional non-Docker setup for production?
There are many practices, depending on the organization's goals, strategy, and level of agility. Using Docker for development but not in production is the most common use case for Docker containers nowadays, but the next level is to run a Docker engine in the production environment too. See the next section:
If it is possible, what are the best practices to achieve that?
The difficulty is that in a production environment you need a system for automating the deployment, scaling, and management of containerized applications.
Developers do not need that, so it is really easy for them to migrate to Docker (and it lets them work faster and more easily than without it).
In production, you should really consider using Kubernetes or OpenShift instead of running a plain Docker engine the way your developers do. But that is much more complicated than simply installing Docker on a single Windows or Linux host.
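To give a sense of what that extra layer buys (illustrative commands; the names are placeholders), Kubernetes turns deployment and scaling into one-liners once the cluster itself is running:
# Deploy a containerized app and scale it out (assumes a working cluster).
kubectl create deployment my-app --image=registry.example.com/my-app:1.0
kubectl scale deployment my-app --replicas=3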

Docker for a one shot CLI application

Since I first knew of Docker, I thought it might be the solution for several problems we are usually facing at the lab. I work as a Data Analyst for a small Biology research group. I am using Snakemake for defining the -usually big and quite complex- workflows for our analyses.
From Snakemake, I usually call small scripts in R, Python, or even Command Line Applications such as aligners or annotation tools. In this scenario, it is not uncommon to suffer from dependency hell, hence I was thinking about wrapping some of the tools in Docker containers.
At this moment I am stuck at a point where I do not know if I have chosen technology badly, or if I am not able to properly assimilate all the information about Docker.
The problem is related to the fact that you have to run the Docker tools as root, which is something I would not like to do at all, since the initial idea was to make the dockerized applications available to every researcher willing to use them.
In AskUbuntu, the most voted answer proposes to add the final user to the docker group, but it seems that this is not good for security. In the security articles at Docker, on the other hand, they explain that running the tools as root is good for your security. I have found similar questions at SO, but related to the environment inside the container.
Ok, I have no problem with this, but like every moderate-complexity example I happen to find, it seems more oriented towards web-application development, where the system can start the container once and then forget about it.
Things I am considering right now:
Configuring the Docker daemon as a TLS-enabled TCP remote service and providing the corresponding users with certificates. Would there be any overhead in running the applications? Any security issues?
Creating images that only make the application available to the host by sharing a /usr/local/bin/ volume or similar. Is this secure? How can you create a daemonized container that does not need to execute anything? The only example I have found implies creating an infinite loop.
The nucleotid.es page seems to do something similar to what I want, but I have not found any reference to security issues there. Maybe they run all the containers inside a virtual machine, where they do not have to worry about these issues, since they do not need to expose the dockerized applications to more people.
Sorry about my verbosity. I just wanted to write down the mental process (possibly flawed, I know, I know) where I am stuck. To sum up:
Is there any way to create a dockerized command line application which does not need to be run using sudo, is available to several people on the same server, and is not intended to run in a daemonized fashion?
Thank you in advance.
Regards.
If users can execute docker run, they can effectively control the host system, because they can map any host file into a container in which they are root (and the same goes for docker exec). So users should not be able to execute docker directly. I think the easiest solution here is to create wrapper scripts that run docker with fixed arguments, and to give users sudo access to those scripts alone (note that Linux ignores the setuid bit on shell scripts, so sudo is the practical mechanism).
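A minimal sketch of that approach (the wrapper name, image, and group are placeholders):
#!/bin/sh
# /usr/local/bin/run-aligner: a fixed docker invocation, so callers cannot
# inject arbitrary docker arguments. SUDO_UID/SUDO_GID map the container
# user back to the person who invoked sudo.
exec /usr/bin/docker run --rm \
  --user "${SUDO_UID:-0}:${SUDO_GID:-0}" \
  -v "$PWD":/data -w /data \
  my-lab/aligner:1.0 "$@"

# /etc/sudoers.d/docker-wrappers: the lab group may run only this wrapper.
%researchers ALL=(root) NOPASSWD: /usr/local/bin/run-aligner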
