Docker: use private layers without running instance of registry - linux

I'd like to build my own layer on top of a public Docker image. Fine, I know how to do that. However, my layer will contain proprietary code that I can't share in a public Docker image. I do, however, want to be able to share it among servers inside my organization.
Is my only option to run my own instance of docker registry? Or are there workflows that allow moving of layers/images around without a central repository?

You can:
run your own docker registry,
use one of the private registry services available out there,
move images around with docker save and docker load (see the sketch below),
build images locally each time (not recommended, but eh!)
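To illustrate the save/load option, a minimal sketch of moving an image between hosts with no registry at all (image name, tarball path and target host are placeholders):
# Export the image (all layers plus metadata) to a tarball on the build machine.
docker save -o myapp.tar mycompany/myapp:1.0
# Copy it to another server by whatever transfer mechanism you already trust.
scp myapp.tar user@internal-server:/tmp/
# Import it on the target host; it can then be run like any locally built image.
ssh user@internal-server 'docker load -i /tmp/myapp.tar'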

To expand on Jerome's answer, we are one of the private Docker registry services: Quay.io
We offer a robust permissions model that should be sufficient for your needs, including teams, shared organizations, and individual repository level controls.
We have many companies using us to store their proprietary code and to distribute it within their datacenters. If you do consider using us, I am sure your first questions will deal with the security of our service, for which we have dedicated a special page on our site: Security

We run Gandalf.io, an affordable private Docker registry service. You'll like Gandalf.io if you need to get started with private Docker images quickly and inexpensively. It's still pretty much in the early stages, so we're offering just one service - private Docker image sharing among teams using the Docker CLI - and it works well for that use case.

Related

How to create a Docker image of the current filesystem and OS?

I wonder if one can take all the current environment variable settings and OS applications and create a simple Docker layer on top of it all, so that a Docker container user will not be able to damage the host system even if he removes all files, yet will have the ability to access all installed applications and system settings inside his Docker layer?
Technically you might be able to hack together a solution that does this by copying in all data/apps, installing dependencies, re-configuring the applications and providing a bash shell to attach to for a user to play around with, but this is not what Docker is designed for at all, not to mention that I would not recommend that anyone attempt this.
I always try to explain Docker's use case as processes which run in isolated containers with defined interfaces that may be exposed. Meaning you would ideally run one application within it, which has an interface exposed for communication.
What you are looking for is essentially a VM with snapshots which you can provide to different users.
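To make that use case concrete, a minimal sketch of the intended pattern - one process per container, with only its interface exposed (nginx is just an example image):
# Run a single service in its own container and publish only its port.
docker run -d --name web -p 8080:80 nginx
# The host (and other containers) interact with it solely through that interface.
curl http://localhost:8080/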

How to automatically scan deployed containers for security vulnerabilities?

Places like quay.io provide an analysis of known vulnerabilities for the container images they host. How can I connect that to my deployed software in Kubernetes? In other words, I want a process that will periodically:
query the apiserver to list all pods
get the image associated with each container in the pod
check each image against a known vulnerability list.
By analogy, we can do this at the OS level by using built-in tools or external things like Nessus. I've found plenty of tools that can do a static analysis of container images; that's analogous to checking the CVE database for apt packages. How do I apply that list of image vulnerabilities to a running system?
I've found plenty of tools that can do a static analysis of container images;
That is the preferred approach indeed.
As an alternative to connecting to each running container and getting its image (which docker inspect can give you: docker inspect --format='{{.Config.Image}}' $INSTANCE_ID), you might consider:
doing this analysis in advance (at the image level),
signing the image,
only allowing containers to run from signed images.
That is what Antonio Murdaca (Senior Engineer at Red Hat Inc., one of the CRI-O maintainers and a Docker (Moby) core maintainer) describes in "Secure your Kubernetes production cluster":
digitally sign a container image with a GPG key generating its detached signature, put the signature where it can be retrieved and verified and finally validate it when someone requests the image back on a host.
The story behind all this is pretty simple: if the signature for a given image is valid, the node is allowed to pull the image and run your container with it. Otherwise, your node rejects the image and fails to run your container.
That way, you only allow running containers whose images have been pre-validated.
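As a rough sketch of that flow, combined with the "list what is actually running" part of the question (image name, key handling and paths are placeholders, not Murdaca's exact tooling):
# List every image currently referenced by containers in the cluster.
kubectl get pods --all-namespaces -o jsonpath='{.items[*].spec.containers[*].image}' | tr ' ' '\n' | sort -u
# Sign an exported image with a detached GPG signature...
docker save -o myapp.tar registry.internal/myapp:1.0
gpg --armor --detach-sign myapp.tar   # produces myapp.tar.asc
# ...and verify that signature on the node before loading and running the image.
gpg --verify myapp.tar.asc myapp.tar && docker load -i myapp.tar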

Security restrictions when building dockerfile

The company where I work (strictly regulated/audited environment) is yet to embrace containers but would like to adopt them for some applications. There is the view that as the image build process issues commands as root (or could be overridden by the user by use of the USER command), that building (not running) a container is effectively giving a user unfettered access as root during the build process. This is anathema to them and goes against all manner of company policies. Access to certain commands for computers is restricted via PowerBroker, i.e. access to certain commands requires explicit permissioning and is logged/subject to audit.
We need to allow container images to be built by a CI/CD system as well as ideally to allow developers to be able to build containers locally. Containers will generally be run in Kubernetes, but may be run directly on a VM. I'd like to be able to have CI build agents spin up on demand, as there are a lot of developers, so I want to run the build process within Kubernetes.
What is the best practice for building docker containers in this sort of environment please? Should we look to restrict access to commands within the Dockerfile?
My current thinking for this approach:
CI/CD:
Define "company-approved" image to act as build agent within
Kubernetes.
Build image defines a user that the build process runs as (not
root).
Build agent image contains PowerBroker, enabling locking down access
to sensitive commands.
Scan docker file for use of user command and forbid this.
Build agent runs docker-in-docker, as per here
(https://applatix.com/case-docker-docker-kubernetes-part-2/). This
achieves isolation between multiple build instances whilst ensuring
all containers are controlled via Kubernetes.
Images are scanned for security compliance via OpenSCAP or similar.
Passing the scan is part of the build process. Passing the scan
allows the image to be tagged as compliant and pushed to a registry.
I'm uncomfortable with the thinking around (4), as this seems a bit rule bound (i.e. it's a sort of blacklist approach) and I'm sure there must be a better way.
Developer's localhost:
Define "company-approved" base images (tagged as such inside a
trusted registry).
Image defines a user that the build process runs
as (not root).
Base image contains PowerBroker, enabling locking
down access to sensitive commands.
Create wrapper script on localhost that wraps docker build. No direct access to docker build: user must use script instead. Access to script is secured via PowerBroker. Script can also scan docker file for use of user command and forbid this.
Pushing of images to registry requires tagging which requires scanning for security compliance via OpenSCAP or similar as above.
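A minimal sketch of such a wrapper script, assuming the policy is simply "no USER instruction allowed" (script name, arguments and the check itself are illustrative, not a vetted implementation):
#!/usr/bin/env bash
# docker-build-wrapper.sh - hypothetical policy gate in front of "docker build"
set -euo pipefail
dockerfile="${1:-Dockerfile}"
context="${2:-.}"
# Reject any Dockerfile that tries to use the USER instruction.
if grep -qiE '^[[:space:]]*USER[[:space:]]' "$dockerfile"; then
    echo "Policy violation: USER instruction found in $dockerfile" >&2
    exit 1
fi
docker build -f "$dockerfile" "$context"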
I'd like to use the OpenSCAP results plus the CI system to create an audit trail of the images that exist; similarly for the deploy process. The security team that monitors for CVEs etc. should be able to understand what containers exist and have been deployed, and be able to trigger rebuilds of images to make use of updated libraries, or to flag up to developers when containers need to be rebuilt/redeployed. I want to be able to demonstrate that all containers meet a security configuration policy that is itself defined as code.
Is this a sensible way to go? Is there even a risk for allowing a user to build (but not run) a container image without restriction? If there is not, what's the best way to ensure that a foolish/malicious developer has not undone the best practices inside the "approved base image", other than a manual code review (which is going to be done anyway, but might miss something)?
By the way, you must assume that all code/images are hosted in-house/on-premises, i.e. nothing is allowed to use a cloud-based product/service.
When docker build runs, each layer executes in the context of a container, so the risks presented by that command executing are constrained by what access is available to the container.
Locking down the build environment could be achieved by restricting what the Docker engine instance which will complete the build can do.
Things like ensuring that user namespaces are used can reduce the risk of a command run inside a container having a wider effect on the environment.
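For example, a hedged sketch of switching on user namespace remapping for the engine that performs builds (assumes a Docker engine with userns-remap support; the "default" value makes the daemon create and use a dockremap user):
# /etc/docker/daemon.json on the build host: remap container root to an unprivileged host range.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "userns-remap": "default"
}
EOF
sudo systemctl restart docker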
Of course that doesn't mitigate the risks of a developer curl|bashing from an untrusted location, but then what's to stop that being done outside of Docker? (i.e. what additional risk is being introduced by the use of Docker in this scenario)
If you have a policy of restricting externally hosted code, for example, then one option could be to just restrict access from the Docker build host to the Internet.
If you're making use of Kubernetes for the build process and are concerned about malicious software being executed in containers, it could be worth reviewing the CIS Kubernetes standard and making sure you've locked down your clusters appropriately.
There is the view that as the image build process issues commands as root (or could be overridden by the user by use of the USER command), that building (not running) a container is effectively giving a user unfettered access as root during the build process
This view is not correct. When you build an image, all you are doing is creating new Docker layers (files), which are stored under /var/lib/docker/aufs/layers. There are simply no security concerns when building Docker images.
There are tools to analyze the security of images you have already built. One is the image analyzer built into Docker Hub.

Internal infrastructure with docker

I have a small company network with the following services/servers:
Jenkins
Stash (Atlassian)
Confluence (Atlassian)
LDAP
Owncloud
zabbix (monitoring)
puppet
and some Java web apps
all running in separate KVM (libvirt) VMs in separate virtual subnets on 2 machines (1 internal, 1 Hetzner root server) with Shorewall in between. I'm thinking about switching to Docker.
But I have two questions:
How can I achieve network security between Docker containers (i.e. I want to prevent OwnCloud from reaching any host in the network except the LDAP host's SSL port)?
Just by using Docker linking? If yes: does Docker really only allow access to linked containers and no others?
By using Kubernetes?
By adding multiple bridge network interfaces for each container?
Would you switch all my infrastructure services/servers to Docker, or go for a hybrid solution with just OwnCloud and the Java web apps on Docker?
Regarding the multi-host networking: you're right that Docker links won't work across hosts. With Docker 1.9+ you can use "Docker Networking" as described in their blog post http://blog.docker.com/2015/11/docker-multi-host-networking-ga/
They don't explain how to secure the connections, though. I strongly suggest enabling TLS on your Docker daemons, which should also secure your multi-host network (that's an assumption, I haven't tried it).
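One way to approach the isolation question above is user-defined networks: containers can only reach containers attached to the same network. A minimal sketch (image names and ports are placeholders; finer-grained rules such as "only the LDAP SSL port" would still need firewall rules on top):
# Two isolated networks; containers on ldap-net cannot reach containers on web-net.
docker network create ldap-net
docker network create web-net
docker run -d --name ldap --network ldap-net osixia/openldap
docker run -d --name owncloud --network ldap-net -p 443:443 owncloud
docker run -d --name confluence --network web-net atlassian/confluence-server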
With Kubernetes you're going to add another layer of abstraction, so you'll need to learn to work with the pods and services concepts. That's fine, but it might be a bit much. Keep in mind that you can still decide to use Kubernetes (or alternatives) later, so the first step should be to learn how you can wrap your services in Docker containers.
You won't necessarily have to switch everything to Docker. You should start with Jenkins, the Java apps, or OwnCloud and then get a bit more used to the Docker universe. Jenkins and OwnCloud will give you enough challenges to gain some experience in maintaining containers. Then you can evaluate much better if Docker makes sense in your setup and with your needs to be applied to the other services.
I personally tend to wrap everything in Docker, but only due to one reason: keeping the host clean. If you get to the point where everything runs in Docker you'll have much more freedom to choose where a service can run and you can move containers to other hosts much more easily.
You should also explore the Docker Hub, where you can find ready-to-run solutions, e.g. Atlassian Stash: https://hub.docker.com/r/atlassian/stash/
If you need inspiration for special applications and how to wrap them in Docker, I recommend having a look at https://github.com/jfrazelle/dockerfiles - you'll find a bunch of good examples there.
You can give containers their own IP from your subnet by creating a network like so:
docker network create \
--driver=bridge \
--subnet=135.181.x.y/28 \
--gateway=135.181.x.y+1 \
network
Your gateway is the first address of your subnet + 1, so if your subnet started at 123.123.123.123 then your gateway would be 123.123.123.124.
Unfortunately I have not yet figured out how to make the containers appear to the public from their own IP; at the moment they appear as the dedicated server's IP. Let me know if you know how I can fix that. I am able to access the container using its IP, though.

Why does nobody do it this way in Docker? (All-in-one container / "black box")

I need a lot of various web applications and microservices.
Also, I need to do easy backup/restore and move it between servers/cloud providers.
I started to study Docker for this. And I'm puzzled when I see advice like this: "create a first container for your application, create a second container for your database and link them together".
But why do I need a separate container for the database? If I understand correctly, Docker's main promise is to "run and move applications with all their dependencies in an isolated environment". So, as I understand it, it is appropriate to put the application and all its dependencies into one container (especially if it's a small application with no need for an external database).
How I see the best way to use Docker in my case (a rough sketch follows the list):
Take a base image (e.g. phusion/baseimage).
Build my own image based on this (with nginx, database and application code).
Expose a port for interaction with my application.
Create a data volume based on this image on the target server (to store application data, database, uploads etc.) or restore the data volume from a previous backup.
Run this container and have fun.
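A rough sketch of the build, data-volume and run steps above (image name, port and volume paths are placeholders):
# Build the all-in-one image from the Dockerfile described above.
docker build -t myapp-allinone .
# Create (or reuse) a named volume for application data, database files and uploads.
docker volume create myapp-data
# Run the container, exposing one port and mounting the data volume.
docker run -d --name myapp -p 8080:8080 -v myapp-data:/data myapp-allinone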
Pros:
Easy to back up/restore/move the whole application (move the data volume only and simply start it on the new server/environment).
The application is a "black box" with no external dependencies to worry about.
If I need to store data in external databases, or use data from them, nothing prevents me from doing it (but it is usually never necessary). And I prefer to use the API of other black boxes instead of direct access to their databases.
More isolation and security than in the case of a single database shared by all containers.
Cons:
Greater consumption of RAM and disk space.
A little bit harder to scale. (If I need several instances of the app to handle thousands of requests per second, I can move the database into a separate container and link several app instances to it. But that is needed only in very rare cases.)
Why have I not found recommendations for this approach? What's wrong with it? What pitfalls have I not seen?
First of all you need to understand that a Docker container is not a virtual machine, just a wrapper around the kernel features chroot, cgroups and namespaces, using layered filesystems, with its own packaging format. A virtual machine is usually a heavyweight, stateful artifact with extensive configuration options regarding the resources available on the host machine, and you can set up complex environments within a VM.
A container is a lightweight, throwaway runtime environment with a recommendation to make it as stateless as possible. All changes are stored within the container, which is just a running instance of the image, and you'll lose all diffs when the container is deleted. Of course you can map volumes for more static data, but this is available for the multi-container architecture too.
If you pack everything into one container you lose the ability to scale the components independently of each other and you create tight coupling.
With this tight coupling you can't implement fail-over, redundancy and scalability features in your app config. Most modern NoSQL databases are built to scale out easily, and data redundancy becomes possible when you run more than one backing database instance.
On the other hand, defining these single-responsibility containers is easy with docker-compose, where you can declare them in a simple YAML file.
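For contrast, a minimal sketch of the multi-container layout the answer recommends, declared with docker-compose (service names, image and paths are placeholders):
# Write a compose file describing the app and its database as separate services.
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  app:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:9.6
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
EOF
# Start both containers; each can now be scaled or replaced independently.
docker-compose up -d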
