We have a Docker image for our on-premises software. We also have licensing in place that covers application security and code duplication.
But to add extra security, is it possible to do either of the following?
Can we lock the Docker image so that no one can copy or save the running container and start a new container from it in another environment?
Or is it possible to change something in the Docker image at build time that prevents users from logging in inside the container?
The goal is to secure the Docker images as much as possible: prevent duplication of the images and stop anyone from logging in to the running container to see its configuration.
No. Docker images are a well-known format with an open specification: essentially a set of tar files plus some JSON metadata. Once someone has the image, they can do with it what they want. This includes running it with any options they'd like, copying it, and extending it with their own changes.
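For illustration, here is a rough sketch of how little stands in the way once someone has the image or a running container; the image and container names are hypothetical:
# export the image to a portable tar archive and load it on another host
docker save -o app.tar myvendor/app:1.0
docker load -i app.tar
# snapshot a running container, including any runtime changes
docker commit running-app copied-app:latest
# bypass the image's normal entrypoint and open a shell to inspect it
docker run -it --entrypoint sh myvendor/app:1.0
# or open a shell inside the already-running container
docker exec -it running-app sh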
Could someone explain what is happening when you map (in a volume) your vendor or node_modules files?
I had some speed problems with my Docker environment and read that I don't need to map the vendor files there, so I excluded them in the docker-compose.yml file and things were much faster instantly.
So I wonder what happens under the hood when you have vendor files mapped in your volume, and what happens when you don't?
Could someone explain that? I think this information would be useful to more people than just me.
Docker does some complicated filesystem setup when you start a container. You have your image, which contains your application code; a container filesystem, which gets lost when the container exits; and volumes, which have persistent long-term storage outside the container. Volumes break down into two main flavors, bind mounts of specific host directories and named volumes managed by the Docker daemon.
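As a quick illustration of the two flavors (the paths, volume, and image names here are made up):
# bind mount: a specific host directory appears inside the container
docker run -v "$PWD/config":/app/config myimage
# named volume: storage created and managed by the Docker daemon
docker volume create appdata
docker run -v appdata:/app/data myimage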
The standard design pattern is that an image is totally self-contained. Once I have an image I should be able to push it to a registry and run it on another machine unmodified.
git clone git@github.com:me/myapp
cd myapp
docker build -t me/myapp . # requires source code
docker push me/myapp
ssh me@othersystem
docker run me/myapp # source code is in the image
# I don't need GitHub credentials to get it
There are three big problems with using volumes to store your application or your node_modules directory:
It breaks the "code goes in the image" pattern. In an actual production environment, you wouldn't want to push your image and also separately push the code; that defeats one of the big advantages of Docker. And if you're hiding every last byte of code in the image behind a volume during the development cycle, you're never actually running what you're shipping out.
Docker considers volumes to contain vital user data that it can't safely modify. That means that, if your node_modules tree is in a volume and you add a package to your package.json file, Docker will keep using the old node_modules directory, because it can't modify the vital user data you've told it is there (see the sketch after this list).
On MacOS in particular, bind mounts are extremely slow, and if you mount a large application into a container it will just crawl.
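A minimal sketch of the setup that triggers the second problem, with made-up names; the anonymous volume on /app/node_modules keeps whatever it was first populated with, even after package.json changes:
# bind-mount the source tree, but "hide" node_modules behind an anonymous volume
docker run -v "$PWD":/app -v /app/node_modules mynode-image npm start
# adding a dependency later means rebuilding the image and recreating that volume;
# otherwise the container keeps seeing the old node_modules contents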
I've generally found three good uses for volumes: storing actual user data across container executions; injecting configuration files at startup time; and reading out log files. Code and libraries are not good things to keep in volumes.
For front-end applications in particular there doesn't seem to be much benefit to trying to run them in Docker. Since the actual application code runs in the browser, it can't directly access any Docker-hosted resources, and it makes no difference whether your dev server runs in Docker or not. The typical build chains involving tools like TypeScript and Webpack don't have additional host dependencies, so your Docker setup really just turns into a roundabout way to run Node against the source code that's only on your host. The production path of building your application into static files and then serving them with a Web server like nginx still works fine in Docker. I'd just run Node on the host to develop this sort of thing, and not have to think about questions like this one.
I have to deploy my application software, which is a Linux-based package (.bin file), on a VM instance. As per the system requirements, it needs a minimum of 8 vCPUs and 32 GB RAM.
Now, I was wondering whether it is possible to deploy this software over multiple containers that share the CPU and RAM load in a Kubernetes cluster, rather than installing the software on a single VM instance.
Is it possible?
Yes, it's possible to achieve that.
You can start by using Docker Compose to build your custom Docker images and then bring your applications up quickly.
First, I'll show you my GitHub docker-compose repo. You can inspect the folders; they are separated by application or server, so one docker-compose.yml builds the app and you only need to run docker-compose up -d.
If you need to create a custom image with Docker, you should use the docker build command (a concrete example follows the placeholder list below):
docker build -t <user_docker>/<image_name> <path_of_files>
<user_docker> = your Docker user
<image_name> = the image name that you choose
<path_of_files> = some local path; if you need to build from the current folder, use . (dot)
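For example, with hypothetical values filled in, building from the current directory:
docker build -t myuser/myapp .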
So, after that, you can upload this image to Dockerhub using the following commands.
You must log in with your credentials
docker login
You can check your images using the following command
docker images
Upload the image to DockerHub registry
docker push <user_docker>/<image_name>
Once the image has been uploaded, you can use it in different projects; make sure to keep the image lightweight and useful.
Second, I'll show a similar repo, but this one has a Kubernetes configuration in the folder called k8s. This configuration was made for Google Cloud, but I think you can analyze it and learn how to start your new project from it.
The Nginx service was replaced by an ingress service (ingress-service.yml), and an HTTPS certificate was added via the certificate.yml and issuer.yml files.
If you need to dockerize databases, make sure the database is lightweight and create a persistent volume using a PersistentVolumeClaim (the database-persistent-volume-claim.yml file). If you store larger amounts of data in it, you should use a dedicated database server or a managed database service in the cloud.
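Assuming the manifests live in a k8s/ folder as in that repo, applying and checking them would look roughly like this (resource names are illustrative):
kubectl apply -f k8s/        # create the deployments, ingress, certificate and PVC
kubectl get pods,pvc         # verify the pods are running and the volume claim is bound
kubectl describe ingress ingress-service    # check the ingress rules and TLS configuration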
I hope this information will be useful to you.
There are two ways to achieve what you want to do. The first one is to write a Dockerfile and build the image; more information about how to write a Dockerfile can be found here. Apart from that, you can create a container from a base image, install all the software and packages, and export it as an image. Then you can upload it to a Docker image repository such as a Docker Registry or Amazon ECR.
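A rough sketch of that second, interactive approach; the container, image, and user names are made up:
# start a container from a base image and install your software inside it
docker run -it --name build-box ubuntu:20.04 bash
#   ... inside the container: install packages, run the .bin installer, then exit ...
# turn the modified container into a new image and push it to a registry
docker commit build-box myuser/myapp:1.0
docker push myuser/myapp:1.0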
I am new to Docker and am running an Ubuntu container on Arch Linux. I use it for debugging and for building software with an older version of GCC. I was running low on disk space and stumbled upon the logs, which I was able to truncate. I don't need the logs, but I don't want to lose my existing container, which I created some time back. The solutions I have come across (disabling logging through drivers or setting the rotate size to 0m) are, in my understanding, applied when creating new containers, but I want to apply them to an existing one.
You can create an image of that container with docker commit, remove the container with docker rm, and then start a new one from that image with the --log-driver none option to docker run.
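Roughly, assuming the existing container is named ubuntu-dev (substitute your own name and run options):
docker commit ubuntu-dev ubuntu-dev:snapshot     # snapshot the existing container as an image
docker rm ubuntu-dev                             # remove the old container (its logs go with it)
docker run -it --name ubuntu-dev --log-driver none ubuntu-dev:snapshot bash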
If you're new to Docker, consider that it's best to use ephemeral containers of a given image. You can also maintain a Dockerfile to recreate that image with docker build.
I'm trying to run the Ghost Docker image on Azure within a Linux Docker container. This is incredibly easy to get up and running using a custom Docker image for Azure Web App on Linux and pointing it at the official Docker Hub image for Ghost.
Unfortunately, the official Docker image stores all data under the /var/lib/ghost path, which isn't persisted across restarts, so whenever the container is restarted all my content gets deleted and I end up back at a default Ghost install.
Azure won't let me execute arbitrary commands; you basically point it at a Docker image and it fires off from there, so I can't use the -v command-line parameter to map a volume. The Docker image does have an entrypoint configured, if that would help.
Any suggestions would be great. Thanks!
Set WEBSITES_ENABLE_APP_SERVICE_STORAGE to true in the app settings, and the home directory will be mapped from your outer Kudu instance:
https://learn.microsoft.com/en-us/azure/app-service/containers/app-service-linux-faq
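If you manage the Web App with the Azure CLI, setting that flag would look something like this; the resource group and app names are placeholders:
az webapp config appsettings set \
  --resource-group my-resource-group \
  --name my-ghost-app \
  --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true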
You have a few options:
You could mount a file share inside the Docker container by creating a custom image, then storing data there. See these docs for more details.
You could switch to the new Azure Container Instances, as they provide volume support.
You could switch to the Azure Container Service. This requires an orchestrator, like Kubernetes, and might be more work than you're looking for, but it also offers more flexibility, provides better reliability and scaling, and other benefits.
You have to use a shared volume that maps the content of the container's /var/lib/ghost directory to a host directory. This way, your data will persist in the host directory.
To do that, use the following command.
$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost/content ghost:1-alpine
I have never worked with Azure, so I'm not 100 percent sure the following applies, but if you interface with Docker via the CLI there is a good chance it does.
Persistence in Docker is handled with volumes. They are basically mounts that map a directory inside the container's file system tree to a directory on the outside. From your text I understand that you want to store the content of the container's /var/lib/ghost path in /home/site/wwwroot on the outside. Since the -v option takes the host path first and the container path second, you would call Docker like this:
$ docker run [...] -v /home/site/wwwroot:/var/lib/ghost ghost
Unfortunately, setting persistent storage (or bringing your own storage) for a specific path is currently not supported in Azure Web Apps on Linux.
That said, you can play with SSH and try to configure Ghost to point to /home/ instead of /var/lib/.
I have prepared a Docker image here: https://hub.docker.com/r/elnably/ghost-on-azure that adds the SSH capability. The Dockerfile and code can be found here: https://github.com/ahmedelnably/ghost-on-azure/tree/master/1/alpine.
Try it out by configuring your web app to use elnably/ghost-on-azure:latest, browse to the site (to start the container), and go to the SSH page on your app's .scm.azurewebsites.net endpoint. To learn more about SSH, check this link: https://aka.ms/linux-ssh.
How can I ensure that a Docker container will be secure, especially when using third-party containers or base images?
Is it correct that, when using a base image, it may start arbitrary services or mount arbitrary partitions of the host filesystem under the hood, and potentially send sensitive data to an attacker?
So if I use a third-party container whose Dockerfile appears safe, should I traverse the whole linked list of base images (potentially very long) to ensure the container is actually safe and does what it intends to do?
How can I establish the trustworthiness of a Docker container in a systematic and definite way?
Consider Docker images similar to android/iOS mobile apps. You are never quite sure if they are safe to run, but the probability of it being safe is higher when it's from an official source such as Google play or App Store.
More concretely, Docker images coming from Docker Hub go through security scans, the details of which are as yet undisclosed. So the chances of pulling a malicious image from Docker Hub are low.
However, one can never be paranoid enough when it comes to security. There are two ways to make sure all images coming from any source are secure:
Proactive security: do a security-focused source code review of each Dockerfile corresponding to a Docker image, including the base images, as you have already noted in the question.
Reactive security: run Docker Bench, open-sourced by Docker Inc., which runs as a privileged container and looks for known malicious runtime activity by containers.
In summary, whenever possible use Docker images from Docker Hub. Perform security code reviews of Dockerfiles. Run Docker Bench or any other equivalent tool that can catch malicious activities performed by containers.
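A quick way to run Docker Bench, following the project's README (run it on the Docker host; exact steps may vary by version):
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh    # prints PASS/WARN/INFO findings per CIS benchmark check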
References:
Docker security scanning formerly known as Project Nautilus: https://blog.docker.com/2016/05/docker-security-scanning/
Docker bench: https://github.com/docker/docker-bench-security
Best practices for Dockerfile: https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
Docker images are self-contained, meaning that unless you run a container from them with volume mounts or a shared network mode, they have no way of accessing your host's storage or network stack.
For example if I run an image inside a container by using the command:
docker run -it --network=none ubuntu:16.04
This starts a container from the ubuntu:16.04 image with nothing mounted from the host's storage and without sharing any network stack with the host. You can test this by running ifconfig inside the container and on your host and comparing the output.
Regarding checking what the image/base image does: the conclusion from the above is that it can do nothing harmful to your host (unless you mount your /important/directory_on_host into the container and the container removes its contents after starting).
You can check what an image or base image contains, and what it does when run, by reading its Dockerfile(s) or docker-compose.yml files.
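When the Dockerfile isn't published, you can still inspect the image itself; for example (the image name is a placeholder):
docker history --no-trunc someimage:latest    # show the command each layer was built from
docker inspect someimage:latest               # show the entrypoint, env vars, ports and volumes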