Disable logs for existing containers in Docker - Linux

I am new to Docker and am running an Ubuntu container on Arch Linux. I use it for debugging and for building software with an older version of GCC. I was running low on disk space and stumbled upon logs, which I was able to truncate. I don't need the logs, but I don't want to lose the existing container that I created some time back. The solutions I have come across (disabling logging through drivers or setting the rotation size to 0m) are, in my understanding, applied when creating new containers, but I want to apply them to an existing one.

You can create an image of that container with docker commit, remove the container with docker rm, and then pass the --log-driver none option to docker run.
If you're new to Docker, keep in mind that it's best to treat containers as ephemeral instances of a given image. You can also maintain a Dockerfile so the image can be recreated with docker build.
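A minimal sketch of that sequence (the container and image names are hypothetical):

# snapshot the existing container as an image
docker commit my-container my-container:snapshot
# remove the old container
docker rm my-container
# recreate it from the snapshot with logging disabled
docker run -d --name my-container --log-driver none my-container:snapshot

Note that any options the original container was started with (ports, volumes, etc.) would need to be passed again to docker run.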

Related

Need to run a Docker image in Azure virtual machine scale sets

I have a Docker image that I want to run in an Azure VMSS. It should be scaled automatically (custom scaling based on metrics). I need some help on how to achieve this.
I can create a VM, install Docker, and run the Docker image there, but I'm not sure how to do the same in a VMSS. Do we need to go into each instance of the VMSS, install Docker, and run the Docker image in each VM? If so, how will that work when it scales out to new instances?
It seems that there is an autoscale mechanism in place: https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview
I wish I had a reliable answer.
I created the base VM and modified /etc/docker/daemon.json to set data-root to /mnt/docker, which has tons of space.
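That daemon.json change looks something like this (a minimal sketch; only the data-root setting is shown):

{
  "data-root": "/mnt/docker"
}

followed by a restart of the Docker daemon (e.g. sudo systemctl restart docker) so it takes effect.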
I tested Docker there (version 20.10.12): docker pull, docker run, docker ps -a, docker rm, docker image ls, and docker system prune all worked.
I generalized the VM, created an image, hosted it in a compute gallery, and built a VMSS from that image version.
I logged into a running instance.
docker pull succeeded, but on docker run (the same command I used on the base VM): poof, a "kernel panic - not syncing" error.
But then, later, after googling around (some suggested running out of disk space, which was not the problem here), I found this site, tried to reproduce the failure, and poof, docker run worked. I had changed nothing.
Which kinda sucks. I built the VMSS for Azure DevOps build/release pipelines, and having it only sometimes work (or fail just the first time, I don't know yet) is not a good solution.

How do I create a custom Docker image?

I have to deploy my application software, which is a Linux-based package (.bin) file, on a VM instance. Per its system requirements, it needs a minimum of 8 vCPUs and 32 GB of RAM.
Now, I was wondering if it is possible to deploy this software over multiple containers that share the CPU and RAM load in a Kubernetes cluster, rather than installing the software on a single VM instance.
Is that possible?
Yes, it's possible to achieve that.
You can start by using Docker Compose to build your custom Docker images and then bring your applications up quickly.
First, I'll show you my GitHub docker-compose repo. You can inspect the folders; they are separated by application or server, so one docker-compose.yml builds each app, and you only have to run docker-compose up -d.
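For example, a minimal docker-compose.yml might look like this (the service name, build context, and port are hypothetical):

version: "3"
services:
  app:
    build: .
    ports:
      - "8080:8080"

Running docker-compose up -d in that folder builds the image and starts the container in the background.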
If you need to create a custom image with Docker, you should use this command: docker build -t <user_docker>/<image_name> <path_of_files>
<user_docker> = your docker user
<image_name> = the image name that you choose
<path_of_files> = some local path; if you need to build from the current folder, use . (dot)
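For example, a minimal Dockerfile for such a build might look like this (the base image and file names are hypothetical):

FROM ubuntu:20.04
# copy the application package into the image
COPY myapp.bin /opt/myapp.bin
RUN chmod +x /opt/myapp.bin
# run the application when the container starts
CMD ["/opt/myapp.bin"]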
So, after that, you can upload the image to Docker Hub using the following commands.
First, log in with your credentials:
docker login
You can check your images using the following command
docker images
Then upload the image to the Docker Hub registry:
docker push <user_docker>/<image_name>
Once the image is uploaded, you can use it in different projects; make sure to keep the image lightweight and useful.
Second, I'll show a similar repo, but this one has a Kubernetes configuration in a folder called k8s. This configuration was made for Google Cloud, but I think you can analyze it and learn how to get started on your own project.
The Nginx service was replaced by an ingress service (ingress-service.yml), and an HTTPS certificate was added (the certificate.yml and issuer.yml files).
If you need to dockerize databases, make sure the database is lightweight, and create a persistent volume using a PersistentVolumeClaim (the database-persistent-volume-claim.yml file); if you store larger data, you should use a dedicated database server or a managed database service in the cloud.
I hope this information will be useful to you.
There are two ways to achieve what you want. The first one is to write a Dockerfile and build the image from it; more information about how to write a Dockerfile can be found here. Apart from that, you can create a container from a base image, install all the software and packages, and commit it as an image. Then you can upload it to an image registry such as Docker Registry or Amazon ECR.
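A minimal sketch of that second approach (the container and image names are hypothetical):

# start a container from a base image and work inside it
docker run -it --name build-box ubuntu:20.04 bash
# ...install your software and packages inside the container, then exit...
# commit the container's filesystem as a new image
docker commit build-box <user_docker>/custom-image:1.0
# push it to a registry
docker push <user_docker>/custom-image:1.0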

Is it possible to stop duplicating Docker images?

We have an on-premises software Docker image. We also have licensing covering application security and code duplication.
But to add extra security, is it possible to do any of the below?
Can we lock a Docker image so that no one can copy it, or save a running container and start a new Docker container in another environment?
Or is it possible to change something in the Docker image at build time that prevents users from logging in inside the container?
The goal is to secure the Docker images as much as possible against duplication and to stop logins inside the running container to inspect the configuration.
No. Docker images are a well-known format with an open specification: essentially a set of tar files and some JSON metadata. Once someone has an image, they can do with it what they want. This includes running it with any options they'd like, copying it, and extending it with their own changes.
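You can see this for yourself by exporting an image (a minimal sketch; the image name is just an example):

# export an image to a tarball and list its contents
docker save ubuntu:20.04 -o ubuntu.tar
tar -tf ubuntu.tar | head
# the listing shows layer tar files plus manifest.json and other JSON metadata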

Should you recreate containers when deploying a web app?

I'm trying to figure out whether best practices dictate that when deploying a new version of my web app (Node.js running in its own container) I should:
Do a git pull from inside the container and update "in place"; or
Create a new container with the new code and perform a hot swap of the two docker containers
I may be missing some technical details as I'm very new to the idea of containers.
The second approach is the best practice: you would make a second version of your image (with the new code), stop your container, and run a second container based on that second version.
The idea is that you can easily roll back, as the first version of your image can be used at any time to run the container that was initially in production.
Trying to modify a running container is not a good idea: once it is stopped and removed, running it again would start from the original image, in its original state. Unless you commit that container to a new image, those changes would be lost. And even if you did commit, you would not be able to easily rebuild that image. (Plus, you would commit the whole container: its new code, but also a bunch of additional files created during the execution of the server, such as logs: not very clean.)
A container is supposed to be run from an image that you can precisely build from the specifications of a Dockerfile. It is not supposed to be modified at runtime.
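A minimal sketch of that workflow (the image and container names are hypothetical):

# build the new version of the image
docker build -t myapp:v2 .
# stop and remove the container running the old version
docker stop web && docker rm web
# start a container from the new image
docker run -d --name web myapp:v2
# rolling back is just running the previous image again:
# docker run -d --name web myapp:v1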
A couple of caveats, though:
if your container is used (--link) by other containers, you would need to stop those first, then stop your container and run a new one from the new version of the image, then restart the other containers.
don't forget to remount any data containers that you were using, in order to keep access to your persistent data.

Docker continuous deployment workflow

I'm planning to set up a Jenkins-based CD workflow with Docker at the end.
My idea is to have Jenkins automatically build a Docker image for every green build, then deploy that image either from Jenkins or by hand (I'm not yet sure whether I want to automatically deploy every green build).
Getting to the point of having a new image built is easy. My question is about the deployment itself. What's the best practice for "reloading" or "restarting" a running Docker container? Suppose the image for the container has changed: how do I gracefully reload it while a service is running inside? Do I need to do the traditional dance with multiple running containers and load balancing, or is there a "dockery" way?
Suppose the image for the container has changed: how do I gracefully reload it while a service is running inside?
You don't want this.
Docker is a simple system for managing apps and their dependencies. It's simple and robust because ALL dependencies of an application are bundled with it. If your app runs today on your laptop, it will run tomorrow on your server. This is because we have captured 100% of the "inputs" for your application.
As soon as you introduce concepts like "upgrade" and "restart", your application can (accidentally) store state internally. That means it might behave differently tomorrow than it does today (after being restarted and upgraded 100 times).
It's better to use a load balancer (or similar) to transition between your versions than to try to muck with the philosophy of Docker.
The Docker container itself should always be immutable, as you have to replace it for a new deployment. Storing state inside the Docker container will not work when you want to frequently ship new releases built on your CI.
Docker supports volumes, which let you write files that persist in a folder on the host. When you upgrade the Docker container, you reuse the same volume, so the new container has access to the same files written by the old one:
https://docs.docker.com/userguide/dockervolumes/
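A minimal sketch of that pattern (the volume, container, and image names are hypothetical):

# create a named volume for the persistent data
docker volume create app-data
# run the current version with the volume mounted
docker run -d --name web-v1 -v app-data:/var/lib/app myapp:v1
# upgrade: replace the container but reuse the same volume
docker stop web-v1 && docker rm web-v1
docker run -d --name web-v2 -v app-data:/var/lib/app myapp:v2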
