I have an application that needs Apache to report a year in the past, otherwise it will not work correctly (it's encoded with ionCube, so I cannot change the code). I configured it as a service and set it to run in a Docker container. If I try to set the container's date using date --set="" it is not allowed; after I set privileged: true in the compose file it works, but as soon as I set the date it goes back to the default (in the host system I have a time-sync function that kicks in). I would like the host system to keep its real time, but to be able to set a date in the past inside the container. How can I do that? Thanks
Both host and the container are Debian 11.
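For reference, a minimal sketch of the setup described above; the image name, container name, and date are hypothetical placeholders:

# hypothetical image/container names
docker run -d --privileged --name legacy-app my-apache-image
# allowed only with --privileged; the system clock is shared with the
# host kernel, which is why the host's time sync soon resets it
docker exec legacy-app date --set="2014-05-01 12:00:00"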
I have a Docker environment that contains Linux containers.
On one of those pods I am running a program that tries to access a Windows server under a secure domain (my own).
How can I tell the container to run as a specific user, or, vice versa, how can I allow my Docker environment access to my server?
I prefer the first option if possible.
I have a MongoDB Docker container (the stock one downloaded from the Docker repo). Its log size is unconstrained (/var/lib/docker/containers/'container_id'/'container_id'-json.log).
This recently caused a server to fill up so I discovered I can instruct the docker daemon to limit the max size of a container's log file as well as the number of log files it will keep after splitting. (Please forgive the naiveté. This is a tools environment so things get set up to serve immediate needs with an often painful lack of planning)
Stopping the container is not desirable (though it wouldn't bring about the end of the world), so doing so is probably a suitable plan G.
Through experimentation I discovered that running a different instance of the same docker image and including --log-opt max-size=1m --log-opt max-file=3 in the docker run command accomplishes what I want nicely.
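For example, the proof-of-concept run looked roughly like this (the container name is a placeholder):

docker run -d --name mongo-poc \
  --log-opt max-size=1m --log-opt max-file=3 \
  mongo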
I'm given to understand that I can include this in the Docker daemon.json file so that it will apply globally to all containers. I tried adding the following to the file "/etc/docker/daemon.json":
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
Then I sent a SIGHUP to the daemon. I did observe that the daemon's log spat out something about reloading the config, and it mentioned the exact filepath at which I made the edit. (Note: this file did not exist previously; I created it and added the content.) This had no effect on the log output of the running Mongo container.
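For reference, the reload was done along these lines (assuming the daemon's PID file is at the default /var/run/docker.pid):

sudo kill -SIGHUP $(cat /var/run/docker.pid)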
After reloading the daemon I also tried instantiating the second instance of the Mongo container again, and it too didn't observe the logging directives that the daemon should have supplied. I saw its log pass the 10m mark and keep going.
My questions are:
Should there be a way for updates to logging via the daemon to affect running containers?
If not, is there a way to tell the container to reload this information while still running? (I see docker update, but this doesn't appear to be one of the config options that can be updated.)
Is there something wrong with my config? I tested including a nonsensical directive to see if mistakes would fail silently, and they did not: a directive not in the schema raised an error in the daemon's log. This indicates that the content I added (displayed above) is at least expected, though possibly incomplete. The options seem to work on the run command but not in the config. Also, I initially tried including the "3" as a number, and that raised an error too, which disappeared when I stringified it.
For the proof-of-concept Mongo container whose run command included the directives, I did see these settings in the file "/var/lib/docker/containers/'container_id'/hostconfig.json". Would it be effective/safe to manually edit this file for the production instance of the Mongo container to match the proof-of-concept container's config?
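For context, the relevant portion of that hostconfig.json looked roughly like this (trimmed; the exact layout may differ between Docker versions):

"LogConfig": {
  "Type": "json-file",
  "Config": {
    "max-size": "1m",
    "max-file": "3"
  }
}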
Please see below some system details:
Docker version 1.10.3, build 20f81dd
Ubuntu 14.04.1 LTS
My main goal is to understand why the global config didn't seem to work and if there is a way to make this change to a running container without interrupting it.
Thank you, in advance, for your help!
This setting will be the new default for newly created containers, not existing containers even if they are restarted. A newly created container will have a new container id. I stress this because many people (myself included) try to change the log settings on an existing container without first deleting that container (they've likely created a pet), and there is no supported way to do that in docker.
It is not necessary to completely stop the docker engine, you can simply run a reload command for this change to apply. However, some methods for running docker, like the desktop environments and Docker in Docker based installs, may require a restart of the engine when there is no easy reload option.
This setting will limit the json log to 3 separate 10 meg files, which is between 20 and 30 megs of logs depending on how full the newest file is. Once you fill the third file, the oldest file is deleted, taking you back to 20 megs, the remaining logs are rotated, and a new log file is started. However, json has a lot of overhead, approximately 50% in my testing, which means you'll get roughly 10-15 megs of application output.
Note that this setting is just the default, and any container can override it. So if you see no effect, double check how the container is started to verify there are no log options being passed there.
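To pick up the new default you therefore have to recreate the container, e.g. (the container name is illustrative, and this assumes its data lives in a volume or is otherwise safe to recreate):

docker stop mongo && docker rm mongo
# the replacement container inherits the log defaults from daemon.json
docker run -d --name mongo mongo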
Changing the daemon.json did not affect running containers for me either. Reloading the daemon and restarting Docker after editing /etc/docker/daemon.json worked, but only for newly created containers.
docker-compose down
sudo systemctl daemon-reload
sudo systemctl restart docker
docker-compose up -d
I am running NGINX and Tomcat in Docker containers (the container OS is Red Hat Linux), deployed through Kubernetes pods. The host OS is Red Hat Linux as well.
My query is which OS parameters will be effective: those of the host OS or of the container OS? During performance tuning, do I need to tune both, or are only the host OS parameters effective?
Examples of the parameters I am referring to are ulimit -n (open files), net.ipv4.tcp.*, fs.file-max, etc.
As Crazykev already mentioned, you can set ulimits using the respective docker run flags.
Parameters like net.ipv4.tcp.* are kernel parameters. Docker containers are run in the same Linux kernel as the host system; for this reason, parameters set on the host will also be effective in the container.
Usually, you will not be able to set these parameters from inside a container. You can (not saying you should) start a container with the --privileged flag, which might (untested) give you access to setting kernel parameters from within the container. The Kubernetes docs also describe how to start privileged containers.
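As a sketch of the per-container knobs mentioned above (the values are only examples; --sysctl requires a reasonably recent Docker and only works for namespaced parameters):

# per-container file-descriptor limit (soft:hard), plus a namespaced
# network parameter; global ones like fs.file-max must still be set
# on the host
docker run -d \
  --ulimit nofile=65536:65536 \
  --sysctl net.ipv4.ip_local_port_range="1024 65535" \
  nginx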
As for the Docker container, I'm not sure it could really be called an OS...
By the way, some of the parameters you refer to may not be settable directly inside a Docker container, for safety or other reasons. You may need to look for more details in the Docker docs (for example, for ulimit: docker run --ulimit nofile=262144:262144).
Kubernetes right now does not support setting ulimits; there is an open issue in Kubernetes for that.
A similar question about setting ulimits is answered here.
I am trying to find the best way to automatically start services inside a docker container once it has been restarted.
I don't mean starting the Docker container on restart. What I'm trying to achieve is the following:
I stop a container; and
when I start it again, the same services (processes) I was running before will start up again.
I.e., if I am running Apache and SSH inside the container, those services should be started again on container restart.
That's really not the Docker way (multiple processes per container). You can try to go down that path, as I did for several months, but you'll find that you'll be going against the Docker team's design principles most of the time. I used the phusion/baseimage base image, and it really is well designed, with a good init process and support for runit and ssh out of the box. Tread carefully if you go down that path, however.
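If you do go that route, a rough sketch of how a service is wired up with runit on phusion/baseimage looks like this (the apache service script is illustrative):

FROM phusion/baseimage
RUN apt-get update && apt-get install -y apache2
# runit runs every executable /etc/service/<name>/run script as a supervised service
RUN mkdir -p /etc/service/apache \
    && printf '#!/bin/sh\nexec /usr/sbin/apache2ctl -D FOREGROUND\n' > /etc/service/apache/run \
    && chmod +x /etc/service/apache/run
# my_init is the image's init process, which starts runit
CMD ["/sbin/my_init"]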
I'm planning to set up a Jenkins-based CD workflow with Docker at the end.
My idea is to automatically build (by Jenkins) a Docker image for every green build, then deploy that image either by Jenkins or by hand (I'm not yet sure whether I want to automatically run each green build).
Getting to the point of having a new image built is easy. My question is about the deployment itself. What's the best practice to 'reload' or 'restart' a running docker container? Suppose the image changed for the container, how do I gracefully reload it while having a service running inside? Do I need to do the traditional dance with multiple running containers and load balancing or is there a 'dockery' way?
Suppose the image changed for the container, how do I gracefully reload it while having a service running inside?
You don't want this.
Docker is a simple system for managing apps and their dependencies. It's simple and robust because ALL dependencies of an application are bundled with it. If your app runs today on your laptop, it will run tomorrow on your server. This is because we have captured 100% of the "inputs" for your application.
As soon as you introduce concepts like "upgrade" and "restart", your application can (accidentally) store state internally. That means it might behave differently tomorrow than it does today (after being restarted and upgraded 100 times).
It's better to use a load balancer (or similar) to transition between your versions than to try to muck with the philosophy of Docker.
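A minimal sketch of that transition with plain docker commands (names, tags, and ports are illustrative; in practice the load balancer switches traffic between the two):

# start the new version alongside the old one
docker run -d --name app-v2 -p 8081:8080 myapp:v2
# ...repoint the load balancer from :8080 to :8081 and verify health...
# then retire the old version
docker stop app-v1 && docker rm app-v1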
The Docker container itself should always be immutable, as you have to replace it for a new deployment. Storing state inside the Docker container will not work when you want to frequently ship new releases built on your CI.
Docker supports volumes, which let you write files permanently into a folder on the host. When you then upgrade the Docker container, you use the same volume, so you have access to the same files written by the old container:
https://docs.docker.com/userguide/dockervolumes/
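For example (the host path, mount point, and image names are illustrative), the old and the replacement container mount the same host directory:

docker run -d --name app-v1 -v /srv/appdata:/data myapp:v1
# after the upgrade, the new container sees the files the old one wrote
docker run -d --name app-v2 -v /srv/appdata:/data myapp:v2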