Docker daemon.json logging config not effective - linux

I have a mongodb docker container (the stock one downloaded from the docker repo). Its log size is unconstrained (/var/lib/docker/containers/<container_id>/<container_id>-json.log).
This recently caused a server to fill up, so I discovered I can instruct the docker daemon to limit the max size of a container's log file as well as the number of rotated log files it will keep. (Please forgive the naiveté. This is a tools environment, so things get set up to serve immediate needs with an often painful lack of planning.)
Stopping the container is not desirable (though it wouldn't bring about the end of the world), so doing so is probably better left as plan G.
Through experimentation I discovered that running a different instance of the same docker image and including --log-opt max-size=1m --log-opt max-file=3 in the docker run command accomplishes what I want nicely.
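For reference, that run command looked roughly like this (the image and container names here are just examples):
docker run -d --name mongo-poc --log-opt max-size=1m --log-opt max-file=3 mongo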
I'm given to understand that I can include this in the docker daemon.json file so that it will work globally for all containers. I tried adding the following to the file "/etc/docker/daemon.json"
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
Then I sent a SIGHUP to the daemon. I did observe the daemon's log report something about reloading the config, and it mentioned the exact file path at which I made the edit. (Note: this file did not exist previously; I created it and added the content.) This had no effect on the log output of the running Mongo container.
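For reference, the SIGHUP was sent along these lines (assuming the daemon writes its pid file in the default location):
sudo kill -SIGHUP $(cat /var/run/docker.pid)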
After reloading the daemon I also tried launching that second instance of the Mongo container again, and it too didn't pick up the logging directives that the daemon should have supplied. I saw its log pass the 10m mark and keep going.
My questions are:
Should there be a way for updates to logging via the daemon to affect running containers?
If not, is there a way to tell the container to reload this information while still running? (I see docker update, but this doesn't appear to be one of the config options that can be updated.)
Is there something wrong with my config? I tested including a nonsensical directive to see whether mistakes would fail silently, and they did not: a directive not in the schema raised an error in the daemon's log. This indicates that the content I added (displayed above) is at least recognized, though possibly incomplete. The options work in the run command but not in the config file. Also, I initially included the "3" as a number, which raised an error that disappeared when I stringified it.
I did see, in /var/lib/docker/containers/<container_id>/hostconfig.json for the second Mongo container (the one whose run command included the directives), that these settings were present. Would it be effective/safe to manually edit this file for the production Mongo container to match the proof-of-concept container's config?
Please see below some system details:
Docker version 1.10.3, build 20f81dd
Ubuntu 14.04.1 LTS
My main goal is to understand why the global config didn't seem to work and if there is a way to make this change to a running container without interrupting it.
Thank you, in advance, for your help!

This setting will be the new default for newly created containers, not existing containers even if they are restarted. A newly created container will have a new container id. I stress this because many people (myself included) try to change the log settings on an existing container without first deleting that container (they've likely created a pet), and there is no supported way to do that in docker.
It is not necessary to completely stop the docker engine; you can simply run a reload command for this change to apply. However, some methods of running docker, like the desktop environments and Docker-in-Docker based installs, may require a restart of the engine when there is no easy reload option.
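On a systemd host that reload is typically something like the following (or a SIGHUP sent to the daemon process):
sudo systemctl reload docker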
This setting will limit the json logs to 3 separate 10 MB files, i.e. between 20 and 30 MB of logs depending on how full the newest file is. Once you fill the third file, the oldest log is deleted (taking you back to 20 MB), a rotation is performed on the other logs, and a new log file is started. However, json has a lot of overhead, approximately 50% in my testing, which means you'll get roughly 10-15 MB of application output.
Note that this setting is just the default, and any container can override it. So if you see no effect, double check how the container is started to verify there are no log options being passed there.
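You can check what log configuration a specific container actually ended up with, for example:
docker inspect --format '{{json .HostConfig.LogConfig}}' <container_id>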

Changing the daemon.json did not work for my running containers. Reloading the daemon and restarting docker after editing /etc/docker/daemon.json worked, but only for new containers.
docker-compose down
sudo systemctl daemon-reload
sudo systemctl restart docker
docker-compose up -d

Related

Parse Database on cloud machine only persists for a couple of days

There are a lot of pieces here, so I don't expect anyone to be able to answer this without seeing every configuration. But maybe people can tell me how to gather diagnostics, or roughly how the major pieces fit together, so that I can understand what I'm doing wrong.
I have a Tencent CVM instance running Ubuntu Server.
I also have a domain name pointing to the ip address of that server.
I start an nginx service to listen on port 1337 and pass requests to example.com/parse.
I have mongodb running inside of a docker container, listening on port 27017.
Inside of index.js, I have the databaseURI set as 'mongodb://localhost:27017/dev' and the SERVER_URL set as 'https://example.com/parse'
When it's time to deploy the Parse Server instance, I use screen inside my current ssh session, run npm start, detach the screen, and then kill my ssh session by closing the terminal.
Finally, I run the parse dashboard on my local machine with serverURL 'https://example.com/parse'
And everything works great. I add items to the database via the test page that comes with the Parse Server repo. I add items to the database via cloudcode calls from Python. I add and delete classes and objects via the dashboard. The behavior is exactly like I should expect.
And it continues that way for anywhere between 12-72 hours.
But after a few days of normal operation, it will happen that I open parse dashboard and everything is gone. I can start adding things again and everything works right, but nothing persists for more than 72 hours.
There's a lot I don't understand about the anatomy of this, so I figured maybe using screen and then detaching and closing the terminal causes some process to get killed and that's my mistake. But when I run top, I can see everything. I can see npm start is running. I can see mongo is running. When I docker ps to check the container, it's still there doing fine. The nginx service is still running.
Can anyone suggest a way for me to start diagnosing what the problem is? I feel like it's not any of the configs because if that was the problem, it wouldn't work fine for so long. I feel like it must be how I'm deploying it is causing something to reset or causing some process that's supposed to be always running to die.
Edit: For posterity I'll summarize the solution below in case you've come here struggling with the same issue. @Joe pointed me to db.setProfilingLevel(), level 2 with the slowms=0 option for maximum verbosity. Those logs are written to the file indicated within mongodb.conf. Docker doesn't persist storage by default, so you should have a named volume. The syntax is docker volume create <volume_name>, and you attach the volume when you create the container by adding a -v flag like -v <volume_name>:<container_path>. And finally, I was running mongodb in a container because that's the workflow I saw in tutorials, but it was solving a problem I didn't have, and it was simpler to start mongodb as a service without a container.
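A rough sketch of the named-volume approach (the volume and container names here are arbitrary):
docker volume create mongodata
docker run -d --name mongo -v mongodata:/data/db -p 27017:27017 mongo
The official mongo image stores its data under /data/db, so anything written there survives the container being removed and recreated.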

Docker container management solution

We have NodeJS applications running inside docker containers. Sometimes, if a process locks up or the app goes down due to some other issue, we have to manually log in to each container and restart the application. I was wondering
whether there is any sort of control panel that would allow us to easily and quickly restart those applications and see the overall health of the system.
Please note: we can't use the --restart flag because the application doesn't actually exit with an exit code. It runs into problems like a blocked process or things getting bogged down, rather than crashing with an exit code. That's why I don't think a restart policy will help in this scenario.
I suggest you consider using the new HEALTHCHECK directive in Docker 1.12 to define a custom check for your locking condition. This feature can be combined with the new Docker swarm service feature to specify how many copies of your container you want to have running.
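A minimal sketch of what that could look like in the image's Dockerfile, assuming the Node app serves HTTP on a port (here 3000) that stops responding when it locks up; adjust the check command to whatever reliably detects your hung state:
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost:3000/ || exit 1
docker ps will then show the container as healthy or unhealthy, and a swarm service can replace tasks that become unhealthy.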

Sandboxing a node.js script written by a user running on the server

I'm developing a platform where users can create their own "widgets", widgets are basically js snippets ( in the future there will be html and css too ).
The problem is they must run even when the user is not on the website, so basically my service will have to schedule those user scripts to run every now and then.
I'm trying to figure out the best way to "sandbox" that script. One of the first ideas I had was to run it in its own process inside a Docker container, so that if the user somehow manages to get a shell, it would be more or less a virtual machine and hopefully he would be locked inside.
I'm not a Docker specialist, so I'm not even sure that makes sense; in any case it would raise another problem, which is spinning up hundreds of containers to run one simple JavaScript snippet.
Is there any "secure" way of doing this? Perhaps running the script in an empty scope and somehow removing access to the "require" method?
Another requirement would be to kill the script if it times out.
EDIT:
- Found this relevant stackexchange link
This can be done with Docker: you would create a Docker image with their script in it and then run the image, which creates a container for the script to run in.
You could even make it super easy and create a common image, based on the official node.js docker image, and pass in the user's custom files at run time, run them, save the output, and then you are done. This approach is good because there is only one image to maintain, and it keeps the setup simple.
The best way to pass in the data would be to create a volume mount on the container, and mount the user's directory into the container at the same spot every time.
For example, let's say you had a host with a directory structure like this.
/users/
aaron/
bob/
chris/
Then when you run the containers you just need to change the volume mount.
docker run -v /users/aaron:/user/ myimagename/myimage
docker run -v /users/bob:/user/ myimagename/myimage
I'm not sure what the output would be, but you could write it to /user/output inside the container and it would be stored in the user's output directory.
As far as timeouts, you could write a simple script that looks at docker ps and, if a container has been running for longer than the limit, runs docker stop on it.
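A rough sketch of such a watchdog (the image name and limit are placeholders):
#!/bin/bash
# Stop any container based on the image that has been running longer than LIMIT seconds.
LIMIT=60
for id in $(docker ps -q --filter ancestor=myimagename/myimage); do
  started=$(docker inspect -f '{{.State.StartedAt}}' "$id")
  age=$(( $(date +%s) - $(date -d "$started" +%s) ))
  if [ "$age" -gt "$LIMIT" ]; then
    docker stop "$id"
  fi
done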
Because everything is run in a container, you can run many at a time and they are isolated from each other and the host.

Should you recreate containers when deploying web app?

I'm trying to figure out if best practices would dictate that when deploying a new version of my web app (nodejs running in its own container) I should:
Do a git pull from inside the container and update "in place"; or
Create a new container with the new code and perform a hot swap of the two docker containers
I may be missing some technical details as I'm very new to the idea of containers.
The second approach is the best practice: you would make a second version of your image (with the new code), stop your container, and run a second container based on that second version.
The idea is that you can easily roll back, since the first version of your image can be used at any time to run a container like the one that was initially in production.
Trying to modify a running container is not a good idea: once it is stopped and removed, running it again would start from the original image, with its original state. Unless you commit that container to a new image, those changes would be lost. And even if you did commit, you would not be able to easily rebuild that image (plus you would be committing the whole container: its new code, but also a bunch of additional files created during the execution of the server, such as logs: not very clean).
A container is supposed to be run from an image that you can precisely build from the specifications of a Dockerfile. It is not supposed to be modified at runtime.
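A rough sketch of that swap (image and container names here are made up):
docker build -t mywebapp:v2 .                      # build a new image from the updated code
docker stop web && docker rm web                   # remove the container running v1
docker run -d --name web -p 80:3000 mywebapp:v2    # start a replacement from v2
# rolling back is just running the same command against mywebapp:v1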
A couple of caveats, though:
if your container is used (--link) by other containers, you would need to stop those first, stop your container, run a new one from the new version of the image, and then restart the other containers.
don't forget to remount any data containers that you were using in order to get your persistent data.

Docker continuous deployment workflow

I'm planning to set up a jenkins-based CD workflow with Docker at the end.
My idea is to automatically build (by Jenkins) a docker image for every green build, then deploy that image either by jenkins or by 'hand' (I'm not yet sure whether I want to automatically run each green build).
Getting to the point of having a new image built is easy. My question is about the deployment itself. What's the best practice to 'reload' or 'restart' a running docker container? Suppose the image changed for the container, how do I gracefully reload it while having a service running inside? Do I need to do the traditional dance with multiple running containers and load balancing or is there a 'dockery' way?
Suppose the image changed for the container, how do I gracefully reload it while having a service running inside?
You don't want this.
Docker is a simple system for managing apps and their dependencies. It's simple and robust because ALL dependencies of an application are bundled with it. If your app runs today on your laptop, it will run tomorrow on your server. This is because we have captured 100% of the "inputs" for your application.
As soon as you introduce concepts like "upgrade" and "restart", your application can (accidentally) store state internally. That means it might behave differently tomorrow than it does today (after being restarted and upgraded 100 times).
It's better to use a load balancer (or similar) to transition between your versions than to try to muck with the philosophy of Docker.
The Docker container itself should always be immutable, since you have to replace it for a new deployment. Storing state inside the Docker container will not work when you want to ship new releases often, built on your CI.
Docker supports volumes, which let you write files permanently to a folder on the host. When you then upgrade the Docker container, you reuse the same volume, so you have access to the same files written by the old container:
https://docs.docker.com/userguide/dockervolumes/
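A minimal sketch of that pattern (the volume, image, and mount path names are made up):
docker volume create appdata
docker run -d --name app -v appdata:/var/lib/app myservice:1.0
# later, deploy a new build against the same volume
docker stop app && docker rm app
docker run -d --name app -v appdata:/var/lib/app myservice:1.1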