Schedule daily Docker container restart / reset - linux

I have a Linux based Docker container running an application which seems to have a memory leak. After around a week requests to the application start to fail and the container requires a restart to reset its state and get things working again.
The error reported by the application is:
java.lang.OutOfMemoryError: Java heap space
Is there a generic method that can be used to trigger a restart, resetting its state, regardless of which service is being used to host it? If there isn't a good generic solution, I'm about to give DigitalOcean a whirl, so maybe there's a DigitalOcean-specific solution that would work instead?

You can set a restart policy (with the on-failure flag) as described here.
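For example, a minimal sketch (the image name, container name, and retry count are placeholders): with on-failure the Docker daemon restarts the container automatically whenever the main process exits with a non-zero code. Note that the JVM has to actually exit for this to help, which it does not always do on an OutOfMemoryError; if your JDK supports it, the -XX:+ExitOnOutOfMemoryError flag makes it exit in that case.

    # restart automatically on non-zero exit, trying at most 5 times
    docker run -d --name myapp --restart=on-failure:5 myorg/myapp:latest

    # or change the policy of an already-running container
    docker update --restart=on-failure:5 myapp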

Check out the Watchtower project. It's an incredible tool that restarts Docker containers on a schedule and can also update containers automatically.
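If all you need is the scheduled restart, a plain cron job on the Docker host is about as generic as it gets and works on any provider, DigitalOcean included. A minimal sketch, assuming the container is named myapp:

    # /etc/cron.d/restart-myapp
    # restart the container every day at 04:00
    0 4 * * * root /usr/bin/docker restart myapp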

Related

Deploying Elasticsearch 6 in Azure Container Instance

I'm trying to deploy the current version of Elasticsearch in an Azure Container Instance using the Docker image; however, I need to set vm.max_map_count=262144. Because the container continually tries to restart with the error "max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]", I can't hit the instance with any commands. Trying to disable restarts or to continue on errors causes the container instance to fail.
From the comments it sounds like you may have resolved the issue. In general, for future readers, a possible troubleshooting guide is (example CLI commands follow these steps):
If the container exits unsuccessfully
Try using EXEC for interactive debugging while the container is running. This is also available in the Azure portal, on the "Containers" tab.
If EXEC does not help, attempt to get a successful run with local Docker.
After you have a successful local run, upload the new container version to your registry and try to redeploy to ACI.
If the container exits successfully and repeatedly restarts
Verify you have a long-running command for the container.
Update the restart policy to Never so that upon exit you can debug the terminated container group.
If you cannot find the issue, follow the local steps above and get a successful run with local Docker.
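A rough sketch of the corresponding Azure CLI commands (the resource group, container group, and image names are placeholders):

    # open an interactive shell in the running container (EXEC)
    az container exec --resource-group myRG --name my-es-group --exec-command "/bin/sh"

    # recreate the container group with restart policy Never for debugging
    az container create --resource-group myRG --name my-es-group \
        --image myregistry.azurecr.io/elasticsearch:6 --restart-policy Never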
Hope this helps.

Docker container management solution

We have NodeJS applications running inside Docker containers. Sometimes a process gets locked up, or the app goes down due to some other issue, and we have to manually log in to each container and restart the application. I was wondering
if there is any sort of control panel that would allow us to easily and quickly restart those applications and see the overall health of the system.
Please note: we can't use the --restart flag because the application doesn't actually exit with an exit code. It runs into problems like a blocked process or things getting bogged down, rather than crashing with an exit code. That's why I don't think a restart policy will help in this scenario.
I suggest you consider using the new HEALTHCHECK directive in Docker 1.12 to define a custom check for your locking condition. This feature can be combined with the new Docker swarm service feature to specify how many copies of your container you want to have running.
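As a rough sketch (the /health endpoint, the port, and the presence of curl in the image are assumptions about your app), the check could look like this in the Dockerfile; Docker marks the container unhealthy after three consecutive failures, and a swarm service will then replace the unhealthy task:

    # assumes the NodeJS app serves a /health endpoint on port 3000 and curl is installed
    HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
      CMD curl -f http://localhost:3000/health || exit 1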

Docker: Best way to handle security updates of packages from apt-get inside docker containers

On my current server I use unattended-upgrades to automatically handle security updates.
But I'm wondering what people would suggest for working inside Docker containers.
I have several Docker containers running, one for each service of my app.
Should I have unattended-upgrades set up in each? Or maybe upgrade them locally and push the upgraded images up? Any other ideas?
Does anyone have experience with this in production?
I apply updates automatically inside the containers, as you do now on your server. I currently have staging containers and nothing in production yet. But there is no harm in applying updates to each container: perhaps some redundant network activity if you have multiple containers based on the same image, but it is harmless otherwise.
Rebuilding a container strikes me as unnecessarily time consuming, and it involves a more complex process.
WRT time:
The time to rebuild is added to the time needed to update, so it is 'extra' time in that sense. And if your container has start-up processes, those have to be repeated as well.
WRT complexity:
On the one hand, you are simply running updates with apt. On the other, you are basically acting as an integration server: the more steps, the more that can go wrong.
Also, the updates do not create a 'golden image', since they are easily repeatable.
And finally, since the kernel is never actually updated inside the container, you would never need to restart it.
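If you go this route, the updates can be applied from the host with a small script. A minimal sketch, assuming Debian/Ubuntu-based images and containers running as root:

    # run package updates inside every running container
    for c in $(docker ps --format '{{.Names}}'); do
        docker exec "$c" sh -c 'apt-get update && apt-get -y upgrade'
    done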
I would rebuild the container. Containers are usually oriented toward running one app, and it makes little sense to update the supporting filesystem and all the packages that are included but never used or exposed.
Keeping the data in a separate volume lets you have a script that rebuilds the image and restarts the container. It has the advantage that starting another container from that image, or pushing it through a registry to another server, gives you all the fixes already applied.
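A minimal sketch of such a script (image name, container name, and volume are placeholders):

    # rebuild the image on top of a freshly pulled, patched base image
    docker build --pull -t myorg/myapp:latest .

    # replace the running container, reusing the data volume
    docker stop myapp && docker rm myapp
    docker run -d --name myapp -v myapp-data:/var/lib/myapp myorg/myapp:latest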

docker and product versions

I am working for a product company and we make a lot of releases of the product. In the current approach to testing multiple releases, we create a separate VM and install all the infrastructure software (db, app server, etc.) on top of it. Later we deploy the application WARs on the respective VM. Recently I came across Docker and it seems to be very helpful, so I started exploring it with the examples listed on the site. But I am not able to find out how Docker can be applied to build environments suitable for the various releases.
Each product version will have db schema changes.
Each application WAR will have enhancements/defect fixes etc.
Consider the example below.
Every month our company releases a new version of the software, and in order to support it and fix defects we create a VM per release. The application's overall size is about 2 GB, while the OS takes close to 5 GB (and, apart from disk space, each VM also consumes system resources as overhead). The VMs are required so we can restore any release and test any support issues reported against it. But looking at the additional infrastructure requirements, this seems to be a very costly affair.
Can Docker have everything required to run an application inside a container/image?
Can Docker package an application which consists of multiple WARs/DB schemas and, when started, allocate the appropriate ports?
Will there be any space/memory/speed differences between VMs and Docker in the above scenario?
Do you think Docker is still an appropriate solution, or should we continue using VMs? Can someone share pointers on how I can achieve the above requirements with Docker?
tl;dr: Yes, docker can run most applications inside a container.
Docker runs a single process inside each container. When using VMs or real servers, this one process is usually the init system which starts all system services. With docker it is usually your app.
This difference will get you faster startup times for your app (you are not starting a whole operating system). The trade-off is that, if you depend on system services (such as cron, sshd…), you will need to start them yourself. There are some base images that provide a more "VM-like" environment; check phusion's baseimage, for instance. To start more than a single process, you can also use a process manager such as supervisord.
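If you really do need several processes in one container, a rough supervisord sketch might look like the following (the program names and paths are assumptions on my part; the Dockerfile's CMD would then run supervisord in the foreground):

    ; /etc/supervisor/supervisord.conf (simplified)
    [supervisord]
    nodaemon=true

    [program:app]
    command=java -jar /opt/app/app.war

    [program:cron]
    command=cron -f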
Going forward, the recommended (although not required) approach is to start one process in each container (one per application server, one per database server, and so on) and not use containers as VMs.
Docker has no problems allocating ports either. It even has an explicit instruction for this in the Dockerfile: EXPOSE. Exposed ports can also be published on the Docker host with the --publish argument of docker run, so you don't even need to know the IP assigned to the container.
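For example, a minimal sketch for one of your releases (the base image, WAR name, tag, and ports are placeholders):

    # Dockerfile: one WAR deployed on Tomcat
    FROM tomcat:8
    COPY myapp.war /usr/local/tomcat/webapps/
    EXPOSE 8080

    # build and run, publishing container port 8080 on host port 8081 for this release
    docker build -t myorg/myapp:2016-05 .
    docker run -d --name myapp-2016-05 -p 8081:8080 myorg/myapp:2016-05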
Regarding used space, you will probably see important savings. Docker images are created by stacking filesystem layers… this means that the common layers are only stored once on the server. In your setup, you will likely only have one copy of the base operating system layer (with VMs, you have a copy on each VM).
On memory you will probably see less significant savings (mostly from not starting all the operating system services). Speed is still a subject of research… A few things that are clear so far are that for faster IO you will need to use Docker volumes, and that for network-heavy use cases you should use host networking. Check the IBM research paper "An Updated Performance Comparison of Virtual Machines and Linux Containers" for details, or a summary like InfoQ's.

Docker continuous deployment workflow

I'm planning to set up a Jenkins-based CD workflow with Docker at the end.
My idea is to have Jenkins automatically build a Docker image for every green build, then deploy that image either via Jenkins or by hand (I'm not yet sure whether I want to automatically deploy every green build).
Getting to the point of having a new image built is easy. My question is about the deployment itself. What's the best practice to 'reload' or 'restart' a running docker container? Suppose the image changed for the container, how do I gracefully reload it while having a service running inside? Do I need to do the traditional dance with multiple running containers and load balancing or is there a 'dockery' way?
Suppose the image changed for the container, how do I gracefully reload it while having a service running inside?
You don't want this.
Docker is a simple system for managing apps and their dependencies. It's simple and robust because ALL dependencies of an application are bundled with it. If your app runs today on your laptop, it will run tomorrow on your server. This is because we have captured 100% of the "inputs" for your application.
As soon as you introduce concepts like "upgrade" and "restart", your application can (accidentally) store state internally. That means it might behave differently tomorrow than it does today (after being restarted and upgraded 100 times).
It's better to use a load balancer (or similar) to transition between your versions than to try to muck with the philosophy of Docker.
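A rough sketch of that kind of transition, assuming the containers sit behind a reverse proxy or load balancer (all names, tags, and ports are placeholders):

    # start the new version alongside the old one
    docker run -d --name myapp-v2 -p 8082:8080 myorg/myapp:v2

    # point the load balancer at port 8082, verify, then retire the old version
    docker stop myapp-v1 && docker rm myapp-v1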
The Docker container itself should always be treated as immutable: you replace it for every new deployment. Storing state inside the Docker container will not work when you want to frequently ship the new releases you build on your CI.
Docker supports volumes, which let you write files permanently to a folder on the host. When you then upgrade the Docker container, you reuse the same volume, so the new container has access to the files written by the old one:
https://docs.docker.com/userguide/dockervolumes/
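A minimal sketch of that upgrade flow (the image, container name, and mount path are placeholders):

    # v1 writes its state to the host directory /srv/myapp-data
    docker run -d --name myapp -v /srv/myapp-data:/data myorg/myapp:v1

    # upgrade: replace the container but reuse the same volume
    docker stop myapp && docker rm myapp
    docker run -d --name myapp -v /srv/myapp-data:/data myorg/myapp:v2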
