This question already has answers here:
How to increase/check default memory Docker has on Linux?
(2 answers)
Closed 1 year ago.
I'm trying to run the confluent cp-demo docker image.
https://docs.confluent.io/5.5.0/tutorials/cp-demo/docs/index.html
I'm using Ubuntu 20.04, and in order to start the container I need to increase Docker's maximum memory setting from the default 2 GB to 8 GB.
This can be done easily on Windows and Mac through the Docker Desktop app, but that isn't available on Ubuntu, and I haven't found a way to change the setting using the CLI. (I can only modify the memory of a container after I have started it with the CLI, but the cp-demo instructions say I need to change the memory setting before starting it.)
Does anyone know how I can do this?
As far as I know, on Mac and Windows you can control the memory and CPU given to the Docker application, but on Linux Docker uses Linux namespaces and cgroups directly, so you don't need to change anything in the daemon: Docker lets containers use whatever free resources are available on the host.
For your requirements:
Make sure the host machine has sufficient free resources available.
Start cp-demo with a memory limit: docker run -it --memory="2g" cp-demo
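If the demo really needs 8 GB, as the question says, the same flag just takes a bigger value; a rough sketch (the 8 GB figure comes from the question, and --memory-swap is optional):

# check what the host can actually spare before giving it to the container
free -h

# start the demo with an explicit memory ceiling (and matching swap cap)
docker run -it --memory="8g" --memory-swap="8g" cp-demo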
Looking for some recommendations on how to report Linux host metrics, such as CPU and memory utilization and disk usage stats, from within a Docker container. The host will run a number of Docker containers. One thought was to run top and other basic Linux commands from outside the container and push their output into a container folder that has the appropriate authorization so it can be consumed. Another thought was to use the Docker API to run docker stats for the containers, but I'm not sure this is the best approach, as it may not report on other processes running on the host that are not containerized. A third option would be to somehow execute something like top and other commands on the host from within the container; this would be the most ideal option for my situation. I'm just looking for proven design patterns that others have used. Also, I can't install a bunch of tools on the host, since it is a customer host and I have no control over what is already installed there.
You may run your container in privileged mode, but be aware that this could compromise host security, as your container will no longer be running in a sandboxed environment.
docker run -d --privileged --pid=host alpine:3.8 sh
When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor or SELinux to allow the container nearly all the same access to the host as processes running outside containers on the host. Additional information about running with --privileged is available on the Docker Blog.
https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities
Good reference: https://security.stackexchange.com/a/218379
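To make the third option from the question concrete: with the host's PID namespace shared, /proc inside the container describes the host, so ordinary tools report host metrics. A minimal sketch (the image and commands are just examples, not a recommendation):

# share the host's PID namespace so /proc reflects host-wide processes and stats
docker run --rm -it --pid=host alpine:3.8 sh

# inside the container, these now describe the host rather than the container
cat /proc/meminfo    # host memory utilization
cat /proc/loadavg    # host load average
top                  # host process list (BusyBox top in Alpine)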
I am new to containers and would like to get a good knowledge about how container technology (Docker) is made up from 'scratch'. I have to write a paper and hope that I have every important thing correctly understood so far.
The following diagram is made by me and shows my current understanding of containers.
Obviously we need an OS with a Kernel that allows us to use the hardware. For Docker this is Linux. Docker for Windows uses a VM with Linux for that.
On top of our Linux OS we then run our Docker Engine. The Docker Engine is in charge of starting, building, configuring ... our images and containers. But most importantly, the Docker Engine handles everything that has to do with isolating containers; for example, it manages how namespaces and cgroups are used so that every container has its own full filesystem.
Then we have our actual containers. Containers themselves almost always need some kind of OS as well. This is mostly just a very compact one, like Alpine or BusyBox. These bundle a small number of standard utilities such as file, tar, and grep that most software definitely needs. This compact OS uses the kernel of our full Linux OS; containers don't have their own kernel.
On top of the compact OS we then place our actual piece of software, such as Node.js or an NGINX server. This software only uses the compact OS, which in turn uses the kernel of our full Linux OS. And all data or modifications generated at runtime are written to the writable layer of our container.
And if I understood correctly, our container, and everything that runs in it, is not using or interacting with our full Linux OS but just with its kernel?
I also don't quite understand how the writable layer in a container works. For example, how does my software know that a modified file from a read-only layer is now present in the writable layer and that it should use that version?
I would really appreciate some corrections or suggestions on what I have missed out so far. Thank you
And if I understood correctly, our container, and everything that runs in it, is not using or interacting with our full Linux OS but just with its kernel?
Containers are just processes. To the kernel, the Docker daemon, a Node.js application, and Nginx are all just processes; that's why containers don't have their own kernels. The difference between the Docker daemon process (and other processes on the host) and processes running inside containers is in their scope, which is defined by namespaces. Processes in containers run in isolation and don't see anything outside their namespace. There are many different namespaces; for example, the PID namespace is one of them, and it limits the visibility of other processes. That's why the ps command in a container doesn't show processes from the host or from other containers. Namespaces are a kernel feature and are mostly about what a process can see and access, while cgroups apply limits on CPU and memory usage.
I hope this helps. At least, I tried to put the emphasis on the kernel, because Docker is just a daemon that spins up new processes with configured namespaces, cgroups, and their own filesystem.
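You can see this from the host with a couple of commands; a small sketch (the container name and image are arbitrary):

# start a long-running container
docker run -d --name demo alpine sleep 600

# on the host, the container's process is just another entry in the process table
ps -ef | grep "sleep 600"

# inside the container, the PID namespace hides everything else
docker exec demo ps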
Here are some links that might be useful:
What even is a container: namespaces and cgroups
How containers work: overlayfs
See also the other posts about containers on https://jvns.ca (I recommend it because Julia explains things in simple words and even provides illustrations).
If you want to go deeper, I'd suggest looking at the Namespaces: from chroot() to containers slides and reading the article about creating your own containers.
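On the writable-layer question specifically: the image layers and the container's writable layer are combined into one view by a union filesystem (overlayfs with Docker's overlay2 driver), so your software just sees a single filesystem and the kernel serves the upper (writable) copy of any file that has been changed. A rough sketch of the mechanism outside Docker (needs root; the directory names are arbitrary):

# lower = a read-only image layer, upper = the container's writable layer
mkdir -p lower upper work merged
echo "original" > lower/config.txt

# merge them into one view, similar to what Docker's overlay2 storage driver does
mount -t overlay overlay -o lowerdir=lower,upperdir=upper,workdir=work merged

echo "modified" > merged/config.txt   # the write lands in upper/; lower/ is untouched
cat merged/config.txt                 # "modified": the merged view prefers the upper copy
cat lower/config.txt                  # "original": the read-only layer is unchanged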
This question already has answers here:
Is it ok to run docker from inside docker?
(5 answers)
Closed 4 years ago.
We have an app that spins up short-lived Docker containers. Right now it runs on an Ubuntu 16.04 server (VM), and we installed Docker and Node.js on that same server. The Node.js app runs on the server, so whenever a request comes in, the Node.js app spins up a Docker container and executes the user's input inside it. Once the container finishes its job, or if it exceeds the admin-defined resources, it is forcefully killed (docker kill) and removed (docker rm).
Now my question is: is it best practice to run the Node.js app and the short-lived Docker containers inside an Ubuntu 16.04 Docker container?
In short: run Docker inside another Docker container.
Docker-in-Docker is generally considered fragile and hard to maintain and using it isn’t a best practice. https://hub.docker.com/_/docker/ has a little discussion on this.
A straightforward (but potentially dangerous) way to rearrange this is to give the server process access to the host’s Docker socket, with docker run -v /var/run/docker.sock:/var/run/docker.sock. Then it could launch its own Docker containers as needed. Note that if you do this, these sub-containers’ docker run -v options refer to the host’s filesystem, not the calling container’s filesystem, so if you’re trying to use the filesystem to transfer data this can get tricky. Also note that being able to run any Docker command this way gives unlimited access to the host, so you need to be extremely careful about how you launch containers.
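A rough sketch of that arrangement (the image names are placeholders, not anything from the question):

# run the Node.js server with access to the host's Docker socket
docker run -d \
  --name request-server \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-node-app   # hypothetical image containing the Node.js app

# from inside that container, the app (or the docker CLI, if installed) starts sibling
# containers on the host daemon, with whatever resource caps you want enforced:
docker run --rm --memory=512m --cpus=1 my-job-image   # hypothetical short-lived job image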
A larger redesign would be to introduce some sort of message-queueing system; I’ve successfully used RabbitMQ in the past but there are many other options. Instead of the server process launching a subprocess directly, it writes a message to a queue. Instead of the workers being short-lived processes that start and stop frequently, you have a long-lived worker that reads jobs off the queue and does them. This puts you in a much more established Docker space where nothing needs to dynamically start and stop containers, and you can easily test the Node-Rabbit-worker stack in a non-Docker environment.
I am running NGINX and Tomcat on Docker containers (container OS is Red Hat linux) and deployed through Kubernetes pods. Host OS is Red Hat Linux.
My query is: which OS parameters will be effective, those of the host OS or those of the container OS? During performance tuning, do I need to tune both, or are only the host OS parameters effective?
Examples of the parameters I am referring to are ulimit -n (open files), net.ipv4.tcp.*, fs.file-max, etc.
As Crazykev already mentioned, you can set ulimits using the respective docker run flags.
Parameters like net.ipv4.tcp.* are kernel parameters. Docker containers are run in the same Linux kernel as the host system; for this reason, parameters set on the host will also be effective in the container.
Usually, you will not be able to set these parameters from inside a container. You can (not saying you should) start a container with the --privileged flag, which might (untested) give you access to setting kernel parameters from within the container. The Kubernetes docs also describe how to start privileged containers.
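For the concrete flags, a small sketch (the values and the nginx image are only illustrative):

# per-container tuning: ulimits and namespaced sysctls (net.* lives in the network namespace)
docker run -d \
  --ulimit nofile=65535:65535 \
  --sysctl net.ipv4.tcp_fin_timeout=30 \
  nginx

# kernel-wide parameters such as fs.file-max are tuned on the host and affect every container
sysctl -w fs.file-max=2097152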
As for the Docker container, I'm not sure it can really be called an OS at all...
By the way, some of the parameters you mention may not be settable directly inside a Docker container, for safety or other reasons; you may need to check the Docker docs for details (for example, for ulimit: docker run --ulimit nofile=262144:262144).
Kubernetes does not currently support setting ulimits; there is an open issue in Kubernetes for that.
Similar question which asks about setting ulimit is answered here.
I am trying to setup a docker image with a DB2 database.
The installation is completed without any problems, but I get the following error when I try to restart the database:
SQL1084C Shared memory segments cannot be allocated. SQLSTATE=57019
I based the Dockerfile on this one:
https://github.com/jeffbonhag/db2-docker
where he states that the same problem can be addressed by running the command
sysctl kernel.shmmax=18446744073692774399
to allow the kernel to allocate more shared memory, but the error persists.
The docker daemon itself runs in Ubuntu 14.04 which runs inside Parallels on MacOSX.
EDIT: After some search I found out that this is related to the following command:
UPDATE DB CFG FOR S0MXAT01 USING locklist 100000;
You are over-allocating the database memory heap, i.e. Docker is unable to satisfy the memory requirements. Have a look at the database memory section of the manuals; it gives a breakdown of what is located in the database memory:
Bufferpools
The database heap
The locklist
The utility heap
The package cache
The catalog cache
The shared sort heap, if it is enabled
A 20% overflow area
You can fiddle around with (decrease) any of these heaps until Docker is happy.
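For instance, something along these lines (a sketch only: the database name comes from the question, and the values are placeholders you would size to your container):

# shrink the locklist that was raised in the question, then bound overall database memory
db2 UPDATE DB CFG FOR S0MXAT01 USING LOCKLIST 8192
db2 UPDATE DB CFG FOR S0MXAT01 USING DATABASE_MEMORY 524288   # in 4 KB pages, roughly 2 GB

# restart so the new limits take effect
db2stop force
db2start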
In case others run into this - If you're rolling your own container and leave memory set at automatic, it may try to allocate all the memory on the host to Db2, leading to this error. Sometimes the initial start works out ok, but you end up with odd crashes weeks or months down the line.
The "official" db2 container (the developer community edition one) handles this. If you're building your own container, you'll likely need to set DATABASE_MEMORY and/or INSTANCE_MEMORY to reasonable limits based on the size of your container and restart Db2 in the container. This can be done in your entrypoint script.