How to configure Docker resources - Linux

I am running Docker on a Linux server. By default, only 2 GB of memory and 0 GB of swap space are allocated. How can I change the memory and swap space in Docker?

From the official documentation:
By default, a container has no resource constraints and can use as much of a given resource as the host’s kernel scheduler allows. Docker provides ways to control how much memory, CPU, or block IO a container can use, by setting runtime configuration flags of the docker run command.
You can use the -m or --memory option and set it to a desired value depending on your host's available memory.
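For example, a minimal sketch (the image name is a placeholder; --memory-swap sets the combined memory-plus-swap total, so 3g together with -m 2g means roughly 1 GB of swap):

# 2 GB of memory, 3 GB memory+swap total (i.e. about 1 GB of swap); my-image is a placeholder
docker run -m 2g --memory-swap 3g my-image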

Related

Is there a way to retrieve kubernetes container's ephemeral-storage usage details?

I create some pods with containers for which I set an ephemeral-storage request and limit, like the following (here 10 GB):
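For illustration, the kind of spec I mean looks roughly like this (pod, container, and image names are placeholders; the 10Gi values are the ones in question):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                          # placeholder
spec:
  containers:
  - name: example-container                  # placeholder
    image: registry.example.com/app:latest   # placeholder
    resources:
      requests:
        ephemeral-storage: "10Gi"
      limits:
        ephemeral-storage: "10Gi"
EOF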
Unfortunately, for some containers the ephemeral-storage fills up completely for unknown reasons. I would like to understand which dirs/files are responsible for filling it, but I have not found a way to do so.
I tried df -h, but unfortunately it gives stats for the whole node and not only for the particular pod/container.
Is there a way to retrieve a Kubernetes container's ephemeral-storage usage details?
Pods use ephemeral local storage for scratch space, caching, and for logs. The kubelet can provide scratch space to Pods using local ephemeral storage to mount emptyDir volumes into containers.
Depending on your Kubernetes platform, you may not be able to easily determine where these files are being written. Any filesystem can fill up, but rest assured that disk is being consumed somewhere (or worse, memory, depending on the specific configuration of your emptyDir and/or Kubernetes platform).
Refer to this SO link for more details on how, by default, allocatable ephemeral-storage in a standard Kubernetes environment is sourced from the node filesystem (mounted at /var/lib/kubelet).
Also refer to the Kubernetes documentation on how ephemeral storage can be managed and how ephemeral-storage consumption management works.
Assuming you're a GCP user, you can get a sense of your ephemeral-storage usage this way:
Menu>Monitoring>Metrics Explorer>
Resource type: kubernetes node & Metric: Ephemeral Storage
Try the commands below to get a Kubernetes pod/container's ephemeral-storage usage details:
Try du -sh / (run inside the container): du -sh reports the space consumed by your container's files. It returns the amount of disk space the given directory and everything in it are using as a whole, something like 2.4G.
You can also check the size of a specific directory with du -h someDir.
Inspecting container filesystems: you can use /bin/df to monitor ephemeral-storage usage on the volumes where ephemeral container data is located, which are /var/lib/kubelet and /var/lib/containers.
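A minimal sketch of running those checks from outside the pod with kubectl exec (pod and container names are placeholders):

# total space used by the container's files
kubectl exec -it <pod-name> -c <container-name> -- du -sh /
# filesystem usage as seen from inside the container
kubectl exec -it <pod-name> -c <container-name> -- df -h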

How to make wine use more memory in Docker on a k8s cluster?

I'm using k8s v1.16, which is ICP (IBM Container Platform).
I want to run some .exe files in the container.
So I use wineserver to run Windows-based .exe files.
But there is a problem with memory usage.
Although I allocated 32 GB of memory to the pod where the wineserver container will be running, the container does not use more than 3 GB of memory.
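For reference, the requests/limits actually set on the container can be checked with (the pod name is a placeholder):

# shows the requests/limits set on the first container in the pod
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].resources}'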
What should I do to make the wine container use more than 3 GB of memory?

Is there a way to set the available resources of a docker container system using the docker container limit?

I am currently working on a Kubernetes cluster, which uses docker.
This cluster allows me to launch jobs. For each job, I specify a memory request and a memory limit.
The memory limit is used by Kubernetes to fill the --memory option of the docker run command when creating the container. If the container exceeds this limit, it will be killed for OOM reasons.
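This can be confirmed on the node with, for example (the container ID is a placeholder):

# prints the memory limit, in bytes, that docker run was given
docker inspect --format '{{.HostConfig.Memory}}' <container-id>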
Now, if I go inside the container, I am a little bit surprised to see that the available system memory is not the value from the --memory option, but the memory of the Docker machine (the Kubernetes node).
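For example (assuming cgroup v1; the path differs under cgroup v2):

# inside the container: reports the node's memory, not the --memory value
free -g
# the limit actually enforced by the container's cgroup (cgroup v1 path)
cat /sys/fs/cgroup/memory/memory.limit_in_bytes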
I am surprised because a system with wrong information about available resources will not behave correctly.
Take, for example, the memory cache used by IO operations. If you write to disk, pages are cached in RAM before being written. To do this, the system evaluates how many pages can be cached using the sysctl vm.dirty_ratio (20% by default) and the memory size of the system. But how can this work if the container's system memory size is wrong?
I verified it:
I ran a program with a lot of IO operations (os.write, decompression, ...) in a container limited to 10Gi of RAM, on a 180Gi node. The container was killed because it reached the 10Gi memory limit. This OOM is caused by the wrong evaluation of dirty_ratio times the system memory size.
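Not my exact program, but a rough way to generate the same kind of write-heavy, page-cache-filling load with plain shell (the path and size are arbitrary):

# the dirty_ratio seen inside the container is the host's value
cat /proc/sys/vm/dirty_ratio
# ~20 GB of buffered writes; dirty pages accumulate in the page cache before writeback
dd if=/dev/zero of=/tmp/bigfile bs=1M count=20000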
This is terrible.
So, my question is the following:
Is there a way to set the available resources of a docker container system using the docker container limit?

Kubernetes doesn't take into account total node memory usage when starting Pods

What I see: Kubernetes takes into account only the memory used by its components when scheduling new Pods, and considers the remaining memory as free, even if it's being used by other system processes outside Kubernetes. So, when creating new deployments, it attempts to schedule new pods on a suffocated node.
What I expected to see: Kubernetes automatically takes into consideration the total memory usage (by Kubernetes components + system processes) and schedules the pod on another node.
As a workaround, is there a configuration parameter that I need to set, or is this a bug?
Yes, there are a few parameters for allocating resources:
You can manually allocate memory and CPU for your pods, and reserve memory and CPU for your system daemons. The documentation explains how this works with an example:
Example Scenario
Here is an example to illustrate Node Allocatable computation:
Node has 32Gi of memory, 16 CPUs and 100Gi of Storage
--kube-reserved is set to cpu=1,memory=2Gi,ephemeral-storage=1Gi
--system-reserved is set to cpu=500m,memory=1Gi,ephemeral-storage=1Gi
--eviction-hard is set to memory.available<500Mi,nodefs.available<10%
Under this scenario, Allocatable will be 14.5 CPUs, 28.5Gi of memory and 98Gi of local storage. Scheduler ensures that the total memory requests across all pods on this node does not exceed 28.5Gi and storage doesn’t exceed 88Gi. Kubelet evicts pods whenever the overall memory usage across pods exceeds 28.5Gi, or if overall disk usage exceeds 88Gi. If all processes on the node consume as much CPU as they can, pods together cannot consume more than 14.5 CPUs.
If kube-reserved and/or system-reserved is not enforced and system daemons exceed their reservation, kubelet evicts pods whenever the overall node memory usage is higher than 31.5Gi or storage is greater than 90Gi.
You can reserve as much as you need for Kubernetes with the --kube-reserved flag and for the system with the --system-reserved flag.
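For instance, a sketch of the kubelet flags matching the example scenario above (the values are the ones from the example, not recommendations):

kubelet --kube-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi \
        --system-reserved=cpu=500m,memory=1Gi,ephemeral-storage=1Gi \
        --eviction-hard='memory.available<500Mi,nodefs.available<10%'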
Additionally, if you need stricter rules for spawning pods, you could try to use Pod Affinity.
Kubelet has the parameter --system-reserved that allows you to reserve cpu and memory for system processes.
It is not dynamic (you reserve resources only at kubelet launch), but it is the only way to tell the kubelet not to use all of the node's resources.
--system-reserved mapStringString
A set of ResourceName=ResourceQuantity (e.g. cpu=200m,memory=500Mi,ephemeral-storage=1Gi) pairs that describe resources reserved for non-kubernetes components. Currently only cpu and memory are supported. See http://kubernetes.io/docs/user-guide/compute-resources for more detail. [default=none]

JVM memory settings in a Docker container on AWS Beanstalk

I run my Java application in a Docker container. I use AWS Beanstalk. The Docker base image is CentOS. I run the container on an EC2 instance with 4 GB of RAM, on the Amazon Linux AMI for Beanstalk.
How should I configure the container and the JVM memory settings?
Right now I have:
4 GB on the Amazon Linux Beanstalk AMI EC2 instance
3 GB of the 4 dedicated to the Docker container
{
  "AWSEBDockerrunVersion": 2,
  "Authentication": {
    "Bucket": "elasticbeanstalk-us-east-1-XXXXXX",
    "Key": "dockercfg"
  },
  "containerDefinitions": [
    {
      "name": "my-service",
      "image": "docker-registry:/myapp1.0.2",
      "essential": true,
      "memory": 3184,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8080
        }
      ]
    }
  ]
}
The JVM settings are:
-Xms2560m
-Xmx2560m
-XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled
-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=70
-XX:+ScavengeBeforeFullGC
-XX:+CMSScavengeBeforeRemark
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=./heapdump.hprof
Can I dedicate the whole EC2 instance's memory, 4 GB, to the Docker container and set the JVM to 4 GB too?
I think the JVM could crash when it is not able to allocate whatever the Xmx parameter says. If so, what are the most optimal values I could use for the container and the JVM?
Right now I leave a 1 GB margin for Amazon Linux itself and 512 MB for the CentOS running in the container. That's 1.5 GB wasted. Can it be done better?
The real answer depends heavily on your Java app and its usage.
Start with a JVM heap of 3.5G.
Set the container max to 3.8G to cover Java's overhead.
Load test, repeatedly.
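As a rough sketch of that starting point (3.5G heap is 3584 MB; app.jar is a placeholder, and your existing GC and heap-dump flags would stay as they are):

# 3.5G fixed heap; app.jar is a placeholder for your application
java -Xms3584m -Xmx3584m \
     -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./heapdump.hprof \
     -jar app.jar

The container "memory" value in the Dockerrun file above would then be raised to roughly 3891 (3.8G expressed in MiB).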
Java
-Xmx and -Xms only control Java's heap size, so you can't allocate all your available memory to Java's heap and expect Java and the rest of the system to run well (or at all).
You will also need to cater for:
Stack: -Xss * number of threads (-Xss defaults to 1MB on 64-bit)
PermGen/Metaspace: -XX:MaxPermSize defaults to 64MB; Metaspace defaults to about 21MB but can grow.
Your app could also use a lot of shared memory, do JNI things outside of the heap, rely on large mmapped files for performance, or exec external binaries, among any number of other oddities outside the heap. Do some load testing at your chosen memory levels, then above and below those levels, to make sure you're not hitting or getting close to any memory issues that affect performance.
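If you want to see what your particular JVM actually defaults to for these regions (Java 8+), something like:

# dump the final values of all JVM flags and filter the metaspace-related ones
java -XX:+PrintFlagsFinal -version | grep -i metaspacesize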
Container
CentOS doesn't "run" in the container as such; Docker just provides a filesystem image for your JVM process to reference. Normally you wouldn't run anything more than the java process in that container.
There's not a huge benefit to limiting a container's max memory when you are on a dedicated host with only one container, but it doesn't hurt either, in case something in native Java runs away with the available memory.
The memory overhead here needs to cater for the possible native Java usage mentioned above and the extra system files for the JRE that will be loaded from the container image (rather than the host) and cached. You also won't get a nice stack trace or heap dump when you hit a container memory limit.
OS
A plain Amazon AMI only needs about 50-60MB of memory to run plus some spare for the file cache, so say 256MB to cover the OS with some leeway.
