Allow docker to utilize the complete CPUs and underlying RAM - linux

Is there a way to specify the total RAM and total CPUs that a Docker container can use? My machine has 16 GB of RAM and 4 CPUs. How can I make Docker utilize the complete RAM and all of the underlying CPUs?
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.34
    restart: always
    ports:
      - "4466:4466"
    environment:
      JAVA_OPTS: '-Xmx8192m'
      PRISMA_CONFIG: |
        port: 4466
        databases:
          default:
            connector: mongo
            uri: 'mongodb+srv://abc:test#abc.net/default_default?retryWrites=true&w=majority'

What you are asking is the default behavior.
From the docs:
By default, a container has no resource constraints and can use as much of a given resource as the host’s kernel scheduler allows.
To ensure a container gets a minimum amount of resources and is not at the mercy of the scheduler, use resource reservations so the container always has the minimum it requires, and optionally limits if you also want to enforce an upper threshold on resource usage.
See the Resources section in the docs:
version: "3.8"
services:
  redis:
    image: redis:alpine
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
        reservations:
          cpus: '0.25'
          memory: 20M
In this general example, the redis service is constrained to use no more than 50M of memory and 0.50 (50% of a single core) of available processing time (CPU), and has 20M of memory and 0.25 CPU time reserved (as always available to it).
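Applied to your compose file, a minimal sketch could look like the one below. The reservation values are illustrative assumptions, not recommendations, and note that the deploy section is honored by Swarm mode and by newer versions of Compose:
version: '3.8'
services:
  prisma:
    image: prismagraphql/prisma:1.34
    restart: always
    ports:
      - "4466:4466"
    environment:
      JAVA_OPTS: '-Xmx8192m'
      # PRISMA_CONFIG omitted for brevity
    deploy:
      resources:
        # no limits: the container can still use all 4 CPUs and 16 GB when the host is idle
        reservations:
          cpus: '2'
          memory: 8G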

Related

Kubernetes CPU allocation: is vCore vs vCPU vs core in Azure?

I am running a Java application on an Azure Kubernetes node with a Standard_D8s_v3 VM. I am unsure about CPU allocation for a Kubernetes deployment. This mentions that 1 CPU equals 1 Azure vCore. However, the Azure VM specs mention that Standard_D8s_v3 has 8 vCPUs (not vCores). What is the difference between a vCPU and a vCore?
Here you can see that the ratio of Ds_v3 VM vCPUs to cores (not vCores) is 2:1 due to hyper-threading, which means that 2 vCPUs are needed for the same performance as 1 core. Is vCore == core? If so, my assumption is that I should double the size of the VM.
Or should I assume that 1 Kubernetes CPU equals 1 vCPU?
Correct, 1 Kubernetes CPU equals 1 vCPU.
For example, I am using Standard_D4s_v3 nodes, which have 4 vCPUs according to here.
When I run
kubectl get nodes
kubectl describe node <node-name>
I can see this:
Capacity:
  cpu: 4
Allocatable:
  cpu: 3860m
Here is also a good explanation of the difference between a core and a vCPU on Azure.
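To connect this to a workload spec, here is a minimal sketch (the pod name, image, and CPU values are my own assumptions): the scheduler only compares CPU requests against the node's allocatable 3860m, so a request of 2 fits on this node while a request of 4 would not.
apiVersion: v1
kind: Pod
metadata:
  name: java-app                 # hypothetical name
spec:
  containers:
    - name: app
      image: example/java-app    # hypothetical image
      resources:
        requests:
          cpu: "2"               # 2 Kubernetes CPUs = 2 vCPUs; fits within the 3860m allocatable
        limits:
          cpu: "3"               # runtime upper bound; limits are not used for scheduling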

Azure Kubernetes CPU multithreading

I wish to run a Spring Batch application in Azure Kubernetes.
At present, my on-premises VM has the below configuration:
CPU Speed: 2,593
CPU Cores: 4
My application uses multithreading (~15 threads).
How do I define the CPU in AKS?
resources:
  limits:
    cpu: "4"
  requests:
    cpu: "0.5"
args:
  - -cpus
  - "4"
Reference: Kubernetes CPU multithreading
AKS Node Pool:
First of all, please note that Kubernetes CPU is an absolute unit:
Limits and requests for CPU resources are measured in cpu units. One
cpu, in Kubernetes, is equivalent to 1 vCPU/Core for cloud providers
and 1 hyperthread on bare-metal Intel processors.
CPU is always requested as an absolute quantity, never as a relative
quantity; 0.1 is the same amount of CPU on a single-core, dual-core,
or 48-core machine
In other words, a CPU value of 1 corresponds to using a single core continuously over time.
The value of resources.requests.cpu is used during scheduling and ensures that the sum of all requests on a single node is less than the node capacity.
When you create a Pod, the Kubernetes scheduler selects a node for the
Pod to run on. Each node has a maximum capacity for each of the
resource types: the amount of CPU and memory it can provide for Pods.
The scheduler ensures that, for each resource type, the sum of the
resource requests of the scheduled Containers is less than the
capacity of the node. Note that although actual memory or CPU resource
usage on nodes is very low, the scheduler still refuses to place a Pod
on a node if the capacity check fails. This protects against a
resource shortage on a node when resource usage later increases, for
example, during a daily peak in request rate.
The value of resources.limits.cpu is used to determine how much CPU can be used, given that it is available; see How pods with limits are run:
The spec.containers[].resources.limits.cpu is converted to its
millicore value and multiplied by 100. The resulting value is the
total amount of CPU time in microseconds that a container can use
every 100ms. A container cannot use more than its share of CPU time
during this interval.
In other words, the request is what the container is guaranteed in terms of CPU time, and the limit is the most it can use, provided that much CPU is actually available.
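For example, a limit of cpu: "4" is 4000 millicores, so the container gets 4000 × 100 = 400,000 microseconds of CPU time per 100 ms period, which is enough to keep four cores fully busy.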
The concept of multithreading does not change the above; the requests and limits apply to the container as a whole, regardless of how many threads run inside. The Linux scheduler makes scheduling decisions based on waiting time, and with containers, cgroups are used to limit the CPU bandwidth. Please see this answer for a detailed walkthrough: https://stackoverflow.com/a/61856689/7146596
To finally answer the question:
Your on-premises VM has 4 cores operating at 2.5 GHz, and if we assume that CPU capacity is a function of clock speed and number of cores, you currently have 10 GHz "available".
The CPUs used in Standard_D16ds_v4 have a base speed of 2.5 GHz and can run at up to 3.4 GHz for shorter periods, according to the documentation:
The D v4 and Dd v4 virtual machines are based on a custom Intel® Xeon®
Platinum 8272CL processor, which runs at a base speed of 2.5Ghz and
can achieve up to 3.4Ghz all core turbo frequency.
Based on this, specifying 4 cores should be enough to give you the same capacity as on-premises.
However, the number of cores and clock speed are not everything (caches etc. also impact performance), so to optimize the CPU requests and limits you may have to do some testing and fine-tuning, starting from a sketch like the one below.
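As a hedged starting point (the request and memory values below are assumptions to be refined by that testing, not figures from the question), the container resources for the ~15-thread Spring Batch app could look like:
resources:
  requests:
    cpu: "2"         # guaranteed baseline; the scheduler uses this for placement
    memory: 2Gi      # illustrative; size to your JVM heap plus overhead
  limits:
    cpu: "4"         # roughly matches the 4-core on-premises VM
    memory: 4Gi      # illustrative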
I'm afraid there is no easy answer to your question when it comes to planning the right size of VM node pools for a Kubernetes cluster to appropriately fit your workload's resource requirements. This is a constant effort for cluster operators and requires you to take many factors into account; let's mention a few of them:
What Quality of Service (QoS) class (Guaranteed, Burstable, BestEffort) should I specify for my application Pods, and how many of them do I plan to run?
Do I really know the actual CPU/memory usage of my app vs. how much of the VM's compute resources stays idle? (Is there any on-prem monitoring solution in place right now that could prove it, or that could easily be moved to an in-cluster Kubernetes one?)
Is my cluster a multi-tenant environment where I need to share cluster resources with different teams?
Node (VM) capacity is not the same as the total resources available to workloads.
You should think here in terms of cluster Allocatable resources:
Allocatable = Node Capacity - kube-reserved - system-reserved
In the case of the Standard_D16ds_v4 VM size in Azure, you would have 14 CPU cores at your workloads' disposal, not 16 as assumed earlier.
I hope you are aware that the number of CPUs specified through args:
args:
  - -cpus
  - "2"
is an app-specific approach (in this case for the 'stress' utility written in Go), not a general way to spawn a declared number of threads per CPU.
My suggestion:
To avoid over-provisioning or under-provisioning cluster resources for your workload application (requested resources vs. actually utilized resources), and to optimize the cost and performance of your applications, I would in your place do a preliminary sizing estimation of the VM node pool size and type required by your multithreaded Spring Boot app, and thus first familiarize yourself with concepts like bin-packing and app right-sizing. For these last two topics I don't know a better public guide than the one recently published by the GCP tech team:
"Monitoring gke-clusters for cost optimization using cloud monitoring"
I would encourage you to find the answer to your question yourself. Do a proof of concept on GKE first (with the free trial), replace the demo app in the guide above with your own workload, come back here, and share your own observations; it would be valuable for others with a similar task too!

Why does Spark standalone show incorrect "Memory in use" total with dockerized worker

I'm using Spark v3.1.2 and the standalone cluster manager, but the total "Memory in use" reported in the UI is incorrect. It is reporting "1024.0 MiB Total" even though the single worker has been limited to 768 MiB. Is this a bug when workers are running inside a container (Docker or ECS)?
Screenshot of master UI showing memory
Setup:
docker-compose.yml file
1 x master service (container)
1 x worker service (container)
worker memory constrained to 768MiB and 1 CPU
Relevant section in docker-compose.yml to limit the worker container memory:
deploy:
  mode: replicated
  replicas: 2
  resources:
    limits:
      cpus: "1.0"
      memory: 768M
Verification that Docker limited the memory, using "docker container inspect":
"Memory": 805306368
I've tried spinning up another worker, and the UI reports a total of 2 GiB of memory, as it thinks each worker has 1024 MiB, verified in the log:
INFO Master: Registering worker 172.22.0.4:7078 with 1 cores, 1024.0 MiB RAM
I've also tried this in ECS and observed similar behaviour.
I've also tried allocating more than 1 GiB to the worker in Docker, e.g. 5 GiB, and the master keeps reporting 1 GiB lower. When the worker has been allocated 5 GiB, the master registers the worker as having 4 GiB:
Registering worker 172.22.0.3:7078 with 1 cores, 4.0 GiB RAM
Is this a problem with the Spark standalone cluster manager or the Spark Java worker code not able to determine the correct memory when running within a container?

How can we set the memory and CPU resource limits with the Spark operator?

I'm new to spark-operator and I'm confused about how to set up the resource requests and limits in the YAML file. For example, in my case I have requested 512m of memory for the driver pod, but what about limits, is it unbounded?
spec:
  driver:
    cores: 1
    coreLimit: 200m
    memory: 512m
    labels:
      version: 2.4.5
    serviceAccount: spark
It is good practice to set limits when defining your YAML file. If you do not do so, you run the risk of using all resources on the node, as per this doc, since there is no upper bound.
Memory limits for the driver and executor pods are set internally by the Kubernetes scheduler backend in Spark, and calculated as the value of spark.{driver|executor}.memory + memory overhead.
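As a hedged sketch of what an explicitly bounded spec could look like (the overhead and executor values below are assumptions, not taken from the question), the SparkApplication spec exposes coreLimit and memoryOverhead fields, and the resulting pod memory limit is memory plus memoryOverhead:
spec:
  driver:
    cores: 1
    coreLimit: "500m"        # hard CPU limit for the driver pod
    memory: 512m
    memoryOverhead: 512m     # pod memory limit becomes 512m + 512m
    labels:
      version: 2.4.5
    serviceAccount: spark
  executor:
    cores: 1
    coreLimit: "1000m"       # hard CPU limit per executor pod
    instances: 2             # illustrative executor count
    memory: 1g
    memoryOverhead: 512m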

Set hard CPU limits in a Docker Swarm

I am trying to set hard CPU and memory limits in a Docker Swarm consisting of three VMs. I am using the CPU and memory limit configurations suggested by the Docker documentation in my docker-compose.yml file. My docker-compose.yml file looks like this:
version: "3"
services:
  app:
    # replace username/repo:tag with your name and image details
    image: user/testing:part2
    deploy:
      replicas: 10
      resources:
        limits:
          cpus: "0.5"
          memory: 4M
      restart_policy:
        condition: on-failure
The CPU and memory resources of my host machine and the VMs are shown in the figure below. My host machine has 4 CPUs and all the VMs have 1 CPU each.
To figure out if Docker containers can limit their resources, I am running a test program with an infinite loop in my swarm. One of the snapshots from my experiments is shown below. It shows docker stats results on the three VMs (VM1: bottom left, VM2: top right, VM3: bottom right).
Looking at the results, I have a few questions.
How is the CPU limited to 50% for each container?
Each VM has 1 CPU, so how come the sum of the Docker containers' CPU% on one VM exceeds 100%?
In the image above, I have 7 Docker containers running, and the sum of CPU% for all of them is 22+21+88+52+66+78+76 = 403 approx., which means that the swarm is using 4 cores instead of 3. Is it possible that Docker allows the swarm to use the CPU resources of the host machine as well, if needed?
Can anybody please guide me with these questions? Thank you.
