Linux container (LXC) resource assignment

For a Linux container, after it is created and some applications are running in it, can CPU and memory be dynamically added to the container?
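In general, yes: both classic LXC and LXD express these limits as cgroup values, which can be changed while the container is running. A minimal sketch, assuming a container named mycontainer (the name is hypothetical) and cgroup v1 controller files (cgroup v2 uses memory.max and cpuset.cpus under a different hierarchy):

    # Classic LXC: write cgroup values for a running container
    lxc-cgroup -n mycontainer memory.limit_in_bytes 4G
    lxc-cgroup -n mycontainer cpuset.cpus 0-3

    # LXD: set limits via configuration keys; they apply live
    lxc config set mycontainer limits.memory 4GB
    lxc config set mycontainer limits.cpu 4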

Related

How does an AMI create processes?

I understand that creating an image copies the system, but how does this work with processes?
Does a new AMI start each currently running process from the beginning, or is the running process snapshotted and continued?
Running processes are not part of an AMI. An AMI captures the contents of the instance's disk. A new instance launched from the AMI boots from scratch, and anything you want running on it must be configured to start at boot (for example, as a service).

By default, the AMI creation process shuts the instance down before capturing a snapshot of its disk, then boots it back up afterwards. You can choose to suppress this behavior and snapshot the running instance, but that does not preserve system RAM or running processes: when a new instance is launched, its state is equivalent to the source instance having been powered off (without a clean shutdown) and rebooted.
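As a concrete illustration of the "run at boot" point, here is a hedged sketch using a systemd unit; the service name and paths are placeholders, not taken from the question:

    # /etc/systemd/system/myapp.service  (hypothetical name and paths)
    [Unit]
    Description=My application
    After=network.target

    [Service]
    ExecStart=/opt/myapp/run.sh
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Then enable it so it starts on every boot of instances launched from the AMI:

    sudo systemctl daemon-reload
    sudo systemctl enable --now myapp.service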

How to make wine use more memory in docker on k8s cluster?

I'm using k8s v1.16 on ICP (IBM Container Platform).
I want to run some .exe files in a container, so I use wineserver to run Windows-based .exe files.
But there is a problem with memory usage.
Although I allocated 32GB of memory to the pod where the wineserver container runs, the container does not use more than 3GB of memory.
What should I do to make the wine container use more than 3GB of memory?
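A first step is to check where the ceiling actually comes from. A sketch with hypothetical pod and container names (wine-pod, wine), assuming a cgroup v1 path (cgroup v2 exposes /sys/fs/cgroup/memory.max instead):

    # What limit did Kubernetes actually apply to the container?
    kubectl get pod wine-pod -o jsonpath='{.spec.containers[0].resources}'

    # What limit is the container's cgroup enforcing?
    kubectl exec wine-pod -c wine -- \
      cat /sys/fs/cgroup/memory/memory.limit_in_bytes

If the cgroup limit really is 32Gi, one cause worth ruling out (an assumption here, not something stated in the question) is a 32-bit WINEPREFIX: a 32-bit process cannot address much more than about 3GB no matter what the pod limit is, and recreating the prefix with WINEARCH=win64 lifts that ceiling.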

Is there a way to set the available resources of a docker container system using the docker container limit?

I am currently working on a Kubernetes cluster, which uses docker.
This cluster allows me to launch jobs. For each job, I specify a memory request and a memory limit.
The memory limit is used by Kubernetes to fill the --memory option of the docker run command when creating the container. If the container exceeds this limit, it is killed with an OOM (out-of-memory) error.
Now, if I go inside a container, I am a little surprised to see that the reported available system memory is not the one from the --memory option, but the one of the Docker machine (the Kubernetes node).
I am surprised because a system with wrong information about its available resources will not behave correctly.
Take, for example, the memory cache used by IO operations. If you write to disk, pages are cached in RAM before being written out. To do this, the system evaluates how many pages can be cached using the sysctl vm.dirty_ratio (20% by default) and the memory size of the system. But how can this work if the memory size the container sees is wrong?
I verified it:
I ran a program with a lot of IO operations (os.write, decompression, ...) in a container limited to 10Gi of RAM, on a 180Gi node. The container was killed because it reached the 10Gi memory limit. This OOM was caused by dirty_ratio being evaluated against the full system memory rather than the container's limit.
This is terrible.
So, my question is the following:
Is there a way to set the available resources of a docker container system using the docker container limit?
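To the best of my knowledge there is no docker run flag for this: files like /proc/meminfo are not namespaced, so tools inside the container see the host (projects such as lxcfs exist specifically to present container-aware versions of these files). A quick way to see the discrepancy and read the real limit, assuming cgroup v1 paths (cgroup v2 uses /sys/fs/cgroup/memory.max):

    # The container reports the HOST's memory...
    docker run --rm -m 512m alpine grep MemTotal /proc/meminfo

    # ...while its cgroup enforces the real 512 MiB cap:
    docker run --rm -m 512m alpine \
      cat /sys/fs/cgroup/memory/memory.limit_in_bytes

Limit-aware applications therefore read the cgroup file instead of /proc/meminfo. For the dirty-page problem specifically, one option is to set the absolute vm.dirty_bytes sysctl on the host in place of vm.dirty_ratio, so the writeback threshold no longer scales with the (host-sized) total memory.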

Node goes to unusable state when using GPU Container supported VMs in Azure batch pool

I am trying to create a pool of GPU-based, container-supported VMs. I have a valid ContainerConfiguration and start task. The VM size is Standard_NC6. But whenever I create a pool, it always goes to an unusable state. If I remove the ContainerConfiguration setting, the nodes are in an idle state. I don't see a problem with the ContainerConfiguration settings themselves, because if I choose the VM size standard_f2s_v2 (non-GPU) and keep the same ContainerConfiguration settings, it works fine and installs all images on the machine. I think it has to do with the NVIDIA library installation while the nodes are being set up.

How to config the Docker resources

I am running Docker on a Linux server. By default, only 2GB of memory and 0GB of swap space are allocated. How can I change the memory and swap space in Docker?
From the official documentation:
By default, a container has no resource constraints and can use as much of a given resource as the host's kernel scheduler allows. Docker provides ways to control how much memory, CPU, or block IO a container can use, setting runtime configuration flags of the docker run command.
You can use the -m or --memory option and set it to a desired value, depending on your host's available memory.
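For example (the image name and values below are placeholders):

    # Cap the container at 2 GiB of RAM plus 2 GiB of swap.
    # --memory-swap is memory + swap combined, so 4g here
    # means 2g RAM + 2g swap.
    docker run -d --name myapp -m 2g --memory-swap 4g nginx

    # Verify the limits that were applied (values are in bytes):
    docker inspect myapp \
      --format '{{.HostConfig.Memory}} {{.HostConfig.MemorySwap}}'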
