AMD 24-core Threadripper and 200 GB RAM
Ubuntu 20
Docker (latest version)
Docker Swarm mode (but only a single host)
I have my docker stack compose file.
Scaling the service up works without any problems up to 249 containers, but then I hit a bottleneck and don't know why.
Do I need to change settings somewhere to remove the bottleneck?
I already have
fs.inotify.max_queued_events = 100000000
fs.inotify.max_user_instances = 100000000
fs.inotify.max_user_watches = 100000000
in /etc/sysctl.conf
as I had hit a bottleneck with inotify instances at around 100 containers; those settings solved that problem.
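For reference, settings in /etc/sysctl.conf take effect after reloading them, for example:

$ sudo sysctl -p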
But I can't scale past 249 containers.
One issue is certainly going to be IP availability if this is Docker Swarm, as overlay networks (the default on swarm) are automatically /24 and thus limited to roughly 254 usable addresses.
So:
a. attach the service to a manually created network that you have scoped to be larger than /24
b. ensure that the service's endpoint mode is dnsrr, as VIPs cannot be used safely on networks with an address space larger than /24.
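For example, a minimal sketch, assuming an illustrative network name and subnet (the compose keys are standard swarm/compose v3 options):

$ docker network create --driver overlay --subnet 10.20.0.0/16 my-large-net

and in the stack compose file:

services:
  app:
    deploy:
      endpoint_mode: dnsrr
    networks:
      - my-large-net
networks:
  my-large-net:
    external: true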
I am currently working on a Kubernetes cluster, which uses Docker.
This cluster allows me to launch jobs. For each job, I specify a memory request and a memory limit.
The memory limit is used by Kubernetes to fill the --memory option of the docker run command when creating the container. If the container exceeds this limit, it is killed for OOM reasons.
Now, if I go inside a container, I am a little surprised to see that the reported system memory is not the value from the --memory option, but that of the Docker host (the Kubernetes node).
I am surprised because a system with wrong information about available resources will not behave correctly.
Take for example the memory cache used by IO operations. If you write to disk, pages are cached in RAM before being written. To do this, the system evaluates how many pages can be cached using the sysctl vm.dirty_ratio (20% by default) and the memory size of the system. But how can this work correctly if the memory size the container sees is wrong?
I verified it:
I ran a program with a lot of IO operations (os.write, decompression, ...) in a container limited to 10Gi of RAM, on a 180Gi node. The container was killed because it reached the 10Gi memory limit. This OOM was caused by the wrong evaluation of dirty_ratio * system memory.
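To spell out the arithmetic (a rough sketch; the exact commands and numbers are illustrative):

# inside the container limited to 10Gi, running on a 180Gi node
$ free -g                 # reports ~180G total, i.e. the node's memory
$ sysctl vm.dirty_ratio   # 20
# so the kernel may allow up to roughly 0.20 * 180 GiB ≈ 36 GiB of dirty pages,
# well above the 10 GiB cgroup limit, and the container gets OOM-killed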
This is terrible.
So, my question is the following:
Is there a way to make the resources reported inside a Docker container reflect the container's limits?
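For what it's worth, the cgroup limit itself is visible from inside the container even though /proc/meminfo is not namespaced; a hedged check (cgroup v1 path assumed):

$ cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # 10737418240 for a 10Gi limit
$ free -b                                           # still reports the node's total memory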
I'm currently trying to deploy a Node.js app in Docker containers. I need to deploy 30 of them, but at some point they start behaving strangely: some of them freeze.
I am currently running Docker for Windows 18.03.0-ce, build 0520e24302. My computer specs (CPU and memory):
i5-4670K
24 GB of RAM
My Docker default machine resource allocation is the following:
Allocated RAM: 10 GB
Allocated vCPUs: 4
My Node application runs on alpine:3.8 with Node.js 11.4 and mostly makes HTTP requests every 2-3 seconds.
When I deploy 20 containers everything runs like a charm: my application does its job, and I can see activity on every one of my containers through the logs and activity stats.
The problem comes when I try to deploy more than 20 containers: some of the previously deployed containers stop their activity (0% CPU usage, logs frozen). Once everything is deployed (30 containers), Docker starts blocking the activity of some of them and unblocks them at some point, only to block others (the blocking/unblocking is random). It seems to be sequential. I waited to see what would happen, and the result is that some containers are able to continue their activity while others are stuck forever (still running but with no further activity).
It's important to note that I used the following resource restrictions on each of my containers (a sketch of the equivalent docker run flags follows after the list):
MemoryReservation: 160 MB
Memory soft limit: 160 MB
NanoCPUs: 250000000 (0.25 CPUs)
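As referenced above, a hedged sketch of how such limits could be passed to docker run (the exact command used is not shown in the question; the image name is illustrative):

$ docker run -d \
    --memory 160m \
    --memory-reservation 160m \
    --cpus 0.25 \
    my-node-app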
I had to increase my Docker default machine resource allocation and decrease the containers' resource allocations because Docker was using almost 100% of my CPU; maybe I made a mistake in my configuration. I tried to tweak those values, but without success: some containers still freeze.
I'm kind of lost right now.
Any help would be appreciated, even a small hint; thank you in advance!
I am running Docker on a Linux server. By default only 2GB of memory and 0GB of Swap space are allocated. How can I change the memory and swap space in Docker?
From official documentation:
By default, a container has no resource constraints and can use as much of a given resource as the host’s kernel scheduler allows. Docker provides ways to control how much memory, CPU, or block IO a container can use, setting runtime configuration flags of the docker run command.
You can use the -m or --memory option and set it to a desired value depending on your host's available memory.
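For example (image and values are illustrative; --memory-swap sets the combined memory-plus-swap limit, so this sketch gives the container 2 GB of memory and 1 GB of swap):

$ docker run -it -m 2g --memory-swap 3g ubuntu bash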
I have a Kubernetes cluster set up with 3 worker nodes and a master node. I am using Docker images to generate pods and altering the running containers. I have been working on one image and have altered it heavily to make the servers inside it work. The problem I am facing is:
There is no space left on the device.
On further investigation, I found the Docker container is set to a 10G size limit, which I would now definitely like to change. How can I change that without losing all my changes in the container and without having to store the changes as a separate image altogether?
Changing the limit without a restart is impossible.
To prevent data loss during the container restart, you can use volumes and store the data there instead of in the root image.
P.S. It is not possible to mount a volume into a running container without a restart.
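A minimal sketch of keeping data in a named volume rather than in the container's root filesystem (names and paths are illustrative):

$ docker volume create app-data
$ docker run -d -v app-data:/var/lib/app my-image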
I am new to Google Compute Engine. I want to create a web server with the following properties:
1 Core
Red Hat Enterprise Linux 7.1 64 bit
RAM 8 GB
HDD: 100 GB
SSH, JDK 1.7
Apache web server as a proxy to a JBoss app server
Enable HTTP / 80 and HTTPS /443 on public IP
Access Mode – SSH/SCP
I created a new instance with Red Hat Enterprise Linux 7.1 and machine type n1-standard-2, which provides 2 CPU cores and 7.5 GB of RAM. Can I define exactly one core with a 100 GB HDD and 8 GB of RAM? And how can I define access mode SSH/SCP?
I'd like to add this update to my answer: it is now possible to customize the machine type based on your hardware requirements.
When creating a Compute Engine VM instance you will need to specify a machine type. With the predefined machine types there is no way to specify an arbitrary amount of CPU and memory, but you can select a machine type that is close to your hardware requirements.
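Following the update above, a hedged sketch of requesting a custom machine type with newer gcloud versions (flag names and limits may vary; Compute Engine constrains memory per vCPU, so exactly 1 vCPU with 8 GB may require the extended-memory option):

$ gcloud compute instances create my-web-server \
    --zone us-central1-f \
    --custom-cpu 1 \
    --custom-memory 6.5GB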
For the persistent disk, using the gcloud command tool you can create a disk with the desired size:
$ gcloud compute disks create DISK_NAME --image IMAGE --size 100GB --zone ZONE
Then create your VM instance using your root persistent disk:
$ gcloud compute instances create INSTANCE_NAME --disk name=DISK_NAME boot=yes --zone ZONE
Since automatic resizing of root persistent disks is not supported by Compute Engine for the Red Hat Enterprise Linux operating system, you will need to manually repartition your disk. You can visit this article for information about repartitioning a root persistent disk.
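As a hedged illustration of that manual step (device names and filesystem are assumptions; RHEL 7 images typically use XFS):

$ sudo growpart /dev/sda 1   # grow partition 1 to fill the resized disk
$ sudo xfs_growfs /          # grow an XFS root filesystem
# for ext4, use: sudo resize2fs /dev/sda1 instead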
Can I define exactly one core with 100 GB HD and 8 GB RAM?
No, you can only use predefined machine shapes with the preassigned amounts of CPU and RAM.
See Kamran's answer for how to create a disk of a different size; that is separate from CPU and RAM.
And how can I define access mode SSH/SCP?
That's automatically done for you and it's already running an SSH server. Note that by default, it uses SSH keys, not passwords. To connect to your GCE VM, see these docs; the command would look like:
gcloud compute ssh INSTANCE-NAME --project=PROJECT --zone=ZONE
You can also connect to your instance via your web browser by using the SSH button in the Developers Console.
To use scp, use the flags that are provided for the ssh command, e.g.,
scp -i KEY_FILE \
-o UserKnownHostsFile=/dev/null \
-o CheckHostIP=no \
-o StrictHostKeyChecking=no \
[source-files ...] \
USER@IP_ADDRESS:[dest-location]
or vice versa to copy them back.
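Alternatively, a hedged sketch using gcloud's built-in scp wrapper (available in newer gcloud versions; paths are illustrative):

$ gcloud compute scp local-file.txt INSTANCE-NAME:/tmp/ --zone=ZONE
$ gcloud compute scp INSTANCE-NAME:/tmp/remote-file.txt . --zone=ZONE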