I am learning auto-scaling as part of my AWS class and running an Amazon Linux instance, and I want to increase CPU utilization so that my auto-scaling policy launches a new instance. What Linux command can I use to drive up CPU usage on the currently running instance? I am new to the Linux CLI.
You can write a Python script that uses multiprocessing with an infinite loop to simulate CPU load.
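A minimal sketch of that approach, run straight from the shell (it assumes python3 is already installed; on Amazon Linux 2 you may first need sudo yum install -y python3):

python3 - <<'EOF'
# Hypothetical CPU burner: one busy-loop worker per vCPU.
# Stop it with Ctrl+C (or kill the python3 process) once scaling has triggered.
from multiprocessing import Process, cpu_count

def burn():
    while True:
        pass  # spin forever to keep one core near 100%

if __name__ == '__main__':
    workers = [Process(target=burn) for _ in range(cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
EOF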
Use the Linux stress utility.
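stress is not installed by default on Amazon Linux; it is usually pulled in from the EPEL repository. A hedged example (the worker count and duration are arbitrary; pick roughly your vCPU count and a period long enough for the CloudWatch alarm to fire):

sudo amazon-linux-extras install epel -y   # Amazon Linux 2; other distributions enable EPEL differently
sudo yum install -y stress
stress --cpu 2 --timeout 600               # two CPU-bound workers for ten minutes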
You can also fake it from the console: select your Auto Scaling group > Scaling policies tab > Actions button > Execute.
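If you prefer the AWS CLI over the console, the rough equivalent is shown below (the group and policy names are placeholders for your own):

aws autoscaling execute-policy --auto-scaling-group-name my-asg --policy-name my-scale-out-policy
# or simply raise the desired capacity directly
aws autoscaling set-desired-capacity --auto-scaling-group-name my-asg --desired-capacity 2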
I understand that creating an image copies the system, but how does this work with processes?
Does an instance launched from a new AMI start each process that was running at that moment from the beginning, or is the running process snapshotted and continued?
Running processes are not part of an AMI. An AMI captures the contents of the instance's disk. The new instance launched from the AMI will boot from scratch, and if you want anything to run on the instance it needs to be configured to run at boot (for example, as a service). By default, the AMI creation process shuts the instance down before capturing a snapshot of its disk and then boots it back up afterwards. While you can choose to suppress this behavior and take the snapshot of the running instance, this doesn't have the effect of preserving the system RAM or running processes, and when a new instance is launched the state will be equivalent to the source instance having been powered off (without a clean shutdown) and rebooted.
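For example, on a systemd-based distribution a minimal unit like the one below, enabled before the AMI is created, would make a program start on every instance launched from that image (the unit name and ExecStart path are placeholders):

sudo tee /etc/systemd/system/myapp.service <<'EOF'
[Unit]
Description=Hypothetical app that should start at boot
After=network.target

[Service]
ExecStart=/opt/myapp/run.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable myapp.service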
I am trying to run an Erlang application on an OpenStack VM and getting very poor performance. After testing, I found that something is going on with NUMA. This is what I observed in my tests.
My OpenStack compute host has 32 cores, so I created a 30-vCPU VM on it with full NUMA awareness. When I run the Erlang application benchmark on this VM, the performance is terrible, but when I create a new VM with 16 vCPUs (in this case all of the VM's CPUs are pinned to NUMA node 0), the benchmark result is great.
Based on the above tests, it is clear that if I keep the VM on a single NUMA node the performance is much better, but when I spread it across multiple NUMA nodes it gets worse.
The interesting thing is that when I run the same Erlang application on bare metal, the performance is really good, so I am trying to understand why the same application does not perform well on a VM.
Is there any setting in Erlang to make it fit better with NUMA when running on a virtual machine?
It's possible that Erlang is not able to properly detect the cpu topology of your VM.
You can inspect the cpu topology as seen by the VM using lscpu and lstopo-no-graphics from the hwloc package:
lscpu | egrep '^(CPU\(s\)|Thread|Core|Socket|NUMA)'
lstopo-no-graphics --no-io
If it doesn't look correct, consider rebuilding the VM using OpenStack options like hw:cpu_threads=2 hw:cpu_sockets=2 as described at https://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/virt-driver-vcpu-topology.html
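Those are flavor extra specs, so in practice you set them on the flavor used to build the VM and then rebuild or resize the instance; a sketch with a placeholder flavor name:

openstack flavor set m1.erlang --property hw:cpu_sockets=2 --property hw:cpu_threads=2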
On the Erlang side, you might experiment with the Erlang VM options +sct, +sbt as described at http://erlang.org/doc/man/erl.html#+sbt
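A quick way to see what topology the Erlang runtime has detected, plus an example of binding schedulers with the default bind type (whether binding helps depends on your workload, so treat this as a starting point):

# print the CPU topology as the Erlang VM sees it
erl -noshell -eval 'io:format("~p~n", [erlang:system_info(cpu_topology)]), halt().'
# start the VM with schedulers bound to logical processors using the default bind type
erl +sbt db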
I am running a Kubernetes cluster on Azure deployed using "azure acs ...". Recently I noticed that the pods on one of the nodes were not responsive and that the CPU on the node was maxed out. I logged in, executed top and found that a process called "mdsd" was using up all available CPU.
When I killed that process with "sudo kill -9", the CPU usage returned to normal and my pods were working fine.
It seems that "mdsd" is part of the Azure Linux monitoring framework. I installed omi-1.4.0-6.ssl_100.ulinux.x64.deb.
Is there a way to make sure that mdsd is not eating up all my CPU and stopping my pods from working properly?
I have Python code that I need to run over 1000 CSVs in parallel to do some calculations. One CPU core can finish running the code on a single CSV in 8 hours.
Thus I am looking for a way to use Azure for this. I would like to create several virtual machines, say 4x D5v2 with 16 cores each, and access them as if they were a single Windows Server running on a 64-core machine.
I tried to create these VMs in the same Cloud Service and I put them into the same Availability Set, which worked fine. When all VMs are running and I access any one of those VMs, I see that the cores on all other VMs are allocated to "Other Roles".
My questions are:
1) Is it possible to create a hypothetical VM out of 4 VMs to use more cores?
2) How can I manually allocate all cores in the Cloud Service to one specific VM?
Your best solution would be to use Azure Batch. With Batch you create a job, and it will run on as many CPUs as you specify it can run on.
Taken from the Batch front page:
When you are ready to run a job, Batch starts a pool of compute virtual machines for you, installing applications and staging data, running jobs with as many tasks as you have, identifying failures and re-queuing work and scaling down the pool as work completes. You have control over scale to meet deadlines, manage costs, and run at the right scale for your application.
1) Is it possible to create a hypothetical VM out of 4 VMs to use more cores?
No, you cannot.
2) How can I manually allocate all cores in the Cloud Service to one specific VM?
You cannot do this. You need to use a cloud-native solution to scale your process across multiple resources.
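A rough Azure CLI sketch of the Batch flow, assuming a Batch account already exists and glossing over the image/SKU details (you would likely pick a Windows image; all names below are placeholders), with one task per CSV:

az batch account login --resource-group my-rg --name mybatchaccount
az batch pool create --id csv-pool --vm-size Standard_D5_v2 --target-dedicated-nodes 4 \
    --image canonical:ubuntuserver:18.04-lts --node-agent-sku-id "batch.node.ubuntu 18.04"
az batch job create --id csv-job --pool-id csv-pool
az batch task create --job-id csv-job --task-id task-0001 --command-line "python3 run.py data/0001.csv"
# repeat (or script) task creation for the remaining CSVs; Batch schedules them across the pool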
Is there a way to restrict CPUs and memory for users running scripts directly, but allow more CPUs and memory on job submission?
I am running Torque/PBS on an Ubuntu 14.04 server and want to allow "normal" usage of 8 CPUs and 16 GB RAM, with the rest dedicated as a "mom" resource for the cluster. A normal cgroups/cpuset configuration also restricts the running jobs.
If you configure Torque with --enable-cpuset, the mom will automatically create a cpuset for each job. Torque isn't really equipped to use only part of a machine, but a hack that might work, in conjunction with only using part of the machine, is to specify np= in the nodes file; the mom will then restrict jobs to the first X CPUs.
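A sketch of what that looks like for a node that should expose only 8 CPUs to jobs (the hostname is a placeholder; the nodes file normally lives under the Torque server's server_priv directory):

# $TORQUE_HOME/server_priv/nodes  (e.g. /var/spool/torque/server_priv/nodes)
node01 np=8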