How does an AMI create processes?

I understand that creating an image copies the system, but how does this work with processes?
Does a new AMI start each process that was running at that moment from the beginning, or are running processes snapshotted and continued?

Running processes are not part of an AMI. An AMI captures the contents of the instance's disk, so a new instance launched from the AMI boots from scratch; if you want anything to run on the new instance, it needs to be configured to run at boot (for example, as a service). By default, the AMI creation process shuts the instance down before capturing a snapshot of its disk and boots it back up afterwards. You can choose to suppress this behavior and snapshot the running instance instead, but that does not preserve system RAM or running processes: a new instance launched from such an AMI is in the state the source instance would be in if it had been powered off (without a clean shutdown) and rebooted.
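
For reference, here is a minimal sketch of both capture modes using boto3 (the AWS SDK for Python); the instance ID and image names are placeholders:

    import boto3

    ec2 = boto3.client('ec2')

    # Default behavior: the instance is stopped, its disk is snapshotted,
    # and it is started again afterwards.
    ec2.create_image(InstanceId='i-0123456789abcdef0', Name='my-ami-clean')

    # NoReboot=True snapshots the disk while the instance keeps running.
    # RAM and running processes are still NOT captured; the result is
    # crash-consistent, as if the source had been powered off uncleanly.
    ec2.create_image(InstanceId='i-0123456789abcdef0', Name='my-ami-live',
                     NoReboot=True)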

Related

Node goes to unusable state when using GPU Container supported VMs in Azure batch pool

I am trying to create a pool of GPU-based, container-supported VMs. I have a valid ContainerConfiguration and start task. The VM size is Standard_NC6, but whenever I create a pool it always goes to an unusable state. If I remove the ContainerConfiguration setting, the nodes are in an idle state. I don't see a problem with the ContainerConfiguration settings, because if I choose the VM size standard_f2s_v2 (non-GPU) and keep the same ContainerConfiguration settings, it works fine and installs all images on the machine. I think it has to do with the installation of some NVIDIA libraries while setting up the nodes.
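
For context, a container-enabled pool definition along these lines is what is at issue. This is a minimal sketch using the azure-batch Python SDK; the image reference, node agent SKU, and container image name are illustrative assumptions, not a verified GPU-capable combination:

    import azure.batch.models as batchmodels

    # Hypothetical pool definition; publisher/offer/sku and the container
    # image name below are placeholders, not a verified GPU configuration.
    pool = batchmodels.PoolAddParameter(
        id='gpu-container-pool',
        vm_size='Standard_NC6',
        target_dedicated_nodes=1,
        virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
            image_reference=batchmodels.ImageReference(
                publisher='microsoft-azure-batch',
                offer='ubuntu-server-container',
                sku='16-04-lts',
                version='latest'),
            node_agent_sku_id='batch.node.ubuntu 16.04',
            container_configuration=batchmodels.ContainerConfiguration(
                container_image_names=['myregistry.azurecr.io/myimage:latest'])))

    # batch_client is an authenticated azure.batch.BatchServiceClient:
    # batch_client.pool.add(pool)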

Command to increase CPU utilization in my Amazon Linux AMI

I am learning auto-scaling as part of my AWS class, running an Amazon Linux instance, and I want to increase the CPU utilization so that my auto-scaling policy will launch a new instance. What is the Linux command to increase the CPU utilization of my currently running instance? I am new to the Linux CLI.
You can write a Python script that uses multiprocessing with an infinite loop to simulate CPU load (see the sketch after these answers).
Use the Linux stress utility, e.g. stress --cpu 4.
You can fake it: select your Auto Scaling group > Scaling policies tab > Actions button > Execute.
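A minimal sketch of the multiprocessing approach, assuming the goal is simply to peg every core until the script is killed:

    import multiprocessing

    def burn():
        # Tight infinite loop keeps one core at ~100% utilization.
        while True:
            pass

    if __name__ == '__main__':
        # Start one busy-loop process per CPU core.
        for _ in range(multiprocessing.cpu_count()):
            multiprocessing.Process(target=burn).start()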

Linux container (LXC) resource assignment

For a Linux container, after it has been created and some applications are running in it, can CPU and memory be dynamically added to the container?
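Assuming a classic LXC container, yes: resource limits live in cgroups and can be changed while the container runs, for example with lxc-cgroup. A minimal sketch driving it from Python's subprocess (container name and values are placeholders):

    import subprocess

    CONTAINER = 'mycontainer'  # hypothetical container name

    def set_cgroup(key, value):
        # lxc-cgroup rewrites a cgroup value for a running container.
        subprocess.run(['lxc-cgroup', '-n', CONTAINER, key, value],
                       check=True)

    # Pin the running container to CPUs 0-3 and raise its memory limit.
    set_cgroup('cpuset.cpus', '0-3')
    set_cgroup('memory.limit_in_bytes', '2G')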

I could not submit a job to the executing node in condor apart from the central manager

I have a Condor pool which consists of 4 dedicated machines: one is set up as the central manager, submit, and execute node, while the other three are set up as execute nodes. I used CentOS 5.4 as the OS for all the machines. My problem is that when I submit a job from the central manager, it only runs on the central manager; when I specify in the JDL file that the job should run on any machine apart from the central manager, the job stays on hold and does not run. When I type condor_status all nodes appear. I keep the MASTER and STARTD daemons in the daemon list for the execute nodes. Has anyone come across this problem?
There's not enough information to answer your question, but the first thing to do is to run condor_q -analyze <jobid> and see what it tells you. See the Condor manual Section 2.6.5: Why is the job not running?
One possible cause is that you're not telling Condor to transfer your input/output files for you, and your nodes have different "filesystem domains", so Condor is unable to find a host which shares a common filesystem with your submit host.
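If file transfer turns out to be the issue, here is a sketch of a submit description that enables it, written via Python for convenience; the executable name and manager hostname are placeholders:

    import subprocess
    import textwrap

    # Hypothetical submit file enabling Condor's file transfer so execute
    # nodes without a shared filesystem can still run the job.
    submit_file = textwrap.dedent("""\
        executable              = my_job.sh
        should_transfer_files   = YES
        when_to_transfer_output = ON_EXIT
        requirements            = Machine != "manager.example.com"
        queue
    """)

    with open('job.sub', 'w') as f:
        f.write(submit_file)

    subprocess.run(['condor_submit', 'job.sub'], check=True)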

prioritizing gearman job servers?

I have 2 machines running the same workers. One machine should be "primary", as it is very powerful, and the other machine should serve as a backup for when the primary machine goes down or crashes. When the primary machine is up and running, all jobs should default to it for as long as there are available workers.
From my tests, I've noticed that gearmand randomly picks a machine to send the job to. Is there any way at all to prioritize the machines to send jobs to?
Example:
Primary machine running 8 instances of the same worker
Backup machine running 1 instance
Do:
Use the primary machine until no more workers are available to fulfill the job queue, then continue onto the backup machine.
Any way of accomplishing this?
Thanks everyone!
I don't think this is possible with the current API. You could, however, run two gearmand instances, one on each of your worker servers, and list them both in the client, with the powerful machine's server first. That way, at least with the current versions of the client APIs I'm aware of, the client will first use the first gearmand and its workers, and only if that one is unavailable will it switch to the second, which has the less powerful machine's workers.
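
A minimal sketch of that setup using the python-gearman client library, assuming (as described above) that the client tries servers in listed order; the hostnames and task name are placeholders:

    import gearman  # python-gearman client library

    # List the powerful machine's gearmand first; the client falls back
    # to later servers only when an earlier one is unreachable.
    client = gearman.GearmanClient([
        'primary.example.com:4730',  # hypothetical primary host
        'backup.example.com:4730',   # hypothetical backup host
    ])

    # Submit a job; 'reverse' is a placeholder task name.
    result = client.submit_job('reverse', 'Hello World!')
    print(result.result)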
