I need to request some AWS resources, and in order to do so I need to identify my requirements for:
Number of CPU cores (and possibly GPUs) for parallel processing
Amount of Memory required
I/O & network read/write time (optional / good to know)
How can I profile my script so that I know if I am using the requested resources to the fullest?
How can I profile the whole system? Something along the following lines:
i. Request large number of compute resources (CPUs and RAM) on AWS
ii. Start the system profiler
iii. Run my program and wait for it to finish
iv. Stop the system profiler and identify the peak #CPUs and RAM used
Context: Unix / Linux
You could use time(1), but there are two variants of it.
the time builtin in most shells is usually not enough for your needs, so...
you need the /usr/bin/time program from the time package
Then you'll run /usr/bin/time -v yourscript
and inside your script you could use the times shell builtin
You should also consider using perf(1) and oprofile(1).
Finally, you might code something yourself, or use an existing tool, that queries the kernel through /proc/ (see proc(5)). Utilities like xosview work that way.
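For example, a minimal sketch of that last approach, sampling a process's memory from /proc once per second (the field names are standard, but treat the snippet itself as illustrative):
pid=$(pidof yourscript)            # or pass the PID explicitly
while kill -0 "$pid" 2>/dev/null; do
    grep -E 'VmPeak|VmHWM|VmRSS' /proc/"$pid"/status
    sleep 1
done
VmHWM is the peak resident set size, so the last sample before the process exits gives you the peak RAM figure you're after.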
Related
I am trying to debug the performance of my program. What would be ideal is to have a way to see in detail when the thread was doing useful work, when it was blocked by page faults, when it was executing memory writes and reads, and so on...
I would simply like to have a detailed understanding of what's going on. Is it possible?
The Linux kernel sources come with the perf tool, which can measure a large number of performance counters, including all of those you listed. It can print statistics about them, annotate symbols, instructions and source lines with them (if debug symbols are available), and can track individual processes or whole logical CPU cores.
Your Linux distribution will probably ship the tool in a standalone package. Some kernel hardening options may limit what information root or non-root users can collect with it.
You can use perf and visualize a perf output file graphically with hotspot.
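For instance, a typical workflow (the command names are real, the program name is a placeholder) is to record with call graphs and then inspect the result, either in the text UI or by opening perf.data in hotspot:
perf stat -d ./myprogram      # quick counter summary: cycles, cache misses, page faults, ...
perf record -g ./myprogram    # sample with call graphs into perf.data
perf report                   # interactive report; or load perf.data in hotspot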
I'm new to Linux and Terminal (or whatever kind of command prompt it uses), and I want to control the amount of RAM a process can use. I have already spent hours looking for an easy-to-use guide. I have a few requirements for limiting it:
Multiple instances of the program will be running, but I only want to limit some of the instances.
I do not want the process to crash once it exceeds the limit. I want it to use HDD page swap.
The program will run under WINE, and is a .exe.
So can somebody please help with the command to limit the RAM usage on a process in Linux?
The fact that you’re using Wine makes no difference in this particular context, which leaves requirements 1 and 2. Requirement 2 –
I do not want the process to crash once it exceeds the limit. I want it to use HDD page swap.
– is known as limiting the resident set size or rss of the process, and it’s actually rather nontrivial to do on Linux, as is demonstrated by a question asked in 2010. You’ll need to set up Linux control groups (cgroups). Fortunately, Justin L.’s answer gives a brief rundown on how to do so. Note that
instead of jlebar, you should use your own Unix user name, and
instead of your/program, you should use wine /path/to/Windows/program.exe.
Using cgroups will also satisfy your other requirements – you can start as many instances of the program as you wish, but only those which you start with cgexec -g memory:limited will be limited.
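A rough sketch of the setup, assuming the cgroups v1 memory controller and the libcgroup tools (the group name "limited" comes from the answer above; the 2G limit is just an example):
sudo cgcreate -g memory:limited
sudo cgset -r memory.limit_in_bytes=2G memory:limited   # excess anonymous memory is pushed to swap while swap is available
cgexec -g memory:limited wine /path/to/Windows/program.exe
Depending on your distribution you may need to adjust ownership of the cgroup (or run cgexec via sudo) so your own user can launch processes into it.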
How can I benchmark a process in Linux? I need something like "top" and "time" put together for a particular process name (it is a multiprocess program, so there will be many PIDs).
Moreover I would like to have a plot over time of memory and cpu usage for these processes and not just final numbers.
Any ideas?
I typically throw together a simple script for this type of work.
Take a look at the kernel documentation for the proc filesystem (Google 'linux proc.txt').
The first line of /proc/stat (Section 1.8 in proc.txt) will give you cumulative cpu usage stats (i.e. user, nice, system, idle, ...). For each process, the file /proc/$PID/stat (Table 1-4 in proc.txt) will provide you with both process-specific cpu usage stats and memory usage stats (see rss).
If you google a bit you'll find plenty of detailed info on these files, and pointers to libraries / apps / code snippets that can help you obtain / derive the values you need. With that in mind, I'll focus on the high-level strategy.
For CPU stats, use your favorite scripting language to create an executable that takes a set of process ids for monitoring. At a fixed interval (ex: 1 second) poll / calculate the cumulative totals for each process and the system as a whole. During each poll interval, write all results on a single line to stdout.
For memory stats, write a similar script, but simply log the per-process memory usage. Memory is a bit easier as we directly obtain the instantaneous values.
Run these scripts for the duration of your test, passing the set of process ids that you'd like to monitor and redirecting their output to log files.
./logcpu $(pidof foo) $(pidof bar) > cpustats
./logmem $(pidof foo) $(pidof bar) > memstats
Import the contents of these files into a spreadsheet (for certain applications this is as easy as copy / paste). For CPU, you are after instantaneous values but have cumulative values, so you'll need to do some minor spreadsheet work to derive these values (it's just the delta 't(x + 1) - t(x)'). Of course you could have your cpu logger write the delta, but you'll be spending a bit more time up front on the script.
Finally, use your spreadsheet to generate a nice plot.
Following are some tools for monitoring a Linux system:
System commands like top, free -m, vmstat, iostat, iotop, sar, netstat, etc. Nothing comes near these Linux utilities when you are debugging a problem. These commands give you a clear picture of what is going on inside your server.
SeaLion: Its agent executes all the commands mentioned in #1 (plus user-defined ones), and the output of these commands can be accessed in a nice web interface. This tool comes in handy when you are debugging across hundreds of servers, as installation is simple. And it's free.
Nagios: The mother of all monitoring/alerting tools. It is very customizable but quite difficult for beginners to set up. There is a set of tools called Nagios plugins that covers pretty much all the important Linux metrics.
Munin
Server Density: A cloud-based paid service that collects important Linux metrics and gives users the ability to write their own plugins.
New Relic: Another well-known hosted monitoring service.
Zabbix
How can you profile a very long running script that is spawning lots of other processes?
We have a job that takes a long time to run - 11 or more hours, sometimes more than 17 - so it runs on an Amazon EC2 instance.
(It is doing cufflinks DNA alignment and stuff.)
The job is executing lots of processes, scripts and utilities and such.
How can we profile it and determine which component parts of the job take the longest time?
A simple CPU utilisation per process per second would probably suffice. How can we obtain it?
There are many solutions to your question:
munin is a great monitoring tool that can scan almost everything in your system and make nice graphs about it :). It is very easy to install and use.
atop could be a simple solution; it can scan CPU, memory and disk regularly, and you can store all that information in files (the -w option), then you'll have to analyze those files to detect the bottleneck.
sar, which can scan just about everything on your system, but is a little harder to interpret (you'll have to make the graphs yourself, with RRDtool for example).
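As an illustration of the atop approach (the interval and the file name are arbitrary):
atop -w /tmp/job.raw 10 &   # sample every 10 seconds into a raw log file
# ... run the job ...
atop -r /tmp/job.raw        # replay the recorded samples interactively afterwards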
I have a RHEL box that I need to put under a moderate and variable amount of CPU load (50%-75%).
What is the best way to go about this? Is there a program that can do this that I am not aware of? I am happy to write some C code to make this happen, I just don't know what system calls will help.
This is exactly what you need (internet archive link):
https://web.archive.org/web/20120512025754/http://weather.ou.edu/~apw/projects/stress/stress-1.0.4.tar.gz
From the homepage:
"stress is a simple workload generator for POSIX systems. It imposes a configurable amount of CPU, memory, I/O, and disk stress on the system. It is written in C, and is free software licensed under the GPL."
Find a simple prime number search program that has source code. Modify the source code to add a nanosleep call to the main loop with whichever delay gives you the desired CPU load.
One common way to get some load on a system is to compile a large software package over and over again. Something like the Linux kernel.
Get a copy of the source code, extract the tar.bz2, go into the top-level source directory, copy your kernel config from /boot to .config (or zcat /proc/config.gz > .config), then do make oldconfig, then while true; do make clean && make bzImage; done
If you have an SMP system, then make -j bzImage is fun: it will spawn make tasks in parallel.
One problem with this is adjusting the CPU load. It will be maximum CPU load except when waiting on disk I/O.
You could possibly do this using a Bash script. Use "ps -e -o pcpu | grep -v CPU" to get the CPU usage of all the processes. Add all those values together to get the current usage. Then have a busy while loop that basically keeps on checking those values, figuring out the current CPU usage, and waiting a calculated amount of time to keep the processor at a certain threshold. More detail is needed, but hopefully this will give you a good starting point.
Take a look at this CPU Monitor script I found and try to get some other ideas on how you can accomplish this.
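To make the idea concrete, here is a rough sketch of such a loop (the target, the sampling interval and the length of the busy slice are all arbitrary choices, and the approach is best-effort at most):
#!/bin/bash
# Keep total CPU usage near TARGET percent by alternating busy work with sleeps.
TARGET=60
while true; do
    # sum %CPU over all processes (can exceed 100 on multi-core machines)
    usage=$(ps -e -o pcpu= | awk '{s+=$1} END {printf "%d", s}')
    if [ "$usage" -lt "$TARGET" ]; then
        timeout 0.2 bash -c 'while :; do :; done'   # burn cycles for about 200 ms
    else
        sleep 0.2
    fi
done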
It really depends on what you're trying to test. If you're just testing CPU load, simple scripts to eat empty CPU cycles will work fine. I personally had to test the performance of a RAID array recently and I relied on Bonnie++ and IOzone. IOzone will put a decent load on the box, particularly if you set the file size higher than the amount of RAM.
You may also be interested in this article.
lookbusy lets you set a target value for the CPU load.
Project site
lookbusy -c util[-high_util], --cpu-util util[-high_util]
e.g. 60% load:
lookbusy -c 60
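Given the 50%-75% range in the question, the range form of the same option should also do what you want, e.g.:
lookbusy -c 50-75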
Use the "nice" command.
a) Highest priority:
$ nice -n -20 my_command
or
b) Lowest priority:
$ nice -n 20 my_command
A simple script to load and hammer the CPU using awk. The script does mathematical calculations, so the CPU load peaks at higher values passed to loadserver.sh.
Check out the script at http://unixfoo.blogspot.com/2008/11/linux-cpu-hammer-script.html
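If that link goes away, something in the same spirit (not the linked script itself) can be improvised with a few parallel awk loops; adjust the number of loops and the iteration count to taste:
for i in 1 2 3 4; do
    awk 'BEGIN { for (i = 0; i < 10^9; i++) x = sin(i) }' &
done
wait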
You can probably use some load-generating tool to accomplish this, or run a script to take all the CPU cycles and then use nice and renice on the process to vary the percentage of cycles that the process gets.
Here is a sample bash script that will occupy all the free CPU cycles:
#!/bin/bash
while true ; do
true
done
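Started that way, the loop will saturate one core; to follow the nice/renice suggestion above you can lower its priority after the fact, e.g. (busyloop.sh is just a placeholder name for wherever you saved the loop):
nice -n 10 ./busyloop.sh &
renice -n 19 -p $!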
Not sure what your goal is here. I believe glxgears will use 100% CPU.
So find any process that you know will max out the CPU to 100%.
If you have four CPU cores (0 1 2 3), you could use "taskset" to bind the process to, say, CPUs 0 and 1. That should load your box 50%. To load it 75%, bind the process to CPUs 0, 1 and 2.
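For example (glxgears here just stands in for whichever CPU-hungry process you pick):
taskset -c 0,1 glxgears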
Disclaimer: I haven't tested this. Please let us know your results. Even if this works, I'm not sure what you will achieve with it.