python: will memory_profiler affect runtime? - python-3.x

I am evaluating tools to profile my Python program. One of the interesting tools here is memory_profiler. Before moving forward, I just want to know whether memory_profiler affects runtime. The reason I am asking is that memory_profiler outputs a lot of memory usage data, so I suspect it might affect runtime.
Thanks
Derek

It depends on how you are using memory_profiler. It can be used in two different ways:
To get memory usage line by line (run with python -m memory_profiler my_script.py). This needs to query the OS for memory information on every line executed within the profiled function. How much this affects run time depends on the number of lines in the function: if it has many lines that each execute quickly, the overhead can be significant. On the other hand, if the function to profile has few lines and each line takes significant computing time, the overhead will be negligible.
To get memory usage as a function of time (run with mprof run my_script.py and plot with mprof plot). In this case the function that collects the memory usage runs in a different process from the one running your script, so the overhead is minimal (unless you are using all CPUs).
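For reference, here is a minimal sketch of the line-by-line mode; the script and function names are just placeholders:

    # my_script.py -- run with: python -m memory_profiler my_script.py
    from memory_profiler import profile

    @profile
    def build_lists():
        # Every line below triggers a memory measurement, so many fast lines
        # mean proportionally more profiling overhead.
        a = [0] * 10**6
        b = [1] * 10**7
        del b
        return a

    if __name__ == "__main__":
        build_lists()

For the time-based mode, mprof run my_script.py followed by mprof plot works on any script, with no decorator needed.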

Related

Results of Kernel PCA and LLE are different for different numbers of CPU cores provided for the run

I am doing dimensionality reduction using Scikit-Learn's KPCA and sometimes LLE APIs.
I have a dataset with a shape of around (700 x 150), all numerical.
I am just trying to pass this data to one of the above-mentioned APIs to reduce its features. I have written a simple Python script (say run.py) for this, which I can run from the terminal and which also saves the data after reduction.
The issue I am facing is this: I am using the "taskset" command in the Linux terminal to assign a certain number of CPUs to a particular run. I can give any number of the CPUs available on my machine; for example, the terminal command could be:
taskset -c 1-3 python run.py when I want to give 3 cores
or taskset -c 1-2 python run.py when I want to use just 2 cores.
or simply just python run.py when I do not want to specify any CPU.
The problem is that I am getting different results in all three cases. By different results I mean that the output data of these three runs differ from one another, which should not happen, since I use the same script, the same input data, and the same algorithm (either KPCA or LLE) for all three runs. I have also set the 'n_jobs' parameter to 2, because I am using at least 2 CPUs whenever I use taskset, and I have supplied a random_state. Fortunately, all three results are fully reproducible: the 1st command (with 3 cores) produces the same output data on every run, and the 2nd and 3rd commands likewise produce the same results each time they are run.
But the question is: why are these outputs different from each other?
Setting taskset for my runs matters because I am working on a multi-core machine and need to schedule different CPUs for different tasks; sometimes I have 2, sometimes 3, sometimes n CPUs for the same task, and I assign them accordingly. But I don't want the results to depend on how many CPUs I gave, because this also affects the classification performance later in the pipeline.
Also, I have done some experiments, and I don't see this behavior when I use Isomap to reduce my data. The results are the same no matter how many CPUs I give.
I also used the "numactl" command in place of "taskset", but the behavior was the same.
Surprisingly, I can also see the same behaviour when using the kpca function in R to do the same thing! Is there anything common and fundamental regarding KPCA that I am missing?
Please help.
Thanks,
Pranay
There might be something interesting in understanding exactly how the results differ. Algorithms like LLE, PCA and k-PCA rely on a matrix factorization that has a sign ambiguity (e.g. in PCA, you can negate the component vectors and negate the coefficients and get the "same" answer). I'm not sure exactly what approach is being used for that matrix factorization, what role randomization plays in it, or how it varies when it is parallelized, but it doesn't surprise me that the result might differ when the computation is split across more processors, even with the same random seed.
TL;DR: If the results are different just in that some coordinates are negated, that isn't surprising. If they are more different than that, then I don't have a good answer.
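If you want to check that hypothesis, a rough sketch along these lines (plain NumPy; the two .npy files are hypothetical outputs saved by run.py under different taskset settings) will tell you whether the runs differ only by per-component sign flips:

    import numpy as np

    # Hypothetical embeddings saved by run.py for two different taskset runs.
    emb_a = np.load("reduced_3cores.npy")   # shape (n_samples, n_components)
    emb_b = np.load("reduced_2cores.npy")

    def equal_up_to_sign(a, b, tol=1e-8):
        """True if each column of `a` matches the corresponding column of `b`
        up to a global sign flip of that column."""
        if a.shape != b.shape:
            return False
        return all(np.allclose(a[:, j], b[:, j], atol=tol) or
                   np.allclose(a[:, j], -b[:, j], atol=tol)
                   for j in range(a.shape[1]))

    print(equal_up_to_sign(emb_a, emb_b))

If this prints True, the differences are only the sign ambiguity described above, and you can normalise the signs (e.g. force the largest-magnitude entry of each component to be positive) before the data reaches your classifier.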

Linux CPU Usage Tools

Background
I've written a tool to capture CPU usage on a per-thread basis. The output of the tool is a binary file that I can pump into a parsing utility I wrote, and the output of the parsing utility is a CSV file that I can import into Excel to chart pretty graphs of process/thread CPU usage.
This CPU usage capture tool runs on an embedded ARM platform with a Linux kernel based on 2.6.35.3. That being said, I was concerned about keeping the tool lightweight. I didn't want it to write directly to a CSV file, in order to minimize the processing time and the file size of the captured data.
Question
The tool works, but I'm wondering if I took the long way around the problem? Is there already a tool out there that does this (or something like it)?
You're probably wondering why I care if I already made a tool that works. Well, it's not as lightweight as I'd like: it's taking up about 10% of CPU usage. As a benchmark, top only takes about 1% (max).
Update
I've decided to continue using my tool for now. At least until a better solution becomes available. I was able to shave off a couple percentage points by using open() instead of fopen() on /proc/stat. I'm also using read() instead of fgets().
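(For anyone curious about the data source such a tool samples: per-thread CPU time comes from the utime and stime fields of /proc/<pid>/task/<tid>/stat. Below is a rough Python sketch of one sampling pass, purely to illustrate the fields involved; the actual tool is native code for exactly the overhead reasons discussed here.)

    import os

    def thread_cpu_ticks(pid):
        """Return {tid: (utime, stime)} in clock ticks for every thread of `pid`.
        utime and stime are fields 14 and 15 of /proc/<pid>/task/<tid>/stat."""
        ticks = {}
        task_dir = "/proc/%d/task" % pid
        for tid in os.listdir(task_dir):
            with open("%s/%s/stat" % (task_dir, tid)) as f:
                data = f.read()
            # The comm field (2nd) can contain spaces, so split after the ')'.
            fields = data.rsplit(")", 1)[1].split()
            ticks[int(tid)] = (int(fields[11]), int(fields[12]))
        return ticks

    # Convert ticks to seconds with os.sysconf("SC_CLK_TCK"); sample twice and
    # diff the values to get per-thread CPU usage over an interval.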
IBM has a tool called nmon which does the same (for AIX & Linux). According to IBM's documentation, it takes ~2% CPU. You may want to look at it.
Comparing nmon with your tool could give you a fair idea of your program's performance and how you might improve your CSV capture.
This might be a bit of a steep learning curve, but you might want to look into SystemTap: http://sourceware.org/systemtap/

Accurate way of measuring overhead in kernel space

I recently implemented a security mechanism for Linux which hooks into system calls. Now I have to measure the overhead it causes. The project requires comparing the execution time of typical Linux apps with and without the mechanism. By typical Linux apps I mean, for example, gzipping a 1 GB file, running 'find /', and grepping files. The main goal is to show the overhead for different types of tasks: CPU bound, I/O bound, etc.
The question is: how do I organise the tests so that they are reliable? The first important point is that my mechanism works only in kernel space, so it is the system time that is relevant to compare. I can use the 'time' command for that, but is it the most accurate way of measuring system time? Another idea is to run those apps in long loops to minimize error. Should the loops be inside or outside the time command? If they are outside, I will get many results - should I pick the min, max, median, or average?
Thanks for any suggestions.
I think you want to measure a typical application payload instead (as Ninjajl's comment suggests, compiling the kernel could be a good payload). You probably don't want to measure the overhead inside each syscall itself, or even inside the kernel as a whole.
The reason is that most applications spend much more time and resources in user space than in kernel land (i.e. in syscalls), so overhead inside syscalls is a "second-order" effect and probably doesn't matter as much. Of course, there are likely exceptions.
Perhaps the Phoronix Test Suite might be relevant.
You might be interested in oprofile.
See also this answer and this question
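As a concrete illustration of the "repeat the run and compare system time" idea from the question, here is a hedged Python 3 sketch: it runs a payload command several times, takes the system CPU time of each run from getrusage(RUSAGE_CHILDREN) deltas, and reports the median (the payload command is only an example):

    import resource
    import statistics
    import subprocess

    def child_sys_time(cmd):
        """Run `cmd` once and return the system CPU time (seconds) it used."""
        before = resource.getrusage(resource.RUSAGE_CHILDREN).ru_stime
        subprocess.run(cmd, check=True)
        after = resource.getrusage(resource.RUSAGE_CHILDREN).ru_stime
        return after - before

    cmd = ["gzip", "-kf", "bigfile"]          # example payload; substitute your own
    samples = [child_sys_time(cmd) for _ in range(30)]
    print("median sys time: %.4f s" % statistics.median(samples))
    print("min / max:       %.4f / %.4f s" % (min(samples), max(samples)))

The median is usually the safest single number to report, since it is robust against the occasional run that gets perturbed by unrelated system activity.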

Benchmark a linux Bash script

Is there a way to benchmark a bash script's performance? The script downloads a remote file and then makes calls to multiple command-line programs to manipulate it. I would like to know (as much of this as possible):
Total time
Time spent downloading
Time spent on each command called
-=[ I think these could be wrapped in "time" calls right? ]=-
Average download speed
uses wget
Total Memory used
Total CPU usage
CPU usage per command called
I'm able to edit the bash script to insert any benchmark commands needed at specific points (i.e., between app calls). Not sure if some "top" ninja-ry could solve this or not. I was not able to find anything useful (at least to my limited understanding) in the man file.
I will be running the benchmarks on the OS X Terminal as well as on Ubuntu (if either matters).
strace -o trace -c -Ttt ./script
-c counts time, calls and errors for each system call and reports a summary.
-T shows the time spent in each system call, and -tt prints a timestamp with microsecond precision on each line.
-o saves the output in the file "trace".
You should be able to achieve this in a number of ways. One way is to use the time built-in for each command of interest and capture the results. You may have to be careful about any pipes and redirects.
You may also consider trapping the SIGCHLD, DEBUG, RETURN, ERR and EXIT signals and putting timing information in there, but you may not get all the results you want.
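If you are open to driving the stages from Python 3 instead of pure bash (the neighbouring questions here are Python-flavoured), a rough sketch of the same idea - per-command wall time plus CPU time and peak RSS from getrusage - might look like this; the command lines are placeholders:

    import resource
    import subprocess
    import time

    def run_timed(cmd):
        """Run one external command; return (wall seconds, child CPU seconds, max RSS)."""
        before = resource.getrusage(resource.RUSAGE_CHILDREN)
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        wall = time.perf_counter() - start
        after = resource.getrusage(resource.RUSAGE_CHILDREN)
        cpu = (after.ru_utime - before.ru_utime) + (after.ru_stime - before.ru_stime)
        # Note: ru_maxrss is in kilobytes on Linux but in bytes on OS X.
        return wall, cpu, after.ru_maxrss

    for cmd in (["wget", "-q", "http://example.com/file"],   # placeholder URL
                ["gzip", "-f", "file"]):                      # placeholder command
        wall, cpu, maxrss = run_timed(cmd)
        print("%s: wall=%.2fs cpu=%.2fs max_rss=%s" % (cmd[0], wall, cpu, maxrss))

Average download speed would then simply be the file size divided by the wall time of the download step.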
Per-command CPU usage won't give you anything useful; every command uses 100% of a CPU while it runs. Memory usage is something you can pull out, but if you want deep process statistics then you would want to use strace... See the strace(1) man page for details. I doubt that -Ttt, as suggested elsewhere, is all that useful: it only tells you system call times, and you want other process trace info.
You may also want to look at the ltrace and dstat tools.
A similar question is answered here Linux benchmarking tools

ImageMagick's display GPU "memory leak"?

I'm testing a CUDA app and I have run into a strange memory issue:
My program performs some image operations and displays them using ImageMagick's display program.
The problem is that every time I run IM's display I get more GPU memory usage, and hence less memory for GPU computation.
I'm using IM's display because I couldn't find anything else that displays an image from pipe input. Any suggestions?
Anyway, why does IM's display take so much GPU memory, and why is it not freed?
Based on your question, you're attempting to display a series of files in sequence using a shell not unlike Bash after performing a set of GPU-intensive operations. You're curious why more GPU memory is being consumed with every subsequent invocation of ImageMagick display, which appears to be closing out successfully after the conclusion of each operation.
We may further theorize that you're using ImageMagick's OpenCL support for at least some of your processing. While we don't have enough information to determine what your GPU's texture buffers look like at the completion of each rendering via display, I speculate your GPU isn't freeing textures expediently, causing memory to slowly creep up.
Instead of continuing to build conjecture around this hypothesis, I will instead recommend a tool to debug your issue: gDEBugger. This should allow you to interrogate your video card to determine exactly why things are slowing down.
Best of luck with your application.
I know this is old, but we have figured out that using pipes (popen()) makes a full copy of the program in memory, which also copies the program's end-of-process directives, or whatever they are called... So when I close a program opened with popen, I also tear down all the CUDA-related contexts that are usually freed in the "background" when the program ends. So freeing CUDA memory after I close the popen'd application doesn't work, and I think this was my memory leak and a major general programming error.
I hope someone finds this useful.
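For what it's worth, here is a hedged Python 3 sketch of the workaround implied above: start the viewer before any CUDA context exists, so the forked child never inherits GPU state, and feed it the image over a plain pipe (ImageMagick's display reads from stdin when given '-'). Whether this fully avoids the leak depends on your setup:

    import subprocess

    # Launch the viewer *before* any CUDA initialization, so the forked child
    # process does not inherit a CUDA context.  `display -` reads from stdin.
    viewer = subprocess.Popen(["display", "-"], stdin=subprocess.PIPE)

    # ... the CUDA work (e.g. via PyCUDA or Numba) happens here and produces
    # an image file; the path below is just a placeholder ...
    with open("result.png", "rb") as f:
        viewer.stdin.write(f.read())
    viewer.stdin.close()
    viewer.wait()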
