See how much memory my program used in Terminal - linux

I know that when I type time ./myProgram <input.txt >output.txt in the terminal, I can conveniently see the runtime once my program finishes running. There must be some similar command to see the memory usage of the program, right? Please inform me concisely. Thanks!

One way to tell how much memory the terminal is using at any one time is to use top and:
find the line that corresponds to the terminal's process.
find the column that corresponds to memory usage (man top can likely tell you which; I'm on a Windows machine right now, and can't easily look it up).
Perhaps try this command:
top -b -n 1 | grep terminal
('terminal' can be replaced with 'xterm' or whatever terminal program you use)
Again, this will tell you the memory usage of a program at a particular time.
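If you want a single after-the-fact number in the spirit of time, the external GNU time binary (not the shell built-in) can report peak memory. A minimal sketch, assuming GNU time is installed:
/usr/bin/time -v ./myProgram <input.txt >output.txt
Look for "Maximum resident set size" in its report; on BSD/macOS the equivalent flag is -l.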

Related

extract total HD memory and HD memory usage with linux command top

I'm looking to extract the disk memory usage from the top command using grep, and save it into a variable available to the bash interpreter.
... and I assume you failed to grep those parts from top's output without understanding why...
This happens to many users; the reason is that the top command has a somewhat unexpected output strategy. It does make sense once you dig into it, but, as said, it is unexpected:
By default, top does not write its output to stdout, which is the precondition for being able to grep through that output. However, if you take a look into its man page (man top), you will spot the -b flag, which makes top write its output to stdout. This enables you to process the output as you are used to in unixoid environments, for example by "grepping" parts of it for further usage.
Together with the -n flag, which allows you to limit the command's output to a single iteration (again: see the man page), you can construct a simple extraction command:
top -b -n 1 | grep "KiB"
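To get the figures into a shell variable, as asked, something like this should work (a sketch: the exact header text, "KiB Mem" here, varies between top versions):
mem_line=$(top -b -n 1 | grep "KiB Mem")
echo "$mem_line"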
I think from here on you will find the way by yourself :-) Have fun!
top provides no information about your hard disk usage.
Take a look at df.
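If what you actually want is hard disk usage in a variable, a sketch along these lines should do it (assuming df's default column layout, where the fifth field of the data line is Use%):
disk_usage=$(df -h / | awk 'NR==2 {print $5}')
echo "$disk_usage"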

Is there a better way to display cpu usage in tmux?

Here's a solution which may itself consume a lot of CPU (stolen from this article):
There's a difference on my Ubuntu 12 EC2 server: I have to use top -bn1 instead of top -ln.
Here's my related .tmux.conf file:
set -g status-right '#[fg=yellow]#(getCpuUsage.sh)'
It actually calls top every 2 seconds and outputs a whole lot of information. I think there should be a way that involves less CPU consumption, or some flag that limits top's output to just the CPU usage.
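The article itself isn't shown here, but a hypothetical sketch of what such a getCpuUsage.sh might contain, assuming GNU procps top (whose summary line begins with "%Cpu(s)" or "Cpu(s)" depending on the version):
#!/bin/sh
# One batch iteration of top; print the user-space CPU percentage.
top -bn1 | grep -E '^%?Cpu' | awk '{print $2}'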
I use the small tmux-mem-cpu-load C++ program. It's at least one fork/exec per update either way, but probably better than invoking a shell.
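If the binary is on your PATH, wiring it in looks something like this (a sketch; #() is tmux's shell-command interpolation, and status-interval controls how often it re-runs):
set -g status-interval 2
set -g status-right '#(tmux-mem-cpu-load)'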
Had I known about tmux-mem-cpu-load, I would have been too lazy to write my own rainbarf:
It has a fancier look, but it is a Perl script, so it is not a good idea to run it every 2 seconds (in my experience, every 15 seconds suffices).
You can try vmstat(1). It displays the averaged CPU load over all CPUs: user, system, idle and IO wait in the last four fields:
vmstat | while read s; do [[ "$s" =~ ([[:space:]]+[0-9]+){4}$ ]] && echo "$BASH_REMATCH"; done
Start the top command.
Press 1.
Press 0, then press "t" twice.
It will display a bar graph of the CPU usage.
You can change the color by pressing "Z", then choosing a color number from the list.

Benchmark a Linux Bash script

Is there a way to benchmark a bash script's performance? The script downloads a remote file and then makes calls to multiple command-line programs to manipulate it. I would like to know as much of the following as possible:
Total time
Time spent downloading
Time spent on each command called
-=[ I think these could be wrapped in "time" calls, right? ]=-
Average download speed
(uses wget)
Total Memory used
Total CPU usage
CPU usage per command called
I'm able to edit the bash script to insert any benchmarking commands needed at specific points (i.e., between app calls). Not sure if some "top" ninjary could solve this or not. I wasn't able to find anything useful (at least to my limited understanding) in the man file.
I will be running the benchmarks in the OS X Terminal as well as on Ubuntu (if either matters).
strace -o trace -c -Ttt ./script
-c counts the time, calls, and errors for each system call and reports a summary.
-T shows the time spent in each system call, and -tt prefixes each line with the time of day, including microseconds.
-o saves the output in the file "trace".
You should be able to achieve this in a number of ways. One way is to use the time built-in for each command of interest and capture the results. You may have to be careful about any pipes and redirects.
You may also consider trapping the SIGCHLD, DEBUG, RETURN, ERR and EXIT signals and putting timing information in there, but you may miss some results.
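A minimal sketch of the time-wrapping approach, assuming bash (the URL and file names are placeholders; TIMEFORMAT is bash's format string for time output):
#!/bin/bash
TIMEFORMAT='%R seconds elapsed'
# `time` writes to stderr, so redirect the group's stderr to the log.
{ time wget -q https://example.com/file.tar.gz ; } 2>> benchmark.log
{ time tar xzf file.tar.gz ; } 2>> benchmark.log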
The concept of CPU usage per command won't give you anything useful; nearly all commands use 100% of a CPU while they run. Memory usage is something you can pull out, but for deeper numbers you should look at process tracing.
If you want deep process statistics, then you would want to use strace; see the strace(1) man page for details. I doubt that -Ttt, as suggested elsewhere, is all that useful: all it tells you is system call times, and you want other process trace info.
You may also want to look at the ltrace and dstat tools.
A similar question is answered here: Linux benchmarking tools

How to allow only certain text to be printed on the Emacs console?

This question may not be related only to Emacs, but to any development environment that uses a console for its debugging process. Here is the problem. I use Eshell to run the application we are developing. It's a J2ME application, and for debugging we just use System.out.println(). Now, suppose I want only text that starts with Eko: to be displayed in the console (interactively); is that possible?
I installed Cygwin in my Windows environment and tried to grep the output like this:
run | grep Eko:
It does filter out everything except output beginning with Eko:, but it's not interactive: the output is suppressed until the application quits. That's useless for debugging.
Is it possible to do this without touching the application code itself?
I tagged this with linux as well, because maybe some Linux folks know the answer.
Many thanks!
The short: Try adding --line-buffered to your grep command.
The long: I assume that your application is flushing its output stream with every System.out.println(), and that grep has the lines available to read immediately, but is choosing to buffer its output until it has 'enough' saved up to make writing worthwhile. (This is typically 4K or 8K of data, which could be several hundred lines, depending upon your line length.)
This buffering makes great sense when the output feeds another program in the pipeline; reducing needless context switches is a great way to improve throughput.
But if your printing is slow enough that it doesn't fill the buffer quickly enough for 'real time' output, then switching to line-buffered output might fix it.
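With that flag, the pipeline from the question becomes (run being the Eshell command from the post):
run | grep --line-buffered Eko: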

How can I record what process or kernel activity is using the disk in GNU/Linux?

On a particular Debian server, iostat (and similar) report an unexpectedly high volume (in bytes) of disk writes going on. I am having trouble working out which process is doing these writes.
Two interesting points:
I tried turning off system services one at a time, to no avail. Disk activity remains fairly constant and unexpectedly high.
Despite the writing, I do not seem to be consuming more overall space on the disk.
Both of those make me think that the writing may be something that the kernel is doing, but I'm not swapping, so it's not clear to me what Linux might try to write.
I could try out atop:
http://www.atcomputing.nl/Tools/atop/
but I would like to avoid patching my kernel.
Any ideas on how to track this down?
iotop is good (great, actually).
If you have a kernel from before 2.6.20, you can't use most of these tools.
Instead, you can try the following (which should work for almost any 2.6 kernel IIRC):
sudo -s
dmesg -c                           # clear the kernel ring buffer
/etc/init.d/klogd stop             # stop the kernel logger so it doesn't consume the messages
echo 1 > /proc/sys/vm/block_dump   # enable block-I/O debug logging
rm -f /tmp/disklog
watch "dmesg -c >> /tmp/disklog"   # press CTRL-C when you're done collecting data
echo 0 > /proc/sys/vm/block_dump   # turn the logging back off
/etc/init.d/klogd start
exit                               # quit the root shell
cat /tmp/disklog | awk -F"[() \t]" '/(READ|WRITE|dirtied)/ {activity[$1]++} END {for (x in activity) print x, activity[x]}' | sort -nr -k2
The dmesg -c lines clear your kernel ring buffer. The kernel logger is then shut off, and the buffer is manually dumped to disk via watch (the in-memory buffer is small, which is why we need to keep dumping it). Let it run for about five minutes or so, and then CTRL-C the watch process. After shutting off the logging and restarting klogd, analyze the results using the little bit of awk at the end.
If you are using a kernel newer than 2.6.20, this is very easy, as that is the first version of the Linux kernel to include I/O accounting. If you are compiling your own kernel, be sure to include:
CONFIG_TASKSTATS=y
CONFIG_TASK_IO_ACCOUNTING=y
Kernels from Debian packages already include these flags, so there is no need to recompile your kernel. The standard utility for accessing I/O accounting data in real time is iotop(1). It gives you a complete list of processes managed by the I/O scheduler, and displays per-process statistics for read, write, and total I/O bandwidth used.
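A typical invocation (iotop needs root; these are standard iotop flags):
sudo iotop -o -b -n 3
Here -o shows only processes actually doing I/O, -b runs in batch (non-interactive) mode, and -n 3 stops after three iterations.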
You may want to investigate iotop for Linux. There are some Solaris versions floating around, but there is a Debian package, for example.
You can use the UNIX command lsof (list open files). It prints the process, process ID, and user for every open file.
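For instance, to narrow the listing down to files open for writing (in lsof's default output the FD column is the fourth field, and numeric descriptors carry w or u for write or read/write access):
lsof / | awk '$4 ~ /^[0-9]+[uw]/'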
You could also use htop, enabling the IO_RATE column. htop is an excellent top replacement.
Brendan Gregg's iosnoop script can (heuristically) tell you which processes are currently using the disk on recent kernels (example iosnoop output).
You could try SystemTap; it comes with a lot of examples, and if I'm not mistaken, one of them shows how to do this sort of thing.
I've recently heard about Mortadelo, a Filemon clone, but have not checked it out myself yet:
http://gitorious.org/mortadelo
