I want to check the network bandwidth used by my process.
For this I found that the nethogs tool is useful. Using it I can see which process is eating up network bandwidth and how that process behaves.
But how do I capture data from nethogs for my process and store it in a log file?
You can run nethogs in the background in trace mode and write its output to a file like this:
sudo nethogs -t eth1 &> /var/tmp/nethogs.log &
Download and build the nethogs-parser as described here.
Then after you have accumulated enough data you can run the parser to see the results:
./hogs -type=pretty /var/tmp/nethogs.log
Make sure to kill the running nethogs process when you are done collecting data.
More info here on automating the task.
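If you'd rather not build the parser, a small awk sketch can summarize the trace log directly. This assumes the trace-mode lines are tab-separated as program/pid/uid followed by sent and received KB/s, with "Refreshing:" marker lines in between; check a sample of your log first, since the format has varied between versions:
# Hypothetical summary of a nethogs trace log; field layout is an assumption.
awk -F'\t' '
/^Refreshing/ { next }                                  # skip refresh markers
NF == 3 { sent[$1] += $2; recv[$1] += $3; n[$1]++ }     # accumulate per program
END {
    for (p in sent)
        printf "%s avg_sent=%.3f avg_recv=%.3f KB/s\n", p, sent[p]/n[p], recv[p]/n[p]
}' /var/tmp/nethogs.log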
I don't know when these options were implemented, but you can use nethogs -t or nethogs -b. The PID and user are strangely placed at the end of the command string, but they are easy enough to parse.
I think you need to use the latest CVS version, 0.8.1-SNAPSHOT.
You can use this command to capture output:
nethogs -d 5 | sed 's/[^[:print:][:cntrl:]]//g' > output.txt
The correct nethogs command is
nethogs -d 1 eth0 > output.txt
You need to specify the network interface; otherwise the default interface eth0 will be used. Sometimes nethogs does not show the proper output because of the network interface, so it is always better to name the interface explicitly and to generate some traffic during the experiment. You can print the output to a file by appending > output.txt.
The -d argument specifies how frequently the output is refreshed. Here I gave 1, which means the output is shown once per second.
Hope this might be useful.
I want to monitor and log all traffic that a specific process produces.
I know about tcpdump, but it seems it doesn't support filtering by process (PID/path, or at least user).
Is there any other way to log all traffic from a process? Ideally I should be able to filter by port as well.
Thanks!
You should use the strace command:
strace -o /tmp/network.out -e trace=network -fp <PID>
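If you want to trace a program from launch rather than attach to a running PID, the same options work without -p (a sketch; yourprogram is a placeholder):
strace -o /tmp/network.out -e trace=network -f yourprogram
The logged connect()/sendto() calls include decoded sockaddr structures, so you can grep the output file for specific ports afterwards.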
I am currently using the following to read the system log:
journalctl -u <service name> | tail -n1
However, I need to monitor the system log live, as changes come in (in a bash script), instead of just looping through the log file.
What is the best way to do this? I did some research into where the journalctl command reads from, and it seems the system logs are not directly readable (at least not when I tried cat).
Any suggestions would be much appreciated!
The journalctl tool has a -f flag, which prints new entries of the log as soon as they appear. Use it like this:
$ journalctl -u <service name> -f
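If the goal is to react to each entry from a bash script, a minimal sketch (keeping the <service name> placeholder; -n 0 suppresses the backlog so only new entries arrive) would be:
#!/bin/bash
# Follow new journal entries for a unit and process each line as it arrives.
journalctl -u <service name> -f -n 0 | while read -r line; do
    echo "new entry: $line"    # replace with your own handling
done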
I want to run a game server inside my Ubuntu machine. I want to run it in the background and write the live output of that process to a log file. I tried using nohup and running the game server with "&" at the end, but I couldn't make it work the way I wanted.
Then I started reading about named pipes and actually gave them a go. I made a simple script that in theory should work. But, of course, I am missing something.
First, I made a pipe using the mkfifo command.
mkfifo testpipe
Then I created a small script:
#!/bin/bash
./mta-server64 > testpipe &
pid=$!
echo $pid  # so I know the PID of the process
cat < testpipe > log.txt &
(Note: I wrote this code from memory.)
The code works only when there is an error and the process stops; it actually records the game console error. But while the game server is running, I get no output in the log file.
I want to read the output (stdout and stderr, if I am not mistaken) of a process running in the background and record it in a log file.
I also thought about using screen, as it can log everything to a file, but I would prefer not to use it if there is a better solution.
EDIT:
First of all: thank you for the interest you have shown in helping me. In the same way, I have to apologize for giving only scarce details about what I intend to do with this small project, and for my limited understanding of stdout and stderr.
Let's start from the beginning.
I want to run a game server named Multi Theft Auto (https://multitheftauto.com/). This is GTA San Andreas but multiplayer.
I can easily run this game server on my Ubuntu server by calling the executable ./mta-server64. After calling it, the game server console appears:
[|] MTA: San Andreas :: 0/32 players :: 196 resources :: 125 fps (25)
MTA:BLUE Server for MTA:SA
==================================================================
= Multi Theft Auto: San Andreas v1.5.6 [64 bit]
==================================================================
= Server name : Default MTA Server
= Server IP address: auto
= Server port : 22884
=
= Log file : /root/mta/mods/deathmatch/logs/server.log
= Maximum players : 32
= HTTP port : 22564
= Voice Chat : Disabled
= Bandwidth saving : Medium
==================================================================
[09:49:07] Resource 'mapmanager' requests some acl rights. Use the command 'aclrequest list mapmanager'
[09:49:07] Resources: 196 loaded, 0 failed
[09:49:07] Starting resources...
[09:49:07] Server minclientversion is now 1.5.6-9.16588.0
[09:49:07] INFO: MAPMANAGER: Some important ACL permissions are missing. To ensure the correct functioning of Mapmanager, please write: aclrequest allow mapmanager all
[09:49:07] Gamemode 'play' started.
[09:49:07] Authorized serial account protection is enabled for the ACL group(s): `Admin` See http://mtasa.com/authserial
[09:49:07] WARNING: <owner_email_address> not set
[09:49:07] Server started and is ready to accept connections!
[09:49:07] To stop the server, type 'shutdown' or press Ctrl-C
[09:49:07] Type 'help' for a list of commands.
[09:49:07] Querying MTA master server... success! (Auto detected IP:xxx.xxx.xxx.xxx)
I am using the following script to run the process in the background and (try to) get its live output:
#!/bin/bash
newport=$(shuf -i 22003-22900 -n 1)
newip=$(shuf -i 22003-22900 -n 1)  # despite the name, this is also a port
rm -rf ~/server/*
cp -r /home/user*/ftp/server/mtaserver/serverfiles/* ~/server
sed -i "s/<httpport>[0-9][0-9][0-9][0-9][0-9]<\/httpport>/<httpport>$newport<\/httpport>/g" ~/server/mods/deathmatch/mtaserver.conf
sed -i "s/<serverport>[0-9][0-9][0-9][0-9][0-9]<\/serverport>/<serverport>$newip<\/serverport>/g" ~/server/mods/deathmatch/mtaserver.conf
~/server/mta-server64 2>&1 | tee -a outfile &
mta_pid=$!
echo $mta_pid
sleep 6
kill $mta_pid  # kill takes a PID; pkill matches process names
(Note: Because of some technical problems, I had to add the first few lines of the script, which automatically replace the game files with fresh ones and replace the existing ports with random ones.)
This script starts the server and tries to log the output of the process. The process is automatically killed after a few seconds so that there is only one instance of the game server at any given time.
THE ISSUE:
This script only logs the output if there is an error. I still cannot get the live output of the process while it is running. Maybe this is an issue with the game server, but I truly believe there should be a way to make it work the way I intend.
I believe you want to use the tee command to split the pipe output into a log file.
I suggest you read this article and these answers 1 2.
Usually this is enough:
nohup somecommand > somecommand.log 2>&1 &
Then follow the log with:
tail -F somecommand.log
After 2 days I finally figured out a way to make it work (the way I intended it to work, without taking into consideration any major security/performance risks).
Reading the comments made me realize I was attacking the wrong point. The stdout of the game server is buffered, thus making it impossible to log it to a file using the methods I tried when I posted my question (at least this is what I came to understand).
I did some research on how to run the application without having the stdout buffered: https://serverfault.com/questions/294218/is-there-a-way-to-redirect-output-to-a-file-without-buffering-on-unix-linux
My code now:
mkfifo pipe
stdbuf -o0 ~/server/mta-server64 >> pipe &
cat < pipe | tee -a outfile &
After creating the named pipe, it runs the game server with its stdout redirected into the pipe, and then tee appends the output to the log file.
The stdbuf -o0 command disables stdout buffering (as noted in the link above).
This works for me, and I cannot guarantee it will work for anybody else. I am still not sure whether disabling the buffering is a safe approach to my issue, but for now it is what I need.
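For what it's worth, if the named pipe exists only to feed tee, the same unbuffering trick should also work when piping directly; a sketch under that assumption:
stdbuf -o0 ~/server/mta-server64 2>&1 | tee -a outfile &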
I'm working on a simulation model where I want to determine when the storage IOPS capacity becomes a bottleneck (e.g. an HDD has ~150 IOPS, while an SSD can have 150,000). So I'm trying to come up with a way to benchmark the IOPS of a command (git) for some of its different operations (push, pull, merge, clone).
So far I have found tools like iostat; however, I am not sure how to limit the report to what a single command does.
The best idea I can come up with is to determine my HDD's IOPS capacity, use time on the actual command to see how long it lasts, and multiply the two to get an upper bound on the I/O operations issued:
HDD -> 150 IOPS
time df -h
real 0m0.032s
150 * 0.032 = 4.8 I/O operations
But this is of course very naive: the duration of the execution may have been dominated by CPU usage rather than HDD usage, so unless the HDD was busy 100% of that time, it makes no sense to measure things this way.
So, how can I measure the IOPS for a command?
There are multiple time(1) commands on a typical Linux system; the default is a bash(1) builtin, which is somewhat basic. There is also /usr/bin/time, which you can run either by calling it exactly like that, or by telling bash(1) not to use aliases and builtins by prefixing it with a backslash: \time. Debian has it in the "time" package, which is installed by default; Ubuntu is likely identical, and other distributions will be quite similar.
Invoking it in a similar fashion to the shell builtin is already more verbose and informative, albeit perhaps more opaque unless you're already familiar with what the numbers really mean:
$ \time df
[output elided]
0.00user 0.00system 0:00.01elapsed 66%CPU (0avgtext+0avgdata 864maxresident)k
0inputs+0outputs (0major+261minor)pagefaults 0swaps
However, I'd like to draw your attention to the man page which lists the -f option to customise the output format, and in particular the %w format which counts the number of times the process gave up its CPU timeslice for I/O:
$ \time -f 'ios=%w' du Maildir >/dev/null
ios=184
$ \time -f 'ios=%w' du Maildir >/dev/null
ios=1
Note that the first run stopped for I/O 184 times, but the second run stopped just once. The first figure is credible, as there are 124 directories in my ~/Maildir: reading each directory and its inode takes roughly two I/O operations per directory, less a bit because some inodes were likely next to each other and read in one operation, plus some extra again for mapping in the du(1) binary, shared libraries, and so on.
The second figure is of course lower due to Linux's disk cache. So the final piece is to flush the cache. sync(1) is a familiar command which flushes dirty writes to disk, but doesn't flush the read cache. You can flush that one by writing 3 to /proc/sys/vm/drop_caches. (Other values are also occasionally useful, but you want 3 here.) As a non-root user, the simplest way to do this is:
echo 3 | sudo tee /proc/sys/vm/drop_caches
Combining that with /usr/bin/time should allow you to build the scripts you need to benchmark the commands you're interested in.
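For example, a minimal benchmark sketch along those lines (the repository URL is a placeholder; note that /usr/bin/time prints its format string to stderr):
#!/bin/bash
# Flush dirty writes, drop the read caches, then count how many times the
# command gave up its CPU timeslice for I/O (%w in /usr/bin/time).
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null
/usr/bin/time -f 'ios=%w' git clone https://example.com/repo.git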
As a minor aside, tee(1) is used because this won't work:
sudo echo 3 >/proc/sys/vm/drop_caches
The reason? Although the echo(1) runs as root, the redirection is as your normal user account, which doesn't have write permissions to drop_caches. tee(1) effectively does the redirection as root.
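An equivalent workaround is to run the whole command line in a root shell, so the redirection itself happens as root:
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'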
The iotop command collects I/O usage information about processes on Linux. By default it is an interactive command, but you can run it in batch mode with -b / --batch. Also, you can select a list of processes with -p / --pid. Thus, you can monitor the activity of a git command with:
$ sudo iotop -p $(pidof git) -b
You can change the delay with -d / --delay.
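Combining the two gives a simple logger; for example (assuming a single running git process, since -p takes specific PIDs):
sudo iotop -b -d 2 -p $(pidof git) >> iotop.log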
You can use pidstat:
pidstat -d 2
More specifically pidstat -d 2 | grep COMMAND or pidstat -C COMMANDNAME -d 2
The pidstat command is used for monitoring individual tasks currently being managed by the Linux kernel. It writes to standard output activities for every task selected with option -p or for every task managed by the Linux kernel if option -p ALL has been used. Not selecting any tasks is equivalent to specifying -p ALL but only active tasks (tasks with non-zero statistics values) will appear in the report.
The pidstat command can also be used for monitoring the child processes of selected tasks.
-C comm: display only tasks whose command name includes the string comm. This string can be a regular expression.
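As a sketch, you can also launch the command and sample just that process via $! (assuming pidstat from the sysstat package; the repository URL is a placeholder):
git clone https://example.com/repo.git &   # placeholder repository
pidstat -d -p $! 2 10                      # I/O stats for that PID: 2 s interval, 10 samples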
How would I track the CPU and RAM usage of a process that may run, stop, and then re-run with a different PID?
I am looking to track this information for all processes on a Linux server, but the problem is that when a process stops and restarts it gets a different PID, and I am not sure how to identify it as the same process.
What you're looking for here is called "process accounting".
http://tldp.org/HOWTO/Process-Accounting/
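As a sketch of what that looks like in practice (on Debian/Ubuntu the tools ship in the acct package, on Red Hat-based systems in psacct; yourprocessname is a placeholder):
sudo apt-get install acct    # on Debian, accounting typically starts on install
lastcomm yourprocessname     # every recorded run of that command, across PIDs
sa -u                        # per-user summary of recorded commands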
If you know the command name of the process, just pipe ps to grep like this:
ps ux | grep yourcommandgoeshere
You can set up a crontab to record the output of commands like:
top -b -n1 | grep <pattern>
ps ux | grep <pattern>
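For example, a crontab entry like this (every minute; yourprocess is a placeholder) appends a snapshot to a log:
# The [y] bracket trick keeps grep from matching its own command line.
* * * * * ps ux | grep '[y]ourprocess' >> /var/log/yourprocess.log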
Alternatively, you can use the SeaLion service. By simply installing its agent and configuring it according to your needs in a few simple steps, you can see the output of the executed commands online.
Hope it helps...