shell script with synchronization - multithreading

I have to write a script where I take a tcpdump on my machine and on a remote machine "simultaneously". That is, the beginning of the capture (the 0th second) should be simultaneous, so that I can compare the two tcpdumps in my analysis.
Is there a way I can achieve this?

If you just need approximate timing (e.g. with a margin of error of, say, 200 ms), then just make sure both machines have the same time (e.g. via NTP) and use e.g. cron to run both commands at the same time.
If you want this to be a one-off rather than a recurring job, you might want to use the at command instead of cron. You can do some simple date arithmetic, e.g. see this:
Bash date/time arithmetic
or sleep until the specified time:
Bash: Sleep until a specific time/date
in both scripts (i.e. local and remote), then run the local command and run the command on the remote machine using ssh.
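For instance, a minimal sleep-until helper in bash might look like this (a sketch only; it assumes GNU date, and the script name is made up for illustration):

#!/bin/bash
# sleep_until.sh (hypothetical name): sleep until the given wall-clock time.
# Usage: ./sleep_until.sh "2030-01-01 12:00:00"
target=$(date -d "$1" +%s) || exit 1   # GNU date parses the argument
now=$(date +%s)
[ "$target" -gt "$now" ] && sleep $(( target - now ))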
If you are OK using e.g. Python, you can make use of the datetime module, e.g. see this:
http://www.doughellmann.com/PyMOTW/datetime/
The idea is pretty much this (a sketch follows the list):
Take current time
Calculate target time - add some cushion seconds (e.g. 10 seconds)
Run both scripts with that time as the parameter (one locally, one remotely with ssh)
Sleep until that time in both scripts - if you cannot ssh in within 10 seconds, or worse, if it takes more than 10 seconds to start the local script, you have more serious problems than this one :)
Run tcpdump in both scripts - they should be pretty much synced up (with some tolerance, but I don't think it will ever go over 50 ms on any recent system)
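Put together, a rough sketch of that recipe in bash (the hostname, interface, and the sleep_until.sh helper from above are assumptions for illustration; tcpdump will typically need root):

#!/bin/bash
# Start tcpdump locally and remotely at the same wall-clock time.
REMOTE=user@remotehost                    # assumed SSH target
START=$(date -d '+10 seconds' '+%F %T')   # 10-second cushion (GNU date)
# sleep_until.sh must also exist on the remote machine;
# both captures run until interrupted (or add a packet/time limit).
ssh "$REMOTE" "./sleep_until.sh '$START' && tcpdump -i eth0 -w remote.pcap" &
./sleep_until.sh "$START" && tcpdump -i eth0 -w local.pcap
wait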
Hope this helps.

Here's something I wrote just now to synchronise multiple test clients:
#!/usr/bin/python
import sys
import time

now = time.time()
mod = float(sys.argv[1])       # rounding interval in seconds
until = now - now % mod + mod  # next multiple of mod after now
print "sleeping until", until
while True:
    delta = until - time.time()
    if delta <= 0:
        print "done sleeping ", time.time()
        break
    # sleep half the remaining time, re-checking the clock each pass
    time.sleep(delta / 2)
This script sleeps until the next "round" or "sharp" time.
A simple use case is to run ./sleep.py 10; ./test_client1.py in one terminal and ./sleep.py 10; ./test_client2.py in another.
You want to make sure clocks on your machines are synchronised.
Alternatively, use one of tcpdump's timestamp options and pick one that gives you a full timestamp (an example follows the list):
-t
Don't print a timestamp on each dump line.
-tt
Print an unformatted timestamp on each dump line.
-ttt
Print a delta (micro-second resolution) between current and previous line on each dump line.
-tttt
Print a timestamp in default format preceded by the date on each dump line.
-ttttt
Print a delta (micro-second resolution) between current and first line on each dump line.
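For example, -tttt produces absolute date-and-time stamps, which makes two captures easy to line up afterwards (the interface and host filter here are made up):

tcpdump -tttt -n -i eth0 host 192.0.2.7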
Finally, you could use something like execnet to start commands on multiple machines at (almost) the same time.

Related

Running Bash Script at random time

I have an iperf.sh shell script on multiple sub-servers that runs at every "1,14,28,42,50 * * * *" and pings the iperf server to check bandwidth. Is there any way to randomize this cron job, or to set up a shell script that sleeps and runs at a random time?
[Note: the issue I am facing with this classic cron setup is that all sub-servers run the iperf.sh script at the same time, so my main iperf server gets high CPU utilization, which results in improper ping data.]
Thanks in advance.
You can add a randomized wait period at the start of your script (or even in the crontab itself, as suggested in the comments).
I recommend GNU shuf, which will be more portable than $RANDOM (not all shells support it; e.g. dash doesn't).
sleep $(shuf -i5-20 -n1)
# Rest of script
You can experiment with the range of random wait periods (5 to 20 seconds in this example).
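For example, directly in the crontab (the script path is hypothetical):

# each sub-server waits a random 5-20 seconds before hitting the iperf server
1,14,28,42,50 * * * * sleep $(shuf -i 5-20 -n 1) && /path/to/iperf.sh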

Python: Pause some activities but allow other parts of the code to continue

I have a Python 3 script running on a Raspberry Pi (Buster) which writes some instrument data to my Nextion display using the serial/UART interface. For now, I have set up my code to sleep for 5 minutes after the current data are displayed. This is working.
The Nextion Display is touch sensitive so that if I touch it, it will send a serial data string which can be read via my script and will tell me where on the screen it was touched.
Now I would like to modify my code so that it reacts to the touch screen even during the sleep period. I could put the program into a tight loop instead of using time.sleep(300), checking the elapsed time and reading the serial port on each pass, but that sounds like overworking the Pi and wasting CPU cycles. Is there a better way to pause certain sections of code while allowing others to continue?

Time taken by `less` command to show output

I have a script that produces a lot of output. The script pauses for a few seconds at point T.
Now I am using the less command to analyze the output of the script.
So I execute ./script | less and leave it running long enough that the script should have finished executing.
Now I page through the output of less by pressing the Page Down key. Surprisingly, while scrolling, I notice the few-second pause at point T of the output again.
The script does not expect any input and should definitely have completed by the time I start reviewing the output in less.
Can someone explain how the pause of a few seconds is noticeable in the output of less when the script has already finished executing?
Your script is communicating with less via a pipe. A pipe is an in-memory stream of bytes that connects two endpoints: your script and the less program, the former writing to it, the latter reading from it.
Because pipes live in memory, it would be unpleasant if they could grow arbitrarily large, so by default there is a limit on how much data can sit inside the pipe (written but not yet read) at any given moment: 64 KiB on Linux. If the pipe is full and your script tries to write to it, the write blocks. So your script isn't actually done working; it has stopped inside some write() call.
How do you overcome this? Raising the limit is a bad option. What is done instead is giving the reader a buffer: it reads into the buffer, draining the pipe and thus letting the writing program continue, while showing you (or handling) only part of the output. less has such a buffer and by default grows it automatically; however, it doesn't fill it in the background, only as you read through the input.
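If you want to see that limit for yourself, here is a quick throwaway demonstration (assuming a head that supports -c, as the GNU and BSD versions do). The counter on stderr stalls around 64, because the 65th 1 KiB write blocks on the full pipe:

i=0
while head -c 1024 /dev/zero; do
  i=$((i + 1)); echo "$i KiB written" >&2
done | sleep 10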
So what would solve your problem is reading the file to the end (as you would by pressing G) and then going back to the beginning (as with g). You can pass these commands on the command line like this:
./script | less +Gg
Note, however, that you will have to wait until the whole of the script's output has been loaded into memory, so you won't be able to view any of it immediately; less is not sophisticated enough for that. If that is what you really need (browsing the beginning of the output while ./script is still computing the end), you might want to use a temporary file:
./script >x & less x ; rm x
The pipe is full at the OS level, so the script blocks until less consumes some of it.
Flow control. Your script is effectively paused while less is paging.
If you want to make sure that your command completes before you use less interactively, invoke it as less +G and it will read to the end of the input; you can then return to the start by typing 1G into less.
For some background information, there's also a nice article by Alexander Sandler called "How less processes its input":
http://www.alexonlinux.com/how-less-processes-its-input
Can I externally enforce line buffering on the script?
Is there an off the shelf pseudo tty utility I could use?
You may try using the script command to turn on line-buffered output mode.
script -q /dev/null ./script | less # FreeBSD, Mac OS X
script -c "./script" /dev/null | less # Linux
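On Linux, another commonly suggested option is GNU coreutils' stdbuf, though it only helps for programs that use stdio's default buffering:

stdbuf -oL ./script | less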
For more alternatives in this respect please see: Turn off buffering in pipe.

Cygwin top command - See processes for all users

Does anybody know how to see the processes of all users using the top command in Cygwin (part of the procps package, under System)?
I know this can be done on *nix, but I am struggling with Cygwin. I have tried using pslist, but it does not behave well in a PuTTY SSH console.
I need a solution where I can see a top-like display over SSH. I do not have any NTLM SSO access to the Win2k3 guest at all, so SSH is the only way in.
top only displays Cygwin processes. ps -W will list Windows processes as well.
Many times the tasklist command gets the job done more effectively. It is built into Windows; just make sure your System32 folder is on the PATH in your bash profile. There is also procps itself. You should also try using mintty as your terminal. You could always try attaching any of these task apps to screen, and/or using watch to poll the information.
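For instance, a crude top-like display over SSH, refreshing every 2 seconds (just an illustration; adjust to taste):

watch -n 2 'ps -W | head -n 30'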
It seems you can do something like:
wmic process get ProcessId,Name,UserModeTime,KernelModeTime /EVERY:1
The user and kernel mode times there seem to be expressed in units of 1/10,000,000th of a second (i.e. 100 ns ticks).
You should be able to post-process that output to get the CPU-usage per second.
Here, using Cygwin's perl:
wmic process get ProcessId,Name,UserModeTime,KernelModeTime /EVERY:1 |
perl -lne '
  if (/\S/) {
    # wmic sorts the columns alphabetically:
    # KernelModeTime, Name, ProcessId, UserModeTime
    my ($k, $c, $p, $u) = split /\s{2,}/;
    $n{"$p\t$c"} = $k + $u;
  } else {
    # a blank line ends one report: diff it against the previous one
    my %c;
    for my $k (keys %n) {
      $c{$k} = $n{$k} - $o{$k} if defined $o{$k}
    }
    # 1e5 ticks of 100 ns = 1% of one CPU-second;
    # the [0..20] slice keeps the top consumers
    print "$_\t" . $c{$_}/1e5 for (sort {$c{$b}<=>$c{$a}} keys %c)[0..20];
    %o = %n; %n = (); print ""
  }'
Outputs something like:
0 System Idle Process 588.12377
2196 sh.exe 107.00075
248 svchost.exe 85.80055
7140 explorer.exe 26.52017
[...]
every second.
Note that if the System Idle Process shows just under 800% on an idle system, that's because the system has 8 CPU cores (or at least 8 hardware threads), as this counts the CPU time of all CPUs.
Also note that the EVERY:1 above is a lie: wmic doesn't produce that output exactly every second. More likely it sleeps roughly one second between reports and doesn't compensate for the time it takes to compute each report. So in practice it runs every second and a bit, which means those percentages are not very accurate and slightly overestimated.

Linux display average CPU load for last week

On a Linux box, I need to display the average CPU utilisation per hour for the last week. Is that information logged somewhere? Or do I need to write a script that wakes up every 15 minutes to copy /proc/loadavg to a logfile?
EDIT: I'm not allowed to use any tools other than those that come with Linux.
You might want to check out sar (man page), it fits your use case nicely.
System Activity Reporter (SAR) - capture important system performance metrics at periodic intervals.
Example from IBM Developer Works Article:
Add an entry to your root crontab
# Collect measurements at 10-minute intervals
0,10,20,30,40,50 * * * * /usr/lib/sa/sa1
# Create daily reports and purge old files
0 0 * * * /usr/lib/sa/sa2 -A
Then you can simply query this information using a sar command (display all of today's info):
root ~ # sar -A
Or just for a certain days log file:
root ~ # sar -f /var/log/sa/sa16
You can usually find sar in the sysstat package for your Linux distro.
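To cover the whole week, you could loop over the last seven daily files (a rough sketch, assuming GNU date and the default /var/log/sa/saDD naming):

# CPU utilisation report for each of the last 7 days
for d in 0 1 2 3 4 5 6; do
  f=/var/log/sa/sa$(date --date="$d days ago" +%d)
  [ -r "$f" ] && sar -u -f "$f"
done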
As far as I know it's not stored anywhere... It's a trivial thing to write, anyway. Just add something like
cat /proc/loadavg >> /var/log/loads
to your crontab.
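For example, a complete crontab entry that also records when each sample was taken (the log path is just an example):

# every 15 minutes, append a timestamped load-average sample
*/15 * * * * (date; cat /proc/loadavg) >> /var/log/loads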
Note that there are monitoring tools (like Munin) which can do this kind of thing for you, and generate pretty graphs of it to boot... they might be overkill for your situation, though.
I would recommend looking at the Multi Router Traffic Grapher (MRTG).
Using snmpd to read the load average, it will automatically calculate averages over any time interval and length, along with nice charts for analysis.
Someone has already posted a CPU usage example.
