How to redirect output to overwrite a file while a command is running in Linux? - linux

I am not sure if this is even possible. But I am using this command to get network throughput.
ifstat -t -S -i wlan0
Run just like that, it updates inline on the console, but when I pipe it, it appends a new line to the file.
ifstat -t -S -i wlan0 >> /tmp/transfer.txt
Time wlan0
HH:MM:SS KB/s in KB/s out
21:33:35 4.27 201.47
21:33:36 4.20 178.88
21:33:37 4.41 190.76
21:33:38 4.32 186.61
21:33:39 5.07 177.42
21:33:40 4.15 182.87
21:33:41 5.70 180.93
21:33:42 4.21 194.71
21:33:43 3.80 181.35
21:33:44 3.86 185.57
21:33:45 3.92 189.78
21:33:46 4.08 195.29
etc...
OK, I understand that using this will overwrite the file, but only when I first run it, not DURING the execution of the app.
ifstat -t -S -i wlan0 > /tmp/transfer.txt
I really do not need to keep a log of all the transfer rates; I am only interested in writing that one latest line on every update while the application is running. Instead of appending lines during execution, I want it to create a new file or overwrite it every second.

Technically you're not piping, but redirecting output.
Looks like you want to use > instead of >>?
To obtain just the last line while ifstat is executing, you could extract it into a 2nd file like this:
while true; do tail -1 /tmp/transfer.txt > /tmp/transfer2.txt; sleep .5; done
To overwrite the file each time without keeping a log:
while true; do ifstat -t -i wlan0 1 1 | tail -1 > /tmp/transfer.txt; sleep .5; done;
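If whatever reads /tmp/transfer.txt might catch it half-written, a common refinement is to write to a temporary file and rename it into place, since the rename is atomic on the same filesystem. A sketch along the same lines (not verified against every ifstat version):
while true; do
    ifstat -t -i wlan0 1 1 | tail -1 > /tmp/transfer.txt.tmp
    mv /tmp/transfer.txt.tmp /tmp/transfer.txt   # rename is atomic, so readers never see a partial line
    sleep .5
done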

You can try one of the following (I do not have your version of ifstat, so I cannot verify this on my own system).
while /bin/true; do ifstat -t -i wlan0 1 > /tmp/transfer.txt; sleep 1; done
or perhaps just
ifstat -t -i wlan0 > /tmp/transfer.txt
So don't use the -S flag, since it does not work when redirecting to a file.

Related

Why is Crontab not starting my tcpdump bash script capture?

I have created a simple bash script to start capturing traffic from all interfaces on my Linux machine (Ubuntu 22), but the script should stop capturing traffic 2 hours after the machine has rebooted. Below is my bash script:
#!/bin/bash
cd /home/user/
tcpdump -U -i any -s 65535 -w output.pcap &
pid=$(ps -e | pgrep tcpdump)
echo $pid
sleep 7200
kill -2 $pid
The script works fine if I run it by hand, but I need it to run after every reboot. Whenever I run the script manually, it works without a problem:
user@linux:~$ sudo ./startup.sh
[sudo] password for user:
tcpdump: data link type LINUX_SLL2
tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
1202
35 packets captured
35 packets received by filter
0 packets dropped by kernel
but when I set it in the crontab as
@reboot /home/user/startup.sh
it does not start at reboot. I used ps -e | pgrep tcpdump to check whether the script was running, but there was no output; it seems the script is not being started after the reboot. I don't know if I need root permissions for that. Also, I checked the file permissions, and it has
-rwxrwxr-x 1 user user 142 Nov 4 10:11 startup.sh
Any suggestions as to why the script is not starting at reboot?
I suggest updating your script:
#!/bin/bash
source /home/user/.bash_profile
cd /home/user/
tcpdump -U -i any -s 65535 -w output.pcap &
pid=$(pgrep -f tcpdump)
echo $pid
sleep 7200
kill -2 $pid
I also suggest inspecting the crontab execution log in /var/log/cron.
The problem here was that even though the user has root permission, a script that needs to run from crontab at @reboot has to be added to root's crontab. That was the only way I found to run the script: tcpdump requires root permission, and cron will not start it at boot unless the crontab entry is added via sudo.
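In other words, putting the entry into root's crontab is what makes it work. A minimal sketch of that setup (the log redirection is just an illustrative addition, not part of the original script):
sudo crontab -e
# add this line inside root's crontab:
@reboot /home/user/startup.sh >> /var/log/startup-capture.log 2>&1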

How to stop writing to a capture file using tcpdump after it reaches a specific size

I am looking for a way to stop the tcpdump capture after it has captured a specified amount of data. I am using the command below to achieve this, but it looks like tcpdump is not writing all of the captured packets to the specified file (myfile.pcap).
sudo tcpdump -i en0 -C 10 -W 1 -z ./stop-tcpdump.sh -w myfile.pcap -K -n
cat stop-tcpdump.sh
#!/bin/sh
TCP_EXECUTABLE="tcpdump"
pid=$(pidof ${TCP_EXECUTABLE})
sudo kill -2 $pid
The easiest solution for tcpdump is probably just to increase -W 1 to -W 2. This will cause a 2nd capture file to start being written, but the 1st 10 MB file will remain fully intact instead of being truncated; with -W 1, the post-rotate kill script may not stop tcpdump in time to prevent it from wrapping around and overwriting the first file.
Alternatively, you could switch to using dumpcap or tshark, both of which support an explicit -a filesize:value option, so no post-rotate kill script is needed. Note that unlike tcpdump's -C option, this option expects the value in units of kB, not MB.
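For example, something along these lines should stop on its own once the file reaches roughly 10 MB (the interface name is taken from the question; 10 MB is about 10240 kB):
sudo dumpcap -i en0 -a filesize:10240 -w myfile.pcapng   # dumpcap exits by itself at the size limit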

How can I have tcpdump write to file and standard output the appropriate data?

I want tcpdump to write raw packet data to a file and also display packet analysis on standard output as the packets are captured (by analysis I mean the lines it displays normally when -w is missing).
Can anybody please tell me how to do that?
Here's a neat way to do what you want:
tcpdump -w - -U | tee somefile | tcpdump -r -
What it does:
-w - tells tcpdump to write binary data to stdout
-U tells tcpdump to write each packet to stdout as it is received, rather than buffering them and outputting in chunks
tee writes that binary data to a file AND to its own stdout
-r - tells the second tcpdump to get its data from its stdin
Since tcpdump 4.99.0, the --print option can be used:
tcpdump -w somefile --print
Wednesday, December 30, 2020, by mcr@sandelman.ca, denis and fxl.
Summary for 4.99.0 tcpdump release
[...]
User interface:
[...]
Add --print, to cause packet printing even with -w.
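So with a new enough tcpdump the whole thing fits in one process; a sketch, with the interface and file names as placeholders:
# requires tcpdump >= 4.99.0; -U flushes each packet to the save file as it arrives
sudo tcpdump -i eth0 -U -w capture.pcap --print | tee decoded.txt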
Alternatively, with older versions, you can simply run two tcpdump instances, one printing to the terminal and one writing the file:
tcpdump ${ARGS} &
PID=$!
tcpdump ${ARGS} -w ${filename}
kill $PID
If you want a way to do it without running tcpdump twice, consider:
sudo tcpdump port 80 -w $(tty) | tee /tmp/output.txt
From the interactive command prompt you could use $TTY instead of $(tty) but in a script the former wouldn't be set (though I'm not sure how common it is to run tcpdump in a script).
Side-note: it's not very Unix-y the way tcpdump by default makes you write to a file. Programs should by default write to stdout. Redirection to a file is already provided by the shell constructs. Maybe there's a good reason tcpdump is designed this way but I don't know what that is.

How to run tshark commands as root in Linux using a TCL script?

I am using a Linux PC with tshark installed, and I have to capture packets on the eth1 interface from a TCL script. But tshark has to run as root. The capturing and the script run on the same PC. How can I log in as root and run tshark commands from TCL? Please provide me a solution for this.
#!/usr/bin/tclsh
set out [exec tshark -V -i eth1 arp -c 1 ]
puts $out
Output
test@test:~$ tclsh pcap.tcl
Capturing on eth1
tshark: The capture session could not be initiated (eth1: You don't have permission to capture on that device (socket: Operation not permitted)).
Please check to make sure you have sufficient permissions, and that you have the proper interface or pipe specified.
0 packets captured
while executing
"exec tshark -V -i eth1 arp -c 1 "
invoked from within
"set out [exec tshark -V -i eth1 arp -c 1 ]"
(file "pcap.tcl" line 5)
test@test:~$
Please try the steps below, and also refer to this link: http://packetlife.net/blog/2010/mar/19/sniffing-wireshark-non-root-user/
root@test:/usr/bin# setcap cap_net_raw,cap_net_admin=eip /usr/bin/dumpcap
root@test:/usr/bin# getcap /usr/bin/dumpcap
/usr/bin/dumpcap = cap_net_admin,cap_net_raw+eip
root@test:/usr/bin# exit
exit
test@test:/usr/bin$ tshark -V -i eth1
Capturing on eth1
Frame 1 (60 bytes on wire, 60 bytes captured)
Arrival Time: Aug 8, 2013 13:54:27.481528000
[Time delta from previous captured frame: 0.000000000 seconds]
[Time delta from previous displayed frame: 0.000000000 seconds]
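A common hardening step on top of this (described in guides like the one linked above) is to restrict dumpcap to a dedicated group instead of making it capturable by everyone. Roughly, assuming the Debian/Ubuntu-style "wireshark" group (adjust the group and user names as needed):
sudo groupadd -f wireshark
sudo usermod -aG wireshark test        # "test" is the user from the question
sudo chgrp wireshark /usr/bin/dumpcap
sudo chmod 750 /usr/bin/dumpcap
sudo setcap cap_net_raw,cap_net_admin=eip /usr/bin/dumpcap
# log out and back in so the new group membership takes effect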
You have to either elevate the privileges of your tshark process via sudo (or any other available means) or run your whole script with elevated privileges.
One way to do that, which might be simpler than sudo since it requires zero customization, is to write a super-simple C program that just runs /usr/bin/tshark with the necessary arguments, then make that program setuid root and distribute it along with your Tcl program. That is only needed if you need portability; otherwise sudo is much simpler.

linux: redirect stdout after the process started [duplicate]

I have some scripts that ought to have stopped running but hang around forever. Is there some way I can figure out what they're writing to STDOUT and STDERR in a readable way?
I tried, for example, to do:
$ tail -f /proc/(pid)/fd/1
but that doesn't really work. It was a long shot anyway.
Any other ideas?
strace on its own is quite verbose and unreadable for seeing this.
Note: I am only interested in their output, not in anything else. I'm capable of figuring out the other things on my own; this question is only focused on getting access to stdout and stderr of the running process after starting it.
Since I'm not allowed to edit Jauco's answer, I'll give the full answer that worked for me. (Russell's page relies on the un-guaranteed behaviour that, if you close file descriptor 1 for STDOUT, the next creat call will open FD 1.)
So, run a simple endless script like this:
import time
while True:
    print('test')
    time.sleep(1)
Save it to test.py, run with
$ python test.py
Get the PID:
$ ps auxw | grep test.py
Now, attach gdb:
$ gdb -p (pid)
and do the fd magic:
(gdb) call creat("/tmp/stdout", 0600)
$1 = 3
(gdb) call dup2(3, 1)
$2 = 1
Now you can tail /tmp/stdout and see the output that used to go to STDOUT.
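If you also want what goes to STDERR, the same trick applies to file descriptor 2, and you can tidy up before detaching. A sketch continuing the session above:
(gdb) call dup2(3, 2)
(gdb) call close(3)
(gdb) detach
(gdb) quit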
There are several new utilities that wrap up the "gdb method" and add some extra touches. The one I use now is called "reptyr" ("Re-PTY-er"). In addition to grabbing STDERR/STDOUT, it will actually change the controlling terminal of a process (even if it wasn't previously attached to a terminal).
The best use of this is to start up a screen session, and use it to reattach a running process to the terminal within screen so you can safely detach from it and come back later.
It's packaged on popular distros (e.g. apt-get install reptyr).
http://onethingwell.org/post/2924103615/reptyr
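A rough outline of the screen + reptyr workflow described above (the PID is hypothetical, and reptyr may need ptrace permission, e.g. running it via sudo or relaxing /proc/sys/kernel/yama/ptrace_scope):
screen -S rescue        # start a screen session to hold the process
reptyr 12345            # inside screen: grab the running process with PID 12345
# detach with Ctrl-a d; the process keeps running inside screen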
GDB method seems better, but you can do this with strace, too:
$ strace -p <PID> -e write=1 -s 1024 -o file
Via the man page for strace:
-e write=set
Perform a full hexadecimal and ASCII dump of all the data written to file descriptors listed in the specified set. For example, to see all output activity on file descriptors 3 and 5 use -e write=3,5. Note that this is independent from the normal tracing of the write(2) system call which is controlled by the option -e trace=write.
This prints out somewhat more than you need (the hexadecimal part), but you can sed that out easily.
I'm not sure if it will work for you, but I read a page a while back describing a method that uses gdb.
I used strace and de-coded the hex output to clear text:
PID=some_process_id
sudo strace -f -e trace=write -e verbose=none -e write=1,2 -q -p $PID -o "| grep '^ |' | cut -c11-60 | sed -e 's/ //g' | xxd -r -p"
I combined this command from other answers.
strace outputs a lot less with just -ewrite (and not the =1 suffix). And it's a bit simpler than the GDB method, IMO.
I used it to see the progress of an existing MythTV encoding job (sudo because I don't own the encoding process):
$ ps -aef | grep -i handbrake
mythtv 25089 25085 99 16:01 ? 00:53:43 /usr/bin/HandBrakeCLI -i /var/lib/mythtv/recordings/1061_20111230122900.mpg -o /var/lib/mythtv/recordings/1061_20111230122900.mp4 -e x264 -b 1500 -E faac -B 256 -R 48 -w 720
jward 25293 20229 0 16:30 pts/1 00:00:00 grep --color=auto -i handbr
$ sudo strace -ewrite -p 25089
Process 25089 attached - interrupt to quit
write(1, "\rEncoding: task 1 of 1, 70.75 % "..., 73) = 73
write(1, "\rEncoding: task 1 of 1, 70.76 % "..., 73) = 73
write(1, "\rEncoding: task 1 of 1, 70.77 % "..., 73) = 73
write(1, "\rEncoding: task 1 of 1, 70.78 % "..., 73) = 73^C
You can use reredirect (https://github.com/jerome-pouiller/reredirect/).
Type
reredirect -m FILE PID
and outputs (standard and error) will be written in FILE.
reredirect README also explains how to restore original state of process, how to redirect to another command or to redirect only stdout or stderr.
You don't state your operating system, but I'm going to take a stab and say "Linux".
Seeing what is being written to stderr and stdout is probably not going to help. If it is useful, you could use tee(1) before you start the script to take a copy of stderr and stdout.
You can use ps(1) to look for wchan. This tells you what the process is waiting for. If you look at the strace output, you can ignore the bulk of the output and identify the last (blocked) system call. If it is an operation on a file handle, you can go backwards in the output and identify the underlying object (file, socket, pipe, etc.). From there the answer is likely to be clear.
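For example, something like this shows the wait channel alongside the process state (the PID is hypothetical; the column widths are just a readable choice):
ps -o pid,stat,wchan:32,cmd -p 12345     # 12345 is the hung script's PID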
You can also send the process a signal that causes it to dump core, and then use the debugger and the core file to get a stack trace.
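For reference, gcore (shipped with gdb) can grab a core from a live process without sending a signal at all; a sketch with hypothetical PID and paths:
gcore -o /tmp/hung-script 12345            # writes /tmp/hung-script.12345; the process keeps running
gdb /usr/bin/bash /tmp/hung-script.12345   # substitute the binary the process is actually running
(gdb) bt                                   # the backtrace shows where it is blocked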
