I am having trouble saving the output of "mtr --report-wide" to a text file, probably due to the different ways the two report options output their information. I know I could use the "--raw" argument, but I would like to avoid that.
Does anybody have a solution?
Linux version:
Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.63-2+deb7u1 x86_64 GNU/Linux
Works:
"nohup mtr --report destination --report-cycles=10 > output &"
Does not work (process never stops):
"nohup mtr --report-wide destination --report-cycles=10 > output &"
Quite the contrary: the process is stopped immediately by a SIGTTOU signal (sent to a background process that tries to write to the terminal), and thus never terminates.
Solution: just redirect STDERR as well, by using … >&output& instead of … >output&.
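With the fix applied to the failing command from above (2>&1 is the equivalent Bourne-style spelling of the same redirection):

```shell
# redirect stderr as well, so the backgrounded mtr never touches the
# terminal -- it is the terminal access that triggers the SIGTTOU stop
nohup mtr --report-wide destination --report-cycles=10 > output 2>&1 &
```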
I have been using Xmonad for a while and it is working fine, but some of the key bindings are not working and I want to see the log to diagnose the problem. However, I am not able to find the log files. Any idea where they are located?
UPDATE:
I have a binding like this:
, ((myModMask, xK_l), spawn "scrot -s 'Selected_%Y-%m-%d_$wx$h.png' -e 'mv $f ~/Pictures/screenshots/'")
But the key combination does not produce anything, and I am not able to figure out whether the command is spawned or not. Copy-pasting the command into a terminal works, but through the key combination it does not. How should I diagnose this?
Linux archlinux 5.2.9-arch1-1-ARCH #1 SMP PREEMPT Fri Aug 16 11:29:43 UTC 2019 x86_64 GNU/Linux
Thanks
This is a problem with scrot itself. Details:
https://wiki.archlinux.org/index.php/Screen_capture#scrot
Note: In some window managers (dwmAUR, xmonad and possibly others) scrot -s does not work properly when running via window manager's keyboard shortcut, this can be worked around by prepending scrot invocation with a short pause sleep 0.2; scrot -s.
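Applied to the binding from the question, the workaround would look like this (an untested sketch; only the sleep 0.2; prefix is new):

```haskell
-- prepend a short pause so scrot -s runs after the keypress is released
, ((myModMask, xK_l), spawn "sleep 0.2; scrot -s 'Selected_%Y-%m-%d_$wx$h.png' -e 'mv $f ~/Pictures/screenshots/'")
```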
I use SSH to connect to a Linux machine, where I may run the same script several times, using nohup to keep the processes running in the background, and then close the SSH connection. After the next SSH connection to the machine, how can I distinguish between the different script instances and get their individual PIDs?
The script continuously prints content to the screen. I use Python's paramiko library to SSH into the machine, run the script under nohup, and redirect its output to a file; this may happen multiple times. How can I tell which process was started when, find its PID, and kill it? It is best not to modify the script, because it was not written by me.
When I use the script name to find the process number, I get a lot of PIDs and cannot distinguish them.
You could parse the output of ps -eo pid,lstart,cmd, which shows the process ID, start time, and command line, e.g.:
PID STARTED CMD
1 Mon Jun 19 21:31:08 2017 /sbin/init
2 Mon Jun 19 21:31:08 2017 [kthreadd]
3 Mon Jun 19 21:31:08 2017 [ksoftirqd/0]
== Edit ==
Be aware that if the remote machine is macOS, the ps command does not recognize the cmd keyword; use comm or command instead, e.g.: ps -eo pid,lstart,comm
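A minimal sketch of filtering that output by script name (myscript.sh is a placeholder; the [m] bracket trick keeps grep from matching its own process):

```shell
# show PID, start time, and command line of every matching instance;
# the start time lets you tell otherwise identical invocations apart
ps -eo pid,lstart,cmd | grep '[m]yscript.sh'
```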
Use the ps command to check running processes.
To check only shell scripts, you can do something like this:
ps -eaf | grep '[.]sh'
(quoting the dot keeps it from matching any character, and the brackets keep grep from matching its own process). This will give you information about running shell scripts only, so you can easily distinguish between the running scripts.
In place of '.sh' you can also give a file name; then you will get information only about that running file.
Maybe change the command you run to do something like:
nohup command.sh &
echo "$! `date`" >> runlog.txt
wait
i.e. run the command in the background, append its PID to a log (you might want to include more identifying information here, or use a format that's more easily machine-readable), then wait for it to finish.
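A sketch of a more machine-readable variant (the wrapper script, log path, and tab-separated field layout are assumptions, not part of the original answer):

```shell
#!/bin/sh
# start the script in the background with its output silenced, then
# record PID, ISO-8601 start time, and name as tab-separated fields
nohup ./myscript.sh > myscript.out 2>&1 &
printf '%s\t%s\t%s\n' "$!" "$(date '+%Y-%m-%dT%H:%M:%S')" "myscript.sh" >> runlog.txt
wait
```

A later SSH session can then look up the PID for a given name and start time in runlog.txt instead of guessing from ps output.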
Another variant would be to run tmux (or GNU screen) on the server and run commands in an existing session:
tmux new-window command
which would also let you "reconnect" to running scripts later to check their output or terminate them.
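For example (the session name jobs and the script path are assumptions):

```shell
# start a detached session running the script
tmux new-session -d -s jobs './myscript.sh'
# later, from a new SSH login: reattach to inspect its output
tmux attach -t jobs
# or terminate the session, and the script with it
tmux kill-session -t jobs
```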
Please see this image:
The terminal keeps displaying "number of files 1", roughly once every few minutes. Restarting the OS (CentOS, in my case) doesn't help; I've been seeing it for months. Though it doesn't affect other processes, it clutters the terminal and I have to press CTRL+C to silence it temporarily, and I'm worried some background process is stuck in a wrong state. Does it have anything to do with the command I use to display the GUI folders I need at work?
nautilus -q &> /dev/null
nautilus dir1 dir2 .. dirn &> /dev/null &
# can prevent the 'number of files 1' messages
I've googled the keyword 'number of files 1' but none of the results seem to be related to this question and so I'm wondering if others met the same issue before.
Could you give some suggestions on how to debug and resolve this issue?
[root@localhost cp2vm]# whoami
root
[root@localhost cp2vm]# uname -a
Linux localhost.localdomain 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
output of strings /usr/bin/nautilus:
http://www.filedropper.com/stringsnautilus
nautilus --version output: GNOME nautilus 3.22.3
Just run nautilus &> /dev/null to avoid nautilus polluting stdout and stderr. This way your terminal won't show those messages.
Edit:
To make it explicit, this should work in your script.
nautilus -q &> /dev/null # Exits all nautilus instances, ignore output
nautilus dir1 dir2 dir3 &> /dev/null # Runs nautilus, ignore output
I doubt the output redirection is useful for nautilus -q, but from the information given it's hard to tell when and how often you call that script. So it may be redundant, but it won't do any harm.
I'm running the following command in Linux:
sudo ./tftpCommand &
where my executable tftpCommand file simply gets/puts a data file which sometimes does not exist.
I want to be able to stop the tftp command that was spawned in the subshell before it automatically times out.
Using something like kill $(jobs -p) shows that the subshell has been terminated but the tftp still runs -- I know this because several seconds later it prints to the shell that it can't find the file to transfer.
QUESTION: How do I ensure that the tftp command is killed alongside the subshell that runs it?
Thanks!
I've found a solution to my problem:
Use pkill -c tftp to kill any current tftp commands.
I figured this out by using ps x -o "%p %r %c".
You can use a similar technique with any of the command names in the COMMAND column (corresponding to the %c format specifier and pkill's -c) to kill other processes.
Hope that helps anyone else who stumbles upon the same problem!
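A minimal sketch of the technique, assuming the stray process shows up as tftp in the COMMAND column:

```shell
# %p = PID, %r = process group ID, %c = command name
ps x -o "%p %r %c"
# kill every process whose command name matches tftp
pkill tftp
```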
I have a simple script (below), which needs to spawn 2 ping processes in the background, which will ping 2 hosts continuously.
Without the 'sleep 3' statement, the script spawns hundreds of ping processes until the server locks up. With the sleep statement, the script works properly, spawning only 2 processes.
This issue only occurs on RHEL 5 servers; RHEL 6 seems to work fine without the sleep statement. My guess at the cause is that the script's output is being buffered and the processes are somehow interfering with each other.
My 3 questions are: what is causing the pings to spawn out of control, how exactly does the sleep correct it, and is there a better fix than adding the sleep statement?
Thanks in advance for any info.
Script:
#!/bin/sh
ping 192.168.1.2 -q > /dev/null &
sleep 3
ping 192.168.1.3 -q > /dev/null &
OS Info:
Linux wppra01a0326 2.6.18-402.el5 #1 SMP Thu Jan 8 06:22:34 EST 2015 x86_64 x86_64 x86_64 GNU/Linux