XMonad Logs | How to check if a key binding did fire or not? - xmonad

I have been using xmonad for a while and it works fine, but some of my key bindings do not work and I would like to see a log to diagnose the problem. BUT I am not able to find any log files for this. Any idea where these are located?
UPDATE:
I have a binding like this:
, ((myModMask, xK_l), spawn "scrot -s 'Selected_%Y-%m-%d_$wx$h.png' -e 'mv $f ~/Pictures/screenshots/'")
But the key combination does not produce anything, and I cannot tell whether the command is spawned or not. Copy-pasting the command into a terminal works, but through the key combination it does not. How should I diagnose this?
Linux archlinux 5.2.9-arch1-1-ARCH #1 SMP PREEMPT Fri Aug 16 11:29:43 UTC 2019 x86_64 GNU/Linux
Thanks
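One way to diagnose whether the binding fires at all is to wrap the spawned command so its output and exit status land in a file you can inspect afterwards. This is only a sketch; the run_logged helper and the log path are made up for illustration, and echo stands in for the real scrot command:

```shell
# Sketch: wrap a command so its stdout/stderr and exit status are logged.
# From xmonad you would spawn something like: sh -c 'run_logged scrot -s ...'
run_logged() {
    "$@" >/tmp/spawn.log 2>&1          # capture both output streams
    echo "exit=$?" >>/tmp/spawn.log    # record the exit status
}
run_logged echo hello                  # stand-in for the real command
cat /tmp/spawn.log
```

If /tmp/spawn.log never appears after pressing the key combination, the binding itself is not firing; if it appears with a non-zero exit status, the command runs but fails.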

This is a problem with scrot itself. Details:
https://wiki.archlinux.org/index.php/Screen_capture#scrot
Note: In some window managers (dwm, xmonad and possibly others) scrot -s does not work properly when run via the window manager's keyboard shortcut; this can be worked around by prepending the scrot invocation with a short pause: sleep 0.2; scrot -s.
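Applied to the binding from the question, the workaround looks like this (a sketch; everything after the sleep is unchanged from the original line):

```haskell
-- Prepend a short sleep so scrot's selection grab does not race the
-- window manager's own keyboard grab.
, ((myModMask, xK_l), spawn "sleep 0.2; scrot -s 'Selected_%Y-%m-%d_$wx$h.png' -e 'mv $f ~/Pictures/screenshots/'")
```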

Related

How to distinguish between two running Linux scripts with the same name?

I use SSH to connect to a Linux machine, where I may run the same script multiple times, using nohup to keep these processes running in the background before closing the SSH connection. After the next SSH connection, how can I distinguish between the different script instances and get their PIDs?
The script continuously prints to the screen. I use Python's paramiko library to SSH into the machine, run the script under nohup, and redirect its output to a file; this may happen multiple times. I need to work out which instance was started when, find its PID, and kill it. Ideally without modifying the script, because it was not written by me.
When I use the script name to look up the process number, I get a lot of PIDs and cannot distinguish them.
You could parse the output of ps -eo pid,lstart,cmd, which shows the process ID, start time and command, e.g.:
PID STARTED CMD
1 Mon Jun 19 21:31:08 2017 /sbin/init
2 Mon Jun 19 21:31:08 2017 [kthreadd]
3 Mon Jun 19 21:31:08 2017 [ksoftirqd/0]
== Edit ==
Be aware that if the remote is macOS, the ps command does not recognize the cmd keyword; use comm or command instead, e.g.: ps -eo pid,lstart,comm
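A self-contained sketch of the idea: two processes with identical command lines can still be told apart by PID and start time. Here sleep stands in for the real script:

```shell
# Start two identically-named background jobs, then list them with
# their start times so they can be distinguished.
sleep 30 & pid1=$!
sleep 30 & pid2=$!
out=$(ps -o pid,lstart,cmd -p "$pid1,$pid2")
echo "$out"
kill "$pid1" "$pid2"   # clean up the demo jobs
```

In practice you would grep this listing for the script's name and pick the instance by its start time.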
Use the ps command to check running processes.
For checking only shell scripts, you can do something like this:
ps -eaf | grep .sh
This will give you information about running shell scripts only, so you can easily distinguish between the running scripts.
In place of ".sh" you can also give a file name; then you will get information only about that running file.
Maybe change the command you run to do something like:
nohup command.sh &
echo "$! `date`" >> runlog.txt
wait
I.e., run the command in the background, append its PID to a log (you might want to include more identifying information here, or use a format that is more easily machine-readable), then wait for it to finish.
Another variant would be to run tmux (or GNU screen) on the server and run commands in an existing session:
tmux new-window command
which would also let you "reconnect" to running scripts later to check their output or terminate them.

gdb cannot attach to process

Here is the OS I am using:
Linux securecluster 4.9.8-moby #1 SMP Wed Feb 8 09:56:43 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
When trying to attach gdb to a hanging process as the root user, I got the following:
Attaching to process 9636
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
I modified /etc/sysctl.d/10-ptrace.conf, resulting in:
kernel.yama.ptrace_scope = 0
However, I got the same error.
I tried changing /proc/sys/kernel/yama/ptrace_scope :
echo 0 | tee /proc/sys/kernel/yama/ptrace_scope
tee: /proc/sys/kernel/yama/ptrace_scope: Read-only file system
Hint is appreciated.
I modified /etc/sysctl.d/10-ptrace.conf
This will only take effect on next reboot.
Until then, just do sudo sysctl -w kernel.yama.ptrace_scope=0
Are you using a container engine? Try attaching to the process from the outside of the container (on the host); it may have a different PID there.
Alternatively, launch the container with the CAP_SYS_PTRACE capability (using --cap-add=SYS_PTRACE, for example). Of course, if you cannot reproduce the hang, then you cannot use this approach.
@Ted @escapecharacter The kernel parameter you are referring to is taken from the host system; that is why it is read-only, and you cannot edit the actual config file from inside the container. You can override it in the container: just drop the -w flag, i.e. sudo sysctl kernel.yama.ptrace_scope=0. A permanent solution is to set this on the host node; all containers would then inherit it by default.
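On the host, the permanent variant of that fix is a sketch like this (requires root; the file name follows the 10-ptrace.conf convention already mentioned above):

```shell
# Persist the relaxed Yama setting on the host; containers inherit it.
echo 'kernel.yama.ptrace_scope = 0' | sudo tee /etc/sysctl.d/10-ptrace.conf
sudo sysctl --system   # reload all sysctl configuration files now
```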

How to save "mtr --report-wide" output to textfile?

I am having trouble saving the output of "mtr --report-wide" to a text file, probably due to the different way the two options output their information. I know I could use the "--raw" argument, but I would like to avoid that.
Does anybody have a solution?
Linux version:
Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.63-2+deb7u1 x86_64 GNU/Linux
Works:
"nohup mtr --report destination --report-cycles=10 > output &"
Does not work(process never stops):
"nohup mtr --report-wide destination --report-cycles=10 > output &"
process never stops
Quite the contrary - the process is stopped immediately due to a SIGTTOU signal, and thus never terminates.
solution?
Just redirect STDERR as well, by using … >&output & instead of … >output &.
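The underlying fix is simply making sure neither stream stays attached to the terminal. A generic sketch in POSIX sh, where the subshell stands in for mtr:

```shell
# Redirect both stdout and stderr of a backgrounded job to a file;
# with stderr captured too, nothing is left to trigger SIGTTOU.
( echo "report line"; echo "progress" >&2 ) > /tmp/mtr.out 2>&1 &
wait                 # let the background job finish
cat /tmp/mtr.out     # both streams ended up in the file
```

In bash, `>& output` is shorthand for `> output 2>&1`, which is why the answer's one-character change works.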

Suppress 'Warning: no access to tty' in ssh

I have a short, simple script that compiles a .c file and runs it on a remote server running tcsh, then gives control back to my machine (this is for school; I need my programs to work properly on the lab computers but want to edit them etc. on my machine). It runs commands this way:
ssh -T user@server << EOF
cd cs4400/$dest
gcc -o $efile $file
./$efile
EOF
So far it works fine, but it gives this warning every time I do this:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
I know this technically isn't a problem, but it's SUPER annoying. I'm trying to do school work, checking the output of my program etc., and this clutters everything, and I HATE it.
I'm running this version of ssh on my machine:
OpenSSH_6.1p1 Debian-4, OpenSSL 1.0.1c 10 May 2012
This version of tcsh on the server:
tcsh 6.17.00 (Astron) 2009-07-10 (x86_64-unknown-linux)
And this version of ssh on the server:
OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
The message is actually printed by the shell, in this case tcsh.
You can use
strings /usr/bin/tcsh | grep 'no access to tty'
to ensure that it belongs to tcsh itself.
It is related to ssh only very loosely, i.e. ssh in this case is just the trigger, not the cause.
You should either change your approach and not use a here document; instead, place an executable custom_script at /path/custom_script and run it via ssh:
# this will work
ssh user@dest '/path/custom_script'
Or just run the complex command as a one-liner:
# this will work as well
ssh user@dest "cd cs4400/$dest;gcc -o $efile $file;./$efile"
On OS X, I solved a similar problem (for script provisioning on Vagrant) with ssh -t -t (note that -t comes twice).
Advice based on the ssh BSD man page:
-T      Disable pseudo-terminal allocation.
-t      Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.
If running tcsh is not important for you, specify a different shell and it will work:
ssh -T user@server bash << EOF
cd cs4400/$dest
gcc -o $efile $file
./$efile
EOF

SSH "Login monitor" for Linux

I'm trying to write a script that informs the user when someone has logged in on the machine via ssh.
My current idea is to parse the output of "w" using grep at intervals.
But that's neither elegant nor performant. Has anyone got a better idea of how to implement such a program?
Any help would really be appreciated!
Paul Tomblin has the right suggestion.
Set up logging in your sshd_config to point to a syslog facility that you can log separately:
See man 3 syslog for more facilities; choose one such as local5:
# Logging
SyslogFacility local5
LogLevel INFO
Then set up your syslog.conf like this:
local5.info |/var/run/mysshwatcher.pipe
Add the script you're going to write to /etc/inittab so it keeps running:
sw0:2345:respawn:/usr/local/bin/mysshwatcher.sh
then write your script:
#!/bin/sh
P=/var/run/mysshwatcher.pipe
test -p "$P" || mkfifo "$P"
while read x < "$P"; do
    # ... whatever, e.g.:
    echo "ssh info: $x" | wall
done
Finally, restart your syslogd and have init reload inittab (init q), and it should work. If other variants of these services are used, you need to configure things accordingly (e.g. newsyslogd => /etc/newsyslog.conf; Ubuntu: /etc/event.d instead of inittab).
This is very rudimentary and lacking, but should be enough to get you started ...
more info: man sshd_config for more logging options/verbosity.
On Ubuntu (and I'd guess all other Debian distros, if not all Linuces), the file /var/log/auth.log records successful (and unsuccessful) login attempts:
sshd[XXX]: pam_unix(sshd:session): session opened for user XXX
You could set up a very simple monitor using this command (note that you have to be root to see the auth log):
sudo tail -F /var/log/auth.log | grep sshd
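A sketch of the parsing step, using a throwaway file in place of /var/log/auth.log so it needs no root access (the log line format is the one quoted above):

```shell
# Extract the user name from an sshd "session opened" line.
log=/tmp/fake_auth.log
printf 'sshd[123]: pam_unix(sshd:session): session opened for user alice\n' > "$log"
grep 'session opened' "$log" | sed 's/.*for user //'   # prints: alice
```

In the real monitor you would feed `tail -F /var/log/auth.log` into the same grep/sed pipeline instead of a static file.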
If you do not care how they logged in (telnet/ssh), the 'last' Unix command-line utility shows you the last few logins on the machine. Remote users will show their IP address:
[root@ex02 www]# last
foo pts/1 81.31.x.y Sun Jan 18 07:25 still logged in
foo pts/0 81.31.x.y Sun Jan 18 01:51 still logged in
foo pts/0 81.31.x.y Sat Jan 17 03:51 - 07:52 (04:00)
bar pts/5 199.146.x.y Fri Jan 16 08:57 - 13:29 (04:32)
Set up a named pipe, and set up a log file parser to listen to it, and send the ssh messages to it. The log file parser can do what you want, or signal to a daemon to do it.
Redirecting the log file is done in a config file in /etc/ whose name escapes me right now. /etc/syslog.conf, I think.
I have made a program (which I call Authentication Monitor) that solves the task described in the question.
If you want, you are more than welcome to download it and investigate how I solved the problem (using log files).
You can find Authentication Monitor freely available here: http://bwyan.dk/?p=1744
We had the same problem, so we wrote our own script.
It can be downloaded from GitHub.
Hope it helps :)
cheers!
Ivan
