I'm having problems sending a system notification upon user login (KDE Plasma, Arch Linux)

I'm trying to send a notification upon login via PAM, but I can't figure out how to send it to the user that is logging in.
I'm configuring PAM to execute a script every time a user logs in. The problem is I need to send a notification if there have been any login attempts (it's part of a bigger security project I'm working on, where my laptop takes a picture with the webcam upon failed logins and notifies me when I log in again, since my classmates like to try to guess my password for some reason).
The problem is that the line in my .sh file which sends a user notification sends it to root, since that's the 'user' that executes the script. I want my script to send the notification to my current user (called "andreas"), but I'm having problems figuring this out.
Here is the line I added to the end of the PAM file system-login:
auth [default=ignore] pam_exec.so /etc/lockCam/call.sh
And here is the call.sh file:
#!/bin/sh
/etc/lockCam/notifier.sh &
The reason I'm calling another file is that I want it to run in the background WHILE the login process continues, so that it doesn't slow down logging in.
Here is the script that is then executed:
#!/bin/sh
#sleep 10s
echo -e "foo" > "/etc/lockCam/test"
#This line is simply to make sure that i know that my script was executed
newLogins=`sed -n '3 p' /etc/lockCam/lockdata`
if [ $newLogins -gt 0 ]
then
su andreas -c ' notify-send --urgency=critical --expire-time=6000 "Someone tried to log in!" "$newLogins new lockCam images!" && exit'
callsInRow=`sed -n '2 p' /etc/lockCam/lockdata`
crntS=$(date "+%S")
crntS=${crntS#0}
crntM=$(date "+%M")
crntM=${crntM#0}
crntH=$(date "+%H")
crntH=${crntH#0}
((crntTime = $crntH * 60 * 60 + $crntM * 60 + $crntS ))
#This whole process is absolutely stupid but i cant figure out a better way to make sure none of the integers are called "01" or something like that, which would trigger an error
echo -e "$crntTime\n$callsInRow\n0" > "/etc/lockCam/lockdata"
fi
exit 0
And this is where I THINK my error is: the line "su andreas -c ..." is most likely formatted wrong, or I'm doing something else wrong. Everything is executed upon login EXCEPT that the notification doesn't show up. If I execute the script from a terminal when I'm already logged in, there is no notification either, unless I remove the "su andreas -c" part and simply run "notify-send ...", but that doesn't send a notification when I log in, and I think that's because the notification is sent to the root user and not to "andreas".

I think your su needs to be passed the desktop user's DBUS session bus address. The bus address can easily be obtained and used for X11 user sessions, but Wayland has tighter security: for Wayland the user session actually has to run a proxy to receive the messages. (Had you considered that it might be easier to send an email?)
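For an X11 session the idea boils down to something like the following minimal sketch (it assumes the target user is "andreas" and that their session bus socket is at the usual /run/user/<uid>/bus location; adjust as needed):
# run notify-send on andreas' session bus instead of root's (X11, default socket path assumed)
uid=$(id -u andreas)
sudo -u andreas DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/$uid/bus" notify-send --urgency=critical --expire-time=6000 "Someone tried to log in!"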
I have a notify-desktop gist on GitHub that works for X11 and should also work on Wayland (provided the proxy is running). For completeness I've appended the source code of the script to this post; it's extensively commented, and I think it contains the pieces necessary to get your own code working.
#!/bin/bash
# Provides a way for a root process to perform a notify-send for each
# of the local desktop users on this machine.
#
# Intended for use by cron and timer jobs. Arguments are passed straight
# to notify-send. Falls back to using wall. Care must be taken to
# avoid using this script in any potential fast loops.
#
# X11 users should already have a dbus address socket at /run/user/<userid>/bus
# and this script should work without requiring any initialisation. Should
# this not be the case, X11 users could initialise a proxy as per the Wayland
# instructions below.
#
# Due to stricter security requirements, Wayland lacks a dbus socket
# accessible to root. Wayland users will need to run a proxy to
# provide root with the necessary socket. Each user must add
# the following to a Wayland session startup script:
#
# notify-desktop --create-dbus-proxy
#
# That will start an xdg-dbus-proxy process and make a socket available under:
# /run/user/<userid>/proxy_dbus_<desktop_sessionid>
#
# Once there is a listening socket, any root script or job can pass
# messages using the syntax of notify-send (man notify-send).
#
# Example messages
# notify-desktop -a Daily-backup -t 0 -i dialog-information.png "Backup completed without error"
# notify-desktop -a Remote-rsync -t 6000 -i dialog-warning.png "Remote host not currently on the network"
# notify-desktop -a Daily-backup -t 0 -i dialog-error.png "Error running backup, please consult journalctl"
# notify-desktop -a OS-Upgrade -t 0 -i dialog-warning.png "Update in progress, do not shutdown until further completion notice."
#
# Warnings:
# 1) There has only been limited testing on Wayland
# 2) There has been no testing of multiple GUI sessions on one desktop
#
if [ "$1" == "--create-dbus-proxy" ]
then
if [ -n "$DBUS_SESSION_BUS_ADDRESS" ]
then
sessionid=$(cat /proc/self/sessionid)
xdg-dbus-proxy "$DBUS_SESSION_BUS_ADDRESS" "/run/user/$(id -u)/proxy_dbus_$sessionid" &
exit 0
else
echo "ERROR: no value for DBUS_SESSION_BUS_ADDRESS environment variable - not a wayland/X11 session?"
exit 1
fi
fi
function find_desktop_session {
for sessionid in $(loginctl list-sessions --no-legend | awk '{ print $1 }')
do
loginctl show-session -p Id -p Name -p User -p State -p Type -p Remote -p Display $sessionid |
awk -F= '
/[A-Za-z]+/ { val[$1] = $2; }
END {
if (val["Remote"] == "no" &&
val["State"] == "active" &&
(val["Type"] == "x11" || val["Type"] == "wayland")) {
print val["Name"], val["User"], val["Id"];
}
}'
done
}
count=0
while read -r -a desktop_info
do
if [ ${#desktop_info[@]} -eq 3 ]
then
desktop_user=${desktop_info[0]}
desktop_id=${desktop_info[1]}
desktop_sessionid=${desktop_info[2]}
proxy_bus_socket="/run/user/$desktop_id/proxy_dbus_$desktop_sessionid"
if [ -S "$proxy_bus_socket" ]
then
bus_address="$proxy_bus_socket"
else
bus_address="/run/user/$desktop_id/bus"
fi
sudo -u "$desktop_user" DBUS_SESSION_BUS_ADDRESS="unix:path=$bus_address" notify-send "$@"
count=$((count + 1))
fi
done <<< "$(find_desktop_session)"
# If no one has been notified fall back to wall
if [ $count -eq 0 ]
then
echo "$#" | wall
fi
# Don't want this to cause a job to stop
exit 0
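Applied to your notifier.sh, the su line would then become a call to this helper instead; a rough sketch, assuming you saved the script above as /usr/local/bin/notify-desktop (the path is an assumption):
# in notifier.sh, replacing the su line; the helper path is an assumption
/usr/local/bin/notify-desktop --urgency=critical --expire-time=6000 "Someone tried to log in!" "$newLogins new lockCam images!"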

Related

Run script in a new screen if true

I have a script where it checks whether background_logging is true; if it is, I want the rest of the script to run in a new detached screen.
I have tried using this: exec screen -dmS "alt-logging" /bin/bash "$0";. This will sometimes create the screen, but other times nothing happens at all. When it does create a screen, it doesn't run the rest of the script file, and when I try to resume the screen it says it's (Dead??).
Here is the entire script, I have added some comments to explain better what I want to do:
#!/bin/bash
# Configuration files
config='config.cfg'
source "$config"
# If this is true, run the rest of the script in a new screen.
# $background_logging comes from the configuration file declared above (config).
if [ $background_logging == "true" ]; then
exec screen -dmS "alt-logging" /bin/bash "$0";
fi
[ $# -eq 0 ] && { echo -e "\nERROR: You must specify an alt file!"; exit 1; }
# Logging script
y=0
while IFS='' read -r line || [[ -n "$line" ]]; do
cmd="screen -dmS alt$y bash -c 'exec $line;'"
eval $cmd
sleep $logging_speed
y=$(( $y + 1 ))
done < "$1"
Here are the contents of the configuration file:
# This is the speed at which alts will be logged, set to 0 for fast launch.
logging_speed=5
# This is to make a new screen in which the script will run.
background_logging=true
The purpose of this script is to loop through each line in a text file and execute the line as a command. It works perfectly fine when $background_logging is false so there are no issues with the while loop.
As described, it's not entirely possible. Specifically, here is what is going on in your script: when you exec, you replace your running script's process with that of screen.
What you could do, though, is start screen, figure out a few details about it, and redirect your script's console input/output to it, but you won't be able to reparent your running script to the screen process as if it had been started there. Something like this, for instance:
#!/bin/bash
# Use a temp file to pass cat's parent pid out of screen.
tempfile=$(tempfile)
screen -dmS 'alt-logging' /bin/bash -c "echo \$\$ > \"${tempfile}\" && /bin/cat"
# Wait to receive that information on the outside (it may not be available
# immediately).
while [[ -z "${child_cat_pid}" ]] ; do
child_cat_pid=$(cat "${tempfile}")
done
# point stdin/out/err of the current shell (rest of the script) to that cat
# child process
exec 0< /proc/${child_cat_pid}/fd/0
exec 1> /proc/${child_cat_pid}/fd/1
exec 2> /proc/${child_cat_pid}/fd/2
# Rest of the script
i=0
while true ; do
echo $((i++))
sleep 1
done
Far from perfect and rather messy. It could probably be improved by using a third-party tool like reptyr to grab the script's console from inside the screen. But cleaner and simpler still would be to (when desired) start the code that should be executed in that screen session only after the session has been established.
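A rough sketch of that last idea, using screen's stuff command to type a command into the already established session (the session name and worker script are placeholders, not part of the original question):
# establish the detached session first, then run the worker inside it
screen -dmS alt-logging
sleep 1  # give the session's shell a moment to start
screen -S alt-logging -X stuff $'./logging-part.sh alts.txt\n'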
That said, I'd actually suggest taking a step back and asking what exactly you are trying to achieve and why you would like to run your script in screen. Are you planning to attach to and detach from it? Because if running a long-term process with a detached console is what you are after, nohup might be a somewhat simpler route to go.
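For completeness, the nohup route could look roughly like this (the script and log file names are illustrative):
# run the logging part detached from the terminal, collecting output in a log file
nohup ./alt-logging.sh alts.txt > alt-logging.log 2>&1 &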

How to have a script trigger in my script after it ssh's into a DC's Time clock server?

So, I have a script whose intended purpose is to:
Ask for the DC number and Time clock number
Log in to the Time clock server for the DC stated above
After login, run a separate script inside my script which updates the Time clock number also stated above
My issue is that once I trigger the script, it logs into the server as intended and prompts me for my user ID, and then I have to press "enter" when "xterm" comes up after that. After this, the update script is supposed to run; however, it doesn't, and it just sits at the command line.
After I exit the server, THEN it runs the update script, but it fails, because the update script doesn't exist on the jump box.
My question is: after the script logs in to the server, how can I get it to trigger the script inside the Time clock server, as I want it to? Thanks.
Script is below:
#!/bin/bash -x
export LANG="C"
####
####
## This script is intended to speed up the process to setup timeclocks from DC tickets
## Created by Blake Smreker | b0s00dg | bsmreker@walmart.com
####
####
#Asks for DC number
echo "What is the four digit DC number?"
read DC #User input
#Asks for Timeclock number
echo "What is the two digit Timeclock number?"
read TMC #User input
#Defines naming convention of tna server
tnaserver="cs-tna.s0${DC}.us.wal-mart.com"
#creating variable to define the update script
tcupd="/u/applic/tna/shell/tc_software_update.sh tmc${TMC}.s0${DC}.us REFURBISHED"
#Logging in to the cs-tna package at the specified DC
/usr/bin/dzdo -u osedc /bin/ssh -qo PreferredAuthentications=publickey root@$tnaserver
echo "Preforming Timeclock update on Timeclock=$TMC, at DC=${DC}"
echo ""
echo "-----------------------------------------------------------------------------------------------------------------------------------------"
$tcupd #Runs update script
echo "-----------------------------------------------------------------------------------------------------------------------------------------"
echo ""
sleep 2
echo "If prompted to engage NOC due to Timeclock not being on the network, send the ticket to DC Networking"
echo ""
echo "OR"
echo ""
echo "If the script completed successfully, and the Timeclock was updated, you can now resolve the ticket"
You must run the command inside the ssh session, not after it:
echo "Preforming Timeclock update on Timeclock=$TMC, at DC=${DC}"
echo ""
echo "-----------------------------------------------------------------------------------------------------------------------------------------"
###### $tcupd #Runs update script
/usr/bin/dzdo -u osedc /bin/ssh -qo PreferredAuthentications=publickey root@$tnaserver /bin/bash -c "'/u/applic/tna/shell/tc_software_update.sh tmc${TMC}.s0${DC}.us REFURBISHED'"
echo "-----------------------------------------------------------------------------------------------------------------------------------------"
echo ""
sleep 2
echo "If prompted to engage NOC due to Timeclock not being on the network, send the ticket to DC Networking"
echo ""
echo "OR"
echo ""
echo "If the script completed successfully, and the Timeclock was updated, you can now resolve the ticket"
From man ssh you can see the synopsis ssh [-46AaCfGgKkMNnqsTtVvXxYy] ... destination [command]. If [command] is not given, ssh starts an interactive login shell and runs the remote login scripts (which is where xterm comes from in your case). You can read more in the ssh man page or by searching online.
You need to think about which environment variables you want to pass to the remote machine, and remember to quote the variables properly so they get expanded either on your machine or on the remote one, as intended.
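For example (the host name is illustrative): with double quotes the variable is expanded locally before ssh sends the command, while with single quotes the remote shell expands it:
ssh root@cs-tna.example.com "echo tmc${TMC}"    # ${TMC} expanded on your machine
ssh root@cs-tna.example.com 'echo $HOSTNAME'    # $HOSTNAME expanded on the remote host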

How do I pass a set of bash commands through SSH? [duplicate]

This question already has answers here: Pass commands as input to another command (su, ssh, sh, etc) (3 answers). Closed 4 years ago.
I'm writing a simple bash server health check that runs on a local machine and asks the user which server they are interested in looking at. Given the name, the script should run a set of health check commands on that server and return the output to the user. Currently, the script just logs the user into the server and doesn't run the health checks until the user exits that ssh session, and then it runs those checks locally, not on the remote server as intended. I don't want the user to actually log on to the server; I just need the output from that server forwarded to the user's local console. Any clue what I'm doing wrong here? Thanks in advance!
#!/bin/bash
echo -n "Hello "$USER""
echo "which server would you like to review?"
read var1
ssh -tt $var1
echo ">>> SYSTEM PERFORMANCE <<<"
top -b -n1 | head -n 10
echo ">>> STORAGE STATISTICS <<<"
df -h
echo ">>> USERS CURRENTLY LOGGED IN <<<"
w
echo ">>> USERS PREVIOUSLY LOGGED IN <<<"
lastlog -b 0 -t 100
Using a here-doc:
#!/bin/bash
echo -n "Hello "$USER""
echo "which server would you like to review?"
read var1
ssh -t $var1<<'EOF'
echo ">>> SYSTEM PERFORMANCE <<<"
top -b -n1 | head -n 10
echo ">>> STORAGE STATISTICS <<<"
df -h
echo ">>> USERS CURRENTLY LOGGED IN <<<"
w
echo ">>> USERS PREVIOUSLY LOGGED IN <<<"
lastlog -b 0 -t 100
EOF
There is a tool called expect(1) that is used extensively for all kinds of remote access (via ssh and/or telnet, http, etc.). It allows you to send commands and wait for the responses, even making it possible to drive interactive commands (like vi(1) in screen mode) or to supply passwords over the line. Do some googling on it and you'll see how useful it is.

Check for ftp authentication output for bash script

I run an automated backup shell script. It works great, but for some reason the FTP server blocks me for a few minutes. I would like to add a retry-and-wait feature. Below is a sample of my code.
echo "Moving to external server"
cd /root/backup/
/usr/bin/ftp -n -i $FTP_SERVER <<END_SCRIPT
user $FTP_USERNAME $FTP_PASSWORD
mput $FILE
bye
END_SCRIPT
After a failed login I get the message below:
Authentication failed. Blocked.
Login failed.
Incorrect sequence of commands: PASS required after USER
I need to capture this output and make the script sleep for a few minutes before trying again.
Ideas?
If it's possible for you to install additional programs onto the system of interest, I encourage you to take a look at lftp.
With lftp it is possible to set parameters like the time between reconnects manually.
To achieve your aim with lftp you have to invoke the following
lftp -u user,password ${FTP_SERVER} <<END
set ftp:retry-530 "Authentication failed"
set net:reconnect-interval-base 60
set net:reconnect-interval-multiplier 10
set net:max-retries 10
<some more custom commands>
END
If the pattern after ftp:retry-530 matches the server's 530 reply, lftp keeps retrying the login: it waits 60 seconds before the first retry, multiplies the interval by 10 after each failed attempt, and gives up after 10 retries.
The error message is most likely going to stderr instead of stdout, so you will need to capture the stderr output first:
while true
do
if ( script 2>&1 |grep -q 'Authentication failed' )
then
echo "authentication failed, sleeping for a while before trying again"
sleep 60
else
#everything worked, break out of the while loop
break
fi
done
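Putting the two suggestions together for the script in the question might look roughly like this (an untested sketch; the variable names are taken from the question and the retry delay is arbitrary):
# retry the upload until the "Authentication failed" message no longer appears
while true
do
    # capture both stdout and stderr of the ftp session
    output=$(/usr/bin/ftp -n -i "$FTP_SERVER" 2>&1 <<END_SCRIPT
user $FTP_USERNAME $FTP_PASSWORD
mput $FILE
bye
END_SCRIPT
)
    if echo "$output" | grep -q 'Authentication failed'
    then
        echo "Authentication failed, sleeping for a while before trying again"
        sleep 300
    else
        break
    fi
done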

Save password between bash script execution

I want my script to prompt for a password, but I only want it to do so once per session (let's say half an hour). Is it possible to save the login credentials of a user between script executions securely? This needs to be a bash script, because it has to run on several different types of UNIX on which I am not authorized to install anything.
I was thinking of encrypting a text file to which I would write the login credentials, but where would I keep the password to that file? It seems like I would just re-create the problem.
I know about utilities which run an encrypted script, and I am very much against using them, because I do not like the idea of keeping a master password inside a script that people might need to debug later on.
EDIT: This is not a server logon script, but authenticates with a web server that I have no control over.
EDIT 2: Edited session duration
Depending on what the "multiple invocations" of the script are doing, you could do this using two scripts, a server and a client, that communicate over a named pipe. Warning: this may not be portable.
Script 1 "server":
#!/bin/bash
trigger_file=/tmp/trigger
read -s -p "Enter password: " password
echo
echo "Starting service"
mknod $trigger_file p
cmd=
while [ "$cmd" != "exit" ]; do
read cmd < $trigger_file
echo "received command: $cmd"
curl -u username:$password http://www.example.com/
done
rm $trigger_file
Script 2 "client":
#!/bin/bash
trigger_file=/tmp/trigger
cmd=$1
echo "sending command: $cmd"
echo $cmd > $trigger_file
Running:
$ ./server
Enter password: .....
Starting service
received command: go
other window:
$ ./client go
sending command: go
EDIT:
Here is a unified self-starting server/client version.
#!/bin/bash
trigger_file=/tmp/trigger
cmd=$1
if [ -z "$cmd" ]; then
echo "usage: $0 cmd"
exit 1
fi
if [ "$cmd" = "server" ]; then
read -s password
echo "Starting service"
mknod $trigger_file p
cmd=
while [ "$cmd" != "exit" ]; do
read cmd < $trigger_file
echo "($$) received command $cmd (pass: $password)"
curl -u username:$password http://www.example.com/
done
echo exiting
rm $trigger_file
exit
elif [ ! -e $trigger_file ]; then
read -s -p "Enter password: " password
echo
echo $password | $0 server &
while [ ! -e $trigger_file ]; do
sleep 1
done
fi
echo "sending command: $cmd"
echo $cmd > $trigger_file
You are correct that saving the password anywhere accessible re-creates the problem. Also, asking for credentials once per day instead of each time the program runs is, from a system security point of view, essentially the same as not having an authentication system at all. Keeping the password anywhere that is easily readable (whether as plain text or encrypted with a plain-text key) throws away, for anyone with decent system knowledge or scanning tools, any security you gained by having a password in the first place.
The traditional way of solving this problem (and one of the more secure mechanisms) is to use SSH keys in lieu of passwords. Once a user has the key, they never need to re-enter their credentials manually. For even better security you can have the SSH key log in as a user who only has execute privileges on the script/executable; that user can neither change what the script does nor reproduce it by reading the file, while the actual owner of the file can still edit and run the script with no authentication required, keeping other users in a restricted-use mode.
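A minimal sketch of the key-based approach (the host, user, and key names are illustrative, not from the question):
ssh-keygen -t ed25519 -f ~/.ssh/healthcheck_key          # generate a key pair once
ssh-copy-id -i ~/.ssh/healthcheck_key.pub user@server    # install the public key on the remote host
ssh -i ~/.ssh/healthcheck_key user@server 'uptime'       # subsequent logins need no password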
Usually, passwords are not stored at all, for security reasons; rather, a hash of the password is stored, and every time the user enters the password the hash is compared for authentication. However, your requirement is more like a 'remember password' feature (as in a web browser or Windows apps). In that case there is no way around storing the password in a flat file and then using something like gpg to encrypt the file, but then you end up having to keep a key for the encryption.
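For illustration, a symmetric gpg round trip (the file name is arbitrary) shows exactly that problem: the passphrase just moves the secret one level up:
printf '%s' "$password" | gpg --symmetric --cipher-algo AES256 -o creds.gpg   # prompts for a passphrase
gpg --quiet --decrypt creds.gpg                                               # prompts for it again when the password is needed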
The entire design of asking the user for credentials once per day is as good as not asking for any credentials. A tightly secured system should have appropriate timeouts that log the user off after inactivity, especially for back-end server operations.
