How to prevent orphaned ssh-agent processes from cronjob script? - linux

I've set up a cron job on my RHEL server to securely transfer some backups to another environment every hour. It seems like the first few lines of the script create an ssh-agent that sticks around indefinitely (I've discovered hundreds of orphaned ssh-agent processes at times). I've read a bit about different approaches to managing this sort of thing, but I don't really understand how I should change the script to prevent orphaned agents. Any help is appreciated (I'm a front-end/JavaScript person, so this is new to me).
Cron job:
@hourly /var/www/html/scripts/backups_transfer_sftp/run.sh >> /var/log/backup-transfers.log 2>&1
/var/www/html/scripts/backups_transfer_sftp/run.sh
#!/bin/sh
if [ -z "$SSH_AUTH_SOCK" ] ; then
eval `ssh-agent -s`
ssh-add ~/.ssh/remote_backups
fi
printf "\n======\n"
date
printf "|| Beginning backup replication...\n"
cd /var/www/html/wp-content/uploads/backups
echo "|| Transferring daily backups..."
lftp sftp://remote_backups <<EOF
cd /home/web/
mirror --exclude=.* --include=./*.zip --include=./*dat ./ backups --delete --reverse --only-newer -v
bye
EOF
cd ../backups-weekly
echo "|| Transferring weekly backups..."
lftp sftp://remote_backups <<EOF
cd /home/web/
mirror --exclude=.* --include=./*.zip ./ backups-weekly --delete --reverse --only-newer -v
bye
EOF
now=$(date)
echo "|| Backup replication complete!"
echo "|| Completed at: ${now}"

Related

What causes multiple Mails when using Cron with Bash Script

I've made a little bash script to back up my Nextcloud files, including my database, from my Ubuntu 18.04 server. I want the backup to be executed every day. When the job is done I want to receive one mail confirming it ran (and additionally whether it was successful or not). With the current script I receive almost 20 mails and I can't figure out why. Any ideas?
My cronjob looks like this:
* 17 * * * "/root/backup/"backup.sh >/dev/null 2>&1
My bash script
#!/usr/bin/env bash
LOG="/user/backup/backup.log"
exec > >(tee -i ${LOG})
exec 2>&1
cd /var/www/nextcloud
sudo -u www-data php occ maintenance:mode --on
mysqldump --single-transaction -h localhost -u db_user --password='PASSWORD' nextcloud_db > /BACKUP/DB/NextcloudDB_`date +"%Y%m%d"`.sql
cd /BACKUP/DB
ls -t | tail -n +4 | xargs -r rm --
rsync -avx --delete /var/www/nextcloud/ /BACKUP/nextcloud_install/
rsync -avx --delete --exclude 'backup' /var/nextcloud_data/ /BACKUP/nextcloud_data/
cd /var/www/nextcloud
sudo -u www-data php occ maintenance:mode --off
echo "###### Finished backup on $(date) ######"
mail -s "BACKUP" name#domain.com < ${LOG}
Are you sure about the CRON string? For me this means "At every minute past hour 17".
Should be more like 0 17 * * *, right?
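To spell out the diagnosis: * 17 * * * fires every minute from 17:00 through 17:59, i.e. 60 times a day, which is why the script (and its mail command) runs over and over. The corrected entry fires once:
# At 17:00 every day, exactly once:
0 17 * * * /root/backup/backup.sh >/dev/null 2>&1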

Retain root privileges during long processes

I have a bash script that makes a backup of my data files (~50GB). The script is basically something like this:
sudo tar -cf old-backup-1.tar /backup/mydata1
sudo tar -cf old-backup-2.tar /backup/mydata2
sudo rsync -a /mydata1/ /backup/mydata1/
sudo rsync -a /mydata2/ /backup/mydata2/
(I use sudo because some of the files are owned by root).
The problem is that after every command (because each one takes a long time) I lose root privileges, and if I'm not at the computer the cached sudo credentials time out and the script dies in the middle of the job.
Is there a way to retain root privileges for the entire script? What is the best way to approach this situation? I prefer to run the script under my own user.
With a second shell:
sudo bash -c "command1; command2; command3; command4"
Perhaps like this:
#!/bin/bash -eu
exec sudo /bin/bash <<'EOF'
echo I am $UID
whoami
#^the script
EOF
Alternatively, you could put something like:
if ! [ $(id -u) -eq 0 ]; then
exec sudo "$0" "$#"
fi
at the top.
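Another widely used pattern, sketched here as an alternative (it assumes sudo's credential caching is enabled, which is the default): validate once up front, then refresh the cached credentials in the background for the life of the script.
#!/bin/bash
sudo -v   # prompt for the password once, up front
# Refresh the sudo timestamp every 60s until this script exits, so
# long-running steps never hit the credential timeout.
while true; do sudo -n true; sleep 60; done 2>/dev/null &
KEEPALIVE_PID=$!
trap 'kill "$KEEPALIVE_PID"' EXIT

sudo tar -cf old-backup-1.tar /backup/mydata1
sudo rsync -a /mydata1/ /backup/mydata1/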

Remote to Local rolling backup script

I'm trying to create a bash script, run through crontab, that executes a remote-to-local backup. Everything works except the rolling-backup part, which is supposed to keep only the 4 most recent backups.
#!/bin/bash
dateForm=`date +%m-%d-%Y`
fileName=[redacted]-"$dateForm"
echo backup started for [redacted] on: $dateForm >> /home/backups/backLog.log
ls -tQ /home/backups/[redacted] | tail -n+5 | xargs -r rm
ssh root@[redacted] "tar jcf - -C /home/[redacted]/[redacted] ." > "/home/backups/[redacted]/$fileName".tar.bz2
if [ ! -f "/home/backups/[redacted]/$fileName.tar.bz2" ]
then
echo "something went wrong with the backup for $fileName!" >> /home/backups/backLog.log
else
echo "Backup completed for $fileName" >> /home/backups/backLog.log
fi
The ls line works fine if executed inside the backup directory, but crontab runs the script from a different working directory (and the script has to live outside the folder it targets), so I can't get the rm to act on the correct directory using the piped ls output.
I was able to come up with an interesting solution after studying the man page for ls a little more and utilizing find to grab the full paths.
ls -tQ $(find /home/backups/[redacted] -type f -name "*") | tail -n+5 | xargs -r rm
Just posting this for anyone who doesn't want a rolling-backup script that depends entirely on date formatting; this way there will ALWAYS be at least the 4 newest backups in the targeted folder.
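That said, parsing ls output is fragile: it breaks on filenames containing spaces or newlines. A parse-safe sketch of the same "delete all but the 4 newest" step, assuming GNU findutils/coreutils and using a hypothetical /home/backups/target in place of the redacted path:
# Print each file as 'mtime path' NUL-terminated, sort newest first,
# skip the 4 newest, strip the mtime field, and delete the rest.
find /home/backups/target -maxdepth 1 -type f -printf '%T@ %p\0' \
  | sort -z -rn \
  | tail -z -n +5 \
  | cut -z -d' ' -f2- \
  | xargs -0 -r rm --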

How to check if ssh-agent is already running in bash?

I have a sample sh script in my Linux environment which basically runs ssh-agent for the current shell, adds a key to it, and runs two git commands:
#!/bin/bash
eval "$(ssh-agent -s)"
ssh-add /home/duvdevan/.ssh/id_rsa
git -C /var/www/duvdevan/ reset --hard origin/master
git -C /var/www/duvdevan/ pull origin master
The script actually works fine, but every time I run it I get a new agent process, so I think it might become a performance issue and I'll end up with useless processes hanging around.
An example of the output:
Agent pid 12109
Identity added: /home/duvdevan/.ssh/custom_rsa (rsa w/o comment)
Also, along with all this, is it possible to find an existing ssh-agent process and add my keys into it?
No, really, how to check if ssh-agent is already running in bash?
Answers so far don't appear to answer the original question...
Here's what works for me:
if ps -p $SSH_AGENT_PID > /dev/null
then
echo "ssh-agent is already running"
# Do something knowing the pid exists, i.e. the process with $PID is running
else
eval `ssh-agent -s`
fi
Also, along with all this, is it possible to find an existing ssh-agent process and add my keys into it?
Yes. We can store the connection info in a file:
# Ensure agent is running
ssh-add -l &>/dev/null
if [ "$?" == 2 ]; then
# Could not open a connection to your authentication agent.
# Load stored agent connection info.
test -r ~/.ssh-agent && \
eval "$(<~/.ssh-agent)" >/dev/null
ssh-add -l &>/dev/null
if [ "$?" == 2 ]; then
# Start agent and store agent connection info.
(umask 066; ssh-agent > ~/.ssh-agent)
eval "$(<~/.ssh-agent)" >/dev/null
fi
fi
# Load identities
ssh-add -l &>/dev/null
if [ "$?" == 1 ]; then
# The agent has no identities.
# Time to add one.
ssh-add -t 4h
fi
This code is from "pitfalls of ssh agents", which describes the pitfalls both of what you're currently doing and of this approach, and suggests using ssh-ident to handle this for you.
If you only want to run ssh-agent if it's not running and do nothing otherwise:
if [ $(ps ax | grep [s]sh-agent | wc -l) -gt 0 ] ; then
echo "ssh-agent is already running"
else
eval $(ssh-agent -s)
if [ "$(ssh-add -l)" == "The agent has no identities." ] ; then
ssh-add ~/.ssh/id_rsa
fi
# Don't leave extra agents around: kill it on exit. You may not want this part.
trap "ssh-agent -k" exit
fi
However, this doesn't ensure ssh-agent will be accessible (just because it's running doesn't mean we have $SSH_AGENT_PID for ssh-add to connect to).
If you want it to be killed right after the script exits, you can just add this after the eval line:
trap "kill $SSH_AGENT_PID" exit
Or:
trap "ssh-agent -k" exit
$SSH_AGENT_PID gets set in the eval of ssh-agent -s.
You should be able to find running ssh-agents by scanning through /tmp/ssh-* and reconstruct the SSH_AGENT variables from it (SSH_AUTH_SOCK and SSH_AGENT_PID).
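A rough sketch of that scanning idea (a more thorough variant appears in a later answer below); it assumes the default /tmp/ssh-XXXXXX/agent.<pid> socket layout:
for sock in /tmp/ssh-*/agent.*; do
  SSH_AUTH_SOCK=$sock ssh-add -l >/dev/null 2>&1
  # ssh-add -l exit status: 0 = agent with keys, 1 = live agent
  # without keys, 2 = cannot connect (a stale socket)
  if [ $? -le 1 ]; then
    export SSH_AUTH_SOCK=$sock  # found a live, reachable agent
    break
  fi
done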
ps -p $SSH_AGENT_PID > /dev/null || eval "$(ssh-agent -s)"
A single-line command: the first run starts ssh-agent; subsequent runs see the existing PID and do nothing. Simple and elegant, mate!
Checking $SSH_AGENT_PID only tests that the agent process exists; it misses the case where no identities have been added yet:
$ eval `ssh-agent`
Agent pid 9906
$ echo $SSH_AGENT_PID
9906
$ ssh-add -l
The agent has no identities.
So it is safer to check with ssh-add -l, using an expect script as in the example below:
$ eval `ssh-agent -k`
Agent pid 9906 killed
$ ssh-add -l
Could not open a connection to your authentication agent.
$ ssh-add -l &>/dev/null
$ [[ "$?" == 2 ]] && eval `ssh-agent`
Agent pid 9547
$ ssh-add -l &>/dev/null
$ [[ "$?" == 1 ]] && expect $HOME/.ssh/agent
spawn ssh-add /home/user/.ssh/id_rsa
Enter passphrase for /home/user/.ssh/id_rsa:
Identity added: /home/user/.ssh/id_rsa (/home/user/.ssh/id_rsa)
$ ssh-add -l
4096 SHA256:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX /home/user/.ssh/id_rsa (RSA)
So when both ssh-agent and ssh-add -l are put to run on a bash script:
#!/bin/bash
ssh-add -l &>/dev/null
[[ "$?" == 2 ]] && eval `ssh-agent`
ssh-add -l &>/dev/null
[[ "$?" == 1 ]] && expect $HOME/.ssh/agent
then it will always check and ensure that the agent connection is running:
$ ssh-add -l
4096 SHA256:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX /home/user/.ssh/id_rsa (RSA)
You can also emulate a do-while loop to keep repeating the commands in the above script until they succeed, as sketched below.
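A sketch of that idea: bash has no do-while statement, so run the body once and loop until ssh-add reports a usable agent holding identities (same hypothetical $HOME/.ssh/agent expect script as above):
while :; do
  ssh-add -l &>/dev/null
  rc=$?
  [[ "$rc" == 2 ]] && eval `ssh-agent`         # no agent reachable: start one
  [[ "$rc" == 1 ]] && expect $HOME/.ssh/agent  # agent up, no identities: add key
  [[ "$rc" == 0 ]] && break                    # agent up with identities: done
done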
The accepted answer did not work for me under Ubuntu 14.04.
The test to check if the ssh-agent is running I have to use is:
[[ ! -z ${SSH_AGENT_PID+x} ]]
And I am starting the ssh-agent with:
exec ssh-agent bash
Otherwise the SSH_AGENT_PID is not set.
The following seems to work under both Ubuntu 14.04 and 18.04.
#!/bin/bash
sshkey=id_rsa
# Check ssh-agent
if [[ ! -z ${SSH_AGENT_PID+x} ]]
then
echo "[OK] ssh-agent is already running with pid: "${SSH_AGENT_PID}
else
echo "Starting new ssh-agent..."
`exec ssh-agent bash`
echo "Started agent with pid: "${SSH_AGENT_PID}
fi
# Check ssh-key
if [[ $(ssh-add -L | grep ${sshkey} | wc -l) -gt 0 ]]
then
echo "[OK] SSH key already added to ssh-agent"
else
echo "Need to add SSH key to ssh-agent..."
# This should prompt for your passphrase
ssh-add ~/.ssh/${sshkey}
fi
Thanks to all the answers here. I've used this thread a few times over the years to tweak my approach. Wanted to share my current ssh-agent.sh checker/launcher script that works for me on Linux and OSX.
The following block is my $HOME/.bash.d/ssh-agent.sh
function check_ssh_agent() {
if [ -f $HOME/.ssh-agent ]; then
source $HOME/.ssh-agent > /dev/null
else
# no agent file
return 1
fi
if [[ ${OSTYPE//[0-9.]/} == 'darwin' ]]; then
ps -p $SSH_AGENT_PID > /dev/null
# gotcha: does not verify the PID is actually an ssh-agent
# just that the PID is running
return $?
fi
if [ -d /proc/$SSH_AGENT_PID/ ]; then
# verify PID dir is actually an agent
grep ssh-agent /proc/$SSH_AGENT_PID/cmdline > /dev/null 2> /dev/null;
if [ $? -eq 0 ]; then
# yep - that is an agent
return 0
else
# nope - that is something else reusing the PID
return 1
fi
else
# agent PID dir does not exist - dead agent
return 1
fi
}
function launch_ssh_agent() {
ssh-agent > $HOME/.ssh-agent
source $HOME/.ssh-agent
# load up all the pub keys
for I in $HOME/.ssh/*.pub ; do
echo adding ${I/.pub/}
ssh-add ${I/.pub/}
done
}
check_ssh_agent
if [ $? -eq 1 ];then
launch_ssh_agent
fi
I launch the above from my .bashrc using:
if [ -d $HOME/.bash.d ]; then
for I in $HOME/.bash.d/*.sh; do
source $I
done
fi
Hope this helps others get up and going quickly.
Created a public gist if you want to hack/improve this with me: https://gist.github.com/dayne/a97a258b487ed4d5e9777b61917f0a72
cat > /usr/local/bin/ssh-agent-pro << 'EOF'
#!/usr/bin/env bash
SSH_AUTH_CONST_SOCK="/var/run/ssh-agent.sock"
if [[ x$(wc -w <<< $(pidof ssh-agent)) != x1 ]] || [[ ! -e ${SSH_AUTH_CONST_SOCK} ]]; then
kill -9 $(pidof ssh-agent) 2>/dev/null
rm -rf ${SSH_AUTH_CONST_SOCK}
ssh-agent -s -a ${SSH_AUTH_CONST_SOCK} 1>/dev/null
fi
echo "export SSH_AUTH_SOCK=${SSH_AUTH_CONST_SOCK}"
echo "export SSH_AGENT_PID=$(pidof ssh-agent)"
EOF
chmod +x /usr/local/bin/ssh-agent-pro
echo "eval \$(/usr/local/bin/ssh-agent-pro)" >> /etc/profile
. /etc/profile
Then you can run ssh-add xxxx once, and every subsequent login reuses the same ssh-agent.
I've noticed that having a running agent is not enough, because sometimes the SSH_AUTH_SOCK variable is unset, or points to a socket file that no longer exists.
Therefore, to connect to an already-running ssh-agent on your machine, you can do this:
$ pgrep -u $USER -n ssh-agent -a
1906647 ssh-agent -s
$ ssh-add -l
Could not open a connection to your authentication agent.
$ test -z "$SSH_AGENT_PID" && export SSH_AGENT_PID=$(pgrep -u $USER -n ssh-agent)
$ test -z "$SSH_AUTH_SOCK" && export SSH_AUTH_SOCK=$(ls /tmp/ssh-*/agent.$(($SSH_AGENT_PID-1)))
$ ssh-add -l
The agent has no identities.
Regarding finding running ssh-agents: previous answers either don't work or rely on a magic file like $HOME/.ssh_agent. These approaches require us to trust that the user never runs agents without saving their output to that file.
My approach instead relies on the rarely-changed default UNIX domain socket path template to find an accessible ssh-agent among the available candidates.
# (Paste the below code to your ~/.bash_profile and ~/.bashrc files)
C=$SSH_AUTH_SOCK
R=n/a
unset SSH_AUTH_SOCK
for s in $(ls $C /tmp/ssh-*/agent.* 2>/dev/null | sort -u) ; do
if SSH_AUTH_SOCK=$s ssh-add -l >/dev/null ; then R=$? ; else R=$? ; fi
case "$R" in
0|1) export SSH_AUTH_SOCK=$s ; break ;;
esac
done
if ! test -S "$SSH_AUTH_SOCK" ; then
eval $(ssh-agent -s)
unset SSH_AGENT_PID
R=1
fi
echo "Using $SSH_AUTH_SOCK"
if test "$R" = "1" ; then
ssh-add
fi
In this approach, SSH_AGENT_PID remains unknown, since it is hard to deduce for non-root users. I assume it is not actually required, since users don't normally want to stop agents; on my system, setting SSH_AUTH_SOCK is enough to communicate with the agent, e.g. for passwordless authentication.
The code should work with any Bourne-compatible shell.
You can modify line #1 to:
PID_SSH_AGENT=`eval ssh-agent -s | grep -Po "(?<=pid\ ).*(?=\;)"`
And then at the end of the script you can do:
kill -9 $PID_SSH_AGENT
I made this bash function to count and return the number of running ssh-agent processes. It searches for ssh-agent processes via procfs instead of using ps -p $SSH_AGENT_PID or checking $SSH_AUTH_SOCK, because those environment variables can still hold stale values after the agent has been killed (e.g. if you ran ssh-agent -k or $(ssh-agent -k) instead of eval $(ssh-agent -k)).
function count_agent_procfs(){
declare -a agent_list=( )
for folders in $(ls -d /proc/*[[:digit:]] | grep -v /proc/1$);do
fichier="${folders}/stat"
pid=${folders/\/proc\//}
[[ -f ${fichier} ]] && [[ $(cat ${fichier} | cut -d " " -f2) == "(ssh-agent)" ]] && agent_list+=(${pid})
done
return ${#agent_list[@]}
}
...and then, if there are several ssh-agent processes running, you can get their PIDs from the list "${agent_list[@]}".
A very simple command to check which ssh-agent processes (or those of any other program) are running: pidof ssh-agent
or:
pgrep ssh-agent
And a very simple command to kill all ssh-agent processes (or those of any program):
kill $(pidof ssh-agent)

delete backups older than 7 days from remote ftp using ncftp

I'm currently using this script from cyberciti:
#!/bin/sh
# System + MySQL backup script
# Full backup day - Sun (rest of the day do incremental backup)
# Copyright (c) 2005-2006 nixCraft <http://www.cyberciti.biz/fb/>
# This script is licensed under GNU GPL version 2.0 or above
# Automatically generated by http://bash.cyberciti.biz/backup/wizard-ftp-script.php
# ---------------------------------------------------------------------
### System Setup ###
DIRS="/home /etc /var/www"
BACKUP=/tmp/backup.$$
NOW=$(date +"%d-%m-%Y")
INCFILE="/root/tar-inc-backup.dat"
DAY=$(date +"%a")
FULLBACKUP="Sun"
### MySQL Setup ###
MUSER="admin"
MPASS="mysqladminpassword"
MHOST="localhost"
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
GZIP="$(which gzip)"
### FTP server Setup ###
FTPD="/home/vivek/incremental"
FTPU="vivek"
FTPP="ftppassword"
FTPS="208.111.11.2"
NCFTP="$(which ncftpput)"
### Other stuff ###
EMAILID="admin#theos.in"
### Start Backup for file system ###
[ ! -d $BACKUP ] && mkdir -p $BACKUP || :
### See if we want to make a full backup ###
if [ "$DAY" == "$FULLBACKUP" ]; then
FTPD="/home/vivek/full"
FILE="fs-full-$NOW.tar.gz"
tar -zcvf $BACKUP/$FILE $DIRS
else
i=$(date +"%Hh%Mm%Ss")
FILE="fs-i-$NOW-$i.tar.gz"
tar -g $INCFILE -zcvf $BACKUP/$FILE $DIRS
fi
### Start MySQL Backup ###
# Get all databases name
DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
for db in $DBS
do
FILE=$BACKUP/mysql-$db.$NOW-$(date +"%T").gz
$MYSQLDUMP -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
done
### Dump backup using FTP ###
#Start FTP backup using ncftp
ncftp -u"$FTPU" -p"$FTPP" $FTPS<<EOF
mkdir $FTPD
mkdir $FTPD/$NOW
cd $FTPD/$NOW
lcd $BACKUP
mput *
quit
EOF
### Find out if ftp backup failed or not ###
if [ "$?" == "0" ]; then
rm -f $BACKUP/*
else
T=/tmp/backup.fail
echo "Date: $(date)">$T
echo "Hostname: $(hostname)" >>$T
echo "Backup failed" >>$T
mail -s "BACKUP FAILED" "$EMAILID" <$T
rm -f $T
fi
It works nicely, but my backups take up too much space on the remote server, so I would like to modify this script so that backups older than 7 days are deleted.
Can someone tell me what to edit? I have no knowledge of shell scripting or ncftp commands, though.
I don't have a practical way of testing what I type below. I suspect that this works, but if not, it should at least show the right idea. Please don't use these mods without thorough testing first; if I have stuffed up, it could delete your data. Here goes:
Underneath the line: NOW=$(date +"%d-%m-%Y")
add:
DELDATE=$(date -d "-7 days" +"%d-%m-%Y")
after the line: ncftp -u"$FTPU" -p"$FTPP" $FTPS<<EOF
add:
cd $FTPD/$DELDATE
rm *
cd $FTPD
rmdir $DELDATE
The theory behind these changes is as follows:
The first addition calculates what the date was 7 days ago.
The second addition attempts to delete the old information.
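Putting both additions together, the ncftp session in the modified script would look roughly like this (untested, per the same caveat; the deletion runs before the new upload, and ftp errors from a not-yet-existing $DELDATE directory are simply ignored):
DELDATE=$(date -d "-7 days" +"%d-%m-%Y")
# ... rest of the script unchanged up to the ftp upload ...
ncftp -u"$FTPU" -p"$FTPP" $FTPS<<EOF
cd $FTPD/$DELDATE
rm *
cd $FTPD
rmdir $DELDATE
mkdir $FTPD
mkdir $FTPD/$NOW
cd $FTPD/$NOW
lcd $BACKUP
mput *
quit
EOF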
