How to limit concurrent SSH or Dropbear Tunnel connections - linux

I need to limit concurrent SSH/Dropbear Tunnel connections to 1 login per user.
I have a script that takes care of that.
But it doesn't work well for me: when there are many users it gets overloaded and takes a long time to kick them.
Another problem with this script is that if a user logs out and logs back in, it is detected as a multilogin.
Maxlogins and MaxSessions do not work on Dropbear.
Below is the script I am using:
#!/bin/bash
# This script locates all users who have multiple active dropbear
# processes and kills processes in excess of one for each user.
if [ "$EUID" -ne 0 ]; then
    printf "Please run as root.\n"
    exit
fi
IFS=+
while true; do
    PIDFILE=$(mktemp)
    AUTHFILE=$(mktemp)
    USERS=$(mktemp)
    ps aux | grep dropbear | grep -v grep | awk 'BEGIN{} {print $2}' > $PIDFILE
    journalctl -r | grep dropbear | grep auth > $AUTHFILE
    while read LINE; do
        USER=$(printf "%s" $LINE | sed "s/^.* '//" | sed "s/'.*$//" -)
        PID=$(printf "%s" $LINE | sed "s/^.*\[//" | sed "s/].*$//" -)
        if grep -Fxq $(printf "%s" $USER) $USERS; then
            :
        else
            printf "%s\n" $USER >> $USERS
        fi
        USERFILE=$(printf "/tmp/%s" $USER)
        if [ ! -f $USERFILE ]; then
            touch $USERFILE
        fi
        if grep -Fxq $(printf "%s" $PID) $PIDFILE; then
            printf "%s\n" $PID >> $USERFILE
        else
            :
        fi
    done < $AUTHFILE
    while read USER; do
        i=1
        while read PID; do
            if [ $i -gt 1 ]; then
                printf "Kill PID %s of user %s\n" $PID $USER
                kill -9 $(printf "%s" $PID)
                curl -k "https://redesprivadasvirtuales.com/modules/servers/openvpn/vega.php?secret=DD8sPD&user=$USER"
            else
                :
            fi
            ((i++))
        done < $(printf "/tmp/%s" $USER)
        rm $(printf "/tmp/%s" $USER)
    done < $USERS
    rm $PIDFILE
    rm $AUTHFILE
    rm $USERS
done

Suggestions:
journalctl -r is very expensive. Limit journalctl to the time since the last search.
In the lines with USER=$(...) and PID=$(...), replace the printf and sed commands with a single awk command.
Research the pgrep and pkill commands.
Replace the files $PIDFILE, $AUTHFILE and $USERS with array variables (research the readarray command).
The while loop over $AUTHFILE could be implemented as a loop over a bash array.
The while loop over $USERS (including the internal loop) could be implemented as a loop over a bash array.
The curl command might be very expensive, and you do not check the response from each request. Run curl in the background and, if possible, in parallel for all users (see the sketch after this list).
Kind SO members could assist more if you put sample lines from $AUTHFILE in the question as sample input.
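Putting those suggestions together, here is a rough, untested sketch, not a drop-in replacement: it assumes the dropbear auth lines still carry the PID in square brackets and the user name in single quotes (as the sed commands in your script imply), so the awk extraction and the 5-second sleep will likely need tuning to your journal format and load.

#!/bin/bash
# Sketch: kick extra dropbear sessions, reading the journal incrementally.
if [ "$EUID" -ne 0 ]; then
    printf "Please run as root.\n" >&2
    exit 1
fi

last_check=$(date '+%Y-%m-%d %H:%M:%S')
while true; do
    # Only fetch journal entries newer than the previous pass (cheaper than journalctl -r).
    readarray -t auth_lines < <(journalctl --since "$last_check" | grep dropbear | grep auth)
    last_check=$(date '+%Y-%m-%d %H:%M:%S')

    # PIDs of dropbear processes that are still alive (see pgrep(1)).
    readarray -t live_pids < <(pgrep -x dropbear)

    declare -A pids_for_user=()
    for line in "${auth_lines[@]}"; do
        # One awk call instead of printf+sed: user is the first quoted word, PID is in [...].
        read -r user pid < <(awk -F"'" '{u=$2} match($0,/\[[0-9]+\]/){p=substr($0,RSTART+1,RLENGTH-2)} END{print u, p}' <<< "$line")
        # Count the PID only if that dropbear process is still running, and skip duplicates.
        if [ -n "$user" ] && [ -n "$pid" ] &&
           printf '%s\n' "${live_pids[@]}" | grep -qx "$pid" &&
           [[ " ${pids_for_user[$user]} " != *" $pid "* ]]; then
            pids_for_user[$user]+=" $pid"
        fi
    done

    for user in "${!pids_for_user[@]}"; do
        read -ra pids <<< "${pids_for_user[$user]}"
        # Keep the first session, kill the rest; run each curl in the background, in parallel.
        for pid in "${pids[@]:1}"; do
            printf "Kill PID %s of user %s\n" "$pid" "$user"
            kill -9 "$pid" 2>/dev/null
            curl -sk "https://redesprivadasvirtuales.com/modules/servers/openvpn/vega.php?secret=DD8sPD&user=$user" >/dev/null &
        done
    done
    wait    # let the backgrounded curl requests finish before the next pass
    sleep 5
done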

Related

Bash script: read 30000 records from file and run multiple process in parallel

I have a txt file with more than 30000 records.
Each record is on its own line and is an IP, like this:
192.168.0.1
192.168.0.2
192.168.0.3
192.168.0.4
192.168.0.5
192.168.0.6
192.168.0.7
192.168.0.8
192.168.0.9
192.168.0.10
I read each row in a bash script, and I need to run a curl like this:
while IFS= read -r line || [[ -n "$line" ]]; do
#check_site "$line"
resp=$(curl -i -m1 http://$line 2>&1)
echo "$resp" | grep -Eo "$ok" > /dev/null
if [ $? -ne 0 ]; then
#echo -e "failed: $line" >> "${logfile}"
echo -e "Command: curl -i -m1 http://$line 2>&1" >> "${outfile}"
echo -e "failed: $line:\n\n \"$resp\"\n\n" >> "${outfile}"
echo "$line" >> "${faillog}"
fi
done < "${FILE}"
Is there a method to run multiple lines simultaneously in my file to reduce the execution time?
I solved the multi-process part this way:
#export variable to be used into function
export outlog="/tmp/out.log"
export faillog="/tmp/fail.log"
export ok="(curl: \(7\) Failed to connect to)" # acceptable responses
# create function:
check_site() {
ip=$1
resp=$(curl -i -m1 http://$ip 2>&1)
echo "$resp" | grep -Eo "$ok" > /dev/null
if [ $? -ne 0 ]; then
echo -e "Command: curl -i -m1 http://$ip 2>&1" >> "${outlog}"
echo -e "Block failed: $ip:\n\n \"$resp\"\n\n" >> "${outlog}"
echo "$ip" >> "${faillog}"
fi
}
# call the function:
export -f check_site
parallel -j 252 -a "${FILE}" check_site
xargs will do the trick; see the xargs article on Wikipedia.
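For instance, reusing the exported check_site function from the approach above, a hedged sketch with GNU xargs could look like this (the -a and -P options are GNU findutils extensions):

# Run up to 50 checks at a time; bash -c is needed so xargs can call the exported function.
xargs -a "${FILE}" -P 50 -I{} bash -c 'check_site "$@"' _ {}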
This article describes an approach to parallel execution in Bash; it may help you:
Parallel execution in Bash
Example from the article:
#!/bin/bash
RANDOM=10
JOBS_COUNTER=0
MAX_CHILDREN=10
MY_PID=$$
for i in {1..100}
do
    echo Cycle counter: $i
    JOBS_COUNTER=$((`ps ax -Ao ppid | grep $MY_PID | wc -l`))
    while [ $JOBS_COUNTER -ge $MAX_CHILDREN ]
    do
        JOBS_COUNTER=$((`ps ax -Ao ppid | grep $MY_PID | wc -l`))
        echo Jobs counter: $JOBS_COUNTER
        sleep 1
    done
    sleep $(($RANDOM % 30)) &
done
echo Finishing children ...
# wait for children here
while [ $JOBS_COUNTER -gt 1 ]
do
    JOBS_COUNTER=$((`ps ax -Ao ppid | grep $MY_PID | wc -l`))
    echo Jobs counter: $JOBS_COUNTER
    sleep 1
done
echo Done
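As a hedged aside: on bash 4.3 and newer you can skip the ps polling and use wait -n, which blocks until any one background job exits. A minimal sketch of the same throttling pattern (the sleep is a stand-in for real work):

#!/bin/bash
MAX_CHILDREN=10
for i in {1..100}; do
    # Throttle: once MAX_CHILDREN jobs are running, wait for one of them to finish.
    while (( $(jobs -rp | wc -l) >= MAX_CHILDREN )); do
        wait -n
    done
    sleep $(( RANDOM % 30 )) &   # placeholder for the real work
done
wait   # wait for the remaining children
echo Done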

bash script run to send process to background

Hi, I'm making a script to run an rsync process. The sysadmin created rsync.sh; when it runs, it asks you to select options, so I want a wrapper script that supplies those answers and can be run from cron.
The list of directories to rsync is taken from a file.
filelist=$(cat filelist.txt)
for i in $filelist; do
echo -e "3\nY" | ./rsync.sh $i
#This will create a rsync log file
Then I check a value in that log file; if it is empty I move on to the next file. If it is not empty, I have to start the real rsync process as below, which takes more than 2 hours.
if [ "$f" != 0 ]; then
echo -e "3\nN" | ./rsync.sh $i
The rsync process above needs to be sent to the background so the loop can take the next file. I tried the screen command, but screen is not working on the server. I also need to get the duration each run takes and write it to the log; when I use the time command I am unable to pass the echoed answers. Any suggestions to get this working are appreciated.
Questions
1. How do I pass the piped input while using the time command? The following does not work:
echo -e "3\nY" | time ./rsync.sh $i
2. How do I send this to the background and move on to the next file while the previous rsync process is still running?
Full Code
#!/bin/bash
filelist=$(cat filelist.txt)
Lpath=/opt/sas/sas_control/scripts/Logs/rsync_logs
date=$(date +"%m-%d-%Y")
timelog="time_result/rsync_time.log-$date"
for i in $filelist;do
    #echo $i
    b_i=$(basename $i)
    echo $b_i
    echo -e "3\nY" | ./rsync.sh $i
    f=$(cat $Lpath/$(ls -tr $Lpath| grep rsync-dry-run-$b_i | tail -1) | grep 'transferred:' | cut -d':' -f2)
    echo $f
    if [ $f != 0 ]; then
        #date=$(date +"%D : %r")
        start_time=`date +%s`
        echo "$b_i-start:$start_time" >> $timelog
        #time ./rsync.sh $i < echo -e "3\nY" 2> "./time_result/$b_i-$date" &
        time { echo -e "3\nY" | ./rsync.sh $i; } 2> "./time_result/$b_i-$date"
        end_time=`date +%s`
        s_time=$(cat $timelog|grep "$b_i-start" |cut -d ':' -f2)
        duration=$(($end_time-$s_time))
        echo "$b_i duration:$duration" >> $timelog
    fi
done
Your question is not very clear, but I'll try:
(1) If I understand you correctly, you want to time the rsync.
My first attempt would be to use echo xxxx | time rsync. On my bash, however, this was broken (or is it not supposed to work?). I normally use Zsh instead of bash, and on zsh this indeed runs fine.
If it is important for you to use bash, an alternative (since the time taken by the echo can likely be neglected) would be to time the whole pipe, i.e. time (echo xxxx | rsync), or even simpler time rsync <(echo xxxx).
(2) To send a process to the background, add an & to the line. However, the time command of course produces output (that's its purpose), and you don't want to receive output from a program running in the background. The solution is to redirect the output:
(time rsync <(echo xxxx) >output.txt 2>error.txt) &
If you want to time something, you can use:
time sleep 3
If you want to time two things, you can do a compound statement like this (note semicolon after second sleep):
time { sleep 3; sleep 4; }
So, you can do this to time your echo (which will take no time at all) and your rsync:
time { echo "something" | rsync something ; }
If you want to do that in the background:
time { echo "something" | rsync something ; } &
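Applied to the loop from the question, a rough sketch of both pieces together might look like this (keeping the same echo -e "3\nY" answer string as in the posted code; every rsync is started at once, so add your own throttling if that is too much):

for i in $filelist; do
    b_i=$(basename "$i")
    # Time the whole pipeline, capture the timing in a per-file log, and background it.
    { time { echo -e "3\nY" | ./rsync.sh "$i"; } ; } 2> "./time_result/$b_i-$date" &
done
wait   # block here until every background rsync has finished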

Shell Script ssh $SERVER >> EOF

I have a handy script here that reports accounts that will expire within 7 days or that have already expired. I wanted to run it on multiple hosts without putting the script on each individual host, so I added the for loop and the ssh $SERVER >> EOF part, but it just runs the commands on the system that is running the script.
I believe the error is with ssh $SERVER >> EOF but I am unsure as the syntax looks correct.
#!/bin/bash
for SERVER in `cat /lists/testlist`
do
echo $SERVER
ssh $SERVER >> EOF
sudo cat /etc/shadow | cut -d: -f1,8 | sed /:$/d > /tmp/expirelist.txt
totalaccounts=`sudo cat /tmp/expirelist.txt | wc -l`
for((i=1; i<=$totalaccounts; i++ ))
do
tuserval=`sudo head -n $i /tmp/expirelist.txt | tail -n 1`
username=`sudo echo $tuserval | cut -f1 -d:`
userexp=`sudo echo $tuserval | cut -f2 -d:`
userexpireinseconds=$(( $userexp * 86400 ))
todaystime=`date +"%s"`
if [[ $userexpireinseconds -ge $todaystime ]] ;
then
timeto7days=$(( $todaystime + 604800 ))
if [[ $userexpireinseconds -le $timeto7days ]];
then
echo $username "is going to expire in 7 Days"
fi
else
echo $username "account has expired"
fi
done
sudo rm /tmp/expirelist.txt
EOF
done
Here documents are started by << EOF (or, better, << 'EOF' to prevent the body of the here document being expanded by the (local) shell) and the end marker must be in column 1.
What you're doing is running ssh and appending standard output to a file EOF (>> is an output redirection; << is an input redirection). It is then (locally) running sudo, etc. It probably fails to execute the local file EOF (not executable, one hopes), and likely doesn't find any other command for that either.
I think what you're after is this (where I've now replaced the back-ticks in the script with $(...) notation, and marginally optimized the server list generation for use with Bash):
#!/bin/bash
for SERVER in $(</lists/testlist)
do
    echo $SERVER
    ssh $SERVER << 'EOF'
sudo cat /etc/shadow | cut -d: -f1,8 | sed '/:$/d' > /tmp/expirelist.txt
totalaccounts=$(sudo cat /tmp/expirelist.txt | wc -l)
for ((i=1; i<=$totalaccounts; i++))
do
    tuserval=$(sudo head -n $i /tmp/expirelist.txt | tail -n 1)
    username=$(sudo echo $tuserval | cut -f1 -d:)
    userexp=$(sudo echo $tuserval | cut -f2 -d:)
    userexpireinseconds=$(( $userexp * 86400 ))
    todaystime=$(date +"%s")
    if [[ $userexpireinseconds -ge $todaystime ]]
    then
        timeto7days=$(( $todaystime + 604800 ))
        if [[ $userexpireinseconds -le $timeto7days ]]
        then
            echo $username "is going to expire in 7 Days"
        fi
    else
        echo $username "account has expired"
    fi
done
sudo rm /tmp/expirelist.txt
EOF
done
Very close, but the differences really matter! Note, in particular, that the end marker EOF is in column 1 and not indented at all.
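As a further aside, if quoting inside the here document ever gets awkward, a common alternative is to keep the commands in a local file and feed them to a remote shell over ssh. A hedged sketch, assuming the here-document body above is saved as expire-check.sh (a hypothetical file name):

#!/bin/bash
for SERVER in $(</lists/testlist)
do
    echo "$SERVER"
    # 'bash -s' makes the remote bash read its script from standard input.
    # Note: if the remote sudo needs a password/tty, add -t to the ssh call.
    ssh "$SERVER" 'bash -s' < expire-check.sh
done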

Multi-threaded ping script

I have this
#! /bin/bash
cd ~
hostname=`hostname`
cat /opt/ip.txt | while read line;
do
# do something with $line here
RES=`ping -c 2 -q $line | grep "packet loss"`
echo "---" >> /opt/os-$hostname.txt
echo "---"
echo "$line $RES" >> /opt/os-$hostname.txt
echo "$line $RES"
done
How can I make the script multi-threaded? I would like to speed up its performance.
You can use the <(...) notation for starting a subprocess and then cat all the outputs together:
myping() {
ping -c 2 -q "$1" | grep "packet loss"
}
cat <(myping hostname1) <(myping hostname2) ...
To use a loop for this, you will need to build the command first:
cat /opt/ip.txt | {
command='cat'
while read line
do
command="$command "'<'"(myping $line)"
done
eval "$command"
}
If you really want the delimiting --- of your original, I propose adding an echo "---" inside myping.
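That is, a minimal sketch of the modified function:

myping() {
    echo "---"
    ping -c 2 -q "$1" | grep "packet loss"
}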
If you want to append the output to a file as well, use tee:
eval "$command" | tee -a /opt/os-$hostname.txt
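If you would rather stay close to the original loop, a hedged sketch with plain background jobs and wait also works; note that output from different hosts may interleave in the file, and this version only writes to the file, not to the terminal:

#!/bin/bash
hostname=$(hostname)
while read -r line; do
    {
        RES=$(ping -c 2 -q "$line" | grep "packet loss")
        echo "---"
        echo "$line $RES"
    } >> "/opt/os-$hostname.txt" &    # one background job per host
done < /opt/ip.txt
wait   # let all pings finish before the script exits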

Functionality of a start-stop-restart shell script

I am a shell scripting newbie trying to understand some code, but there are some lines that are too complex for me. The piece of code I'm talking about can be found here: https://gist.github.com/447191
Its purpose is to start, stop and restart a server. That's pretty standard stuff, so it's worth taking some time to understand it. I commented those lines where I am unsure about the meaning or that I completely don't understand, hoping that someone could give me some explanation.
#!/bin/bash
#
BASE=/tmp
PID=$BASE/app.pid
LOG=$BASE/app.log
ERROR=$BASE/app-error.log
PORT=11211
LISTEN_IP='0.0.0.0'
MEM_SIZE=4
CMD='memcached'
# Does this mean, that the COMMAND variable can adopt different values, depending on
# what is entered as parameter? "memcached" is chosen by default, port, ip address and
# memory size are options, but what is -v?
COMMAND="$CMD -p $PORT -l $LISTEN_IP -m $MEM_SIZE -v"
USR=user
status() {
echo
echo "==== Status"
if [ -f $PID ]
then
echo
echo "Pid file: $( cat $PID ) [$PID]"
echo
# ps -ef: Display uid, pid, parent pid, recent CPU usage, process start time,
# controlling tty, elapsed CPU usage, and the associated command of all other processes
# that are owned by other users.
# The rest of this line I don't understand, especially grep -v grep
ps -ef | grep -v grep | grep $( cat $PID )
else
echo
echo "No Pid file"
fi
}
start() {
if [ -f $PID ]
then
echo
echo "Already started. PID: [$( cat $PID )]"
else
echo "==== Start"
# Lock file that indicates that no 2nd instance should be started
touch $PID
# COMMAND is called as a background process, ignores the SIGHUP signal, and writes its
# output to the LOG file.
if nohup $COMMAND >>$LOG 2>&1 &
# The pid of the last background is saved in the PID file
then echo $! >$PID
echo "Done."
echo "$(date '+%Y-%m-%d %X'): START" >>$LOG
else echo "Error... "
/bin/rm $PID
fi
fi
}
# I don't understand this function :-(
kill_cmd() {
SIGNAL=""; MSG="Killing "
while true
do
LIST=`ps -ef | grep -v grep | grep $CMD | grep -w $USR | awk '{print $2}'`
if [ "$LIST" ]
then
echo; echo "$MSG $LIST" ; echo
echo $LIST | xargs kill $SIGNAL
# Why this sleep command?
sleep 2
SIGNAL="-9" ; MSG="Killing $SIGNAL"
if [ -f $PID ]
then
/bin/rm $PID
fi
else
echo; echo "All killed..." ; echo
break
fi
done
}
stop() {
echo "==== Stop"
if [ -f $PID ]
then
if kill $( cat $PID )
then echo "Done."
echo "$(date '+%Y-%m-%d %X'): STOP" >>$LOG
fi
/bin/rm $PID
kill_cmd
else
echo "No pid file. Already stopped?"
fi
}
case "$1" in
'start')
start
;;
'stop')
stop
;;
'restart')
stop ; echo "Sleeping..."; sleep 1 ;
start
;;
'status')
status
;;
*)
echo
echo "Usage: $0 { start | stop | restart | status }"
echo
exit 1
;;
esac
exit 0
1)
COMMAND="$CMD -p $PORT -l $LISTEN_IP -m $MEM_SIZE -v" — -v in Unix tradition very often is a shortcut for --verbose. All those dollar signs are variable expansion (their text values are inserted into the string assigned to new variable COMMAND).
2)
ps -ef | grep -v grep | grep $( cat $PID ) - it's a pipe: ps redirects its output to grep which outputs to another grep and the end result is printed to the standard output.
grep -v grep means "take all lines that do not contain 'grep'" (grep itself is a process, so you need to exclude it from output of ps). $( $command ) is a way to run command and insert its standard output into this place of script (in this case: cat $PID will show contents of file with name $PID).
3) kill_cmd.
This function is an endless loop trying to kill the LIST of 'memcached' processes' PIDs. First, it sends the TERM signal (politely asking each process in $LIST to quit, saving its work and shutting down correctly), gives them 2 seconds (sleep 2) to do their shutdown work, and then makes sure that all remaining processes are killed with signal KILL (-9), which slays a process immediately using OS facilities (if a process has not finished its shutdown work in 2 seconds, it is considered hung). If slaying with kill -9 was successful, it removes the PID file and quits the loop.
ps -ef | grep -v grep | grep $CMD | grep -w $USR | awk '{print $2}' prints all PIDs of processes with name $CMD ('memcached') and user $USR ('user'). The -w option of grep means 'match the whole word only' (this excludes situations where the sought name is part of another process name, like 'fakememcached'). awk is a little interpreter most often used to take word number N from every line of input (you can consider it a selector for a column of a text table). In this case, it prints the second word of every ps output line, which is the PID.
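For what it's worth, on systems with procps the whole ps | grep | awk pipeline can usually be replaced by pgrep/pkill, which match on the command name and user directly. A hedged equivalent (not part of the original script):

# PIDs of processes whose exact name is $CMD, owned by $USR
LIST=$(pgrep -x -u "$USR" "$CMD")
# or signal them in one step instead of piping through xargs:
pkill $SIGNAL -x -u "$USR" "$CMD"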
If you have any other questions, I'll add answers below.
Here is an explanation of the pieces of code you do not understand:
1.
# Does this mean, that the COMMAND variable can adopt different values, depending on
# what is entered as parameter? "memcached" is chosen by default, port, ip address and
# memory size are options, but what is -v?
COMMAND="$CMD -p $PORT -l $LISTEN_IP -m $MEM_SIZE -v"
In the man page, near -v:
$ man memcached
...
-v Be verbose during the event loop; print out errors and warnings.
...
2.
# ps -ef: Display uid, pid, parent pid, recent CPU usage, process start time,
# controling tty, elapsed CPU usage, and the associated command of all other processes
# that are owned by other users.
# The rest of this line I don't understand, especially grep -v grep
ps -ef | grep -v grep | grep $( cat $PID )
Print details of all processes (ps -ef), exclude the line containing grep (grep -v grep) (since you are running grep, it will show up itself in the process list) and filter by the text found in the file named $PID (/tmp/app.pid) (grep $( cat $PID )).
3.
# I don't understand this function :-(
kill_cmd() {
SIGNAL=""; MSG="Killing "
while true
do
## create a list with all the pid numbers filtered by command (memcached) and user ($USR)
LIST=`ps -ef | grep -v grep | grep $CMD | grep -w $USR | awk '{print $2}'`
## if $LIST is not empty... proceed
if [ "$LIST" ]
then
echo; echo "$MSG $LIST" ; echo
## kill all the processes in the $LIST (xargs will get the list from the pipe and put it at the end of the kill command; something like this < kill $SIGNAL $LIST > )
echo $LIST | xargs kill $SIGNAL
# Why this sleep command?
## some processes might take one or two seconds to perish
sleep 2
SIGNAL="-9" ; MSG="Killing $SIGNAL"
## if the file $PID still exists, delete it
if [ -f $PID ]
then
/bin/rm $PID
fi
## if list is empty
else
echo; echo "All killed..." ; echo
## get out of the while loop
break
fi
done
}
This function will kill all the processes related to memcached slowly and painfully (actually quite the opposite).
Above are the explanations.
