Remove all but the latest X files from sftp via bash script - linux

I have a working bash script that creates backups and uploads them as a tar archive to a remote sftp server.
After the upload, the script should remove all but the latest 20 backup files. I can't use pipes, grep, or similar tools within the sftp session, and I haven't figured out how to capture the file-listing result in my bash script.
export SSHPASS=$(cat /etc/backup/pw)
SFTPCONNECTION=$(cat /etc/backup/sftp-connection)
sshpass -e sftp $SFTPCONNECTION - << SOMEDELIMITER
ls -lt backup-*.tar
quit
SOMEDELIMITER
There is this nice one-liner, but I did not figure out how to use it in my case (sftp).

This script deletes all tar files in the given directory except the latest 20. The -t flag sorts by time and date. The <<< redirect expands $RESULT and feeds it into the stdin of the while loop. I'm not entirely pleased with it as it has to create multiple connections, but with sftp I don't believe there is another way.
RESULT=`echo "ls -t path/to/old_backups/" | sftp -i ~/.ssh/your_ssh_key user@server.com | grep tar`
i=0
max=20
while read -r line; do
    (( i++ ))
    if (( i > max )); then
        echo "DELETE $i...$line"
        echo "rm $line" | sftp -i ~/.ssh/your_ssh_key user@server.com
    fi
done <<< "$RESULT"

Thanks to codelitt I went with this solution:
export SSHPASS=$(cat /etc/backup/pw)
SFTPCONNECTION="username@host"
RESULT=`echo "ls -tl backup*.tar" | sshpass -e sftp $SFTPCONNECTION | grep -oP "backup.*\.tar"`
i=0
max=24
while read -r line; do
    # echo "$line "
    (( i++ ))
    if (( i > max )); then
        echo "DELETE $i...$line"
        echo "rm $line" | sshpass -e sftp $SFTPCONNECTION
    fi
done <<< "$RESULT"
It's a slight modification of his version:
- it counts/removes only files named backup*.tar
- it uses ls -l (for line-based listings)
- it uses sshpass instead of certificate-based authentication; the sftp password is read from /etc/backup/pw
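For completeness: since this loop opens one sftp connection per deleted file, the deletions could also be batched into a single extra connection. Below is a sketch (untested, assuming the same password setup and that the backup filenames contain no whitespace):

export SSHPASS=$(cat /etc/backup/pw)
SFTPCONNECTION=$(cat /etc/backup/sftp-connection)
max=20

# Newest first; everything after the first $max lines is a deletion candidate
OLD=$(echo "ls -tl backup*.tar" | sshpass -e sftp "$SFTPCONNECTION" | grep -oP "backup.*\.tar" | tail -n +"$((max + 1))")

# Feed all rm commands as one batch file on stdin to a single sftp session
if [ -n "$OLD" ]; then
    printf 'rm %s\n' $OLD | sshpass -e sftp -b - "$SFTPCONNECTION"
fi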

Related

I have to read a config file and, after reading it, run an scp command to fetch details from the servers listed in the config

I have a config file that has details like
#pem_file username ip destination
./test.pem ec2-user 00.00.00.11 /Desktop/new/
./test1.pem ec2-user 00.00.00.22 /Desktop/new/
Now I need to know how I can fix the below script to get all the details using scp:
while read "$(cat $conf | awk '{split($0,array,"\n")} END{print array[]}')"; do
scp -i array[1] array[2]#array[3]:/home/ubuntu/documents/xyz.xml array[4]
done
Please help me.
Build your while read like this:
#!/bin/bash
while read -r file user ip destination
do
    echo "$file"
    echo "$user"
    echo "$ip"
    echo "$destination"
    echo ""
done < <(grep -Ev "^#" "$conffile")
Use these variables to build your scp command.
The grep is to remove commented out lines.
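For example, the scp command from the question can then be assembled from those variables. A sketch (the remote path is taken from the question):

#!/bin/bash
while read -r file user ip destination
do
    # -i uses the per-server pem file from the first column of the config
    scp -i "$file" "$user@$ip:/home/ubuntu/documents/xyz.xml" "$destination"
done < <(grep -Ev "^#" "$conffile")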
If you prefer using an array, you can do this:
#!/bin/bash
while read -ra line
do
    echo "${line[0]}"
    echo "${line[1]}"
    echo "${line[2]}"
    echo "${line[3]}"
    echo ""
done < <(grep -Ev "^#" "$conffile")
See https://mywiki.wooledge.org/BashFAQ/001 for looping on files and commands output using while.

WHOIS BASH Script sometimes not fetching data

My whois bash script works for some domains but not for others.
When I run the command directly in my terminal for the same domain, I do see output. Also, sometimes the script does not run properly and gets stuck, and I have to interrupt it.
Why is that, and how can I fix it?
Let's say the domain.txt file contains: gmail.com, zoom.us, facebook.com, bank.com etc.
The script is:
#!/bin/bash
echo "Please enter the full path of txt file"
read path
filename=$path
while read line
do
    echo "Checking domain $line"
    a=$(whois $line | grep -i -e "Creation Date" | head -1)
    b=$(whois $line | grep -i -e "no match" | head -1)
    echo "$a"$line >> /root/outputdomain.csv
done <$filename
echo "file has been processed successfully."
A sample input txt file is:
linkedin.com
zoom.us
sbi.co.in
facebook.com
sap.com
hsbc.com
Expected Output is:
Creation Date: 2002-11-02T15:38:11Z linkedin.com
Creation Date: 2002-04-24T15:03:39Z zoom.us
What is working for me currently:
Creation Date: 2002-11-02T15:38:11Z linkedin.com
Creation Date: 1997-03-29T05:00:00Z facebook.com
But no output for zoom.us, sbi.co.in.
If I run the command below, I am able to fetch the required data:
$ whois zoom.us | grep -E "Creation Date" | head -1
Creation Date: 2002-04-24T15:03:39Z
I don't use/know whois, but based on your post this is what I came up with.
#!/usr/bin/env bash
shopt -s extglob
echo "Please enter the full path of txt file"
read -r path
filename=$path
while read -r line; do
    printf 'Processing %s...\n' "$line"
    if a=$(whois "$line" | grep --line-buffered -Fi -m1 "creation date"); then
        printf '%s %s\n' "${a##*+([[:blank:]])}" "$line" >> outputdomain.csv
    else
        printf '%s No domain match\n' "$line" >> outputdomain.csv
    fi
    sleep 5
done < "$filename"
-m might not be POSIX, but it is available in both GNU and BSD grep.
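If -m is not available, head gives a portable equivalent. A sketch (the if must then test the variable, because the pipeline's exit status is head's, not grep's):

# head -n 1 keeps only the first match, like grep -m1
a=$(whois "$line" | grep -i "creation date" | head -n 1)
if [ -n "$a" ]; then
    printf '%s %s\n' "$a" "$line" >> outputdomain.csv
fi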

bash unable to export the variable to script

I am stuck with my piece of code; any help is appreciated. This is the code I am executing from Jenkins.
#!/bin/bash
aws ec2 describe-instances --query 'Reservations[*].Instances[*].[Tags[?Key==`Name`].Value|[0],PrivateIpAddress]' --output text | column | grep devtools > devtools
ip=`awk '{print $2}' devtools`
echo $ip
ssh ubuntu@$ip -n "aws s3 cp s3://bucket/$userlistlocation . --region eu-central-1"
cd builds/${BUILD_NUMBER}/
scp * ubuntu@$ip:/home/ubuntu
if [ $port_type == "normal" ]; then
    if [ $duplicate_value == "no" ]; then
        if [ $userlist == "uuid" ]; then
            ssh ubuntu@$ip -n "export thread_size='"'$thread_size'"'; filename=$(echo $userlistlocation | sed -E 's/.*\/(.*)$/\1/') ; echo $filename ; echo filename='"'$filename'"'; chmod +x uuidwithduplicate.sh; ./uuidwithduplicate.sh"
        fi
    fi
fi
userlistlocation is a user input; it can be in any format, e.g. /rahul/december/file.csv or simply file.csv.
Through the sed command I am able to get the file name and store it in the "filename" variable.
But when I try to echo $filename, it prints the literal text $filename when it should print file.csv.
This file.csv will be the source file for one more script, uuidwithduplicate.sh.
Both userlistlocation and thread_size are specified through Jenkins job parameters.
I am not facing issues while exporting thread_size; the only issue is with filename.
Breaking down the ssh command:
ssh ubuntu@$ip -n "export thread_size='"'$thread_size'"'; filename=$(echo $userlistlocation | sed -E 's/.*\/(.*)$/\1/') ; echo $filename ; echo filename='"'$filename'"'; chmod +x uuidwithduplicate.sh; ./uuidwithduplicate.sh"
into segments of single- and double-quoted items:
"export thread_size='"
'$thread_size'
"#'; filename=$(echo $userlistlocation | sed -E 's/./(.)$/\1/') ; echo $filename ; echo filename='#"
'$filename'
"'; chmod +x uuidwithduplicate.sh; ./uuidwithduplicate.sh"
Note: on the 3rd token, a '#' was added between the double quote and the single quote for readability. It is not part of the command.
On the surface there are a few issues:
- The '$thread_size' should be "$thread_size" to enable expansion.
- The echo $filename sits inside double quotes, so $filename is expanded on the local host (where it is empty), whereas the assignment filename=$(echo ...) takes effect on the remote host.
- There are two echos for filename; it is not clear why.
The proposed solution is to move the setting of filename to the local host (simplifying the command) and to put thread_size in double quotes. The complete command can then be a single double-quoted string:
filename=$(echo "$userlistlocation" | sed -E 's/.*\/(.*)$/\1/')
ssh ubuntu@"$ip" -n "export thread_size='$thread_size'; echo '$filename' ; echo filename='$filename'; chmod +x uuidwithduplicate.sh; ./uuidwithduplicate.sh"
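As an aside, the sed call can be replaced by bash parameter expansion, which removes one more moving part. A sketch, again assuming ubuntu@$ip and the variables from the question:

# ${var##*/} strips everything up to the last slash, like the sed expression
filename=${userlistlocation##*/}
# both values are expanded locally; the single quotes guard spaces on the remote side
ssh ubuntu@"$ip" -n "export thread_size='$thread_size' filename='$filename'; chmod +x uuidwithduplicate.sh; ./uuidwithduplicate.sh"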

grep filenames from an exclude log

I have a problem with my bash script. I want to exclude from processing the files that are listed in exclude.log. After a file is processed, it is appended to the exclude log.
for I in `ls $1 | grep ./exclude.log -v`
do
    echo "Processing ...."
    echo $I >> ./exclude.log
done
$I is not assigned a value. Also, your grep is not formulated correctly. You possibly want:
LIST=$( grep -v -f /path/to/exclude.log * )
for I in $LIST
do
    echo "Processing ...."
    echo $I >> /path/to/exclude.log
done
Make sure you don't have any empty lines in exclude.log
You can use this while loop:
while read -r l; do
    echo "$l"
done < <(fgrep -v -wf exclude.log <(printf "%s\n" "$1"/*))
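Putting the pieces together, the loop and the exclude log could be combined like this. A sketch, assuming exclude.log stores one path per line, exactly as printed by the printf:

#!/bin/bash
touch exclude.log                      # make sure the exclude log exists
while read -r f; do
    echo "Processing $f ..."
    # ... actual processing of "$f" goes here ...
    echo "$f" >> exclude.log           # remember it for the next run
done < <(grep -Fvx -f exclude.log <(printf '%s\n' "$1"/*))
# -F literal strings, -x whole-line match, -v invert; beware empty lines in exclude.log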

Shell script for remote copy and then processing the file

The script below works fine. But when I try to add a command to do a remote copy and then assign the variable FILENAME to the file received from the remote copy, the while loop doesn't work. I am quite new to scripting, so I am not able to find out what I'm missing. Please help!
#!/bin/sh
#SCRIPT: File processing
#PURPOSE: Process a file line by line with redirected while-read loop.
SSID=$1
ASID=$2
##rcp server0:/oracle/v11//dbs/${SSID}_ora_dir.lst /users/global/rahul/${ASID}_clone_dir.lst
##FILENAME=/users/global/rahul/${ASID}_clone_dir.lst
count=0
while read LINE
do
    echo $LINE | sed -e "s/${SSID}/${ASID}/g"
    count=`expr $count + 1`
done < $FILENAME
echo -e "\nTotal $count Lines read"
grep -v -e "pattern3" -e "pattern5" -e "pattern6" -e "pattern7" -e "pattern8" -e "pattern9" -e "pattern10" -e "pattern11" -e "pattern12" ${ASID}_.lst > test_remote.test
When you say "the while loop doesn't work": if you get an error message, you should include it in your question to give us a clue.
Are you sure the rcp command is successful? Does the file /users/global/rahul/${ASID}_clone_dir.lst exist after the rcp has completed?
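If in doubt, guard the copy with an exit-status check so a failed rcp is caught immediately. A sketch based on the commented-out lines from the question:

FILENAME=/users/global/rahul/${ASID}_clone_dir.lst
if ! rcp server0:/oracle/v11//dbs/${SSID}_ora_dir.lst "$FILENAME"; then
    echo "rcp failed, aborting" >&2
    exit 1
fi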
Btw your while loop is inefficient. This should be equivalent:
sed -e "s/${SSID}/${ASID}/g" < "$FILENAME"
count=$(wc -l "$FILENAME" | awk '{print $1}')
echo -e "\nTotal $count Lines read"
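Alternatively, a single awk pass can do the substitution and the counting at once. A sketch (note that gsub treats $SSID as a regular expression, not a fixed string):

awk -v ssid="$SSID" -v asid="$ASID" '{ gsub(ssid, asid); print } END { printf "\nTotal %d Lines read\n", NR }' "$FILENAME"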
