Shell script to automate SFTP transfer - Linux

I'm trying to create a shell script to automate an SFTP transfer from a source server to a remote server without prompting for a password. But when I run the script, I get output saying that no files were found for transfer (I have created files in the source directory). Here is my code:
tempfile="/tmp/sftpsync.$$"
count=0
trap "/bin/rm -f $tempfile" 0 1 15

if [ $# -eq 0 ] ; then
    echo "Usage: $0 user host path_to_src_dir remote_dir" >&2
    exit 1
fi
# Collect user input
user="$1"
server="$2"
source_dir="$3"
remote_dir="$4"
timestamp="$source_dir/.timestamp"

# Without both the source and remote directories, the script cannot run
if [[ -z $remote_dir ]] || [[ -z $source_dir ]]; then
    echo "Provide both the source and remote directory" >&2
    exit 1
fi
echo "cd $remote_dir" >> $tempfile
# timestamp file will not be available when executed for the very first time
if [ ! -f $timestamp ] ; then
# no timestamp file, upload all files
for filename in $source_dir/*
do
if [ -f "$filename" ] ; then
# Place the command to upload files in sftp batch file
echo "put -P \"$filename\"" >> $tempfile
# Increase the count value for every file found
count=$(( $count + 1 ))
fi
done
else
    # A timestamp file exists, so this is not the first run: look for newer files only
    for filename in $(find "$source_dir" -newer "$timestamp" -type f -print)
    do
        # Queue an upload command for each newer file
        echo "put -P \"$filename\"" >> "$tempfile"
        count=$(( count + 1 ))
    done
fi
# If no new files were found, do nothing
if [ $count -eq 0 ] ; then
    echo "$0: No files require uploading to $server" >&2
    exit 1
fi

# Close the sftp session at the end of the batch file
echo "quit" >> "$tempfile"
echo "Synchronizing: Found $count files in local folder to upload."

# Run the batch file over a single sftp connection; -b only works when no
# password prompt is needed, i.e. with key-based or otherwise non-interactive auth
sftp -b "$tempfile" "$user@$server"
echo "Done. All files synchronized up with $server"

# Record the time of this upload for the next run
touch "$timestamp"

# Remove the sftp batch file
rm -f "$tempfile"
exit 0
But I always get the output "No files require uploading to $server". Is there an issue with my code, and how can it be fixed?
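A quick way to narrow this down is to run the two file-discovery steps by hand and check what they print. Here is a minimal diagnostic sketch, assuming the same layout as the script above (the source directory path is a placeholder):

source_dir="/path/to/src"      # placeholder: substitute your real source directory
timestamp="$source_dir/.timestamp"

# First-run branch: what does the glob match?
ls -l "$source_dir"/*

# Later runs: which files are newer than the timestamp?
find "$source_dir" -newer "$timestamp" -type f -print

# If the find prints nothing, the .timestamp file is newer than your test
# files; removing it forces a full upload on the next run.
# rm "$timestamp"

Note also that sftp -b requires non-interactive authentication, e.g. a public key installed on the server with ssh-copy-id user@server.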

Related

Need a solution for grep command for .gz files

I have file names starts with RACHEL_20180814_092356.csv.gz
and need to grep in format like RACHEL_20180814*.gz nd unzip it, but am unable too. Here is sample code I have been working on.need to also insert a date parameter, which changes with each day. tried using zgrep but I am out of luck! Any help please
# Process GMRA file
echo "Starting file get for Rachel.gz files"
for SUBDIR in prices; do
    set -A getlist `/bin/ls ${ENV_DIR_SCR}/bat/prices`
    if [ ! -z ${getlist[0]} ]; then
        for FILENAME in ${getlist[*]}; do
            echo "Found File ${ENV_DIR_SCR}/bat/prices/${FILENAME}"
            if [ `echo $FILENAME | grep "RACHEL*.gz"` ]; then
                $FILENAME = gunzip $FILENAME
                GETFILES="$GETFILES ${FILENAME}"
                break
            fi
        done
    fi
done
echo "Completed file_get for RACHEL.gz files"
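For what it's worth, here is a minimal sketch of a working version of that loop, assuming ksh and that ENV_DIR_SCR is already set; DATESTAMP is a hypothetical variable standing in for the date parameter that changes each day:

DATESTAMP=$(date +%Y%m%d)      # e.g. 20180814
for FILENAME in ${ENV_DIR_SCR}/bat/prices/RACHEL_${DATESTAMP}*.csv.gz; do
    [ -f "$FILENAME" ] || continue           # the glob matched nothing
    echo "Found file $FILENAME"
    gunzip "$FILENAME"                       # leaves RACHEL_..._092356.csv behind
    GETFILES="$GETFILES ${FILENAME%.gz}"     # record the unzipped name
done
echo "Completed file_get for RACHEL.gz files"

The glob replaces both the ls/set -A indirection and the grep, and ${FILENAME%.gz} strips the .gz suffix that gunzip removes from the file on disk.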

Linux/sh: How to list files one by one, compress each (with p7zip, without saving the file to disk) and upload to an FTP server (with curl/ncftp)?

How do I list all files, one by one, in a specific folder, compress each (with p7zip, without saving the archive to disk), and upload them to an FTP server (with curl/ncftp) with the same folder structure?
The script below works perfectly, but I don't want to save the 7z file to disk each time, because I always have to delete them all after they are uploaded. I would prefer to pipe stdout from 7zip into curl; how can I do that?
#!/bin/sh
FOLDER="/volume3/backup_3/kopia_nas/tmp"
BACKUP_DIR="/volume3/backup_3/kopia_nas/tmp2"
FTP_HOST=""
FTP_USER=""
FTP_PASS=""
FTP_PORT="21"
PASSWORD="abc123"
FTP_FOLDER="/backup2"
#####################################################################
echo "[$(date +'%d-%m-%Y %H:%M:%S')] starting..."
echo ""
/usr/bin/find "${FOLDER}" -type f | while read line; do
    # "$line" is path+file, "${line##*/}" is the file, "${line%/*}" is the path
    /usr/bin/p7zip/7za a "${BACKUP_DIR}${line}.7z" "${line}" -t7z -ms=off -m0=Copy -mhe -mmt -mx0 -p"${PASSWORD}"
    curl -s --disable-epsv -v -T "${BACKUP_DIR}${line}.7z" -u "${FTP_USER}:${FTP_PASS}" "ftp://${FTP_HOST}/${FTP_FOLDER}${line%/*}/" --ftp-create-dirs
    # curl switches: -S show errors, -s silent mode, -T upload file, -v verbose; 7za -an means no archive name
    #/usr/bin/ncftp/ncftpput -m -u -c "${FTP_USER}" -p "${FTP_PASS}" -P "${FTP_PORT}" "${FTP_HOST}" "${FTP_FOLDER}${line%/*}/" "${line##*/}.7z"
    # if [ $? -ne 0 ]; then echo "[$(date +'%d-%m-%Y %H:%M:%S')] Upload failed"; fi
done
#rm -rf "${BACKUP_DIR}/" #delete temporary folder
echo ""
echo "[$(date +'%d-%m-%Y %H:%M:%S')] completed..."
exit 0
I tried this, but it doesn't work for me:
/usr/bin/p7zip/7za a -an -t7z -ms=off -m0=Copy -mhe -mmt -mx0 -so -p"${PASSWORD}" | curl -S --disable-epsv -v -T - -u "${FTP_USER}:${FTP_PASS}" "ftp://${FTP_HOST}/${FTP_FOLDER}${line}/" --ftp-create-dirs;
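Two hedged observations, not from the original post: the attempt above never names an input file (the loop's "${line}" is missing after the switches), and the 7z container finalizes its header by seeking back over the output, so -t7z may refuse to write to stdout at all. If streaming turns out to be a dead end, a grounded fallback that still avoids archives piling up on disk is to delete each one as soon as its upload succeeds, using the same variables as the script above:

/usr/bin/p7zip/7za a "${BACKUP_DIR}${line}.7z" "${line}" -t7z -ms=off -m0=Copy -mhe -mmt -mx0 -p"${PASSWORD}"
curl -s --disable-epsv -T "${BACKUP_DIR}${line}.7z" -u "${FTP_USER}:${FTP_PASS}" \
    "ftp://${FTP_HOST}/${FTP_FOLDER}${line%/*}/" --ftp-create-dirs \
  && rm -f "${BACKUP_DIR}${line}.7z"    # remove the archive once the upload succeeds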

Restoring a file to its original location from a log file of paths

I have made a script to move user-specified files to a dustbin and create a log file with their original paths. Now I want to create a script where the user only has to input the name of a file to restore it to where it was before, and I cannot figure that out.
Here is the code so far:
delete script:
#!/bin/sh
# Checks whether the user has entered anything; if not, displays a usage message.
if [ $# -eq 0 ]; then   # $# is the number of arguments
    echo "Usage: del <pathname>" >&2
    exit 2;
fi

# Location of the dustbin
dustbin="$HOME/dustbin"
paths="$HOME/Paths"

# Create the dustbin if it doesn't exist yet
if [[ ! -d $dustbin ]]; then
    mkdir $dustbin
fi

# Create a Paths folder to store the original location of file(s)
if [[ ! -d $paths ]]; then
    mkdir $paths
fi

for file in "$@"; do            # loop over all the arguments
    if [[ -e $file ]]; then     # check that the file exists
        # Log the original path, then move the file to the dustbin
        find $file >> $HOME/Paths/paths.txt && mv "$file" "$dustbin"
        echo "Deleting file(s) $file"
    else
        echo "The file $file doesn't exist."
    fi
done
restore script:
With this, I need to search for the file in the dustbin, match the file name against the paths text file that holds the file's original path, and move it to that path.
#!/bin/sh
# Checks whether the user has entered anything; if not, displays a usage message.
if [ $# -eq 0 ]; then
    echo "Usage: restore <file name>" >&2
    exit 2;
fi

# Check that the log file paths.txt exists
paths="$HOME/Paths/paths.txt"
if [[ ! -f $paths ]]; then
    echo "The log file paths.txt doesn't exist. Nothing to restore"
fi

# Take the user input and check that the dustbin exists.
for file in "$@"; do
    if [[ ! -d dustbin ]]; then
        echo "dustbin doesn't exist"
    else
        cd $HOME/dustbin
    fi
    # Look for the user-specified file.
    if [[ ! -e $file ]]; then
        echo "File $file doesn't exist"
    else
        # Restore the file to its original location
        restore="grep -n '$file' $paths" # no idea how to do it
        mv $file $restore
    fi
done
This is the part I have no idea how to do. I need it to read the user's input in $file from paths.txt and use the stored path to move $file from the dustbin to the location recorded in paths.txt.
#restores the file to the original location
restore="grep -n '$file' $paths" #no idea how to do it
mv $file $restore
So, I think you will want to move the file back to where it originally was using mv:
mv "dustbinPath/$file" "originalPath/$file"
This moves it from the dustbin path to the original path.
EDIT:
If you want to grep the paths file for it, you can set a variable to the output of a command, like:
originalPath=$(grep 'what_to_grep' file_to_grep.txt)
After that, use it in the mv above as appropriate (depending on whether the text file stores just the path or the path plus the file name) to move the file out.
You can read more about setting a variable to the output of a command here. You might run into problems if multiple lines match, however.
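A minimal sketch of that idea, assuming paths.txt stores one absolute path per line (as the realpath version below writes) and that each file name matches only one line:

file="$1"
paths="$HOME/Paths/paths.txt"
dustbin="$HOME/dustbin"

# Find the first logged path whose final component is the file name.
dest=$(grep "/$file$" "$paths" | head -n 1)

if [ -n "$dest" ]; then
    mv "$dustbin/$file" "$dest"    # put the file back where it was logged
else
    echo "No logged path for $file" >&2
fi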
In the del script I changed find to realpath, so it now looks like this:
# Gets just the file name
for file in "$@"; do            # loop over all the arguments
    if [[ -e $file ]]; then     # check that the file exists
        realpath $file >> $HOME/Paths/paths.txt && mv "$file" "$dustbin"
        echo "Deleting file `basename $file`"
    else
        echo "The file $file doesn't exist."
    fi
done
and in the restore script I added a new variable:
rest="$(grep -e $file $paths)" #looking in to the paths.txt file for filename match
#looks for the user specified file in the dustbin.
if [[ ! -e $dustbin/$file ]]; then
echo "File $file doesn't exist"
elif [[ -e $rest ]]; then #if the $file exists in the original path adds an extension .bak
mv $dustbin/$file $rest.bak
else
mv $dustbin/$file $rest #restores the file to the original location
echo "$file restored"
fi

Check that two files exist in a UNIX directory

Good Morning,
I am trying to write a Korn shell script to look inside a directory that contains loads of files and check that each file also exists with .orig on the end.
For example, if a file inside the directory is called mercury_1, there must also be a file called mercury_1.orig.
If there isn't, the script needs to move the mercury_1 file to another location; if the .orig file exists, it should do nothing and move on to the next file.
I am sure it is really simple, but I am not that experienced in writing Linux scripts, and help would be greatly appreciated!
Here's a small ksh snippet to check whether a file exists in the current directory:
fname=mercury_1
if [ -f $fname ]
then
    echo "file exists"
else
    echo "file doesn't exist"
fi
Edit: here is an updated script that does what was asked:
#!/usr/bin/ksh
if [ ! $# -eq 1 ]; then
    echo "provide dir"
    exit
fi
dir=$1
cd $dir

# Process file names not ending with .orig
for fname in `ls | grep -v ".orig$"`
do
    echo "processing file $fname"
    if [ -d $fname ]; then    # skip directories
        continue
    fi
    if [ -f "$fname.orig" ]; then   # the matching .orig file is present
        echo "file exists"
        continue
    else
        echo "moving"
        mv $fname /tmp
    fi
done
Hope it helps!
You can use the script below.
script.sh:
#!/bin/sh
if [ ! $# -eq 2 ]; then
    echo "error";
    exit;
fi
for File in $1/*
do
    Tfile=${File%%.*}
    if [ ! -f $Tfile.orig ]; then
        echo "$File"
        mv $File $2/
    fi
done
Usage:
./script.sh <search directory> <destination dir if file not present>
Here, for each file (with its extension stripped) the script checks whether a matching .orig file is present; if not, it moves the file to the destination directory, otherwise it does nothing.
The extension is stripped so that the same steps are not repeated for the *.orig files themselves.
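As an aside (not part of the original answer), ${File%%.*} removes the longest suffix starting at the first dot, which is what maps a .orig file back to its base name:

File="mercury_1.orig"
echo "${File%%.*}"    # -> mercury_1
File="mercury_1"
echo "${File%%.*}"    # -> mercury_1 (no dot, so unchanged)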
I tested this on OS X (mv should not differ much from Linux). My test directory is zbar and the destination is the /tmp directory.
#!/bin/bash
FILES=zbar
cd $FILES
array=$(ls -p | grep -v "/")   # list regular files only: ls -p marks directories with a trailing /
echo $array
for f in $array                # loop over the files and look for a matching .orig
do
    if [ -e "$f.orig" ]
    then
        echo "found $f.orig"
    else
        mv -f "$f" "/tmp"
    fi
done

Do you know how I can create backup files automatically?

I back up files a few times a day on Ubuntu/Linux with the command tar -cpvzf ~/Backup/backup_file_name.tar.gz directory_to_backup/. I want to create a script that builds the file name automatically: check whether
~/Backup/backup_file_name_`date +"%Y-%m-%d"`_a.tar.gz
exists; if it does, replace "_a" with "_b", and so on through the letters up to "z". Create the first backup file that doesn't exist. If all the files up to "z" exist, then add "_1" to the file name (keeping the "_z") and count up through the numbers until a name is free. Never change an existing file; only create new backup files. Do you know how to create such a script?
You can do something like:
for l in {a..z} ; do
    [[ -f ~/Backup/backup_file_name_`date +"%Y-%m-%d"`_${l}.tar.gz ]] && continue
    backupname=~/Backup/backup_file_name_`date +"%Y-%m-%d"`_${l}.tar.gz && break
done
# test whether $backupname was actually set: what if "z" is already used? I'm leaving this to you
# then back up as usual
tar -cpvzf $backupname directory_to_backup/
This iterates over the letters and, whenever the file for a letter already exists, skips setting the backupname variable.
OK, I found a solution. I created the file ~/scripts/backup.sh:
#!/bin/bash
working_dir=`pwd`
backupname=""
if [ -z "$backupname" ]; then
    for l in {a..z} ; do
        if [ ! -f ~/Backup/backup_file_name_`date +"%Y-%m-%d"`_${l}.tar.gz ]; then
            backupname=~/Backup/backup_file_name_`date +"%Y-%m-%d"`_${l}.tar.gz
            break
        fi
    done
fi
if [ -z "$backupname" ]; then
    l="z"
    for (( i = 1; i <= 1000; i++ )); do
        if [ ! -f ~/Backup/backup_file_name_`date +"%Y-%m-%d"`_${l}_${i}.tar.gz ]; then
            backupname=~/Backup/backup_file_name_`date +"%Y-%m-%d"`_${l}_${i}.tar.gz
            break
        fi
    done
fi
if [ ! -z "$backupname" ]; then
    cd ~/projects/
    ~/scripts/tar.sh $backupname directory_to_backup/
    cd $working_dir
else
    echo "Oops! can't create backup file name."
fi
exit
The file ~/scripts/tar.sh contains this script:
#!/bin/bash
if [ -f "$1" ]; then
    echo "Oops! backup file was already here."
    exit
fi
tar -cpvzf "$1" $2 $3 $4 $5
Now I just have to type ~/scripts/backup.sh and the script backs up my files.
Create a script which saves the file with a date in the name, like
~/Backup/backup_file_name_${date}.tar.gz
and run that script as a cron job if you want to take backups at a specific interval, or run it manually if you don't have such a requirement.
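For example, a crontab entry along these lines would run it every six hours (a sketch, assuming the script lives at ~/scripts/backup.sh as in the answer above; cron sets $HOME for the job):

# m h dom mon dow  command
0 */6 * * *        $HOME/scripts/backup.sh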
