Loading the 2 most recent files from a folder and uploading them to an FTP server from Linux - linux

Good afternoon everyone,
We have the following setup:
On server X, files appear every 15 minutes (normally). These files are of 2 types: .csv and .log.
We need to grab the 2 most recent files and bring them to server Q, renaming them, for example, to replace ":" with "-";
then load them onto an FTP server.
The code is something like this at the moment:
#!/bin/bash
X=ip.address
USER=myuser
SRC_DIR=/home/user/GETFILES/temp/
DEST_DIR=/home/user/GETFILES/input
date1=`date +%Y%m%d%H%M`
echo "======================================="
echo "======================================="
echo "======================================="
echo "Here we go $date1 !"
echo "======================================="
echo "======================================="
echo "======================================="
rsync -av --ignore-existing --include="patternforfiles*.*" --exclude="*" -e "ssh -p port -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" $USER@$X:$SRC_DIR* $DEST_DIR
# Removing the ":"
cd temp/
for i in *:*; do
mv -- "$i" "${i//:/-}"
done
for i in `ls -Art SRC_DIR | grep filename | tail -n 2`;do
mv $SRC_DIR$i $DEST_DIR;
done
sshpass -p "password" sftp -oPort=port user@ftp.address << !
lcd /source/directory/
put filename*.*
bye
!
Then the rest of the script follows (cleaning tasks, etc.).
If I run the ls -Art part as a separate command it works fine: it brings me the last 2 files from the rsync path. If I run it inside the script it brings 2 other files than the ones obtained with the previous command... and I have no idea why.
What am I missing with this for loop over ls?
Thanks.

I suspect there is more than one file name per line in the ls -Art output.
dudi@IL105567WIN:~$ ls -Art
.profile .landscape .sudo_as_admin_successful dev .motd_shown .lesshsQ
.bashrc .vimrc .local .selected_editor .hushlogin .viminfo
.bash_logout .vim .cache .config .lesshst .bash_history
Using ls with the -1 or -l option solves this by printing each file name on a separate line.
dudi@IL105567WIN:~$ ls -1Art
.profile
.bashrc
.bash_logout
.landscape
.vimrc
.vim
.sudo_as_admin_successful
.local
.cache
dev
.selected_editor
.config
.motd_shown
.hushlogin
.lesshst
.lesshsQ
.viminfo
.bash_history
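Applied to the loop in the question, a minimal sketch (reusing the SRC_DIR, DEST_DIR and grep pattern from the script above, with the paths quoted) would be:
# -1 forces one name per line, -Art sorts oldest-first, tail keeps the 2 newest matches
for i in $(ls -1Art "$SRC_DIR" | grep filename | tail -n 2); do
    mv "$SRC_DIR$i" "$DEST_DIR"
done
This still assumes the matched file names themselves contain no spaces, since the for loop splits the command substitution on whitespace.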

Related

Remote to Local rolling backup script

I'm trying to create a bash script, run through crontab, that executes a remote-to-local backup. Everything works except my rolling-backup part, which should keep only 4 backups.
#!/bin/bash
dateForm=`date +%m-%d-%Y`
fileName=[redacted]-"$dateForm"
echo backup started for [redacted] on: $dateForm >> /home/backups/backLog.log
ls -tQ /home/backups/[redacted] | tail -n+5 | xargs -r rm
ssh root@[redacted] "tar jcf - -C /home/[redacted]/[redacted] ." > "/home/backups/[redacted]/$fileName".tar.bz2
if [ ! -f "/home/backups/[redacted]/$fileName.tar.bz2" ]
then
echo "something went wrong with the backup for $fileName!" >> /home/backups/backLog.log
else
echo "Backup completed for $fileName" >> /home/backups/backLog.log
fi
The ls line works just fine if executed inside that directory, but crontab is running the script and the script has to live outside the folder it targets, so I can't get the rm pointed at the correct directory using the piped ls.
I was able to come up with an interesting solution after studying the man page for ls a little more and utilizing find to grab the full paths.
ls -tQ $(find /home/backups/[redacted] -type f -name "*") | tail -n+5 | xargs -r rm
Just posting an answer for anyone who doesn't want a rolling-backup script that depends entirely on date formatting; this way there will ALWAYS be at least 4 backups in the targeted folder.
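As a side note, if you would rather not parse ls output at all, a rough alternative (a sketch assuming GNU find and coreutils; the path is a stand-in for the [redacted] backup directory) is to sort on modification time with find and delete everything past the newest four:
find /home/backups/target -maxdepth 1 -type f -printf '%T@ %p\0' \
  | sort -znr \
  | tail -z -n +5 \
  | cut -z -d' ' -f2- \
  | xargs -0 -r rm --
Because the records are NUL-separated, this keeps working even if the backup file names contain spaces.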

Script is trying to move a directory instead of a file - need assistance

The script appears to be correct. However, after FTP'ing all the files in the directory, it gives me an error saying it is trying to move a directory into a subdirectory of itself.
Any ideas on why this is occurring?
mysql -u ????? -p????? -h ????? db < $SCRIPT_FOLDER/script.sql > script.xls
echo "###############################################################################"
echo "FTP the files"
#for FILE in `ls $SOURCE_FOLDER/`
for FILE in $SOURCE_FOLDER/*.xls
do
echo "# Uploading $SOURCE_FOLDER/$FILE" >> /tmp/CasesReport.copy.out
sshpass -p ???? sftp -oBatchMode=no -b - user@ftp << END
cd /source/directory/
put $SOURCE_FOLDER/$FILE
bye
END
echo "Moving $FILE to $SOURCE_FOLDER/history/"
mv $SOURCE_FOLDER/$FILE $SOURCE_FOLDER/history/$FILE
$FILE already contains $SOURCE_FOLDER, so your put command is doubling the path.
Example
$ cd /tmp
$ touch foo.txt bar.txt
$ cd
$ SOURCE_FOLDER=/tmp
$ for FILE in $SOURCE_FOLDER/*.txt; do echo "put $SOURCE_FOLDER/$FILE"; done
put /tmp//tmp/bar.txt
put /tmp//tmp/foo.txt
Inside the for loop, just use "$FILE"
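Put together, a minimal corrected sketch of the loop (same names as in the question; since the glob already yields full paths, $FILE is used as-is):
for FILE in "$SOURCE_FOLDER"/*.xls
do
    echo "# Uploading $FILE" >> /tmp/CasesReport.copy.out
    sshpass -p ???? sftp -oBatchMode=no -b - user@ftp << END
cd /source/directory/
put $FILE
bye
END
    echo "Moving $FILE to $SOURCE_FOLDER/history/"
    # $FILE already includes $SOURCE_FOLDER, so move it straight into history/
    mv "$FILE" "$SOURCE_FOLDER/history/"
done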

Cat changes owner?

I have to edit a file owned by root via ssh. I add an entry to the file, preserve the first 9 lines, and reorder the rest into a temporary file. I know that > overwrites what's in the file (and that's what I want), but I need to preserve root as the owner of the file. How can I do this? Thanks!
#!/bin/bash
user=""
echo "User:"
read user
ssh xxxx@xxxx "
sed -i '\$a$user' file;
(head -n 9 file ; tail -n +10 file | sort) > temp;
cat temp > file;
rm -f temp
"
It's not cat that's changing the owner, it's sed. When you use sed -i, it does something like:
mv file file.bak
sed '\$a$user' file.bak > file
rm file.bak
As you can see, this creates a new file with the original file's name, and it's owned by the user that creates the file.
If you want to avoid this, make a copy of the original file instead of using the -i option.
cp file /tmp/file.$$
sed '\$a$user' /tmp/file.$$ > file
rm /tmp/file.$$
Or you could just put sed into your pipeline:
( head -n 9 file ; sed '\$a$user' file | tail -n +10 | sort ) > temp
cat temp > file
rm temp
It's been a while since I wrote in bash, but I think a starting point would be
chown root $file  # if you have a variable with the file name in it
or
chown root thefile.txt  # if you want it hard coded
Another variable in the equation is who owns the cat process: I think the owner of the running application determines the ownership of the files it outputs.
Maybe you could also try
$ sudo cat temp > file
because the session would then belong to root and therefore the output would belong to root???
I take it that the user who logs in cannot become root?
Then your best bet is to use dd:
dd if=tmpfile of=outfile
Of course, do all the ordering, sed-ing, awk-ing and grep-ing on your tmp file. dd in this usage is equivalent to > without creating a new file.
#!/bin/bash
user=""
echo "User:"
read user
ssh xxxx@xxxx "
sed -i '\$a$user' file;
(head -n 9 file ; tail -n +10 file | sort) > temp;
sudo mv temp file;
sudo chown root file
"
This will work better if, on the xxxx machine, the xxxx user you're logging in as has password-less access to sudo. You can do this by putting this entry in the /etc/sudoers file there:
xxxx ALL=NOPASSWD: ALL

bash tar error doesn't create tar.gz

I have the following bash script:
#DIR is something like /home/foo/foobar/test/ (no whitespace here, but it can also contain whitespace)
DIR="$( cd "$( dirname "$0" )" && pwd )"
#backup_name is read from a file
backup_name=FOOBAR
date=`date +%Y%m%d_%H%M_%S`
#subdirs is also read from the same file
subdirs="etc/ sbin/ bin/"
filename="$DIR/Backup_$backup_name"_"$date.tar.gz"
cd /
echo "filename: $filename"
echo "subdirs $subdirs"
cmd='tar czvf "'$filename'" '$subdirs
echo "cmd tar: $cmd"
$cmd
But I get following output:
filename: /home/foo/foobar/test/Backup_FOOBAR_20120322_1529_35.tar.gz
subdirs: etc/ sbin/ bin/
cmd tar: tar cfvz "/home/foo/foobar/test/Backup_FOOBAR_20120322_1529_35.tar.gz" etc/ sbin/ bin/
etc/
# ... list of files in etc
# but no files from sbin or bin directory
tar: "/home/foo/foobar/test/Backup_FOOBAR_20120322_1529_35.tar.gz": can open not execute: File or directory not found
tar: not recoverable error: abortion.
However, when I copy the echoed output of the tar command, cd to /, and paste it into the bash shell, it works:
tar cfvz "/home/foo/foobar/test/Backup_FOOBAR_20120322_1529_35.tar.gz" etc/ sbin/ bin/
etc/
Every variable is defined and there is no trailing newline.
I also tried $cmd with backticks.
The two variables backup_name and subdirs are read from a file (I did not include the reading process in the code).
edit: I just copied my script to a dir with no whitespace and changed the line:
cmd='tar czvf "'$filename'" '$subdirs
#to
cmd="tar czvf $filename $subdirs"
and it works now, but when I do the same in a dir whose path contains whitespace I still get the same error.
edit2: reading from file (the file is read before anything else happens)
config="config.txt"
local line
while read line
do
#points to next free element and declares it
config_lines[${#config_lines[@]}]=$line
done <$config
backup_name=${config_line[0]}
subdirs=${config_line[1]}
What is wrong with my bash script?
Short answer: see BashFAQ #050: I'm trying to put a command in a variable, but the complex cases always fail!.
Long answer: embedding quotes in a variable doesn't do anything useful, because when you use it (i.e. $cmd), bash parses quotes before replacing variables; by the time the quotes are there, it's too late for them to do any good. You do, however, have several options:
Don't bother with putting the command in a variable in the first place, just use it directly:
echo "filename: $filename"
echo "subdirs $subdirs"
tar czvf "$filename" $subdirs
If you really need to put it in a variable first, use an array rather than a plain text variable (and ideally, do the same with the subdirs list):
subdirs=(etc/ sbin/ bin/)
...
echo "filename: $filename"
echo "subdirs ${subdirs[*]}"
cmd=(tar czvf "$filename" "${subdirs[@]}")
printf "cmd tar:"
printf " %q" "${cmd[@]}" # Have to do some trickery to get it printed right
printf "\n"
"${cmd[@]}"
Instead of mucking about with messy quoting issues you could get the results you want a different way and, perhaps, save some time. How about something like this?
#!/usr/bin/env bash
# abusing set -v for fun and profit
tar_output=/tmp/$$.tarout
tar_command=/tmp/$$.tarcmd
tmp_script=/tmp/$$.script
dir="$(cd "$(dirname "$0")"; pwd)"
cat>"${tmp_script}"<<-'END'
datestamp=$(date +%Y%m%d_%H%M_%S)
subdirs=(etc sbin bin)
backup_name=FOOBAR
filename="$1/Backup_${backup_name}_${date}.tar.gz"
printf 'tar cmd: '
set -v
tar czvf "$filename" "${subdirs[#]}" 2>"$2"
set +v
END
bash "${tmp_script}" "$dir" "${tar_output}" 2>"${tar_command}"
cat "${tar_command}" | head -n 1 | sed -e 's/2>"\$2"$//'
cat "${tar_output}"
rm -f "${tmp_script}" "${tar_command}" "${tar_output}"
I apologize for nothing, but in the real world note that you'd want to make proper temp files.
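For instance, a small sketch of the same temp-file setup done with mktemp (GNU coreutils) instead of $$-based names:
tar_output=$(mktemp /tmp/tarout.XXXXXX)
tar_command=$(mktemp /tmp/tarcmd.XXXXXX)
tmp_script=$(mktemp /tmp/script.XXXXXX)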
If you execute the string $cmd, it won't work if "filename" embeds spaces.
You have to let bash create the arguments,
like this:
tar czvf "${filename}" $subdirs
You don't even need to put '\' in filename
OK, your original script did not work because the double quotes embedded in $cmd are kept as literal characters rather than being interpreted as quoting, so the filename is wrong: tar thinks that it's supposed to write to a file (relative to the current directory) literally named "/home/foo/foobar/test/Backup_FOOBAR_20120322_1529_35.tar.gz", i.e. the file name itself contains the double quotes and slashes!
tar cfz /this/file/does/nopt/exist .
tar: /this/file/does/nopt/exist: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
See the difference? There are no double quotes around the file name/path in tar's error message.
It worked when you copied and pasted the line because then the double quotes were interpreted by the shell.
Witness:
ls -l /tmp/screen-exchange
-rw-rw-rw- 1 aqn users 0 Mar 21 07:29 /tmp/screen-exchange
cmd='ls -l "'/tmp/screen-exchange'"'
$cmd
/bin/ls: "/tmp/screen-exchange": No such file or directory
eval $cmd
-rw-rw-rw- 1 aqn users 0 Mar 21 07:29 /tmp/screen-exchange
Of course, using eval won't by itself guard against filenames with whitespace in them. To guard against that, the command string needs to quote the variable reference, like so:
date>'file name with spaces'
file='file name with spaces' # this is the equivalent of your $filename
cmd='ls -l "$file"'
$cmd
ls: "$file": No such file or directory
eval $cmd
-rw-r--r-- 1 andyn SPICE\Domain Users 1083 Mar 22 15:28 a b
I would suggest you separate $cmd from $filename and $subdirs. I think the error comes from joining these strings. Also, using multiple variables inside one variable without proper quoting will cause errors.
This should work for you:
cmd="tar -zcvf"
subdirs="etc/ sbin/ bin/"
filename="${DIR}/Backup_${backup_name}_${date}.tar.gz"
$cmd $filename $subdirs
#DIR is something like /home/foo/foobar/test/ (no whitespace here, but it can also contain whitespace)
DIR="$( cd "$( dirname "$0" )" && pwd )"
backup_name=FOOBAR
date=`date +%Y%m%d_%H%M_%S`
subdirs="etc/ sbin/ bin/"
filename="$DIR/Backup_$backup_name"_"$date.tar.gz"
cd /
echo "filename: $filename"
echo "subdirs $subdirs"
cmd="tar zcvf $filename $subdirs"
echo "cmd tar: $cmd"
$cmd

rsync prints "skipping non-regular file" for what appears to be a regular directory

I back up my files using rsync. Right after a sync, I ran it expecting to see nothing, but instead it looked like it was skipping directories. I've (obviously) changed names, but I believe I've still captured all the information I could. What's happening here?
$ ls -l /source/backup/myfiles
drwxr-xr-x 2 me me 4096 2010-10-03 14:00 foo
drwxr-xr-x 2 me me 4096 2011-08-03 23:49 bar
drwxr-xr-x 2 me me 4096 2011-08-18 18:58 baz
$ ls -l /destination/backup/myfiles
drwxr-xr-x 2 me me 4096 2010-10-03 14:00 foo
drwxr-xr-x 2 me me 4096 2011-08-03 23:49 bar
drwxr-xr-x 2 me me 4096 2011-08-18 18:58 baz
$ file /source/backup/myfiles/foo
/source/backup/myfiles/foo/: directory
Then I sync (expecting no changes):
$ rsync -rtvp /source/backup /destination
sending incremental file list
backup/myfiles
skipping non-regular file "backup/myfiles/foo"
skipping non-regular file "backup/myfiles/bar"
And here's the weird part:
$ echo 'hi' > /source/backup/myfiles/foo/test
$ rsync -rtvp /source/backup /destination
sending incremental file list
backup/myfiles
backup/myfiles/foo
backup/myfiles/foo/test
skipping non-regular file "backup/myfiles/foo"
skipping non-regular file "backup/myfiles/bar"
So it worked:
$ ls -l /source/backup/myfiles/foo
-rw-r--r-- 1 me me 3126091 2010-06-15 22:22 IMGP1856.JPG
-rw-r--r-- 1 me me 3473038 2010-06-15 22:30 P1010615.JPG
-rw-r--r-- 1 me me 3 2011-08-24 13:53 test
$ ls -l /destination/backup/myfiles/foo
-rw-r--r-- 1 me me 3126091 2010-06-15 22:22 IMGP1856.JPG
-rw-r--r-- 1 me me 3473038 2010-06-15 22:30 P1010615.JPG
-rw-r--r-- 1 me me 3 2011-08-24 13:53 test
but still:
$ rsync -rtvp /source/backup /destination
sending incremental file list
backup/myfiles
skipping non-regular file "backup/myfiles/foo"
skipping non-regular file "backup/myfiles/bar"
Other notes:
My actual directories "foo" and "bar" do have spaces, but no other strange characters. Other directories have spaces and have no problem. I 'stat'-ed and saw no differences between the directories that don't rsync and the ones that do.
If you need more information, just ask.
Are you absolutely sure those individual files are not symbolic links?
Rsync has a few useful flags such as -l which will "copy symlinks as symlinks". Adding -l to your command:
rsync -rtvpl /source/backup /destination
I believe symlinks are skipped by default because they can be a security risk. Check the man page or --help for more info on this:
rsync --help | grep link
To verify these are symbolic links or pro-actively to find symbolic links you can use file or find:
$ file /path/to/file
/path/to/file: symbolic link to `/path/file`
$ find /path -type l
/path/to/file
Are you absolutely sure that it's not a symbolic link to a directory?
Try:
file /source/backup/myfiles/foo
to make sure it's a directory.
Also, it could very well be a loopback mount. Try:
mount
and make sure that /source/backup/myfiles/foo is not listed.
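If the mount output is too noisy, a more targeted check (a sketch, assuming util-linux is installed) would be:
findmnt --target /source/backup/myfiles/foo   # shows which filesystem contains the path
mountpoint /source/backup/myfiles/foo         # exits 0 only if the path itself is a mount point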
You should try the command below; most probably it will work for you:
rsync -ravz /source/backup /destination
You can try the following; it will work:
rsync -rtvp /source/backup /destination
I personally always use this syntax in my scripts, and it works a treat for backing up the entire system (skipping sys/*, proc/* & nfs4/*):
sudo rsync --delete --stats --exclude-from $EXCLUDE -rlptgoDv / $TARGET/ | tee -a $LOG
Here is my script run by root's cron daily:
#!/bin/bash
#
NFS="/nfs4"
HOSTNAME=`hostname`
TIMESTAMP=`date "+%Y%m%d_%H%M%S"`
EXCLUDE="/home/gcclinux/Backups/root-rsync.excludes"
TARGET="${NFS}/${HOSTNAME}/SYS"
LOGDIR="${NFS}/${HOSTNAME}/SYS-LOG"
CMD=`/usr/bin/stat -f -L -c %T ${NFS}`
## CHECK IF NFS IS MOUNTED...
if [[ ! $CMD == "nfs" ]];then
echo "NFS NOT MOUNTED"
exit 1
fi
## CHECK IF LOG DIRECTORY EXIST
if [ ! -d "$LOGDIR" ]; then
/bin/mkdir -p $LOGDIR
fi
## CREATE LOG HEADER
LOG=$LOGDIR/"rsync_result."$TIMESTAMP".txt"
echo "-------------------------------------------------------" | tee -a $LOG
echo `date` | tee -a $LOG
echo "" | tee -a $LOG
## START RUNNING BACKUP
/usr/bin/rsync --delete --stats --exclude-from $EXCLUDE -rlptgoDv / $TARGET/ | tee -a $LOG
In some cases, just copy the file to another location (like home), then try again.
