Linux script to create new directory with name as datestamp

The current script is functional, but the output is slightly off. This is what I have so far.
echo "Which client are we backing up today? Choose one below."
ls -la /usr/local/nagios/etc/objects/Clients | awk '{print $9}'
read varname
cd /usr/local/nagios/etc/objects/Clients/$varname
while true; do
read -p "Backup files located in nagtech/backup to current client directory? (y/n) " yn
case $yn in
[Yy]* ) cp -r /home/nagtech/backup $varname > mkdir$(date +m%-%d-%y); break;;
[Nn]* ) exit;;
* ) echo "Please answer yes or no.";;
esac
done
My intention is to have a new DIRECTORY created and NAMED with the current date stamp if the input is y. However, it's not quite there yet. Below is sample output when "y" is entered and $varname is set to "HELP".
drwxr-xr-x 4 root root 4096 Aug 24 17:45 .
drwxr-xr-x 16 root root 4096 Aug 22 18:36 ..
drwxr-xr-x 2 root root 4096 Aug 22 18:38 08.22.18
-rw-r--r-- 1 root root 0 Aug 24 17:45 mkdirm%d-18
drwxr-xr-x 3 root root 4096 Aug 24 17:45 HELP

The destination of cp is the second argument to the command. You're using the date as the name of a file to redirect the output, but cp doesn't produce any output.
You need to execute the mkdir command to create the directory, and then use that as the destination of the cp command. Note also that the month specifier is %m, not m%; that stray m% is why the literal m% ends up in the name mkdirm%d-18 above.
[Yy]* ) newdir=$(date +%m-%d-%y)
mkdir "$newdir"
cp -r /home/nagtech/backup "$newdir"
;;
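For completeness, a minimal sketch of the whole corrected branch (same logic, keeping the break from your original branch and quoting the variable):
[Yy]* ) newdir=$(date +%m-%d-%y)        # e.g. 08-24-18
mkdir -p "$newdir"
cp -r /home/nagtech/backup "$newdir"
break;;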

Related

Rsync Incremental Backup still copies all the files

I am currently writing a bash script for rsync. I am pretty sure I am doing something wrong, but I can't tell what it is. I will try to explain everything in detail, so hopefully someone can help me.
The goal of the script is to do full and incremental backups using rsync. Everything seems to work well, except for one crucial thing: even though I use the --link-dest parameter, it seems to still copy all the files. I have checked the file sizes with du -chs.
First here is my script:
#!/bin/sh
while getopts m:p: flags
do
case "$flags" in
m) mode=${OPTARG};;
p) prev=${OPTARG};;
*) echo "usage: $0 [-m] [-p]" >&2
exit 1 ;;
esac
done
date="$(date '+%Y-%m-%d')";
#Create Folders If They Do Not Exist (-p parameter)
mkdir -p /Backups/Full && mkdir -p /Backups/Inc
FullBackup() {
#Backup Content Of Website
mkdir -p /Backups/Full/$date/Web/html
rsync -av user@IP:/var/www/html/ /Backups/Full/$date/Web/html/
#Backup All Config Files NEEDED. Saving Storage Is Key ;)
mkdir -p /Backups/Full/$date/Web/etc
rsync -av user@IP:/etc/apache2/ /Backups/Full/$date/Web/etc/
#Backup Fileserver
mkdir -p /Backups/Full/$date/Fileserver
rsync -av user@IP:/srv/samba/private/ /Backups/Full/$date/Fileserver/
#Backup MongoDB
ssh user@IP /usr/bin/mongodump --out /home/DB
rsync -av root@BackupServerIP:/home/DB/ /Backups/Full/$date/DB
ssh user@IP rm -rf /home/DB
}
IncrementalBackup(){
Method="";
if [ "$prev" == "full" ]
then
Method="Full";
elif [ "$prev" == "inc" ]
then
Method="Inc";
fi
if [ -z "$prev" ]
then
echo "-p Parameter Empty";
else
#Get Latest Folder - Ignore the hacky method, it works.
cd /Backups/$Method
NewestBackup=$(find . ! -path . -type d | sort -nr | head -1 | sed s#^./##)
IFS='/'
read -a strarr <<< "$NewestBackup"
Latest_Backup="${strarr[0]}";
cd /Backups/
#Incremental-Backup Content Of Website
mkdir -p /Backups/Inc/$date/Web/html
rsync -av --link-dest /Backups/$Method/"$Latest_Backup"/Web/html/ user@IP:/var/www/html/ /Backups/Inc/$date/Web/html/
#Incremental-Backup All Config Files NEEDED
mkdir -p /Backups/Inc/$date/Web/etc
rsync -av --link-dest /Backups/$Method/"$Latest_Backup"/Web/etc/ user@IP:/etc/apache2/ /Backups/Inc/$date/Web/etc/
#Incremental-Backup Fileserver
mkdir -p /Backups/Inc/$date/Fileserver
rsync -av --link-dest /Backups/$Method/"$Latest_Backup"/Fileserver/ user@IP:/srv/samba/private/ /Backups/Inc/$date/Fileserver/
#Backup MongoDB
ssh user@IP /usr/bin/mongodump --out /home/DB
rsync -av root@BackupServerIP:/home/DB/ /Backups/Full/$date/DB
ssh user@IP rm -rf /home/DB
fi
}
if [ "$mode" == "full" ]
then
FullBackup;
elif [ "$mode" == "inc" ]
then
IncrementalBackup;
fi
The commands I used:
Full-Backup
bash script.sh -m full
Incremental
bash script.sh -m inc -p full
Executing the script is not giving any errors at all. As I mentioned above, it just seems like it's still copying all the files. Here are some tests I did.
Output of du -chs
root@Backup:/Backups# du -chs /Backups/Full/2021-11-20/*
36K /Backups/Full/2021-11-20/DB
6.5M /Backups/Full/2021-11-20/Fileserver
696K /Backups/Full/2021-11-20/Web
7.2M total
root@Backup:/Backups# du -chs /Backups/Inc/2021-11-20/*
36K /Backups/Inc/2021-11-20/DB
6.5M /Backups/Inc/2021-11-20/Fileserver
696K /Backups/Inc/2021-11-20/Web
7.2M total
Output of ls -li
root@Backup:/Backups# ls -li /Backups/Full/2021-11-20/
total 12
1290476 drwxr-xr-x 4 root root 4096 Nov 20 19:26 DB
1290445 drwxrwxr-x 6 root root 4096 Nov 20 18:54 Fileserver
1290246 drwxr-xr-x 4 root root 4096 Nov 20 19:26 Web
root@Backup:/Backups# ls -li /Backups/Inc/2021-11-20/
total 12
1290506 drwxr-xr-x 4 root root 4096 Nov 20 19:28 DB
1290496 drwxrwxr-x 6 root root 4096 Nov 20 18:54 Fileserver
1290486 drwxr-xr-x 4 root root 4096 Nov 20 19:28 Web
Rsync Output when doing the incremental backup and changing/adding a file
receiving incremental file list
./
lol.html
sent 53 bytes received 194 bytes 164.67 bytes/sec
total size is 606 speedup is 2.45
receiving incremental file list
./
sent 33 bytes received 5,468 bytes 11,002.00 bytes/sec
total size is 93,851 speedup is 17.06
receiving incremental file list
./
sent 36 bytes received 1,105 bytes 760.67 bytes/sec
total size is 6,688,227 speedup is 5,861.72
*Irrelevant MongoDB Dump Text*
sent 146 bytes received 2,671 bytes 1,878.00 bytes/sec
total size is 2,163 speedup is 0.77
I suspect that the ./ has something to do with it. I might be wrong, but it looks suspicious. However, when I execute the same command again, the ./ entries are not in the log, probably because I ran it on the same day, so it was overwriting the /Backups/Inc/2021-11-20 folder.
Let me know if you need more information. I have been trying things for quite a while now. Maybe I am simply wrong and the hard links are being made and disk space saved after all.
I didn't read the entire script because the main problem doesn't seem to lie there.
Verify the disk usage of your /Backups directory with du -sh /Backups, then compare it with the sum of du -sh /Backups/Full and du -sh /Backups/Inc.
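For example, with the paths from your script:
du -sh /Backups
du -sh /Backups/Full /Backups/Inc
If the hard links are being created, the first figure will stay close to the size of a single full backup instead of being the sum of the two.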
I'll show you why with a little test:
Create a directory containing a file of 1 MiB:
mkdir -p /tmp/example/data
dd if=/dev/zero of=/tmp/example/data/zerofile bs=1M count=1
Do a "full" backup:
rsync -av /tmp/example/data/ /tmp/example/full
Do an "incremental" backup
rsync -av --link-dest=/tmp/example/full /tmp/example/data/ /tmp/example/incr
Now let's see what we got:
with ls -l
ls -l /tmp/example/*
-rw-rw-r-- 1 user group 1048576 Nov 21 00:24 /tmp/example/data/zerofile
-rw-rw-r-- 2 user group 1048576 Nov 21 00:24 /tmp/example/full/zerofile
-rw-rw-r-- 2 user group 1048576 Nov 21 00:24 /tmp/example/incr/zerofile
and with du -sh
du -sh /tmp/example/*
1.0M /tmp/example/data
1.0M /tmp/example/full
0 /tmp/example/incr
Oh? There was a 1 MiB file in /tmp/example/incr but du missed it?
Actually, no. Since the file hadn't been modified since the previous backup (the one referenced with --link-dest), rsync created a hard link to it instead of copying its content. Hard links make several file names point to the same data on disk.
du detects hard links and counts the shared data only once, so it shows the real disk usage, but only when all the hard-linked copies are covered (even in sub-directories) by its arguments. For example, if you run du -sh on /tmp/example/incr alone:
du -sh /tmp/example/incr
1.0M /tmp/example/incr
How do you detect that there are hard links to a file?
ls -l actually showed it to us:
-rw-rw-r-- 2 user group 1048576 Nov 21 00:24 /tmp/example/full/zerofile
^
HERE
This number is the link count: two names (hard links) currently point to this file, this one and one other somewhere in the same filesystem.
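If you want to check your real backups the same way, one option is to list the files that have more than one link (a sketch, using find's -links test):
find /Backups -type f -links +1
Every path printed is hard-linked to at least one other name, which is exactly what --link-dest should produce for unchanged files.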
About your code
It doesn't change anything but I would replace:
#Get Latest Folder - Ignore the hacky method, it works.
cd /Backups/$Method
NewestBackup=$(find . ! -path . -type d | sort -nr | head -1 | sed s#^./##)
IFS='/'
read -a strarr <<< "$NewestBackup"
Latest_Backup="${strarr[0]}";
cd /Backups/
with:
#Get Latest Folder
glob='20[0-9][0-9]-[0-1][0-9]-[0-3][0-9]' # match a timestamp (more or less)
NewestBackup=$(compgen -G "/Backups/$Method/$glob/" | sort -nr | head -n 1)
The glob pattern makes sure that the directories found by compgen -G have the right name format.
Adding / at the end of the glob makes sure that it matches directories only.
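For instance, with two dated backups present, the compgen -G call would print something like this (hypothetical listing; note that these are full paths with trailing slashes):
/Backups/Full/2021-11-19/
/Backups/Full/2021-11-20/
and sort -nr | head -n 1 then keeps the newest one in NewestBackup.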

Get clean list of file sizes and names using SFTP in unix

I want to fetch files from a server using SFTP one by one, but only if their size is less than 1 GB.
I am running the following command:
$sftp -oIdentityFile=/home/user/.ssh/id_rsa -oPort=22 user@hostname >list.txt <<EOF
cd upload/Example
ls -l iurygify*.zip
EOF
This results in:
$cat list.txt
sftp> cd upload/Example
sftp> ls -l iurygify*.zip
-rwxrwx--- 0 300096661 300026669 0 Mar 11 16:38 iurygify1.zip
-rwxrwx--- 0 300096661 300026669 0 Mar 11 16:38 iurygify2.zip
I could then use awk to extract the size and filename, which I can save into logs for reference, and then download only those files that meet the 1 GB criterion.
Is there any simpler approach to getting this list of files and sizes? I want to avoid the junk entries from the prompt and the echoed commands in list.txt, and I do not want to do this via the expect command.
We are using SSH key authentication.
You could place your sftp commands in a batch file and filter the output - no need for expect.
echo 'ls -l' > t
sftp -b t -oIdentityFile=/home/user/.ssh/id_rsa -oPort=22 user@hostname | grep -v 'sftp>' >list.txt
Or take it a step further and filter for the right size in the same step:
sftp -b t -oIdentityFile=/home/user/.ssh/id_rsa -oPort=22 user@hostname | awk '$1!~/sftp>/&&$5<1000000000' >list.txt
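If you also want just the size and name in list.txt for your logs, the same awk step can print only those two fields (a sketch; the field numbers match the ls -l output shown above):
sftp -b t -oIdentityFile=/home/user/.ssh/id_rsa -oPort=22 user@hostname | awk '$1!~/sftp>/ && $5<1000000000 {print $5, $9}' >list.txt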
Maybe use lftp instead of sftp?
$ lftp sftp://xxx > list.txt <<EOF
> open
> ls -l
> EOF
$ cat list.txt
drwxr-xr-x 10 ludo users 4096 May 24 2019 .
drwxr-xr-x 8 root root 4096 Dec 20 2018 ..
-rw------- 1 ludo users 36653 Mar 31 19:28 .bash_history
-rw-r--r-- 1 ludo users 220 Mar 21 2014 .bash_logout
-rw-r--r-- 1 ludo users 362 Aug 16 2018 .bash_profile
...

Identify the latest file from a file list

I have a pretty tricky task (at least for me).
I have SFTP access to a server from which I need to get ONLY the latest file in a directory. Since the sftp interface is very limited, I have resorted to listing the files in the directory to a text file first.
This is the code:
sftp -b - hostname >list.txt <<EOF
ls -l *.xls
EOF
My concern now is: from list.txt, how do I identify the latest file?
Sample content of list.txt
cat list.txt
-rw-r--r-- 0 16777221 16777216 52141 Mar 29 08:06 samplefile1.xls
-rw-r--r-- 0 16777221 16777216 2926332 Mar 28 09:48 samplefile2.xls
-rw-r--r-- 0 16777221 16777216 40669 Mar 26 04:38 samplefile3.xls
-rw-r--r-- 0 16777221 16777216 8640 Mar 19 08:02 samplefile4.xls
-rw-r--r-- 0 16777221 16777216 146331 Mar 25 07:27 samplefile5.xls
-rw-r--r-- 0 16777221 16777216 18988 Mar 19 03:53 samplefile6.xls
-rw-r--r-- 0 16777221 16777216 36640 Apr 2 12:52 samplefile7.xls
Use ls -lt
sftp -b - hostname >list.txt <<EOF
ls -lt
EOF
Now the first file listed will be the latest one.
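If you then need just the name of that newest file out of list.txt, one way is (a sketch; it assumes the filename is the last field and skips the echoed sftp> lines):
newest=$(grep -v 'sftp>' list.txt | head -n 1 | awk '{print $NF}')
echo "$newest"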
You can manage it like this:
Maintain a history file on your server, for example history.txt.
Before transferring, create a list of the files currently present on the remote side.
For the first run, generate history.txt manually and add all the files that you have already transferred, for example samplefile6.xls and samplefile7.xls.
sftp -b - hostname >list.txt <<EOF
ls -l *.xls
EOF
Now add a while loop to your existing script:
while read line
do
file=$(echo "$line" | awk '{print $9}')
if grep "$file" history.txt; then
echo "File already existed in history file -- No need to transfer"
else
sftp server_host <<EOF
cd /your/dropLocation
put $file
quit
EOF
echo "$file" >> history.txt
#add the transferred file to history file
fi
done < list.txt
With this approach, even if there is more than one new file, you can transfer them all easily.
Hope this helps.

Linux - Sum total of files in different directories

How do I calculate the sum total size of multiple files located in different directories?
I have a text file containing the full path and name of the files.
I figure a simple script using while read line and du -h might do the trick...
Example of text file (new2.txt) containing list of files to sum:
/mount/st4000/media/A/amediafile.ext
/mount/st4000/media/B/amediafile.ext
/mount/st4000/media/C/amediafile.ext
/mount/st4000/media/D/amediafile.ext
/mount/st4000/media/E/amediafile.ext
/mount/st4000/media/F/amediafile.ext
/mount/st4000/media/G/amediafile.ext
/mount/st4000/media/H/amediafile.ext
/mount/st4000/media/I/amediafile.ext
/mount/st4000/media/J/amediafile.ext
/mount/st4000/media/K/amediafile.ext
Note: the folder structure is not necessarily consecutive as in A..K
Based on the suggestion from AndreaT, adapting it slightly, I tried
while read mediafile;do du -b "$mediafile"|cut -f -1>>subtotals.txt;done<new2.txt
subtotals.txt looks like
733402685
944869798
730564608
213768
13332480
366983168
6122559750
539944960
735039488
1755005744
733478912
To add all the subtotals
sum=0; while read num; do ((sum += num)); done < subtotals.txt; echo $sum
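An equivalent one-liner with awk, if you prefer to skip the intermediate shell loop:
awk '{sum += $1} END {print sum}' subtotals.txt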
Assuming that file input is like this
/home/administrator/filesum/cliprdr.c
/home/administrator/filesum/cliprdr.h
/home/administrator/filesum/event.c
/home/administrator/filesum/event.h
/home/administrator/filesum/main.c
/home/administrator/filesum/main.h
/home/administrator/filesum/utils.c
/home/administrator/filesum/utils.h
and the result of command ls -l is
-rw-r--r-- 1 administrator administrator 13452 Oct 4 17:56 cliprdr.c
-rw-r--r-- 1 administrator administrator 1240 Oct 4 17:56 cliprdr.h
-rw-r--r-- 1 administrator administrator 8141 Oct 4 17:56 event.c
-rw-r--r-- 1 administrator administrator 2164 Oct 4 17:56 event.h
-rw-r--r-- 1 administrator administrator 32403 Oct 4 17:56 main.c
-rw-r--r-- 1 administrator administrator 1074 Oct 4 17:56 main.h
-rw-r--r-- 1 administrator administrator 5452 Oct 4 17:56 utils.c
-rw-r--r-- 1 administrator administrator 1017 Oct 4 17:56 utils.h
the simplest command to run is (du cannot read the file names from stdin, so feed them in as arguments with xargs):
cat filelist.txt | xargs du -cb | tail -1 | cut -f -1
with following output (in bytes)
69370
Keep in mind that, without --apparent-size (which -b implies on GNU du), du prints actual disk usage rounded up to a multiple of the filesystem block size (usually 4 KiB) instead of the logical file size.
For small files this approximation may not be acceptable.
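For example, on a typical filesystem with 4 KiB blocks, a 10-byte file shows the difference clearly:
printf 'some data\n' > tiny.txt
du -h tiny.txt     # 4.0K  tiny.txt  (allocated space)
du -b tiny.txt     # 10    tiny.txt  (apparent size)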
To sum one directory, you will have to use a while loop and export the result to the parent shell.
I used an echo and a subsequent eval:
eval ' let sum=0$(
ls -l | tail -n +2 |\
while read perms link user uid size date day hour name ; do
echo -n "+$size" ;
done
)'
It produces a line, directly evaluated, which looks like
let sum=0+205+1201+1201+1530+128+99
You just have to run this command once for each of the two folders.
The du command doesn't have a -b option on the Unix systems I have available, and there are other ways to get a file's size.
Assuming you like the idea of a while loop in bash, the following might work:
#!/bin/bash
case "$(uname -s)" in
Linux) stat_opt=(-c '%s') ;;
*BSD|Darwin) stat_opt=(-f '%z') ;;
*) printf 'ERROR: I don'\''t know how to run on %s\n' "$(uname -s)" ; exit 1 ;;
esac
declare -i total=0
declare -i count=0
declare filename
while IFS= read -r filename; do
[[ -f "$filename" ]] || continue
(( total+=$(stat "${stat_opt[@]}" "$filename") ))
(( count++ ))
done
printf 'Total: %d bytes in %d files.\n' "$total" "$count"
This takes your list of files on stdin. You can run it on BSD Unix or on Linux -- the options to the stat command (which is not a bash builtin) are the only platform-specific bit.
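Usage would look something like this (assuming you saved the script as sumsizes.sh; the file name is just an example):
bash sumsizes.sh < new2.txt
It prints a single summary line in the form Total: <bytes> bytes in <count> files.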

Using sed within "while read" expression

I am pretty stuck with that script.
#!/bin/bash
STARTDIR=$1
MNTDIR=/tmp/test/mnt
find $STARTDIR -type l |
while read file;
do
echo Found symlink file: $file
DIR=`sed 's|/\w*$||'`
MKDIR=${MNTDIR}${DIR}
mkdir -p $MKDIR
cp -L $file $MKDIR
done
I pass a directory as the $1 parameter; this directory contains three symbolic links. The while statement echoes only the first match; after adding the sed call, I lose all the other matches.
See the output below:
[artyom@LBOX tmp]$ ls -lh /tmp/imp/
total 16K
lrwxrwxrwx 1 artyom adm 19 Aug 8 10:33 ok1 -> /tmp/imp/sym3/file1
lrwxrwxrwx 1 artyom adm 19 Aug 8 09:19 ok2 -> /tmp/imp/sym2/file2
lrwxrwxrwx 1 artyom adm 19 Aug 8 10:32 ok3 -> /tmp/imp/sym3/file3
[artyom@LBOX tmp]$ ./copy.sh /tmp/imp/
Found symlink file: /tmp/imp/ok1
[artyom@LBOX tmp]$
Can somebody help with that issue?
Thanks
You forgot to give sed any input. With no file argument, sed reads from standard input, which inside this loop is the rest of the find output, so it consumes all the remaining filenames and the loop stops after the first iteration. I wouldn't use this approach anyway; just use something like:
DIR=`dirname "$file"`
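Putting it together, a minimal sketch of the fixed loop (same structure as the original, just using dirname and quoting the variables):
find "$STARTDIR" -type l |
while read -r file
do
echo "Found symlink file: $file"
DIR=$(dirname "$file")            # directory part of the symlink's path
MKDIR="${MNTDIR}${DIR}"
mkdir -p "$MKDIR"
cp -L "$file" "$MKDIR"
done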
