Does anyone know of a way to (via bash) set up a "versioning" copy of a file into another? For example: I am copying file into file.bak. If file.bak exists, I am currently overwriting it. What I'd like to do is set it up so that it creates multiple files: file, file.bak, file.bak.1, file.bak.2, etc...
Right now, I'm using:
cp -rf file file.bak
This currently overwrites the file (as expected).
or:
cp --backup=t file1 file2
Repeat a few times to see the result...
see https://www.gnu.org/software/coreutils/manual/html_node/cp-invocation.html
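For illustration, repeated runs produce numbered backups with a ~N~ suffix (=t selects GNU cp's "numbered" backup mode):
cp --backup=t file file.bak   # 1st run just creates file.bak
cp --backup=t file file.bak   # 2nd run saves the old copy as file.bak.~1~
cp --backup=t file file.bak   # 3rd run adds file.bak.~2~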
Simply use a test:
[ -e file.bak ] && cp -r file file.bak.$(date +%s) || cp -r file file.bak
This will create a unique backup of the form file.bak.1411505497 if file.bak already exists.
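If you specifically want the numbered scheme from the question (file.bak, file.bak.1, file.bak.2, ...), a minimal sketch along the same lines:
# back up "file" to the first free name in the series
# file.bak, file.bak.1, file.bak.2, ...
bak=file.bak
n=1
while [ -e "$bak" ]; do
    bak=file.bak.$n
    n=$((n+1))
done
cp -r file "$bak"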
There are many ways to skin this cat.
Since you're using Linux, it's likely you've got the GNU mv command, which includes a --backup option. You could wrap this in a shell function:
bkp() {
    file="$1"
    if [ -f "$file" ]; then
        # Make an empty temp file next to "$file", then move it over
        # "$file"; --backup=numbered saves the old contents as
        # file.~1~, file.~2~, and so on.
        /bin/mv -v --backup=numbered "$(mktemp "${file}XXX")" "$file"
        #/bin/rm "$file"
    fi
}
You can put this in your .bashrc, for example. Then you can use this as follows:
# bkp foo
This will rotate foo into numbered backup files (foo.~1~, foo.~2~, and so on), leaving an empty foo in place. You can uncomment the rm if this is, for example, a log file that you're rotating and you don't want the empty original left behind.
Another option, which is more portable to operating systems that don't ship GNU tools (e.g. FreeBSD, OS X), is this quick-and-dirty solution:
bkp() {
    file="$1"
    if [ -f "$file" ]; then
        # shift existing backups up, keeping at most 10
        for n in {9..1}; do
            if [ -f "$file.$n" ]; then
                # remove -v if you want less noise.
                mv -v "${file}.$n" "${file}.$((n+1))"
            fi
        done
        # move the original to first backup position
        mv "$file" "$file.1"
    else
        echo "Not found: $file" >&2
    fi
}
It suffers in that it won't compact your list of files if some numbers are missing, but that's the sort of thing you can add if it's important. You'd use it pretty much the same way, changing the final mv to a cp if you need to keep the original in place.
The final option I'll mention was also suggested in the comments. Since you've said that you're using this to back up "system files" (by which I assume you mean things in /etc), you should consider using an actual version control system to manage the versions of these files.
Many options exist, but I'd recommend RCS for its simplicity and low overhead. Simply install the package, mkdir /etc/RCS to keep your /etc directory clean, read the man pages for rcs, ci, co, rlog, rcsdiff and perhaps rcsintro, and you're good to go. You'll get better control of diffs and history, the opportunity for comments, and none of the overhead of a repository for a large VCS like SVN or Git. I've been using this on various servers for years, as RCS is still built into the base system in FreeBSD. :)
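A minimal RCS session might look like this (a sketch: it assumes the rcs package is installed and uses /etc/hosts purely as an example):
mkdir /etc/RCS                       # checked-in ,v files land here, keeping /etc clean
cd /etc
ci -l -t-"system hosts file" hosts   # initial check-in; -l keeps the working file in place
# ...edit /etc/hosts...
ci -l -m"added new webserver" hosts  # check in a new revision with a log message
rlog hosts                           # show the revision history
rcsdiff -r1.1 hosts                  # diff the working file against revision 1.1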
Related
I am making a custom command that moves or duplicates a file to a wastebasket directory instead of deleting it. I am trying to make the directory if it isn't already there, make a duplicate if a file of that name has already been put there, and simply move it if it hasn't. The issue is that I keep getting a no such file or directory error regardless of where I place the wastebasket directory. Do note that simply moving or copying the file with base Linux commands works fine, and that being root doesn't fix the issue. What steps should I take?
#!/bin/bash
set -x
mkdir -p /home/WASTEBASKIT #This makes a wastebasket directory if it doesn't already exist.
if test -e "$1"; then
    if test -e /home/WASTEBASKIT/"$1"; then #Checking for duplicate files.
        cp "$1" "$1.$$" " /home/WASTEBASKIT"
    else
        mv "$1" " /home/WASTEBASKIT"
    fi
else
    printf'%s\n' "File not found." #Error if a file is not there.
fi
Here are the results:
++ mkdir -p /home/WASTEBASKIT
++ test -e config.sh
++ test -e /home/WASTEBASKIT/config.sh
++ cp config.sh config.sh.945 ' /home/WASTEBASKIT'
cp: cannot stat 'config.sh.945': No such file or directory
The problem is on this line:
cp "$1" "$1.$$" "/home/WASTEBASKIT"
You try to copy two files into /home/WASTEBASKIT, namely $1 and $1.$$. The latter does not exist.
Change it to:
cp "$1" "/home/WASTEBASKIT/$1.$$"
I suggest that you instead create a truly unique filename, since process IDs are eventually reused; so instead of the copy above, do something like:
newfile=$(mktemp "/home/WASTEBASKIT/$1.XXXXXXX")   # mktemp creates the file and prints its name
cp -p "$1" "$newfile"
You can then list all the copies with ls -t /home/WASTEBASKIT to get them in historical order, newest first - or with ls -tr /home/WASTEBASKIT to get the oldest first.
Also note: printf'%s\n' "File not found." will likely generate an error like printf'%s\n': command not found. You need to insert a space between the command printf and the argument '%s\n'.
The moving part is also wrong since you have a space before /home. It should be:
mv "$1" /home/WASTEBASKIT
mv "$1" " /home/WASTEBASKIT"
First issue: spaces matter. If you have previously created the /home/WASTEBASKIT directory and then execute that command above, it will not move the file into that directory - you will most likely end up with a file in your home directory called ' WASTEBASKIT' (unless you already have a directory of that name, including the leading space, in which case it will go into that directory).
Either way, it won't go where you want it to go.
Secondly, the command below is not doing what you seem to think. It will try to copy two files to the directory, the second of which probably does not even exist (config.sh.945 in your case):
cp "$1" "$1.$$" "/home/WASTEBASKIT"
If you want to create a "uniquely" versioned file so as to not overwrite an existing one, that would be:
mv "$1" "/home/WASTEBASKIT/$1.$$"
Note the quotes around the word "uniquely": there's no guarantee that $1.$$ doesn't already exist in the wastebasket - PIDs do eventually wrap around, and also start over on reboot.
I suspect a better approach (though still not bullet-proof) would be just to prefix every file with the date and time so that:
you can sort duplicates to establish the order of creation; and
sans date changes, the date/time won't give you possible duplicates (unless you're doing it more than once per second).
That approach would be something like:
mv "$1" "/home/WASTEBASKIT/$(date -u +%Y%m%d_%H%M%S)$1"
or, making duplicates even less likely:
mv "$1" "/home/WASTEBASKIT/$(date -u +%Y%m%d_%H%M%S)_${RANDOM}_$1"
So in bash, I can simply do (where foo is a list of paths to files):
for i in `cat foo`; do ls -lah $i; done
I have a list of files I need to search for. My thought is; why not run them through a for loop with mlocate? I could do:
for i in `cat foo`; do locate $i; done
...but is that the best way to do what I'm trying to do?
find is SLOW and takes forever when there are millions of files and directories, whereas mlocate is super quick.
If files.txt contains a list of absolute paths, newline-terminated, you can do this to ensure they all exist:
set -o errexit
mapfile -t < files.txt            # read every line into the MAPFILE array
for path in "${MAPFILE[@]}"
do
    [[ -e "$path" ]]              # errexit aborts on the first missing path
done
You can then expand on this if you want to do certain things with existing/non-existing files:
if [[ -e "$path" ]]
then
    …
else
    …
fi
If files.txt is so huge that the list does not fit in memory, you can use a much slower while read loop:
while read -r -u 9 path
do
    [[ -e "$path" ]]
done 9< files.txt
If speed really is of the essence you probably want to do this in a different language, like Java or Rust.
On a technical note, mlocate is fast because it queries a pre-generated list of files on your system, but its database does not stay in sync with the actual filesystem contents automatically. Instead you need to run updatedb to populate the database with the current filesystem contents. This is usually done by a root cron job daily.
In terms of style, backticks are deprecated in favor of $(COMMAND), and Use More Quotes™.
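Applied to the loop from the question, that advice might look like this (a sketch; reading line by line also copes with paths containing spaces):
while IFS= read -r path; do   # read each line verbatim, without word splitting
    locate "$path"
done < foo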
I have a folder with regular pictures, and another with resized ones.
The goal is to check whether a picture has not been resized yet; if so, resize it and save it in the other folder.
I'm using an echo for simplicity, because I don't have the comparison working.
for file in ../regular/*.jpg;
do
    img=`basename "$file"`
    FILE=./resized/$img
    if [ ! -f "$FILE" ]; then
        echo "$img NOT RESIZED"
    fi
done
This code just echoes NOT RESIZED for all the pictures in the regular folder, i.e. it doesn't seem to make the comparison at all.
Where is my mistake?
for file in ../regular/*.jpg;
FILE=./resized/$img
Try using absolute paths. You can also add echo "$FILE" to see what the script is actually checking.
If this directory contains a huge number of files, you can exceed the command-line length limit (usually ~4 kB-32 kB).
You are using quotes in the basename command - why? If your image names can contain spaces, you should use quotes in the if command as well; check the script below:
for file in ../regular/*.jpg;
do
    img=$(basename "$file")
    if [ ! -f "./resized/$img" ]; then
        echo "$img NOT RESIZED"
    fi
done
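Once the comparison works, the echo can be replaced by the actual resize. With ImageMagick (an assumption - the question doesn't say which resizer is used, and the 800x800 geometry is made up):
convert "$file" -resize 800x800 "./resized/$img"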
You can also use the diff command to compare the two directories:
diff -r "$PATH1" "$PATH2"
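Files present in only one of the two directories then show up as Only in lines, e.g. (illustrative output):
diff -r ../regular ./resized
Only in ../regular: holiday.jpg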
I am trying to find all files in a directory and sub-directories and then copy them to a different directory. However some of them have the same name, so I need to copy the files over and then if there are two files have the same name, rename one of those files.
So far I have managed to copy over all the found files that have unique names using:
#!/bin/bash
if [ ! -e $2 ] ; then
    mkdir $2
    echo "Directory created"
fi
if [ ! -e $1 ] ; then
    echo "image source does not exists"
fi
find $1 -name IMG_****.JPG -exec cp {} $2 \;
However, I now need some sort of if statement to figure out if a file has the same name as another file that has been copied.
Since you are on Linux, you are probably using cp from coreutils. If that is the case, let it do the backup for you by using cp --backup=t.
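Combined with the find command from the question, that might look like this (a sketch; the pattern is quoted so the shell doesn't expand it before find sees it):
find "$1" -name 'IMG_****.JPG' -exec cp --backup=t {} "$2" \;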
Try this approach: put the list of files in a variable and copy each file, checking whether the copy operation succeeds. If not, try a different name.
In code:
FILES=$(find "$1" -name 'IMG_****.JPG')
for FILE in $FILES; do
    cp -n "$FILE" destination
    # Check the exit status of the latest command (i.e. cp)
    # through the $? variable and, in case of failure,
    # choose a different name for the destination.
    # (Note: with older GNU coreutils, cp -n exits 0 even when it skips.)
done
Inside the for statement, you can also use an incremental integer to try different names (e.g., name_1, name_2 and so on) until the cp command succeeds.
You can do:
shopt -s globstar   # needed for ** to recurse into subdirectories (bash 4+)
for file in "$1"/**/IMG_*.JPG ; do
    target=$2/$(basename "$file")
    SUFF=
    while [[ -f "$target$SUFF" ]] ; do
        (( SUFF++ ))
    done
    cp "$file" "$target$SUFF"
done
in your script, in place of the find command, to append integer suffixes to identically-named files.
You can use rsync with the following switches for more control:
rsync --backup --backup-dir=DIR --suffix=SUFFIX -az <source dir> <destination dir>
Here is what they do (from the man page):
-b, --backup
With this option, preexisting destination files are renamed as each file is transferred or deleted. You can control where the backup file goes and what (if any) suffix gets appended using the --backup-dir and --suffix options.
--backup-dir=DIR
In combination with the --backup option, this tells rsync to store all backups in the specified directory on the receiving side. This can be used for incremental backups. You can additionally specify a backup suffix using the --suffix option (otherwise the files backed up in the specified directory will keep their original filenames).
--suffix=SUFFIX
This option allows you to override the default backup suffix used with the --backup (-b) option. The default suffix is a ~ if no --backup-dir was specified, otherwise it is an empty string.
You can use rsync to sync two folders on the local file system, or to sync with a remote file system. You can even do the syncing over an ssh connection.
rsync is amazingly powerful. See the man page for all the options.
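For example, a concrete invocation might be (illustrative paths and suffix; replaced files in the destination are moved into its backups/ subdirectory):
rsync -az --backup --backup-dir=backups --suffix=.old source/ destination/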
I have a set of files that come in pairs:
/var/log/messages-20111001
/var/log/messages-20111001.hash
I've had several of these rotate away and now I'm left with a ton of /var/log/messages-201110xx.hash files with no associated log. I'd like to clean up the mess, but I'm uncertain how to remove a file that isn't part of a "pair". I can use bash or zsh (or any LSB tool, really). I need to remove all the .hash files that don't have an associated log.
Example
/var/log/messages-20111001.hash
/var/log/messages-20111002.hash
/var/log/messages-20111003.hash
/var/log/messages-20111004.hash
/var/log/messages-20111005
/var/log/messages-20111005.hash
/var/log/messages-20111006
/var/log/messages-20111006.hash
Should be reduced to:
/var/log/messages-20111005
/var/log/messages-20111005.hash
/var/log/messages-20111006
/var/log/messages-20111006.hash
for file in *.hash; do test -f "${file%.hash}" || rm -- "$file"; done
Something like this?
for f in /var/log/messages-????????.hash ; do
    [[ -e "${f%.hash}" ]] || rm "$f"
done
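Either way, if you want to preview what would be removed before committing, prefix rm with echo for a dry run:
for f in /var/log/messages-????????.hash ; do
    [[ -e "${f%.hash}" ]] || echo rm "$f"
done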