Shell script to increment file names when a directory's contents change (CentOS) - linux

I have a folder containing 100 pictures from a webcam. When the webcam sends a new picture, I want it to replace number 0 and have all the other jpgs move up one number. I've set up a script where inotify monitors a directory. When a new file is put into this directory, the script renumbers all the files in the picture directory, renames the newly uploaded picture and puts it in the folder with the rest.
This script 'sort of' works. 'Sort of', because sometimes it does what it's supposed to do and sometimes it complains about missing files:
mv: cannot stat `webcam1.jpg': No such file or directory
Sometimes it complains about only one file, sometimes 4 or 5. Of course I made sure all 100 files were there, properly named before the script was run. After the script is run, the files it complains about are indeed missing.
This is the script; in the version I tested, the full paths to the directories are used, of course.
#!/bin/bash
dir1=/foo    # directory to be watched
while inotifywait -qqre modify "$dir1"; do
    cd /f002    # directory where the images are
    for i in {99..1}
    do
        j=$(($i+1))
        f1="webcam$i.jpg"
        f2="webcam$j.jpg"
        mv "$f1" "$f2"
    done
    rm webcam100.jpg
    mv "$dir1"/*.jpg /f002/webcam0.jpg
done
I also need to implement some error checking, but for now I don't understand why it is missing files that are there.

You are executing the following mv commands:
mv webcam99.jpg webcam100.jpg
...
mv webcam1.jpg webcam2.jpg
The mv from webcam0.jpg to webcam1.jpg is missing. After the first change to "$dir1" you have the following files in /f002:
webcam99.jpg
...
webcam2.jpg
webcam0.jpg
After the second "$dir1" change you will have the following:
webcam99.jpg
...
webcam3.jpg
webcam0.jpg
In other words, you are forgetting to move webcam0.jpg to webcam1.jpg. I would modify your script like this:
rm webcam99.jpg
for i in {98..0}
do
    j=$(($i+1))
    f1="webcam$i.jpg"
    f2="webcam$j.jpg"
    mv "$f1" "$f2"
done
mv "$dir1"/*.jpg /f002/webcam0.jpg
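For reference, here is the whole loop with that fix applied (a sketch assuming the same /foo and /f002 paths as in the question):
#!/bin/bash
dir1=/foo                 # directory to be watched
while inotifywait -qqre modify "$dir1"; do
    cd /f002 || exit 1    # directory where the images are
    rm -f webcam99.jpg    # drop the oldest image
    for i in {98..0}      # shift every image up one number, webcam0.jpg included
    do
        mv "webcam$i.jpg" "webcam$((i+1)).jpg"
    done
    # the newly uploaded picture becomes the new webcam0.jpg
    mv "$dir1"/*.jpg /f002/webcam0.jpg
done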

Related

Move files between directories using shell script

I'm new to Linux and shell scripting in general. I'm using a distribution of Debian on the WSL (Windows Subsystem for Linux). I'm trying to write a very simple bash script that will do the following:
create a file in a directory (child-directory-a)
move to the directory it is in
move the file to another directory (child-directory-b)
move to that directory
move the file to the parent directory
This is what I have so far (trying to keep things extremely simple for now):
touch child-directory-a/test.txt
cd child-directory-a
mv child-directory-a/test.txt home/username/child-directory-b
The first two lines work, but I keep getting a 'no such directory exists' error with the last one. The directory exists and that is the correct path (checked with pwd). I have also tried using different paths (e.g. child-directory-b, username/child-directory-b, etc.) but to no avail. I can't understand why it's not working.
I've looked around forums/documentation and it seems that these commands should work as they do in the command line, but I can't seem to do the same in the script.
If anyone could explain what I'm missing/not understanding that would be brilliant.
Thank you.
You could create the script like this:
#!/bin/bash
# Store both child directories in variables whose values can be
# overridden through environment variables.
CHILD_A=${CHILD_A:-/home/username/child-directory-a}
CHILD_B=${CHILD_B:-/home/username/child-directory-b}
# Create both child folders. If they already exist nothing will
# be done, and no error will be emitted.
mkdir -p "$CHILD_A" "$CHILD_B"
# Create a file inside CHILD_A
touch "$CHILD_A/test.txt"
# Change directory into CHILD_A
cd "$CHILD_A"
# Move the file to CHILD_B
mv "$CHILD_A/test.txt" "$CHILD_B/test.txt"
# Move to CHILD_B
cd "$CHILD_B"
# Move the file to the parent folder
mv "$CHILD_B/test.txt" ../test.txt
Take into account the following:
We make sure that all the folders exist, creating them if missing.
Use variables to avoid typos, with the ability to load dynamic values from environment variables.
Use absolute paths to simplify the movement between folders.
Use relative paths to move files relative to where we are.
Another command that might be of use is pwd. It will tell you the directory you are in.
With your second line, you change the current directory to child-directory-a.
So in your third line there is an error, because there is no subdirectory child-directory-a inside child-directory-a.
Your third line should instead be:
mv test.txt ../child-directory-b
Point #4 of your script should then be:
cd ../child-directory-b
(before that command the current directory is home/username/child-directory-a, and after it the current directory is home/username/child-directory-b)
Then point #5, the final point of your script, should be:
mv test.txt ..
NB: you can display the current directory at any line of your script by using the command pwd (print working directory) in your script, if that helps.
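Putting those corrections together, the whole script becomes (a sketch assuming it is run from home/username, as in the question):
#!/bin/bash
touch child-directory-a/test.txt    # 1. create the file in child-directory-a
cd child-directory-a                # 2. move to the directory it is in
mv test.txt ../child-directory-b    # 3. move the file to child-directory-b
cd ../child-directory-b             # 4. move to that directory
mv test.txt ..                      # 5. move the file to the parent directory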
#!/bin/sh
# Variables
WORKING_DIR="/home/username/example scripts"
FILE_NAME="test file.txt"
DIR_A="${WORKING_DIR}/child-directory-a"
DIR_B="${WORKING_DIR}/child-directory-b"
# create a file in a directory (child-directory-a)
touch "${DIR_A}/${FILE_NAME}"
# move to the directory it is in
cd "${DIR_A}"
# move the file to another directory (child-directory-b)
mv "${FILE_NAME}" "${DIR_B}/"
# move to that directory
cd "${DIR_B}"
# move the file to the parent directory
mv "${FILE_NAME}" ../

How can I execute a command anywhere if certain required files are in a different directory?

Let's say the command is my_command.
This command requires specific files (file1, file2, and file3) to be present in the current working directory.
Because I often use my_command in many different directories, I'd like to keep those files in one fixed directory and execute my_command without having the three files in the working directory.
I mean I don't want to copy those three files to every working directory.
For example:
Directory containing the three files: /home/chest
Working directory: /home/wd
If I execute my_command, it automatically recognizes the three files in /home/chest/.
I've thought the way might be similar to adding to $PATH, but for plain files rather than executables.
It seems like the files need to be in the current working directory for my_command to work as expected. You could simply put all the files in an include folder in your home directory and then create a symbolic link to this folder from your script. At the end of your script the symbolic link is then deleted:
#!/bin/bash
# create a symbolic link to our resource folder
ln -s ~/include src
# execute other commands here
# finally remove the symbolic link from the current directory
unlink src
If my_command requires that the files be placed directly in the current working directory, you could instead create a symbolic link for each file:
#!/bin/bash
# create links to all resource files
for file in ~/include/*
do
    ln -s "$file" "$(basename "$file")"
done
# execute other commands here
# finally remove the previously created links
for file in ~/include/*
do
    unlink "$(basename "$file")"
done
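Tying this back to the question, a hypothetical wrapper script (the /home/chest path and my_command come from the question; the rest is a sketch) could create the links, run the command, and clean up afterwards:
#!/bin/bash
# link the required files (file1, file2, file3) from /home/chest
# into the current working directory
for file in /home/chest/*
do
    ln -s "$file" "$(basename "$file")"
done
# run the command, passing through any arguments
my_command "$@"
status=$?
# remove the links again, leaving the working directory clean
for file in /home/chest/*
do
    unlink "$(basename "$file")"
done
exit $status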

Moving only old files to another dir

I have a /files dir with lots of files. I want to move them to the /archive dir. The moving process takes a few minutes, since there is a huge number of files.
During the moving process I created a new file named new-file.jpg in the /files dir.
Will new-file.jpg be moved to the /archive dir or not?
I need to have only the old files (files that existed when I started the process) in the /archive dir. How do I achieve that?
If you are using a glob pattern, as in tar czf old.tar.gz /files/*.dat, then this glob pattern is resolved by bash before the command is actually called. So the tar command would be called as tar czf old.tar.gz /files/1.dat /files/2.dat ..., which means files created while tar is executing will not be included.
This can be visualized:
files=(/files/*.dat)
touch /files/new.dat
printf '%s\n' "${files[@]}" | grep '^/files/new\.dat$'   # prints nothing: new.dat is not in the array
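The same trick answers the original question about mv: expand the glob once into an array, so the file list is fixed before the move starts (a sketch assuming the /files and /archive paths from the question):
#!/bin/bash
# snapshot the current contents of /files; files created
# after this line are not part of the array
files=(/files/*)
# move exactly those files, and nothing created later
mv -- "${files[@]}" /archive/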

Recursively copy contents of directory to all target directories

I have a directory containing a set of subdirectories and files. I need to recursively copy all the content of this directory to all the subdirectories of another directory, also recursively.
How do I achieve this, preferably without using a script and only with the cp command?
You can write this in a script but you don't have to. Just write it line by line in the terminal:
# $TARGET is the directory containing subdirectories where you want to STORE the copies
# $SOURCE is the directory containing the subdirectories you want to COPY
for dir in "$TARGET"/*/; do
    cp -r "$SOURCE"/* "$dir"
done
Only uses cp and runs on both bash and zsh.
You can't. cp can copy multiple sources but will only copy to a single destination. You need to arrange to invoke cp multiple times, once per destination, using, as you say, a loop or some other tool.
tar cf - * | ( cd /target; tar xfp -)
The first part of the command, before the pipe, instructs tar to create an archive of everything in the current directory and write it to standard output (the - in place of a file name frequently indicates stdout).
The commands within parentheses cause the shell to change directory to the target directory and untar data from standard input. Since the cd and tar commands are contained within parentheses, their actions are performed together.
The -p option in the tar extraction command directs tar to preserve permission and ownership information, if possible given the user executing the command. If you are running the command as superuser, this option is turned on by default and can be omitted.
You can also use the following command, but it seems to be quite a bit slower than tar:
cp -a * /target

Modifying files nested in tar archive

I am trying to do a grep and then a sed to search for specific strings inside files, which are inside multiple tars, all inside one master tar archive. Right now, I modify the files by:
First extracting the master tar archive.
Then extracting all the tars inside it.
Then doing a recursive grep and then sed to replace a specific string in files.
Finally, packaging everything again into tar archives, and all the archives back into the master archive.
Pretty tedious. How do I do this automatically using shell scripting?
There isn't going to be much option except automating the steps you outline, for the reasons demonstrated by the caveats in the answer by Kimvais.
tar modify operations
The tar command has some options to modify existing tar files. They are, however, not appropriate for your scenario for multiple reasons, one of them being that it is the nested tarballs that need editing rather than the master tarball. So, you will have to do the work longhand.
Assumptions
Are all the archives in the master archive extracted into the current directory or into a named/created sub-directory? That is, when you run tar -tf master.tar.gz, do you see:
subdir-1.23/tarball1.tar
subdir-1.23/tarball2.tar
...
or do you see:
tarball1.tar
tarball2.tar
(Note that nested tars should not themselves be gzipped if they are to be embedded in a bigger compressed tarball.)
master_repackager
Assuming you have the subdirectory notation, then you can do:
for master in "$@"
do
    tmp=$(pwd)/xyz.$$
    trap "rm -fr $tmp; exit 1" 0 1 2 3 13 15
    cat $master |
    (
    mkdir $tmp
    cd $tmp
    tar -xf -
    cd *            # There is only one directory in the newly created one!
    process_tarballs *
    cd ..
    tar -czf - *    # There is only one directory down here
    ) > new.$master
    rm -fr $tmp
    trap 0
done
If you're working in a malicious environment, use something other than xyz.$$ for the directory name. However, this sort of repackaging is usually not done in a malicious environment, and the chosen name based on process ID is sufficient to give everything a unique name. The use of tar -f - for input and output allows you to switch directories but still handle relative pathnames on the command line. There are likely other ways to handle that if you want. I also used cat to feed the input to the sub-shell so that the top-to-bottom flow is clear; technically, I could improve things by using ) > new.$master < $master at the end, but that hides some crucial information multiple lines later.
The trap commands make sure that (a) if the script is interrupted (signals HUP, INT, QUIT, PIPE or TERM), the temporary directory is removed and the exit status is 1 (not success) and (b) once the subdirectory is removed, the process can exit with a zero status.
You might need to check whether new.$master exists before overwriting it. You might need to check that the extract operation actually extracted stuff. You might need to check whether the sub-tarball processing actually worked. If the master tarball extracts into multiple sub-directories, you need to convert the 'cd *' line into some loop that iterates over the sub-directories it creates.
All these issues can be skipped if you know enough about the contents and nothing goes wrong.
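For instance, the first of those checks might look like this inside the loop (a sketch; whether to skip or overwrite is a policy choice):
# refuse to clobber an existing output file (one possible policy)
if [ -e "new.$master" ]
then
    echo "new.$master already exists; skipping" >&2
    continue
fi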
process_tarballs
The second script is process_tarballs; it processes each of the tarballs on its command line in turn, extracting the file, making the substitutions, repackaging the result, etc. One advantage of using two scripts is that you can test the tarball processing separately from the bigger task of dealing with a tarball containing multiple tarballs. Again, life will be much easier if each of the sub-tarballs extracts into its own sub-directory; if any of them extracts into the current directory, make sure you create a new sub-directory for it.
for tarball in "$@"
do
    # Extract $tarball into a sub-directory
    tar -xf "$tarball"
    # Locate the appropriate sub-directory (one way to do so is sketched below).
    (
    cd "$subdirectory"
    find . -type f -print0 | xargs -0 sed -i 's/name/alternative-name/g'
    )
    mv "$tarball" "old.$tarball"
    tar -cf "$tarball" "$subdirectory"
    rm -f "old.$tarball"
done
You should add traps to clean up here, too, so the script can be run in isolation from the master script above and still not leave any intermediate directories around. In the context of the outer script, you might not need to be so careful to preserve the old tarball before the new one is created (so rm -f $tarball instead of the move and remove commands), but treated in its own right, the script should be careful not to damage anything.
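As for locating the sub-directory: assuming each sub-tarball extracts into exactly one top-level directory, its name can be read from the first entry of the tarball listing (a sketch):
# assumption: the tarball has a single top-level directory
subdirectory=$(tar -tf "$tarball" | head -n 1 | cut -d/ -f1)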
Summary
What you're attempting is not trivial.
For debuggability, the job is split into two scripts that can be tested independently.
Handling the corner cases is much easier when you know what is really in the files.
You can probably run sed over the actual tar stream, as tar itself does not do any compression.
e.g.
zcat archive.tar.gz|sed -e 's/foo/bar/g'|gzip > archive2.tar.gz
However, beware that this will also replace foo with bar in file names, user names and group names, and it ONLY works if foo and bar are of equal length, since tar stores sizes and metadata in fixed-width header fields that a change in data length would corrupt.
