How can I copy files from one directory to another (which is a subdirectory in the original)? - linux

I'm new to Linux shell scripting and I'm struggling with a problem. An error pops up saying that the if conditional has too many arguments. What I have to do is basically described in the title, but the code I've written is not working; what's wrong with it? The original directory is called artists and the subdirectory where the files need to be copied to is called artists_copy.
#!/bin/bash
count=0
elem=$(ls)
for file in $elem; do
    let count+=1
done
for i in {$count}; do
    if [ -e $elem[$i] ]; then
        cp $elem[$i] artists_copy
        echo "Copied file $elem[$i] to artists_copy"
    fi
done
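For reference, a minimal working sketch (my own suggestion, not part of the original question): if the script is run from inside artists, copying every regular file into artists_copy needs no counting at all.
#!/bin/bash
# Sketch: copy every regular file in the current directory (artists) into the
# artists_copy subdirectory; directories such as artists_copy fail the -f test
# and are skipped.
for file in *; do
    if [ -f "$file" ]; then
        cp -- "$file" artists_copy/
        echo "Copied file $file to artists_copy"
    fi
done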

Related

Using bash to loop through nested folders to run script in current working directory

I've got (what feels like) a fairly simple problem but my complete lack of experience in bash has left me stumped. I've spent all day trying to synthesize a script from many different SO threads explaining how to do specific things with unintuitive commands, but I can't figure out how to make them work together for the life of me.
Here is my situation: I've got a directory full of nested folders each containing a file with extension .7 and another file with extension .pc, plus a whole bunch of unrelated stuff. It looks like this:
Folder A
    Folder 1
        Folder x
            data_01.7
            helper_01.pc
            ...
        Folder y
            data_02.7
            helper_02.pc
            ...
        ...
    Folder 2
        Folder z
            data_03.7
            helper_03.pc
            ...
        ...
Folder B
...
I've got a script that I need to run in each of these folders that takes in the name of the .7 file as an input.
pc_script -f data.7 -flag1 -other_flags
The current working directory needs to be the folder with the .7 file when running the script and the helper.pc file also needs to be present in it. After the script is finished running, there are a ton of new files and directories. However, I need to take just one of those output files, result.h5, and copy it to a new directory maintaining the same folder structure but with a new name:
Result Folder/Folder A/Folder 1/Folder x/new_result1.h5
I then need to run the same script again with a different flag, flag2, and copy the new version of that output file to the same result directory with a different name, new_result2.h5.
The folders all have pretty arbitrary names, though there aren't any spaces or special characters beyond underscores.
Here is an example of what I've tried:
#!/bin/bash
DIR=".../project/data"
for d in */ ; do
    for e in */ ; do
        for f in */ ; do
            for PFILE in *.7 ; do
                echo "$d/$e/$f/$PFILE"
                cd "$DIR/$d/$e/$f"
                echo "Performing operation 1"
                pc_script -f "$PFILE" -flag1
                mkdir -p ".../results/$d/$e/$f"
                mv "results.h5" ".../project/results/$d/$e/$f/new_results1.h5"
                echo "Performing operation 2"
                pc_script -f "$PFILE" -flag 2
                mv "results.h5" ".../project/results/$d/$e/$f/new_results2.h5"
            done
        done
    done
done
Obviously, this didn't work. I've also tried using find with -execdir but then I couldn't figure out how to insert the name of the file into the script flag. I'd appreciate any help or suggestions on how to carry this out.
Another, perhaps more flexible, approach is to use the find command with the -exec option to run a short "helper-script" for each file ending in ".7" found below a directory path. The -name option lets find locate all files ending in ".7" below a given directory using simple file-globbing (wildcards). The helper-script then performs the same operations on each file found by find and handles copying result.h5 to the proper directory.
The form of the command will be:
find /path/to/search -type f -name "*.7" -exec /path/to/helper-script '{}' \;
Where -type f tells find to only return files (not directories) and -name "*.7" restricts matches to names ending in ".7". Your helper-script needs to be executable (e.g. chmod +x helper-script) and, unless it is in your PATH, you must provide the full path to the script in the find command. The '{}' will be replaced by the filename (including relative path) and passed as an argument to your helper-script. The \; simply terminates the command executed by -exec.
(Note there is another form of -exec called -execdir, and another terminator, '+', that can be used to run the command on all files in a given directory at once -- that is a bit safer, but has additional PATH requirements for the command being run. Since you have only one ".7" file per directory, there isn't much benefit here.)
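For completeness, a hedged sketch of the -execdir form mentioned above, using the same placeholder paths as before: the command runs from the directory containing each matched file, so the script receives just "./file.7". Note that the helper-script below derives the destination path from its argument, so it would need adjusting before being used this way.
find /path/to/search -type f -name "*.7" -execdir /path/to/helper-script '{}' \;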
The helper-script just does what you need to do in each directory. Based on your description it could be something like the following:
#!/bin/bash
dir="${1%/*}"                   ## trim file.7 from end of path
cd "$dir" || {                  ## change to directory or handle error
    printf "unable to change to directory %s\n" "$dir" >&2
    exit 1
}
destdir="/Result_Folder/$dir"   ## set destination dir for result.h5
mkdir -p "$destdir" || {        ## create with all parent dirs or exit
    printf "unable to create directory %s\n" "$destdir" >&2
    exit 1
}
ls *.pc >/dev/null 2>&1 || exit 1   ## check a .pc file exists or exit (output silenced)
file7="${1##*/}"                ## trim path from file.7 name
pc_script -f "$file7" -flag1 -other_flags   ## first run
## check result.h5 exists and is non-empty, then copy to destdir
[ -s "result.h5" ] && cp -a "result.h5" "$destdir/new_result1.h5"
pc_script -f "$file7" -flag2 -other_flags   ## second run
## check result.h5 exists and is non-empty, then copy to destdir
[ -s "result.h5" ] && cp -a "result.h5" "$destdir/new_result2.h5"
Which essentially stores the path part of the file.7 argument in dir and changes to that directory. If unable to change to the directory (due to read permissions, etc.), the error is handled and the script exits. Next, the full directory structure is created below your Result_Folder with mkdir -p, with the same error handling if the directory cannot be created.
ls is used as a simple check to verify that a file ending in ".pc" exists in that directory. There are other ways to do this, such as piping the results to wc -l, but that spawns additional subshells that are best avoided.
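As an aside (not from the original answer), a glob-based check avoids spawning ls entirely; a short sketch, assuming bash:
## check that at least one .pc file exists without running ls
pcfiles=( *.pc )
[ -e "${pcfiles[0]}" ] || exit 1    ## an unmatched glob stays literal, so -e fails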
(Also note that Linux and Mac have files ending in ".pc" used by pkg-config when building programs from source -- they should not conflict with your files -- but be aware they exist in case you start chasing down why weird ".pc" files are found.)
After all tests are performed, the path is trimmed from the current ".7" filename, storing just the filename in file7. The file7 variable is then used in your pc_script command (which should also include the full path to the script if it is not in your PATH). After pc_script runs, [ -s "result.h5" ] is used to verify that result.h5 exists and is non-empty before copying that file to your Result_Folder location.
That should get you started. Using find to locate all .7 files is a simple way to let the tool designed to find files do its job -- rather than trying to hand-roll your own solution. That way you only have to concentrate on what should be done for each file found. (Note: I don't have pc_script or the files, so I have not tested this end-to-end, but it should be very close if not right-on-the-money.)
There is nothing wrong with writing your own routine, but using find eliminates a lot of the places where bugs can hide in a hand-rolled solution.
Let me know if you have further questions.

Concatenating hardcoded directory and user-created text file adds root-level paths when it shouldn't

I have written a script to allow a restricted user to delete files on a production webserver. However, to prevent fat-fingering issues leading to accidental filesystem deletion/problems, I have hard-coded the base directory in a variable... But the final result is not properly building the desired path from the hard-coded directory + user paths if they contain a * wildcard...
I have an Apache 2.4.6 server that caches web content for a user. They have a jailkit user to SSH into this box. As this is production, they are severely limited in their access, however, I would like to give them the ability to clear specific cache directories on their own terms. In order to prevent this from going horribly wrong, I have hard-coded the base cache directory into a script variable, so that no matter what, the script will only run against that path.
So far, this script works well to iterate through their desired cache-clear paths... A user creates a .txt file with a /cachePath defined on each line, and the script will iterate through it and delete those paths. It works just fine for /path and /content/path2/ ... But I cannot for the life of me get it working with wildcards (i.e. /path/*, /content/path2/*). There's probably a sexier way to handle this than what I've done so far (I currently have an if | else statement for handling * or /*, not included in the script below), but I am getting all kinds of undesired results trying to handle a user-inputted * or /* on a custom path.
#!/bin/bash
#For this to work, a user must create a paths.txt file in their jailed home directory, based off the /mnt/var/www/html cache location. Each location (or file) must be on a new line, and start with a /
#User-created file with custom cache directories to delete
file="/usr/jail/paths.txt"
#Setting this variable to the contents of the user-created cache file
pathToDelete=$(cat $file)
#Hard-coded cache directory to try to prevent deleting anything important outside cache directory
cacheDir="/mnt/var/www/html"
#Let's delete cache
if [ -f $file ];then
    echo "Deleting the following cache directories:"
    for paths in $pathToDelete
    do
        echo $cacheDir"$paths"
        #rm command commented out until I get expected echo output
        #rm -rfv $cacheDir"$paths"
    done
    echo "Cache cleared successfully"
    mv $file "$file.`date +"%m%d%Y%H%M"`"
else
    echo "Nothing to do"
fi
I've tried double quotes, single quotes, no quotes, and treating "pathToDelete" as an array, but none of it is producing the desired output yet. For example, if paths.txt contains only "*", the result is that it grabs all directories under / and appends them to "cacheDir":
/mnt/var/www/html/testing/backup
/mnt/var/www/html/testing/bin
/mnt/var/www/html/testing/boot
/mnt/var/www/html/testing/data
/mnt/var/www/html/testing/dev
/mnt/var/www/html/testing/etc
/mnt/var/www/html/testing/home
/mnt/var/www/html/testing/lib
/mnt/var/www/html/testing/lib64
...
If paths.txt is "./*" it's adding files from the location of the script itself:
/mnt/var/www/html/testing./cacheClear.sh
/mnt/var/www/html/testing./paths.txt
Ultimately, what I'm looking for is this: if /mnt/var/www/html contains the following directories:
/content/
/content/path/
/content/path/file1.txt
/content/path/file2.txt
/content/path/subdir/
/path2/
/path2/fileA.txt
/path2/fileB.txt
Then a file containing
/content/path/*
should delete /content/path/file1.txt, file2.txt, and /subdir/, and preserve the /content/path/ directory.
If the paths.txt file contains
/content/path
/path2/*
Then the /content/path directory and its subfiles/directories should be deleted, and the files within the /path2/ directory as well... But right now, the script doesn't treat the concatenated $cacheDir + $paths as a real/expected location if it contains a * anywhere in it. It works OK without * symbols.
Got a version that works well enough for my purposes:
#!/bin/bash
file="/usr/jail/paths.txt"
pathToDelete=$(cat $file)
cacheDir="/mnt/var/www/html"
if [ -f $file ]; then
    if [ "$pathToDelete" == "*" ] || [ "$pathToDelete" == "/*" ]; then
        echo "Full Clear"
        rm -rfv /mnt/var/www/html/*
    else
        echo "Deleting the following cache directories:"
        for i in ${pathToDelete};
        do
            echo ${cacheDir}${i}
            rm -rfv ${cacheDir}${i}
        done
        echo "Cache cleared successfully"
    fi
fi
The following code is a working solution:
#!/bin/bash -x
file="/usr/jail/paths.txt"
pathToDelete="$(sed 's/^\///' $file)"
cacheDir="/mnt/var/www/html"
if [ -f $file ];then
    echo "Deleting the following cache directories:"
    for paths in "$pathToDelete"
    do
        echo $cacheDir/$paths
        rm -rfv $cacheDir/$paths
    done
    echo "Cache cleared successfully"
else
    echo "Nothing to do"
fi
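As a further option (my own hedged sketch, not part of either answer above): reading paths.txt line by line with a while/read loop keeps each user path intact and lets the shell expand any * only after the cacheDir prefix has been attached.
#!/bin/bash
# Sketch: iterate paths.txt line by line; expand wildcards only after
# prefixing the hard-coded cache directory.
file="/usr/jail/paths.txt"
cacheDir="/mnt/var/www/html"
[ -f "$file" ] || { echo "Nothing to do"; exit 0; }
echo "Deleting the following cache directories:"
while IFS= read -r p; do
    [ -z "$p" ] && continue                  # skip blank lines
    case $p in /*) ;; *) continue ;; esac    # require a leading / as documented
    for target in $cacheDir$p; do            # unquoted so a trailing * expands here
        [ -e "$target" ] || continue         # skip patterns that matched nothing
        echo "$target"
        # rm -rfv -- "$target"               # uncomment once the echo output looks right
    done
done < "$file"
echo "Cache cleared successfully"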

variable part in a variable path in ksh script

I'm sorry if something similar was already answered in the past, but I wasn't able to find it. I'm writing a script to perform some housekeeping tasks, and I got stuck on the step below. To give you some context, the script reads a config file so that it can be used as a standard procedure in different environments.
The problem is with this code:
# Check if destination folder exist, if not create it.
if [ ! -d ${V_DestFolder} ]; then # Create folder
    F_Log "${IF_ROOT} mkdir -p ${V_DestFolder}"
    ${IF_ROOT} mkdir -p ${V_DestFolder}
    continue
fi
# If movement, check write permissions of destination folder.
V_CheckIfMovement=`echo $1|grep #`
if [ $? -eq 0 ]; then # File will be moved.
    V_DestFolder=`echo $1|awk -F"#" {'print $2'}`
    if [ ! -w ${V_DestFolder} ]; then # Destination folder IS NOT writable.
        F_Log "Destination folder ${V_DestFolder} does not have WRITE permissions. Skipping."
        continue
    fi
fi
Basically I need to move (in this step) some files from one route to another.
It checks whether the folder (name read from the config file) exists, and creates it if not; after that it checks whether the folder has write permissions and moves the files.
Here you can see the part of config file which is read in this step:
app/tom*/instances/*/logs|+9|/.*\.gz)$/|move#/app/archive/tom*/logs
I should mention that the files are moved properly when I replace the tom* in the destination with anything literal, such as "test" or any word without a * (as expected).
What I need to know is how I can use a variable in place of "tom*" in the destination. The variable should contain the same tom* name matched in the source, which I use as the name of the cell.
This is because I use different tomcat cells with the reference tom7 or tom8 plus 3 letters to describe each one, for example tom7dog or tom7cat.
You should give the shell a chance to evaluate the wildcard:
V_DestFolder=`echo $1|awk -F"#" {'print $2'}`
for p in ${V_DestFolder}; do
    if [ ! -w ${p} ]; then
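Another option (a hedged sketch of my own, with made-up example values, not from the original answer): extract the cell name from the source path and substitute it into the destination instead of leaving a glob there. This uses ksh93/bash pattern substitution.
# Sketch: derive the cell name (e.g. tom7dog) from the source path and build
# the destination from it; the paths below are illustrative only.
V_Source="app/tom7dog/instances/inst1/logs"        # hypothetical source from the config line
V_DestTemplate="/app/archive/tom*/logs"            # destination as written in the config

V_Cell="${V_Source#app/}"                          # strip the leading "app/"
V_Cell="${V_Cell%%/*}"                             # keep the first component -> tom7dog

V_DestFolder="${V_DestTemplate/tom\*/$V_Cell}"     # replace tom* with the real cell name
echo "$V_DestFolder"                               # -> /app/archive/tom7dog/logs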

In Ubuntu Bash, how do I compare a variable to a stdout value? [duplicate]

I attempted to follow the answer on How do I compare two string variables in an 'if' statement in Bash?, but the accepted solution did not work. As you can see from the script below, my syntax follows the solutions on that question, which gives me the error described in Bash syntax error: "[[: not found". And yes, I tried their solution too.
I have the following script where I am trying to delete all data from a directory. Before I delete all data, I want to compare a variable to a stdout value to verify I have the correct directory.
To avoid deleting all data from the wrong directory, I am attempting to compare the variable in the script with data stored in a *.ini.php file.
Here is the script:
#!/bin/bash
#--- script variables ---
#base path of the timetrex web folder ending with a / character
timetrex_path=/var/www/timetrex/
timetrex_cache=/tmp/timetrex/
#--- initialize script---
#location of the base path of the current version
ttrexVer_path=$(ls -d ${timetrex_path}*.*.*)/
#the timetrex cache folder
ttrexCache_path=$(sed -n 's/[cache]*dir =*\([^ ]*\)/\1/p' < ${ttrexVer_path}timetrex.ini.php)/
echo $timetrex_cache
echo $ttrexCache_path
#clear the timetrex cache
if [[ "$ttrexCache_path" = "$timetrex_cache" ]]
then
    #path is valid, OK to do mass delete
    #rm -R $ttrexCache_path*
    echo "Success: TimeTrex cache has been cleared."
else
    #path could be root - don't delete the whole server
    echo "Error: TimeTrex cache was NOT cleared."
fi
The output of the script shows the following:
/tmp/timetrex/
/tmp/timetrex/
Error: Timetrex cache was NOT cleared.
As you can see from the output, both values are the same. However, when the script compares the two variables, it thinks they are different values.
Is this because the values are different types? Am I using the wrong comparison operator in the if statement? Thanks in advance.
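As an aside (my own suggestion, not part of the original thread): when two values print identically but compare as unequal, the usual culprit is invisible characters such as a trailing space or newline picked up by the sed command. A quick way to check is to dump both values byte by byte:
# Sketch: reveal hidden whitespace in the two variables before comparing them
printf '%s' "$timetrex_cache"  | od -c
printf '%s' "$ttrexCache_path" | od -c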
After doing some more searching, I found that comparing the directory contents was a somewhat effective way of verifying that both variables pointed to the same directory.
Here is one way to do it:
#clear the timetrex cache
if [ "$(diff -q $timetrex_cache $ttrexCache_path 2>&1)" = "" ]
then
    #path is valid, OK to do mass delete
    rm -R ${ttrexCache_path}*
    echo "Success: TimeTrex cache has been cleared."
else
    #path could be root - don't delete the whole server
    echo "Error: TimeTrex cache was NOT cleared."
fi
If one of the directories is an invalid path, the condition catches the problem and doesn't try to delete the directory contents.
If the directory paths are different but point to valid directories, the condition statement sees that they have different contents and doesn't try to delete the directory contents.
If both directory paths are different, point to valid directories, and the contents of those directories are the same, then the script will delete everything in one of the directories. So this is not a foolproof method.
A second method can be seen at https://superuser.com/questions/196572/check-if-two-paths-are-pointing-to-the-same-file. The problem with this method is that the code does not know the difference between /tmp/timetrex and /tmp/timetrex/, which is important when you want to append a * at the end of the path.
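(As a side note of my own: a trailing slash can be normalized before appending the *, for example with a parameter expansion like this sketch.)
# Sketch: ensure exactly one trailing slash before appending *
dir="/tmp/timetrex"
dir="${dir%/}/"       # /tmp/timetrex -> /tmp/timetrex/ (and /tmp/timetrex/ stays unchanged)
echo "${dir}*"        # -> /tmp/timetrex/*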
In the end, the best solution for this problem is quite simple. Changing the syntax of the original code is the only thing that needed to be done.
#clear the timetrex cache
if [ ${timetrex_cache} == ${ttrexCache_path} ] && [[ "${timetrex_cache: -1}" = "/" ]]
then
    #path is valid, OK to do mass delete
    rm -R ${ttrexCache_path}*
    echo "Success: TimeTrex cache has been cleared."
else
    #path could be root - don't delete the whole server
    echo "Error: TimeTrex cache was NOT cleared."
fi
Hope this is helpful to someone!

create file shell script

I am new to shell scripting in Linux and I am trying to take data from the keyboard and then append it to a file. Pretty straightforward, but I am getting an error when I try to create the file. The error says "you do not have permission to create this file".
I first do a check to make sure the file exists. If it exists, append to the end of the file. If not, create the file. What am I doing wrong?
Thank you!
P.S. In this case, I do not have the file created yet
#!/bin/sh
echo "Please enter your first name";
read first
echo "Please enter your last name";
read last
combine=":$first $last"
file="/testFile.dat"
if [ -f "$file" ]
then
    echo "$file found."
    echo $combine >> $file
else
    echo "$file not found. Will create the file and add entry now."
    touch $file
    $combine >> $file
fi
You're trying to write to the file /testFile.dat which is located in the root directory /. It is highly likely that as a regular user you would not have write permissions for creating such a file.
But I'm guessing what you wanted is to create testFile.dat in the current directory.
Replace the following line:
file="/testFile.dat"
with:
file="./testFile.dat"
You are creating the file at the root of the filesystem. Try file="$HOME/testFile.dat" (or file=~/testFile.dat without quotes, since ~ is not expanded inside double quotes) to create the file in your home directory, or just file="./testFile.dat" to create it in the current directory.
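For completeness, a hedged sketch of a corrected version of the script (my own rewrite; note the else branch in the original question was also missing its echo):
#!/bin/sh
# Sketch: write to the current directory instead of / and always echo the data.
echo "Please enter your first name"
read first
echo "Please enter your last name"
read last
combine=":$first $last"
file="./testFile.dat"
if [ -f "$file" ]; then
    echo "$file found."
    echo "$combine" >> "$file"
else
    echo "$file not found. Will create the file and add entry now."
    echo "$combine" > "$file"    # redirection creates the file; touch is not needed
fi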
