I'm attempting to create a script that will create a folder based on the current time and date. I then need the script to copy the files from a source folder to the newly created folder. I then need it to copy folders from a second source folder to the original source folder, overwriting everything that's in there.
Below is what I've tried, and it's failing in quite an epic fashion.
#!/bin/bash
d="/home/$(date +%d-%m-%y")"
mkdir "$d"
cp /home/test "$d"
cp /home/test2 /home/test
I'm aware that I don't strictly have to define the variable, since the time between copies should be seconds and won't lapse into a new day, but I wanted to make sure, and honestly, I'm interested in learning to use variables in scripting.
There is one double quote too many here:
d="/home/$(date +%d-%m-%y")"
Actually, no quoting is necessary here at all; write it like this:
d=/home/$(date +%d-%m-%y)
In the rest of the script, if you want to copy directories, you will need to use cp -r instead of simply cp.
Finally, note that if you do cp -r dir1 dir2 when dir2 already exists, dir1 will be copied inside dir2 rather than overwriting its content. That is, it will create dir2/dir1. If dir1 doesn't contain hidden files, you can write it like this to overwrite the content of dir2:
cp -r dir1/* dir2/
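Putting those fixes together, a corrected sketch of the original script might look like this (untested; as noted above, the dir1/* form assumes the source contains no hidden files):
#!/bin/bash
d=/home/$(date +%d-%m-%y)         # dated folder, e.g. /home/25-12-24
mkdir "$d"
cp -r /home/test/* "$d"/          # back up the source folder's files
cp -r /home/test2/* /home/test/   # overwrite the source with the second source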
I want to mimic copying a directory structure recursively (as in cp -r or rsync -a), but only touch the copied files, i.e. make all the copied files empty.
The specific use case is for a Snakemake pipeline; Snakemake looks for existing files in order to decide whether to re-run a pipeline step, and I want to make it believe the steps have already been run while avoiding fully downloading all the files.
This is a little kludgy, but you could pipe the output of find or rsync -nv into a little bash loop with mkdir -p and touch, rewriting each source path under the destination:
src=/some/dir    # source tree to mirror
dst=/some/copy   # destination for the empty copies
find "$src" -type f | while IFS= read -r file; do
    rel=${file#"$src"/}                 # path relative to the source
    mkdir -p "$dst/$(dirname "$rel")"   # recreate the parent directory
    touch "$dst/$rel"                   # create an empty placeholder file
done
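If rsync is available, a two-step sketch avoids the loop entirely (the paths are placeholders; the --include='*/' --exclude='*' pair makes rsync recreate only the directory tree):
# 1. Recreate the directory structure, but no file contents.
rsync -a --include='*/' --exclude='*' /some/dir/ /some/copy/
# 2. Touch an empty placeholder for every regular file in the source.
(cd /some/dir && find . -type f -print0) | (cd /some/copy && xargs -0 touch)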
Maybe I'm just going about this wrong and making it harder than it has to be.
This is my problem. I have two different scripts that download various picture files. The first downloads from email, and the downloaded files go into the /attachments/ directory. The second script copies the contents of Google Drive; all files and folders get copied into the ~/gdrive/ directory.
I want to be able to move all picture files from both these folders, as well as any subfolders, to ~/Pictures/$today and prevent any overwriting in the case of duplicate file names. I don't mind having two separate scripts to handle the pictures in the two different directories, but I do need it to be able to get all files in subdirectories of the starting point. It also needs to be able to handle a variety of file extensions.
My current solution adds a numbered extension such as .~1~ after the file's normal extension (.jpg, .png, .tiff, etc.). I don't lose any files this way, but any that wind up with a backup number after the extension are rendered useless to my project. This is what I am currently using:
TODAY=$(date +"%m-%d-%Y")
mkdir -p ~/Pictures/"$TODAY" &&
sudo find /attachments -type f -exec mv --backup=numbered -t ~/Pictures/"$TODAY" {} +
My result if there are duplicate file names looks like this:
DSC07286.JPG
DSC07286.JPG.~1~
Is there a better approach than what I am doing? Is there a way to dissect the filename parts, reorganize them, and do it recursively for all files in the directory? Thanks
Something like this should do it (untested; uses standard lowercase variable names and puts the index just before the extension to not mess with sorting):
for path in ~/Pictures/"$today"/*.JPG
do
    index=0
    for duplicate_path in "$path".~[0-9]*
    do
        new_path="${duplicate_path%%.*}${index}.JPG"
        echo "$duplicate_path" "$new_path"
        ((++index))
    done
done
When you're confident it's doing the right thing, simply replace echo with mv to actually move the files.
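Two optional hardening tweaks, not in the original answer: enable nullglob so the loops simply don't run when there are no matches, and use mv -n so an existing file is never overwritten:
shopt -s nullglob                      # unmatched globs expand to nothing
# ...then, inside the inner loop, replace echo with a no-clobber move:
mv -n "$duplicate_path" "$new_path"    # -n: never overwrite an existing file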
Here is my solution.
#!/bin/bash
TODAY=$(date +"%m-%d-%Y")
NOW=$(date +"%D %T")
sudo mkdir -p /home/pi/Pictures/emailpics/"$TODAY" &&
sudo find /attachments -type f -exec mv --backup=numbered -t /home/pi/Pictures/emailpics/"$TODAY"/ {} + &&
for f in /home/pi/Pictures/emailpics/"$TODAY"/*.~?~
do
    fullfilename=$f
    filepath=$(dirname "$fullfilename")    # directory part of the path
    filename=$(basename "$fullfilename")   # e.g. DSC07286.JPG.~1~
    fname="${filename%.*}"                 # strip shortest .suffix -> DSC07286.JPG
    bkpnum="${filename##*.}"               # keep last .suffix      -> ~1~
    file="${fname%.*}"                     # strip again            -> DSC07286
    ext="${fname##*.}"                     # keep extension         -> JPG
    sudo mv "$f" "$filepath/$file$bkpnum.$ext"   # -> DSC07286~1~.JPG
done
Can't say I fully understand all the syntax for the parsing bits, but it works. Maybe someone else can explain what is going on.
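Since you asked: those are bash parameter expansions. % strips the shortest matching suffix, ## strips the longest matching prefix. A quick demo in an interactive shell, using one of the filenames from above:
$ filename="DSC07286.JPG.~1~"
$ echo "${filename%.*}"    # drop the shortest suffix starting at a dot
DSC07286.JPG
$ echo "${filename##*.}"   # keep only what follows the last dot
~1~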
I have a directory filled with subdirectories exceeding 450 GB. Inside each of these subdirectories is an instruction file. I have a script that copies the instruction file from the directory I am currently in and puts it inside every subdirectory via:
#!/bin/bash
for d in */; do cp "INSTALLATION INSTRUCTIONS.rtf" "$d"; done
I need to remove all of these files in the subdirectories and replace them with new instructions. Can I simply write another script that does this:
#!/bin/bash
for d in */; do rm "INSTALLATION INSTRUCTIONS.rtf" "$d"; done
I am very hesitant and wanted to make sure, as these files are vitally important. I don't want to accidentally remove anything, and making a backup of 450+ GB is very taxing.
find . -mindepth 2 -name "INSTALLATION INSTRUCTIONS.rtf" -exec rm -f '{}' +
Since this is "vitally important" data, I would first list all files that match the file name you want to delete/overwrite, without taking any action on them (other than listing):
find /folder/ -type f -name "INSTALLATION INSTRUCTIONS.rtf" -print > /tmp/holder
That would create a list of matches in /tmp/holder. Then you could analyze this list before taking any action (either visually or programmatically) to make sure that it does not include anything you don't want to delete (when dealing with big amounts of data, strange things can happen, so be proactive about protecting the data).
If you are happy with what the list shows, then you could delete the old instructions, or if possible, overwrite them with the new file. Here's an example to overwrite the old file with the new one:
while read -r line; do cp --no-preserve=all /folder/newfile "$line"; done < /tmp/holder
The cp --no-preserve=all command (available in GNU coreutils) would ensure that the new file gets permissions that are "adequate" for the folder where it is located. You may change that to a simple cp if you don't want that to happen.
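In the same spirit of caution, a dry-run variant of that loop (just the same command prefixed with echo, not part of the original answer) prints each copy that would happen; remove the echo once the output looks right:
while read -r line; do echo cp --no-preserve=all /folder/newfile "$line"; done < /tmp/holder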
I have a directory that I want to copy all of it, but to a directory with a different name.
Example:
/Home/user/DirA-Web
copy its contents to (but it needs to be created)
/Home/user/version1/DirB-Img
/Home/user/version2/DirB-Img
I could always copy it and then rename it, I suppose.
Edit: I currently rsync the directories to the desired location and then mv in a for loop to rename them. I am looking for something cleaner.
If the directory
/Home/user/version1/
exists, a simple cp will do:
cp -r /Home/user/DirA-Web /Home/user/version1/DirB-Img
If not, you need to use mkdir beforehand, because cp has no option to create your target directory recursively:
mkdir -p /Home/user/version1/DirB-Img && cp -r /Home/user/DirA-Web /Home/user/version1/DirB-Img
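Since the question mentions both version1 and version2, a small loop covers them in one go (a sketch using the same paths from the question):
for v in version1 version2; do
    mkdir -p "/Home/user/$v" &&
    cp -r /Home/user/DirA-Web "/Home/user/$v/DirB-Img"
done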
I have a directory with the following structure:
file_1
file_2
dir_1
dir_2
# etc.
new_subdir
I'd like to make a copy of all the existing files and directories located in this directory in new_subdir. How can I accomplish this via the Linux terminal?
This is an old question, but none of the answers seem to work (they cause the destination folder to be copied recursively into itself), so I figured I'd offer up some working examples:
Copy via find -exec:
find . -maxdepth 1 ! -regex '.*/new_subdir' ! -regex '.' -exec cp -r '{}' new_subdir \;
This code uses regex to find all files and directories (in the current directory) which are not new_subdir and copies them into new_subdir. The ! -regex '.' bit is in there to keep the current directory itself from being included, and -maxdepth 1 stops find from descending into the subdirectories, so each top-level entry is copied exactly once (without it, files inside the subdirectories would also be copied a second time, directly into new_subdir). Using find is the most powerful technique I know, but it's long-winded and a bit confusing at times.
Copy with extglob:
cp -r !(new_subdir) new_subdir
If you have extglob enabled in your bash shell (which is often the case in interactive shells), then you can use !() to copy all things in the current directory which are not new_subdir into new_subdir.
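If it isn't enabled, you can turn it on first (shopt -s extglob is standard bash; shown here as a sketch):
shopt -s extglob                 # enable extended globbing
cp -r !(new_subdir) new_subdir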
Copy without extglob:
mv * new_subdir ; cp -r new_subdir/* .
If you don't have extglob, find doesn't appeal to you, and you really want to do something hacky, you can move all of the files into the subdirectory, then recursively copy them back to the original directory. Unlike cp, which copies the destination folder into itself, mv just throws an error when it tries to move the destination folder inside of itself. (But it successfully moves every other file and folder.)
You mean like
cp -R * new_subdir
?
cp takes -R as an argument, which means recursive (so it copies directories too); * means all files (and directories).
Although * includes new_subdir itself, cp detects this case and ignores new_subdir (so it doesn't copy it into itself!)
Try something like:
cp -R * /path_to_new_dir/
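One caveat worth noting with the glob-based variants above (an addition on my part, not from the answers): * skips hidden files by default. If you need dotfiles copied too, bash's dotglob option changes that:
shopt -s dotglob                 # make * match hidden files as well
cp -R * /path_to_new_dir/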