Create directories and download files by reading input from a file - linux

cat paste_output.txt | while read -r file_name path_name file;
do mkdir -p -- "$path_name";
wget "$file_name";
mv "$file" "$path_name";
done;
Hi! I have this piece of code that reads field by field from the specified file. What I am trying to do is: create the directory given in the second field, download the file given in the first field, and then, after the download finishes, move that file into the directory from the second field.
Output: I get the desired directory structure and the files are downloaded, however the files end up in the directory I am executing the commands from.
How do I move the files into the desired directories?

You can use the -P flag of wget to put the file in the target directory.
If the directory doesn't exist, wget will create it,
so this also lets you save on the mkdir.
while read -r file_name path_name file; do
    wget -P "$path_name" "$file_name"
done < paste_output.txt
I made some other improvements to the script:
The cat is useless; input redirection is better.
The semicolons at the ends of the lines are unnecessary.
It's good to indent the body of loops, for readability.
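For illustration, a hypothetical line of paste_output.txt could look like the one below (made-up values, whitespace-separated: URL, target directory, local filename; with wget -P the third field is no longer needed, since wget keeps the remote file's name):
https://example.com/files/report.pdf reports/2023 report.pdf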

Related

While copying the file to another file, getting error: cp: target is not a directory

When I try to run the code below, it gives the error cp: target "Featurespath" is not a directory.
I have tried multiple options but nothing works.
Featurespath=/permanent/jag/media-*/*/print/cooked/*Features.xml
for file in $(ls $Featurespath);
do
cat $Featurespath | sed "/pB-/s/Direction=\"unidir\"/Direction=\"bidir\"/" $Featurespath > /permanent/jag/temp.xml
cp -rf /permanent/jag/temp.xml $Featurespath
rm /permanent/jag/temp.xml
done
I want the modified XML to be written back into the same XML file.
The error you received was because of the cp line: bash expands $Featurespath into a list of files. When cp sees more than two arguments, it assumes the last one is a directory, which is not the case here. Here is my suggested fix:
Featurespath=/permanent/jag/media-*/*/print/cooked/*Features.xml
for file in $Featurespath
do
sed "/pB-/s/Direction=\"unidir\"/Direction=\"bidir\"/" "$file" > /permanent/jag/temp.xml
mv -f /permanent/jag/temp.xml "$file"
done
Notes
Do not use ls: bash can expand the wildcards just fine.
Within the loop, you are now dealing with an individual file, $file, not the list of files $Featurespath.
There is no need for the cat command; sed can take a file name directly.
sed has an in-place editing option, which eliminates the need for the temp file. You might want to look into it (see the sketch after these notes).
Replace the cp/rm combination with mv.
Ultimately, like others have said, sed is not the right tool to edit XML contents, but it might work for simple cases.
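A minimal sketch of that in-place option, assuming GNU sed (BSD/macOS sed needs -i '' instead of -i):
for file in /permanent/jag/media-*/*/print/cooked/*Features.xml
do
    # -i edits each file directly, so no temp file or mv is needed
    sed -i "/pB-/s/Direction=\"unidir\"/Direction=\"bidir\"/" "$file"
done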

Bash loop to gunzip file and remove file extension and file prefixes

I have several .vcf.gz files:
subset_file1.vcf.vcf.gz
subset_file2.vcf.vcf.gz
subset_file3.vcf.vcf.gz
I want to gunzip these files and rename them (remove subset_ and the redundant .vcf extension) in one go, and get these files:
file1.vcf
file2.vcf
file3.vcf
This is the script I have tried:
iFILES=/file/path/*.gz
for i in $iFILES;
do gunzip -k $i > /get/in/this/dir/"${i##*/}"
done
You have three operations to apply to the output path name:
1. remove the directory part
2. remove the prefix subset_
3. remove the redundant .vcf extension
That is hard to accomplish with only one command.
The following is a modified version. Be CAREFUL when trying it; I didn't test it thoroughly on my computer.
for i in /file/path/*.gz
do
    # build the output file name: strip the directory, the subset_ prefix,
    # and the duplicated .vcf extension
    o=$(echo "${i##*/}" | sed 's/.*_\(.*\)\(\.[a-z]\{3\}\)\{2\}.*/\1\2/g')
    # -c writes the decompressed data to stdout and keeps the original .gz
    gunzip -c "$i" > /get/in/this/dir/"$o"
done
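An alternative sketch that builds the name with plain parameter expansion instead of sed, assuming the names all follow the subset_NAME.vcf.vcf.gz pattern shown in the question:
for i in /file/path/*.gz
do
    o=${i##*/}       # strip the directory part
    o=${o#subset_}   # strip the subset_ prefix
    o=${o%.vcf.gz}   # strip the trailing .vcf.gz, leaving NAME.vcf
    gunzip -c "$i" > /get/in/this/dir/"$o"
done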

Move files from one dir to another and add to each files name in the new directory

I need to move each *.lis file in its current directory to a new directory, prepending to each file's existing filename so that an application can pick up the file under the new name.
For example:
Move /u01/vista/vmfiles/CompressGens.lis and /u01/vista/vmfiles/DeleteOnline.lis
to
/u01/vista/Migration_Logs/LIS.BHM.P.MIGRATION_LOGS.FBA."$(date '+%m%d%y%H%M%S')"CompressGens.lis
and
/u01/vista/Migration_Logs/LIS.BHM.P.MIGRATION_LOGS.FBA."$(date '+%m%d%y%H%M%S')"DeleteOnline.lis
What I started out with in my script:
cp -f /u01/vista/vmfiles/*.lis /u01/vista/Migration_Logs/LIS.BHM.P.MIGRATION_LOGS.FBA."$(date '+%m%d%y%H%M%S')"*.lis
There are multiple *.lis files in the /u01/vista/vmfiles/ directory, and depending on the system and day, the *.lis files will not always be the same. Sometimes it is DeleteOnline.lis and CompressGens.lis but not ArchiveGens.lis; the next day it will be CompressGens.lis and ArchiveGens.lis.
So I will need to get the *.lis filenames in the /u01/vista/vmfiles/ directory, and then move each one.
You need a loop, so that you can do one file at a time.
ls -1tr *.lis | while read File
do
cp -p $File ../Migration_Logs/${File%.lis}.$(date '+%m%d%y%H%M%S').CompressGens.lis &&
mv $File ../Migration_Logs/${File%.lis}.$(date '+%m%d%y%H%M%S').DeleteOnline.lis
done
${File%.lis} is the bash/ksh way of stripping that suffix - see the ksh or bash man page.
The "&&" is there so that the file is only moved to the 2nd archived name if the copy to the 1st archived name succeeds.
@Abe Crabtree, thanks for the help in pointing me in the right direction. Below is the final code that worked.
ls -1tr *.lis | while read File
do
mv $File /u01/vista/Migration_Logs/LIS.BHM.P.MIGRATION_LOGS.FBA.$(date '+%m%d%y%H%M%S').${File%.lis}.lis
done
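The same loop can also be written without parsing the output of ls, as a sketch (it assumes the script is run from /u01/vista/vmfiles; glob order is alphabetical rather than by modification time, which does not matter here since every file gets moved):
for File in *.lis
do
    mv "$File" /u01/vista/Migration_Logs/LIS.BHM.P.MIGRATION_LOGS.FBA.$(date '+%m%d%y%H%M%S')."${File%.lis}".lis
done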

copy multiple files from directory tree to new different tree; bash script

I want to write a script that does a specific thing:
I have a txt file, e.g.
from1/from2/from3/apple.file;/to1/to2/to3;some not important stuff
from1/from2/banana.file;/to1/to5;some not important stuff
from1/from10/plum.file;/to1//to5/to100;some not important stuff
Now I want to copy the file named on each line (e.g. apple.file) from its original directory tree into the new, non-existing directory given after the first semicolon (;).
I tried a few code examples from similar questions, but nothing works and I'm too weak in bash scripting to find the errors.
Please help :)
I need to add some conditions:
the file not only needs to be copied, but also renamed. Example lines in file.txt:
from1/from2/from3/apple.file;to1/to2/to3/juice.file;some1
from1/from2/banana.file;to1/to5/fresh.file;something different from above
so apple.file needs to be copied, renamed to juice.file, and put into to1/to2/to3/juice.file.
I think that cp will also rename the file, but
mkdir -p "$to"
from the answer below will create the full folder path with juice.file as a folder.
In addition, after the second semicolon of each line there will be something different, so how do I cut that off?
Thanks for all the help.
EDIT: There will be no spaces in the input txt file.
Try this code:
cat file | while IFS=';' read from to some_not_important_stuff
do
to=${to:1} # strip off leading space
mkdir -p "$to" # create parent for 'to' if not existing yet
cp -i "$from" "$to" # option -i to get a warning when it would overwrite something
done
Using awk
(run the awk command first and confirm the output is fine, then add |sh to do the copy)
awk -F";" '{printf "cp %s %s\n",$1,$2}' file |sh
Using shell (updated: the target folder needs to be created manually; based on alfe's answer):
while IFS=';' read -r from to X
do
    mkdir -p "$to"
    cp "$from" "$to"
done < file
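For the edited question, where the part after the first semicolon already contains the new file name (e.g. to1/to2/to3/juice.file), a possible sketch is to create only the parent directory and let cp do the renaming; the third field is read and ignored:
while IFS=';' read -r from to rest
do
    mkdir -p "$(dirname "$to")"   # creates to1/to2/to3, not .../juice.file
    cp "$from" "$to"              # copies apple.file to to1/to2/to3/juice.file
done < file.txt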
I had this same problem and solved it with tar:
tmpfile=/tmp/myfile.tar
files="/some/folder/file1.txt /some/other/folder/file2.txt"
targetfolder=/home/you/somefolder
tar --create --file="$tmpfile" $files   # $files is left unquoted on purpose so it splits into separate file arguments
tar --extract --file="$tmpfile" --directory="$targetfolder"
In this case, tar will automatically create all (sub)folders for you! Best,
Nabi

How to check for an exploding zip file in bash?

I have a bash shell script that unzips a zip file and manipulates the resulting files. Because of the process, I expect all the content I am interested in to be within a single folder, like so:
file.zip
    /file
        /contentFolder1
        /contentFolder2
        stuff1.txt
        stuff2.txt
        ...
I've noticed users on Windows typically don't create a sub folder but instead submit an exploding zip file that looks like:
file.zip
    /contentFolder1
    /contentFolder2
    stuff1.txt
    stuff2.txt
    ...
How can I detect these exploding zips, so that I may handle them accordingly? Is it possible without unzipping the file first?
If you want to check, unzip -l will print the contents of the zip file without extracting them. You'll have to massage the output a bit, though, since it's printing all sorts of additional crud.
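As a sketch of that massaging (assuming Info-ZIP's zipinfo is available and that entry names contain no newlines), you can list the archive's entries and count the distinct top-level names; exactly one means everything sits inside a single folder:
top=$(zipinfo -1 file.zip | cut -d/ -f1 | sort -u | wc -l)
if [ "$top" -eq 1 ]; then
    echo "single top-level entry - not an exploding zip"
else
    echo "exploding zip"
fi
# note: an archive containing just one top-level file (no folder) would also pass this test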
Unzip to a directory first, and then remove the extra layer if the zip is not a bomb.
tempdir=$(mktemp -d)
unzip -d "$tempdir" file.zip
if [ "$(ls "$tempdir" | wc -l)" = 1 ]; then
    mv "$tempdir"/* .
    rmdir "$tempdir"
else
    mv "$tempdir" file
fi
I wouldn't try to detect it. I'd just force unzip to do what I want. With InfoZip:
$ unzip -j -d unzip-output-dir FileFromUntrustedSource.zip
-j makes it ignore any directory structure within the file, and -d tells it to put files in a particular directory, creating it if necessary.
If there are two files with the same name but in different subdirectories, the above command will make unzip ask if you want to overwrite the first with the second. You can add -o to force it to overwrite without asking, or -f to only overwrite if the second file is newer.
