I have a file that needs to be copied into every unique directory called test.
The directory structure is as below:
/contentroot/path/a/x/test
/contentroot/path/a/y/test
/contentroot/path/a/z/test
--------------------------
As above, I have more than 250 such test directory combinations.
I have tried the below command (using an asterisk), but it only copies to a single test directory and gives errors (cp: omitting directory):
cp myfile.txt /contentroot/path/a/*/test
Any help?
Perhaps a for loop?
for FOLDER in /contentroot/path/a/*/test; do
cp myfile.txt "$FOLDER"
done
You can feed the output of an echo as input to xargs. xargs will then run the cp command three times, appending the next directory path piped to it from the echo each time.
The -n 1 option on the xargs command makes it append only one of those arguments at a time to the cp each time it runs.
echo /contentroot/path/a/x/test /contentroot/path/a/y/test /contentroot/path/a/z/test | xargs -n 1 cp myfile.txt
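Since you have more than 250 of these directories, you can also let the shell glob from your question generate that list rather than typing each path by hand; the same xargs idea should still apply:
echo /contentroot/path/a/*/test | xargs -n 1 cp myfile.txt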
Warnings! Firstly, this will overwrite files (if they exist), and secondly, any bash command should be tested and used at the runner's risk! ;)
When I try to run the below code, it gives the error cp: target "Featurespath" is not a directory.
I have tried multiple options, but nothing works.
Featurespath=/permanent/jag/media-*/*/print/cooked/*Features.xml
for file in $(ls $Featurespath);
do
cat $Featurespath | sed "/pB-/s/Direction=\"unidir\"/Direction=\"bidir\"/" $Featurespath > /permanent/jag/temp.xml
cp -rf /permanent/jag/temp.xml $Featurespath
rm /permanent/jag/temp.xml
done
I want the modified XML to be written back into the same XML file.
The error you received was because of the cp line: bash expands $Featurespath into a list of files. When cp sees more than two parameters, it assumes the last parameter to be a directory, which it is not in this case. Here is my suggested fix:
Featurespath=/permanent/jag/media-*/*/print/cooked/*Features.xml
for file in $Featurespath
do
sed "/pB-/s/Direction=\"unidir\"/Direction=\"bidir\"/" "$file" > /permanent/jag/temp.xml
mv -f /permanent/jag/temp.xml "$file"
done
Notes
Do not use ls: bash can expand the wildcards just fine
Within the loop, you are now dealing with an individual file $file, not the list of files $Featurespath
Do not need to use the cat command, the sed command can take a file name
sed has an in-place editing option (-i), which eliminates the need for the temp file; see the sketch after these notes
Replace cp/rm combination with mv
Ultimately, like others have said, sed is not the right tool to edit XML contents, but it might work for simple cases.
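A minimal sketch of the in-place variant, assuming a sed where -i.bak edits each file directly and keeps a .bak backup (GNU and BSD sed both accept this form):
Featurespath=/permanent/jag/media-*/*/print/cooked/*Features.xml
for file in $Featurespath
do
    # edit each file in place; drop the .bak suffix once you trust the result
    sed -i.bak "/pB-/s/Direction=\"unidir\"/Direction=\"bidir\"/" "$file"
done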
I have a large directory named application_pdf which contains 93k files. My use case is to split the directory into 3 smaller subdirectories (in a different location than the original large directory) containing around 30k files each.
Can this be done directly from the command line?
Thanks!
Using bash:
x=("path/to/dir1" "path/to/dir2" "path/to/dir3")
c=0
for f in *
do
mv "$f" "${x[c]}"
c=$(( (c+1)%3 ))
done
If you have the rename command from Perl, you could try it like this:
rename --dry-run -pe 'my @d=("dirA","dirB","dirC"); $_=$d[$N%3] . "/$_"' *.pdf
In case you are not that familiar with the syntax:
-p says to create output directories, à la mkdir -p
-e says to execute the following Perl snippet
$d[$N%3] selects one of the directories in array @d as a function of the serially incremented counter $N provided to the snippet by rename
The output value is passed back to rename by setting $_
Remove the --dry-run if it looks good. Please run on a small directory with a copy of 8-10 files first, and make a backup before trying on all your 93k files.
Test
touch {0,1,2,3,4,5,6}.pdf
rename --dry-run -pe 'my @d=("dirA","dirB","dirC"); $_=$d[$N%3] . "/$_"' *.pdf
'0.pdf' would be renamed to 'dirB/0.pdf'
'1.pdf' would be renamed to 'dirC/1.pdf'
'2.pdf' would be renamed to 'dirA/2.pdf'
'3.pdf' would be renamed to 'dirB/3.pdf'
'4.pdf' would be renamed to 'dirC/4.pdf'
'5.pdf' would be renamed to 'dirA/5.pdf'
'6.pdf' would be renamed to 'dirB/6.pdf'
More for my own reference, but if you don't have the Perl rename command, you could do it just in Perl:
perl -e 'use File::Copy qw(move); my @d=("dirA","dirB","dirC"); my $N=0; my @files = glob("*.pdf"); foreach $f (@files){my $t=$d[$N++%3] . "/$f"; print "Moving $f to $t\n"; move $f, $t}'
Something like this might work:
for x in $(ls -1 originPath/*.pdf | head -30000); do
mv "$x" destinationPath/
done
This is probably quite basic, but I have spent the whole day looking for an answer without much success.
I have an executable script that resides in ~/Desktop/shell/myScript.sh
I want a single-line command to run this script from my terminal so that it outputs into a new directory in the same directory where the script is located, no matter what my present working directory is.
I was using:
mkdir -p tmp &&
./Desktop/shell/myScript.sh|grep '18x18'|cut -d":" -f1 > tmp/myList.txt
But it creates the new directory in the present working directory, not in the target location.
Any help would be appreciated.
Thanks!
You could solve it in one line if you pre-define a variable:
export LOC=$HOME/Desktop/shell
Then you can say
mkdir -p $LOC/tmp && $LOC/myScript.sh | grep '18x18' | cut -d":" -f1 > $LOC/tmp/myList.txt
But if you're doing this repeatedly it might be better long-term to wrap myScript.sh so that it creates the directory, and redirects the output, for you. The grep and cut parameters, as well as the output file name, would be passed as command-line arguments and options to the wrapper.
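A minimal sketch of such a wrapper; the name wrapScript.sh and the default pattern, field, and output values are illustrative assumptions you would adapt:
#!/bin/sh
# wrapScript.sh - hypothetical wrapper around myScript.sh
LOC=$HOME/Desktop/shell
pattern=${1:-18x18}       # grep pattern (illustrative default)
field=${2:-1}             # cut field (illustrative default)
outfile=${3:-myList.txt}  # output file name (illustrative default)
mkdir -p "$LOC/tmp"
"$LOC/myScript.sh" | grep "$pattern" | cut -d":" -f"$field" > "$LOC/tmp/$outfile"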
How about this:
SCRIPTDIR="./Desktop/shell/" ; mkdir "$SCRIPTDIR/tmp" ; "$SCRIPTDIR/myScript.sh" | grep '18x18' | cut -d ":" -f 1 > "$SCRIPTDIR/tmp/myList.txt"
In your case you have to give the path to the script anyway. If you put the script in the path where it is automatically searched, e.g. $HOME/bin, and you can just type myScript.sh without the directory prefix, you can use SCRIPTDIR=$( dirname $( which myScript.sh ) ).
Mixing directories with binaries and data files is usually a bad idea. For temporary files /tmp is the place to go. Consider that your script might become famous and get installed by the administrator in /usr/bin and run by several people at the same time. For this reason, try to think mktemp.
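For instance, a minimal mktemp sketch; the trap removes the temporary file when the shell exits:
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT
"$HOME/Desktop/shell/myScript.sh" | grep '18x18' | cut -d":" -f1 > "$tmpfile"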
Your script can do this for you with a little extra code.
Instead of doing this manually from the command line (who knows where you will move your script later), add the following code:
[1] Find your script directory location using dirname
script_directory=$(dirname "$0")
The above code will find your script directory and save it in a variable.
[2] Create your "tmp" folder in your script directory
mkdir "$script_directory/tmp 2> /dev/null"
The above code will make a directory called "tmp" in your script directory. If the directory already exists, mkdir will not overwrite it; it just gives an error, which is hidden by 2> /dev/null.
[3] In your script, filter the output using grep and "cut", then redirect it to a new file
cat "$0"|grep '18x18'|cut -d":" -f1 > "$script_directory"/tmp/myList.txt
I am using macOS. This is the command-line code to launch my program (two parts):
nucmer --mum file1.txt file2.txt
show-snps -Clr -x 2 out.delta > out_file1.snps
The first part of the program creates the file out.delta. My file2.txt is always the same, but I want to launch both parts 35000 times with a different file1.txt each time. All the file1 files are located in the same directory.
Is it possible to do this using Bash?
Keep all the input files in one directory. Create a wrapper script that invokes the nucmer script and then the show-snps script. The wrapper accepts the path to the file directory as input, iterates over all files in the directory, and calls your two scripts, as sketched below.
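A minimal sketch of that wrapper, assuming the inputs are the .txt files in the directory and that file2.txt lives outside it:
#!/bin/bash
# Hedged sketch: pass the input directory as the first argument
dir=${1:?usage: $0 input-dir}
for f in "$dir"/*.txt
do
    nucmer --mum "$f" file2.txt
    show-snps -Clr -x 2 out.delta > "out_$(basename "$f" .txt).snps"
done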
You could do something along these lines:
find . -maxdepth 1 -type f -print | grep -v './out_' | while read f
do
b=$(basename "${f}")
nucmer --mum "${f}" file2.txt
show-snps -Clr -x 2 out.delta > "out_${b}.snps"
done
The find bit finds all files in the current directory. grep filters out any previous output files, in case you've run this before. The basename line strips off the leading ./, and then your two programs get run with the input file name and an output file name based on the basename output.
If you don't get an "argument list too long" error, you could just use for:
for f in file*.txt; do nucmer --mum "$f" file2.txt; show-snps -Clr -x 2 out.delta > "out_${f%.txt}.snps"; done
On my computer running Ubuntu, I have a folder full of hundreds of files, all named "index.html.n" where n starts at one and continues upwards. Some of those files are actual HTML files, some are image files (png and jpg), and some of them are zip files.
My goal is to permanently remove every single file except the zip archives. I assume it's some combination of rm and file, but I'm not sure of the exact syntax.
If it fits into your argument list and no filenames contain a colon, a simple pipe with xargs should do:
file * | grep -vi zip | cut -d: -f1 | tr '\n' '\0' | xargs -0 rm
First find to find the matching files, then file to get the file types. sed drops the Zip archives from the list and removes everything but the filenames from the output of file. Lastly, rm deletes what is left:
find -name 'index.html.[0-9]*' | \
xargs file | \
sed -n '/: Zip archive/!s/\([^:]*\):.*/\1/p' | \
xargs rm
I would run:
for f in index.html.*
do
file "$f" | grep -qi zip
[ $? -ne 0 ] && rm -i "$f"
done
and remove the -i option if you feel confident enough.
Here's the approach I'd use; it's not entirely automated, but it's less error-prone than some other approaches.
file * > cleanup.sh
or
file index.html.* > cleanup.sh
This generates a list of all files (excluding dot files), or of all index.html.* files, in your current directory and writes the list to cleanup.sh.
Using your favorite text editor (mine happens to be vim), edit cleanup.sh:
Add #!/bin/sh as the first line
Delete all lines containing the string "Zip archive"
On each line, delete everything from the : to the end of the line (in vim, :%s/:.*$//)
Replace the beginning of each line with "rm" followed by a space
Exit your editor, updating the file.
chmod +x cleanup.sh
You should now have a shell script that will delete everything except zip files.
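For example, with hypothetical file names, the edited cleanup.sh might end up looking like this:
#!/bin/sh
rm index.html.1
rm index.html.2
rm index.html.5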
Carefully inspect the script before running it. Look out for typos, and for files whose names contain shell metacharacters. You might need to add quotation marks to the file names.
(Note that if you do this as a one-line shell command, you don't have the opportunity to inspect the list of files you're going to delete before you actually delete them.)
Once you're satisfied that your script is correct, run
./cleanup.sh
from your shell prompt.
for i in index.html.*
do
    type=$(file "$i")        # capture the file type description
    if [[ ! $type =~ Zip ]]  # keep anything identified as a Zip archive
    then
        rm "$i"
    fi
done
Change the rm to ls for testing purposes.