Here is my command
for i in `find . -name '*Source*.dat'`; do cp "$i" $INBOUND/$RANDOM.dat; done;
Here are the files (just a sample):
./(12)SA1 (Admitting Diagnosis) --_TA1-1 + TA1-2/Source.dat
./(12)SA1 (Admitting Diagnosis) --_TA1-1 + TA1-2/Source_2000C.dat
./(13)SE1 (External Cause of Injury) --_ TE1-1+TE1-2/Source.dat
./(13)SE1 (External Cause of Injury) --_ TE1-1+TE1-2/Source_2000C.dat
./(13)SE1 (External Cause of Injury) --_ TE1-1+TE1-2/Source_POATest.dat
./(14)SP1(Primary)--_ TP1-1 + TP1-2/Source.dat
./(14)SP1(Primary)--_ TP1-1 + TP1-2/Source_2000C.dat
./(14)SP1(Primary)--_ TP1-1 + TP1-2/Source_ProcDateTest.dat
./(15)SP1(Primary)--_ TP1-1 + TP1-2 - SP2 -- TP2-1 + TP2-2/Source.dat
./(16)SP1(Primary)--_ TP1-1 + TP1-2 +TP1-3- SP2 -- TP2-1 + TP2-2/Source.dat
./(17)SP1(Primary)--_ TP1-1 + TP1-2 +TP1-3/Source.dat
./(18)SP1(Primary)--_ TP1-1 + TP1-2 - SP2 -- TP2-1 + TP2-2 - Copy/Source.dat
./(19)SD1 (Primary)+SD2 (Other Diagnosis)--_ TD12/Source.dat
./(19)SD1 (Primary)+SD2 (Other Diagnosis)--_ TD12/Source_2000C.dat
./(19)SD1 (Primary)+SD2 (Other Diagnosis)--_ TD12/Source_POATest.dat
./(2)SD3--_TD4 SD4--_TD4/Source.dat
./(2)SD3--_TD4 SD4--_TD4/Source2.dat
The spaces in those file names are being word-split by bash, so this doesn't work.
In addition, I want to append some randomness to the file names so they don't collide in the destination directory, but that's another story.
find . -name '*Source*.dat' -exec bash -c 'cp "$1" "$2/$RANDOM.dat"' -- {} "$INBOUND" \;
Using -exec to execute commands is whitespace-safe. Spawning bash to run cp is necessary to get a different $RANDOM for each copy ($RANDOM is a bash feature, so a plain sh may not provide it).
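If your find supports terminating -exec with +, a variant of this sketch batches many files into each bash invocation; here "$INBOUND" rides along as $0 so it is visible inside the single-quoted script (a common trick with bash -c):

find . -name '*Source*.dat' -exec bash -c '
    for f; do cp "$f" "$0/$RANDOM.dat"; done   # $RANDOM is re-expanded for every file
' "$INBOUND" {} +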
If all the files are at the same directory level, as in your example, you don't need find. For example,
for i in */*Source*.dat; do
    cp "$i" "$INBOUND/$RANDOM.dat"
done
will tokenize correctly and will find the correct files provided they are all in directories which are children of the current directory.
As @chepner points out in a comment, if you have bash v4 you can use ** (enable it first with shopt -s globstar):
shopt -s globstar
for i in **/*Source*.dat; do
    cp "$i" "$INBOUND/$RANDOM.dat"
done
which should find exactly the same files as find would, without the tokenizing issue.
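One caveat: if the pattern matches nothing, bash passes it through literally and cp fails on a file named **/*Source*.dat. Setting nullglob (also bash-specific) makes an unmatched pattern expand to nothing, so the loop body is simply skipped; a combined sketch:

shopt -s globstar nullglob   # nullglob: an unmatched pattern expands to nothing
for i in **/*Source*.dat; do
    cp "$i" "$INBOUND/$RANDOM.dat"
done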
How about:
find . -name '*Source*.dat' -print0 | xargs -0 -I {} bash -c 'cp "$1" "$2/${1##*/}-$RANDOM.dat"' -- {} "$INBOUND"
xargs is a handy way of constructing an argument list and passing it to a command.
find -print0 and xargs -0 go together, and are basically an agreement between the two commands about how to terminate arguments. In this case, it means the space won't be interpreted as the end of an argument.
-I {} sets up the {} as an argument placeholder for xargs. The bash -c wrapper means $RANDOM is expanded freshly for each copy (written directly, it would be expanded once by the invoking shell before xargs even ran), and ${1##*/} strips the source directory so the copy lands directly in $INBOUND.
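A tiny illustration of what -0 buys you, using a made-up name with a space in it:

printf 'one file.dat' | xargs printf '<%s>\n'       # printf sees two tokens: <one> <file.dat>
printf 'one file.dat\0' | xargs -0 printf '<%s>\n'  # printf sees one token:  <one file.dat>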
As for randomising the file name to avoid a collision, there are obviously lots of things you could do to generate a random string to attach. The most important part, though, is that you verify that your new file name also does not exist. You might use a loop something like this to attempt that:
# Can't assign to $RANDOM (and it's special to bash anyway); use our own variable.
suffix=$(date | md5sum | awk '{print $1}')   # plain `md5` on macOS
filename=$INBOUND/$suffix.dat
while [ -e "$filename" ]; do
    suffix=$(date | md5sum | awk '{print $1}')
    filename=$INBOUND/$suffix.dat
done
I'm not necessarily advocating for or against hashing the current time to generate a random filename (note that the output of date only changes once per second): the main point is that you want to check whether the file already exists first, just in case.
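Alternatively, mktemp does the generate-and-check atomically, which also closes the race window between the -e test and the cp. A sketch, assuming GNU mktemp (--suffix is not POSIX) and using $i for the source file as in the loops above:

# mktemp creates a unique empty file and prints its name; cp then overwrites it
filename=$(mktemp --suffix=.dat "$INBOUND/XXXXXXXX") && cp "$i" "$filename"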
There are several ways of treating files with spaces. You can use find in a pipe with while and read:
find . -name '*Source*.dat' | while IFS= read -r file; do cp "$file" "$INBOUND/$RANDOM.dat"; done
(IFS= and -r keep read from trimming whitespace or eating backslashes.)
Try something like
while IFS= read -r i; do
    echo "file is $i"
    cp "$i" "$INBOUND/$RANDOM.dat"
done < <(find . -name '*Source*.dat')
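Both read loops still break if a file name contains a newline; a NUL-delimited variant (bash-specific -d '') copes even with that:

while IFS= read -r -d '' i; do
    cp "$i" "$INBOUND/$RANDOM.dat"
done < <(find . -name '*Source*.dat' -print0)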
Related
I want to list all the filenames in one dir and copy them one by one. In my .bb file, I have this do_copy() function:
do_copy(){
    original_dir=...
    files=( $(find ${original_dir} -type f) )
    for f in "${files[@]}"; do
        echo $f
        cp $f ....
    done
}
But when I build it, I got:
raise sherrors.ShellSyntaxError(''.join(msg))
bb.pysh.sherrors.ShellSyntaxError: LexToken(TOKEN,'$(find ${original_dir} -type f)',0,0)
followed by:
LexToken(RPARENS,')',0,0)
LexToken(NEWLINE,'\n',0,0)
LexToken(For,'for',0,0)
LexToken(TOKEN,'package_file',0,0)
LexToken(In,'in',0,0)
It seems that the line for listing filenames failed. Any ideas? Thanks!
I have some files in a folder A which are named like this:
001_file.xyz
002_file.xyz
003_file.xyz
in a separate folder B I've files like this:
001_FILE_somerandomtext.zyx
002_FILE_somerandomtext.zyx
003_FILE_somerandomtext.zyx
Now I want to rename, if possible with just a command line in bash, all the files in folder B to the file names from folder A. The file extensions must stay different (B keeps its own).
There are exactly the same number of files in folders A and B, and they are in the same order thanks to the numbering.
I'm a total noob, but I hope some easy answer for the problem will show up.
Thanks in advance!
ZVLKX
*Example edited for clarification
An implementation might look a bit like this:
renameFromDir() {
    useNamesFromDir=$1
    forFilesFromDir=$2

    for f in "$forFilesFromDir"/*; do
        # Put original extension in $f_ext
        f_ext=${f##*.}
        # Put number in $f_num
        f_num=${f##*/}; f_num=${f_num%%_*}
        # look for a file with the same number in the directory we take names from
        set -- "$useNamesFromDir"/"${f_num}"_*.*
        [[ $1 && -e $1 ]] || {
            echo "Could not find file number $f_num in $useNamesFromDir" >&2
            continue
        }
        (( $# > 1 )) && {
            # there's more than one file with the same number; write an error
            echo "Found more than one file with number $f_num in $useNamesFromDir" >&2
            printf ' - %q\n' "$@" >&2
            continue
        }
        # extract the parts of our destination filename we want to keep
        destName=${1##*/}       # remove everything up to the last /
        destName=${destName%.*} # and past the last .
        # write the command we would run to stdout
        printf '%q ' mv "$f" "$forFilesFromDir/$destName.$f_ext"; printf '\n'
        ## or uncomment this to actually run the command
        # mv "$f" "$forFilesFromDir/$destName.$f_ext"
    done
}
Now, how would we test this?
mkdir -p A B
touch A/00{1,2,3}_file.xyz B/00{1,2,3}_FILE_somerandomtext.zyx
renameFromDir A B
Given that, the output is:
mv B/001_FILE_somerandomtext.zyx B/001_file.zyx
mv B/002_FILE_somerandomtext.zyx B/002_file.zyx
mv B/003_FILE_somerandomtext.zyx B/003_file.zyx
Sorry if this isn't helpful, but I had fun writing it.
This renames items in folder B to the names in folder A, preserving the extension of B.
A_DIR="./A"
A_FILE_EXT=".xyz"
B_DIR="./B"
B_FILE_EXT=".zyx"

FILES_IN_A=`find "$A_DIR" -type f -name "*$A_FILE_EXT"`
FILES_IN_B=`find "$B_DIR" -type f -name "*$B_FILE_EXT"`

for A_FILE in $FILES_IN_A
do
    A_BASE_FILE=`basename "$A_FILE"`
    A_FILE_NUMBER=(${A_BASE_FILE//_/ })
    A_FILE_WITHOUT_EXTENSION=(${A_BASE_FILE//./ })
    for B_FILE in $FILES_IN_B
    do
        B_BASE_FILE=`basename "$B_FILE"`
        B_FILE_NUMBER=(${B_BASE_FILE//_/ })
        if [ "${A_FILE_NUMBER[0]}" == "${B_FILE_NUMBER[0]}" ]; then
            mv "$B_FILE" "$B_DIR/$A_FILE_WITHOUT_EXTENSION$B_FILE_EXT"
            break
        fi
    done
done
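Since the question guarantees the same number of files in the same numeric order in both folders, a shorter bash sketch (extensions hard-coded as above) would also do; globs sort lexically, so the zero-padded numbers keep the two arrays paired up:

a=( A/*.xyz )
b=( B/*.zyx )
for idx in "${!a[@]}"; do
    name=$(basename "${a[idx]}" .xyz)   # e.g. 001_file
    mv "${b[idx]}" "B/$name.zyx"
done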
I changed a directory's name.
The directory contains thousands of files.
Some projects use these files and have symlinks pointing at them.
How do I find all the symlinks whose target contains the old folder name?
How can I change all these symlinks to the new path automatically?
If the only option is a bash script that deletes and recreates them, I will do it, but maybe you know an easier way?
It's a bit complicated, but it can be done with find, readlink, a check to test whether the symlink is relative or not, and sed to get rid of .. in path names (copied 1:1 from this answer).
(Note that most convenient methods (such as readlink -f) are not available due to the symlinks targets not existing anymore.)
Assuming your old path is /var/lib/old/path:
oldpath='/var/lib/old/path';
find / -type l -execdir bash -c 'p="$(readlink "{}")"; if [ "${p:0:1}" != "/" ]; then p="$(echo "$(pwd)/$p" | sed -e "s|/\./|/|g" -e ":a" -e "s|/[^/]*/\.\./|/|" -e "t a")"; fi; if [ "${p:0:'${#oldpath}'}" == "'"$oldpath"'" ]; then ...; fi;' \;
Now replace the ... from above with ln -sf (-f to override the existing link).
Assuming your new path is /usr/local/my/awesome/new/path:
oldpath='/var/lib/old/path';
newpath='/usr/local/my/awesome/new/path';
find / -type l -execdir bash -c 'p="$(readlink "{}")"; if [ "${p:0:1}" != "/" ]; then p="$(echo "$(pwd)/$p" | sed -e "s|/\./|/|g" -e ":a" -e "s|/[^/]*/\.\./|/|" -e "t a")"; fi; if [ "${p:0:'${#oldpath}'}" == "'"$oldpath"'" ]; then ln -sf "'"$newpath"'${p:'${#oldpath}'}" "{}"; fi;' \;
Note that oldpath and newpath have to be absolute paths.
Also note that this will convert all relative symlinks to absolute ones.
It would be possible to keep them relative, but only with a lot of effort.
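If you have GNU coreutils (an assumption worth checking), readlink -m shortens all of this considerably: it resolves a symlink to an absolute, ..-free path even when the target no longer exists. Like the one-liner above, this sketch rewrites relative links as absolute ones:

export oldpath='/var/lib/old/path'
export newpath='/usr/local/my/awesome/new/path'
find / -type l -execdir bash -c '
    for link; do
        p=$(readlink -m "$link")   # GNU -m: canonicalize; target need not exist
        case $p in
            "$oldpath"|"$oldpath"/*)
                ln -sfn "$newpath${p#"$oldpath"}" "$link" ;;
        esac
    done
' bash {} +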
Breaking it down
For those of you who care what that one-line-inferno actually means:
find - a cool executable
/ - where to search, in this case the system root
-type l - match symbolic links
-execdir - for every match run the following command in the directory of the matched file:
bash - well, bash
-c - execute the following string (leading and trailing ' removed):
p="$(readlink "{}")"; - starting with the most inner:
" - start a string to make sure no expansion happens
{} - placeholder for the matched file's name (feature of -execdir)
" - end the string
readlink ... - find out where the symlink points to
p="$(...)" - and store the result in $p
if [ "${p:0:1}" != "/" ]; then - if the first character of $p is not / (i.e. the symlink is relative), then...
p="$(echo "$(pwd)/$p" | sed -e "s|/\./|/|g" -e ":a" -e "s|/[^/]*/\.\./|/|" -e "t a")"; - convert the path to an absolute one:
$(pwd) - the current directory (where the matched file lies, because we're using -execdir)
/$p - append a slash and the target of the symlink to the path of the working directory
echo "$(pwd)/$p" | - pipe the above to the next command
sed ... - resolve all ..'s, see here
p="$(...)" and store the result back into $p.
fi; - end if
if [ "${p:0:'${#oldpath}'}" == "'"$oldpath"'" ]; - if $p starts with $oldpath
${p:0:'${#oldpath}'} - substring of $p, starting at position 0, with length of $oldpath:
${#oldpath} - length of variable $oldpath
'...' - required because we're inside a '-quoted string
then - then...
ln -sf - link symbolically and override existing file, with arguments:
"'"$newpath"'${p:'${#oldpath}'}" - replace the $oldpath part of $p with $newpath (actually remove as many characters from $p as $oldpath long is, and prepend $newpath to it):
" - start a string
' - end the '-string argument to bash -c
" - append a "-string to it (in which variable expansion happens), containing:
$newpath - the value of $newpath
" - end the "-string argument to bash -c
' - append a '-string to it, containing:
${p: - a substring of p, starting at:
' - end the argument to bash -c
${#oldpath} - append the length of $oldpath to it
' - append another '-string to it
} - end substring
" - end string
"{}" - the link file, whose path stays the same
fi; - end if
\; - delimiter for -execdir
I am trying to write a Perl script which checks all the directories in the current directory and then descends into each of them, recursively, until it reaches the last level of subdirectories. This is what I have written:
#!/usr/bin/perl -w
use strict;
my @files = <*>;
foreach my $file (@files){
    if (-d $file){
        my $cmd = qx |chown deep:deep $file|;
        my $chdir = qx |cd $file|;
        my @subfiles = <*>;
        foreach my $subfile (@subfiles){
            if (-d $file){
                my $cmd = qx |chown deep:deep $subfile|;
                my $chdir = qx |cd $subfile|;
                . # So on, in subdirectories
                .
                .
            }
        }
    }
}
Now, some of the directories I have contain around 50 subdirectories. How can I descend through all of them without writing 50 nested if conditions? Please suggest. Thank you.
Well, a CS101 way (if this is just an exercise) is to use a recursive function
sub dir_perms {
    my $path = shift;
    opendir(DIR, $path);
    my @files = grep { !/^\.{1,2}$/ } readdir(DIR); # ignore ./. and ./..
    closedir(DIR);
    for (@files) {
        my $full = "$path/$_";  # readdir returns bare names, so re-attach the path
        if ( -d $full ) {
            dir_perms($full);
        }
        else {
            system("chown deep:deep $full");  # run chown once; qx plus system would run it twice
        }
    }
}
dir_perms(".");
But I'd also look at File::Find for something more elegant and robust (this can get caught in a circular link trap, and errors out if you don't call it on a directory, etc.), and for that matter I'd look at plain old UNIX find(1), which can do exactly what you're trying to do with the -exec option, eg
/bin/bash$ find /path/to/wherever -type f -exec chown deep:deep {} \;
perldoc File::Find has examples for what you are doing. Eg,
use File::Find;
finddepth(\&wanted, #directories_to_search);
sub wanted { ... }
Further down, the doc says you can use find2perl to generate the wanted() subroutine for you.
find2perl / -name .nfs\* -mtime +7 \
-exec rm -f {} \; -o -fstype nfs -prune
NOTE: The OS usually won't let you change ownership of a file or directory unless you are the superuser (i.e. root).
Now that we've got that out of the way...
The File::Find module does what you want. Use use warnings; instead of -w:
use strict;
use warnings;
use feature qw(say);
use autodie;
use File::Find;
finddepth sub {
    return unless -d;  # You want only directories...
    # chown needs numeric IDs, so look the user and group up by name
    chown scalar getpwnam("deep"), scalar getgrnam("deep"), $File::Find::name
        or warn qq(Couldn't change ownership of "$File::Find::name"\n);
}, ".";
The File::Find package imports a find and a finddepth subroutine into your Perl program.
Both work pretty much the same. They both recurse deeply into your directory, and both take as their first argument a subroutine used to operate on the found files, followed by the list of directories to search.
The name of the file is placed in $_ and you are placed in the directory of that file. That makes it easy to run the standard tests on the file. Here, I'm rejecting anything that's not a directory. It's one of the few places where I'll use $_ as the default.
The full name of the file (relative to the directory you're searching) is placed in $File::Find::name, and the name of that file's directory is in $File::Find::dir.
I prefer to put my subroutine embedded in my find, but you can also put a reference to another subroutine in there too. Both of these are more or less equivalent:
my @directories;
find sub {
    return unless -d;
    push @directories, $File::Find::name;
}, ".";

my @directories;
find \&wanted, ".";

sub wanted {
    return unless -d;
    push @directories, $File::Find::name;
}
In both of these, I'm gathering the names of all of the directories in my path and putting them in @directories. I like the first one because it keeps my wanted subroutine and my find together. Plus, the mysteriously undeclared @directories in my subroutine doesn't look so mysterious and undeclared. I declared my @directories; right above the find.
By the way, this is how I usually use find. I find what I want, and place them into an array. Otherwise, you're stuck putting all of your code into your wanted subroutine.
I have a function which generates a shell command that uses find to delete all files that are not useful anymore:
var spawn = require('child_process').spawn;

var DOWNLOAD_DIR = '/home/user/directory';

function purge(psmil, callback) {
    var arg = [DOWNLOAD_DIR, '\\(', '-name', '"*.mp4"', '-o', '-name', '"*.zip"', '\\)', '!', '\\('],
        file = [],
        i = 0,
        cpurge;

    //Fill file with names of the files to keep

    arg.push('-name');
    arg.push('"' + file[i] + '"');
    i = i + 1;
    while (i < file.length) {
        arg.push('-o');
        arg.push('-name');
        arg.push('"' + file[i] + '"');
        i = i + 1;
    }
    arg.push('\\)');
    arg.push('-ls');
    arg.push('-delete');

    cpurge = spawn('find', arg);
    cpurge.stdout.on('data', function(data) {
        console.log('data');
    });
    cpurge.stderr.on('data', function(data) {
        console.log('err: ' + data);
    });
    cpurge.stdout.on('end', function() {
        callback();
    });
}
Example, it will generate the command:
find /home/user/directory \( -name "*.mp4" -o -name "*.zip" \) ! \( -name "tokeep.mp4" -o -name "tokeep2.mp4" \) -ls -delete
Which, put in a .sh file and run, works fine: it lists all the .mp4 and .zip files in /home/user/directory, prints them, and deletes them.
But when I look at my app's log, it lists everything on the disk, and it deletes all the .mp4 and .zip files in the directory.
Why?
EDIT: Use find directly
I've tried using strace; I got this line:
2652 execve("/usr/bin/find", ["find", "/home/user/directory/", "\\(", "-name", "\"*.mp4\"", "-o", "-name", "\"*.zip\"", "\\)", "!", "\\(", "-name", "\"filetokeep.mp4", "-o", "-name", "\"filetokeep2.mp4\"", ...], [/* 17 vars */]) = 0
With Bash
When you pass arguments to bash using -c, then the argument just after -c must contain the whole thing you want bash to run. To illustrate, assuming NONEXISTENT does not exist:
$ bash -c ls NONEXISTENT
Will just ls all the files in your directory, no error: NONEXISTENT becomes $0 of the new shell, not an argument to ls.
$ bash -c 'ls NONEXISTENT'
Will launch ls NONEXISTENT and will give an error.
So your arg list must be built something like this:
['-c', 'find /home/user/directory \( -name "*.mp4" -o -name "*.zip" \) ! \( -name "tokeep.mp4" -o -name "tokeep2.mp4" \) -ls -delete']
The argument that comes after -c is the whole command you want bash to run.
Without Bash
But as I've said in the comment, I don't see anything in your use of find that requires passing it through bash. You could reduce your arg list to just what find itself should receive and spawn find directly. If you decide to do this, you must not quote the arguments you pass to find. So "*.mp4" must become *.mp4 (remove the quotes), and \( must become ( (remove the backslash). The quotes and backslashes are there only for bash's benefit; if you no longer use bash, you must remove them. For instance, this:
'\\(', '-name', '"*.mp4"', '-o', '-name', '"*.zip"', '\\)', '!', '\\('
must become:
'(', '-name', '*.mp4', '-o', '-name', '*.zip', ')', '!', '('
and the same transformation must be applied to the rest of your arguments.
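Applied to the whole command from the question (reusing its example names), the argv for spawning find directly would be:

['/home/user/directory', '(', '-name', '*.mp4', '-o', '-name', '*.zip', ')', '!',
 '(', '-name', 'tokeep.mp4', '-o', '-name', 'tokeep2.mp4', ')', '-ls', '-delete']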