I need to find all .bashrc files and append "export MYSQL_HISTFILE=/dev/null" to each of them, to remediate an issue. There are a lot of .bashrc files, so can I do something like:
find / -type f -name ".bashrc" -exec echo "export MYSQL_HISTFILE=/dev/null" >> {} \;
The >> redirection is performed by your original shell process before find ever runs, so it can't use find's {} substitution. And find doesn't run its command through a shell, so it can't do output redirection itself.
You need to execute a shell explicitly so the redirection happens inside the command:
find / -type f -name '.bashrc' -exec bash -c 'echo "export MYSQL_HISTFILE=/dev/null" >> "$1"' _ {} \;
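If you'd rather not start one bash per .bashrc, you can batch the filenames with +. A sketch; the grep -qxF guard, which skips files that already contain the line, is my addition rather than something the question asked for:
find / -type f -name '.bashrc' -exec bash -c '
  for f; do    # iterate over the filenames find batched onto the command line
    grep -qxF "export MYSQL_HISTFILE=/dev/null" "$f" ||
      echo "export MYSQL_HISTFILE=/dev/null" >> "$f"
  done
' _ {} +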
My goal is to empty every file in a directory. I DON'T want to actually delete the files, I just want to delete their contents.
If you want to do this with a single file you can do > file.txt
If I want to run this operation on every file in a directory I can do this:
find . -exec /bin/bash -c '> {}' \;
Notice how the above command has to call /bin/bash. This is because simply running it as find . -exec > {} \; fails with find: invalid argument ';' to '-exec'. I suspect this is because the shell handles the redirection itself, stripping > {} from the arguments before find ever runs.
I would like to run this command without needing to run /bin/bash within -exec
How can this be done?
One easy way to do this is by using truncate:
find -type f -exec truncate -s0 {} +
If you want to only use bash, you could use a while loop:
find -type f -print0 |
while IFS= read -r -d '' file; do
    > "$file"
done
Finally, if you didn't mind using bash -c, it would be better to do it as follows to avoid calling bash so many times:
find -type f -exec bash -c 'for file; do > "$file"; done' - {} +
although I don't like that solution.
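And since the question was really about avoiding the shell inside -exec altogether, a pure-bash sketch using globstar (bash 4+ assumed) would be:
shopt -s globstar nullglob    # ** recurses; nullglob drops patterns that match nothing
for f in ./**/*; do
    [ -f "$f" ] && > "$f"     # truncate regular files only
done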
I want to gzip all .ppm files under a directory into another directory. For this I have written the following bash script:
cd /export/students/sait/12-may
for next_file in $(find . -type f ! -name *.ppm )
do
/bin/gzip -f -c $next_file > /export/students/sait/12-may-yedek/$next_file.gz
done
When I execute this script, I get such error:
/usr/bin/find: Argument list too long
How can I fix this problem?
Quote the *.ppm part to prevent filename globbing (the unquoted pattern is expanded by the shell before find runs, which is what overflows the argument list), and also remove the !, since you want to find files with the .ppm extension, not the other way around.
find . -type f -name '*.ppm'
Instead of running a loop, you could do it with a single find command, which also provides whitespace safety:
find /export/students/sait/12-may -type f -name '*.ppm' -exec sh -c '/bin/gzip -f -c "$1" > "$1".gz' _ {} \;
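If the .gz files really do need to land in the 12-may-yedek directory from the original script, something along these lines might work; it batches files with + and flattens each path to its basename, which is an assumption on my part:
src=/export/students/sait/12-may
dst=/export/students/sait/12-may-yedek
find "$src" -type f -name '*.ppm' -exec sh -c '
  dst=$1; shift
  for f; do
    gzip -f -c "$f" > "$dst/${f##*/}.gz"    # ${f##*/} keeps just the filename
  done
' _ "$dst" {} +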
I can't seem to find a direct answer to this question, and I'd appreciate your help.
I'm trying to find all files with a specific name in a directory, read the last 1000 lines of the file and copy it in to a new file in the same directory. As an example:
Find all files named xyz.log in the current directory, and copy the last 1000 lines of each to a file abc.log (which doesn't exist).
I tried to use the following command with no luck:
find . -name "xyz.log" -execdir tail -1000 {} > abc.log \;
The problem I'm having is that every match writes to abc.log in the CURRENT directory, not in the directory where each xyz.log resides. Clearly find with -execdir is executed first, and then my shell redirects all the output to abc.log.
Can you guys suggest a way to fix this? I appreciate any information/help.
EDIT: I tried find . -name "xyz.log" -execdir sh -c "tail -1000 {} > abc.log" \; as some of you suggested, but it gives me the error sh: ./tail: No such file or directory. Do you have any idea what the problem is?
Luckily, the -printf solution is working fine.
The simplest way is this:
find . -name "xyz.log" -execdir sh -c 'tail -1000 "{}" >abc.log' \;
A more flexible alternative is to first print out the commands and then execute them all with sh:
find . -name "xyz.log" -printf 'tail -1000 "%p" >"%h/abc.log"\n' | sh
You can remove the | sh from the end when you're trying it out/debugging.
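For instance, with a hypothetical tree containing a/xyz.log and b/xyz.log, the printf stage by itself would print (%p is the full path, %h its directory part):
tail -1000 "./a/xyz.log" >"./a/abc.log"
tail -1000 "./b/xyz.log" >"./b/abc.log"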
There is a bug in some versions of findutils (4.2 and 4.3, though it was fixed in some 4.2.x and 4.3.x releases) that causes -execdir arguments containing {} to be prefixed with ./ (instead of the prefix being applied only to {}, it is applied to the whole quoted string). To work around this you can use:
find . -name "xyz.log" -execdir sh -c 'tail -1000 "$1" >abc.log' sh {} \;
sh -c 'script' arg0 arg1 runs the sh script with arg0, arg1, etc. passed to it. By convention, arg0 is the name of the executable (here, "sh"). From the script you can access the arguments using $0 (corresponding to "sh"), $1 (corresponding to find's expansion of {}), etc.
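A quick toy example of that argument passing:
sh -c 'echo "0=$0 1=$1 2=$2"' myname one two
# prints: 0=myname 1=one 2=two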
The redirection isn't passed into -execdir, so abc.log shows up in the directory you run the command in; -execdir also doesn't like embedded redirections. But you can work around the problem by passing -execdir a shell command with the redirection embedded, like this:
find . -name "xyz.log" -execdir sh -c '/usr/bin/tail -1000 {} > abc.log' \;
Much credit to this blog post (not mine):
http://www.microhowto.info/howto/act_on_all_files_in_a_directory_tree_using_find.html
Edit
I put the full path to tail in the command (assuming it's in /usr/bin on your system), since sh may load a .profile with a PATH that differs from your current shell.
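A quick way to check whether the two PATHs actually differ (plain diagnostic, nothing else assumed):
echo "$PATH"          # PATH in your current shell
sh -c 'echo "$PATH"'  # PATH as sh sees it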
Here's another, sorta-non-find approach (it still uses find, but doesn't try to shoehorn find into doing the whole thing):
while IFS= read -r f
do
    d=$(dirname "${f}")
    tail -n 1000 "${f}" > "${d}/abc.log"
done < <(find . -type f -name xyz.log -print)
I have one script that only writes data to stdout. I need to run it for multiple files and generate a different output file for each input file, and I was wondering how to use find -exec for that. So I basically tried several variants of this (I replaced the script with cat just for testability purposes):
find * -type f -exec cat "{}" > "{}.stdout" \;
but could not make it work, since all the data was being written to a file literally named {}.stdout.
Eventually, I could make it work with :
find * -type f -exec sh -c "cat {} > {}.stdout" \;
But while this latest form works well with cat, my script requires environment variables loaded through several initialization scripts, thus I end up with:
find * -type f -exec sh -c "initscript1; initscript2; ...; myscript {} > {}.stdout" \;
This seems like a waste, because everything is already initialized in my current shell.
Is there a better way of doing this with find? Other one-liners are welcome.
You can do it with eval. It may be ugly, but so is having to make a shell script for this. Plus, it's all on one line.
For example
find -type f -exec bash -c "eval md5sum {} > {}.sum " \;
A simple solution would be to put a wrapper around your script:
#!/bin/sh
myscript "$1" > "$1.stdout"
Call it myscript2 and invoke it with find:
find . -type f -exec myscript2 {} \;
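Note that find has to be able to execute the wrapper, so make it executable and, unless it's somewhere on your PATH, call it by path:
chmod +x myscript2
find . -type f -exec ./myscript2 {} \;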
Note that although most implementations of find allow you to do what you have done, technically the behavior of find is unspecified if you use {} more than once in the argument list of -exec.
If you export your environment variables, they'll already be present in the child shell (If you use bash -c instead of sh -c, and your parent shell is itself bash, then you can also export functions in the parent shell and have them usable in the child; see export -f).
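For example, a minimal sketch of the export -f route (myfunc is a hypothetical stand-in for your script's logic):
myfunc() { printf 'processing %s\n' "$1"; }   # hypothetical function
export -f myfunc                              # now visible to child bash processes
find . -type f -exec bash -c 'for f; do myfunc "$f" > "${f}.stdout"; done' _ {} +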
Moreover, by using -exec ... {} +, you can limit the number of shells to the smallest possible number needed to pass all arguments on the command line:
set -a # turn on automatic export of all variables
source initscript1
source initscript2
# pass as many filenames as possible to each sh -c, iterating over them directly
find * -name '*.stdout' -prune -o -type f \
  -exec sh -c 'for arg; do myscript "$arg" > "${arg}.stdout"; done' _ {} +
Alternately, you can just perform the execution in your current shell directly:
while IFS= read -r -d '' filename; do
    myscript "$filename" > "${filename}.stdout"
done < <(find * -name '*.stdout' -prune -o -type f -print0)
See UsingFind, which discusses safely and correctly performing bulk actions through find, and BashFAQ #24, which discusses using process substitution (the <(...) syntax) to ensure that operations are performed in the parent shell.
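A small illustration of why that matters, assuming bash (with a plain pipe the loop runs in a subshell, so changes made inside it are lost):
count=0
find . -type f -print0 | while IFS= read -r -d '' f; do count=$((count+1)); done
echo "$count"    # prints 0: the loop ran in a subshell
count=0
while IFS= read -r -d '' f; do count=$((count+1)); done < <(find . -type f -print0)
echo "$count"    # prints the real count: the loop ran in the current shell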
I'm writing a script in bash.
I invoke it with
find *.zip -type f -exec ./myscript.sh {} \;
At the top of my script I invoke another script like this:
#!/bin/bash
. ticktick.sh
I get the following error
.: ticktick.sh: file not found
If I invoke the script like this
./myscript.sh somefile.zip
it works
If I put the ticktick.sh script in another directory that's in my PATH, it breaks, so that isn't an option. Is there some special kind of context that scripts called from find run in? I'm obviously new to Bash scripting. Any help would be appreciated.
I think there are 2 problems.
1. If you want to search for all zip files in the current directory and below, you have to write the find command like this (note the quoted pattern):
find . -type f -name '*.zip' -exec ...
2. You execute myscript.sh with ./ in front of it, so myscript.sh has to be in the current working directory. If your script is in /home/jd/ and you execute it from /home/, your myscript.sh will not be found.
First you have to determine the directory your script lives in:
install_path=$(dirname "$(readlink -f "$0")")
So your complete find command is:
find . -type f -name '*.zip' -exec "$install_path"/myscript.sh {} \;
The myscript.sh file has to be in the same directory as ticktick.sh.
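Alternatively, myscript.sh can locate ticktick.sh relative to itself with the same readlink trick, so it works no matter which directory find runs it from (a sketch, assuming the two scripts sit side by side):
#!/bin/bash
# Resolve this script's own directory, then source ticktick.sh from there.
script_dir=$(dirname "$(readlink -f "$0")")
. "$script_dir/ticktick.sh"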