Run expand on find results - linux

I'm trying to run the expand shell command on all files found by a find command. I've tried -exec and xargs but both failed. Can anyone explain why? I'm on a Mac, for the record.
find . -name "*.php" -exec expand -t 4 {} > {} \;
This just creates a file literally named {} containing all the output, instead of overwriting each found file in place.
find . -name "*.php" -print0 | xargs -0 -I expand -t 4 {} > {}
And this just outputs
4 {}
xargs: 4: No such file or directory

Your command does not work for two reasons.
The output redirection is done by the shell, not by find. That means the shell redirects find's output into a file literally named {}.
The redirection also happens immediately: the shell truncates the target file even before it is read by the expand command. So it is not possible to redirect a command's output into its own input file.
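You can see the truncation for yourself with a throwaway demo (the scratch file name is my own):
printf 'hello\n' > demo.txt
cat demo.txt > demo.txt   # the shell truncates demo.txt before cat ever reads it
wc -c demo.txt            # prints: 0 demo.txt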
Unfortunately expand does not offer an option to write its output to a file, so you have to use output redirection. If you use bash you can define a function that runs expand, redirects the output into a temporary file, and moves the temporary file back over the original. The catch is that find -exec does not run a shell, so it cannot call a shell function directly.
But there is a solution:
expand_func () {
    expand -t 4 "$1" > "$1.tmp"
    mv "$1.tmp" "$1"
}
export -f expand_func
find . -name \*.php -exec bash -c 'expand_func "$1"' bash {} \;
You are exporting the function expand_func to subshells using export -f. And you don't execute expand itself using find -exec; instead you execute a new bash that runs the exported expand_func. Passing the file name as $1 (rather than embedding {} inside the bash -c string) keeps file names containing spaces or quotes safe.

'expand' isn't really worth the trouble here.
You can just use sed instead:
find . -name "*.php" | xargs sed -i -e 's/\t/ /g'
Note that this replaces each tab with a single space rather than expanding to 4-column tab stops, and the unquoted pipe into xargs breaks on file names containing spaces.
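On macOS (which the asker mentions), BSD sed requires an explicit backup-suffix argument after -i and does not understand \t in the pattern. A sketch of a variant that should work there, using bash's $'...' quoting to produce a literal tab and four spaces to mimic -t 4:
find . -name "*.php" -print0 | xargs -0 sed -i '' $'s/\t/    /g'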

Related

Find and show information from logs inside a folder in linux

I'm trying to create a little script using bash in Linux that allows me to find out whether the tag 103=16 appears anywhere inside a log.
I have multiple folders named for example l51prdsrv-api1.nebex.local, l51prdsrv-oe1.nebex.local, etc. Inside those folders are .log files like TRADX_gsoe3.log, TRADX_gseuoe2.log, and so on.
I need to find out whether those logs contain the tag 103=16.
I'm trying this command
find . /opt/FIXLOGS/l51prdsrv* -iname "TRADX_" -type f | grep -e 103=16
But all it shows is the log file names, not their content, so I can't see whether the tag 103=16 is there.
First of all, you are not searching for files of the form TRADX_something.log, but only for files whose name is exactly TRADX_ (case-insensitively, so TradX_ would also be found).
Then you are feeding the names of the files to grep, but grep never looks into the content of those files. From the grep man page you can see that the file content can be supplied either via stdin or by specifying file names on the command line. In your case, the latter is the way to go. Therefore you can either do a
find . /opt/FIXLOGS/l51prdsrv* -iname "TRADX_*.log" -type f -exec grep -F 103=16 {} \;
if you are only interested in the matching lines, or a
find . /opt/FIXLOGS/l51prdsrv* -iname "TRADX_*.log" -type f -exec grep -F 103=16 {} /dev/null \;
if you also want to see the file names where the pattern matches. The reason is that grep prints the file name only if it sees more than one file name on its command line, and /dev/null provides a second, dummy file. find replaces the {} with the file name.
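To see that behaviour in isolation (the file name here is illustrative):
grep -F 103=16 single.log            # one file: prints only the matching lines
grep -F 103=16 single.log /dev/null  # two files: prefixes each match with "single.log:"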
BTW, I used -F for grep instead of your -e, because you don't seem to use any specific regular expression pattern anyway.
But you don't need find for this task. An alternative would be an explicit loop:
shopt -s nocaseglob # make globbing case-insensitive (nocasematch only affects [[ ]] matching, not globs)
shopt -s globstar   # turn on ** globbing
for f in {.,/opt/FIXLOGS/l51prdsrv*}/**/tradx_*.log
do
    [[ -f $f ]] && grep -F 103=16 "$f" /dev/null
done
While the loop looks more complicated at first glance, its logic is easier to extend in case you want to do more with the files than just grepping the lines, for instance taking a specific action on those files which contain the pattern, as sketched below.
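A minimal sketch of such an extension (the echo message and the commented-out mv target are my own inventions):
shopt -s nocaseglob globstar
for f in {.,/opt/FIXLOGS/l51prdsrv*}/**/tradx_*.log
do
    if [[ -f $f ]] && grep -qF 103=16 "$f"; then
        echo "tag found in $f"
        # mv "$f" matched/   # hypothetical follow-up action
    fi
done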
You are doing:
find . /opt/FIXLOGS/l51prdsrv* -iname "TRADX_" -type f | grep -e 103=16
I propose you do:
find . /opt/FIXLOGS/l51prdsrv* -iname "TRADX_" -type f -exec grep -e "103=16" {} /dev/null \;
What's the difference?
find ... -type f
=> gives you a list of files.
When you add | grep -e 103=16, you perform that search on the file names.
When you add -exec grep ..., you perform it on the files themselves.
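A side-by-side illustration of the difference (pattern and names taken from the question):
find . -name '*.log' | grep 103=16          # searches the file NAMES printed by find
find . -name '*.log' -exec grep 103=16 {} + # searches the lines INSIDE each file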

How to use grep to reverse search files in a folder

I'm trying to create a script which will find missing topics in multiple log files. These log files are filled top down, so the newest entries are at the bottom of each file. I would like to grep only the last line that includes UNKNOWN_TOPIC_OR_PARTITION from each file. This should be done across multiple files with completely different names. Is grep the best solution, or is there another solution that suits my needs? I already tried adding tail, but that doesn't seem to work.
missingTopics=$(grep -Ri -m1 --exclude=*.{1,2,3,4,5} UNKNOWN_TOPIC_OR_PARTITION /app/tibco/log/tra/domain/)
You could try a combination of find, tac and grep:
find /app/tibco/log/tra/domain -type f ! -name '*.[1-5]' -exec sh -c \
'tac "$1" | grep -im1 UNKNOWN_TOPIC_OR_PARTITION' "sh" '{}' \;
tac prints a file's lines in reverse order; the -exec sh -c SCRIPT "sh" '{}' \; action of find executes the shell SCRIPT each time a file matching the previous tests is found. The SCRIPT is executed with "sh" as parameter $0 and the path of the found file as parameter $1.
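A quick demonstration of that $0/$1 convention (the path is illustrative):
sh -c 'echo "running as $0 on $1"' "sh" /tmp/example.log
# prints: running as sh on /tmp/example.log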
If performance is an issue you can probably improve it with:
find . -type f ! -name '*.[1-5]' -exec sh -c \
'for f in "$@"; do tac "$f" | grep -im1 UNKNOWN_TOPIC_OR_PARTITION; done' "sh" '{}' +
which will spawn fewer shells. If security is also a concern you can replace -exec with -execdir (even though with this SCRIPT I do not immediately see any exploit).

The mistake in find and sed command in linux?

I want to add a script to my site.
The problem is that the site includes hundreds of HTML files.
So I need a command that inserts the code just before the closing </body> tag of every file. How can I do this?
find . -name '*.html' exec sed -i 's/<\/body>/<script src="1.js"><\/script><\/body>/g' {} \;
But it doesn't work.
Please fix this command.
There is an error in the command: replace exec with -exec and it should be fine.
find . -name '*.html' -exec sed -i 's/<\/body>/<script src="1.js"><\/script><\/body>/g' {} \;
That also works for me:
find * -name "*.html" | xargs -L1 -I{} sed -i 's/<\/body>/<script src="1.js"><\/script><\/body>/g' {}
Changes:
replaced path . with '*'
the xargs tool reads lines from stdin and executes the command separately for each line, with the option of passing that line as an argument into the command (here via -I{}), so
in this case it is the same approach as find -exec, but in general it opens up other possibilities (check out the xargs manual).
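A tiny illustration of that per-line behaviour (the file names are invented):
printf 'a.html\nb.html\n' | xargs -L1 -I{} echo "processing {}"
# processing a.html
# processing b.html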

Better way of using the redirection symbol in conjunction with find -exec

My goal is to empty every file in a directory. I DON'T want to actually delete the files, I just want to delete their contents.
If you want to do this with a single file you can do > file.txt
If I want to run this operation on every file in a directory I can do this:
find . -exec /bin/bash -c '> {}' \;
Notice how the above command has to call /bin/bash. That is because running it directly, as find . -exec > {} \;, fails with find: invalid argument ';' to '-exec'. I suspect this is because the redirection symbol confuses the command.
I would like to run this command without needing to run /bin/bash within -exec
How can this be done?
One easy way to do this is by using truncate:
find -type f -exec truncate -s0 {} +
If you want to only use bash, you could use a while loop:
find -type f -print0 |
    while IFS= read -r -d '' file; do
        > "$file"
    done
Finally, if you didn't mind using bash -c, it would be better to do it as follows to avoid calling bash so many times:
find -type f -exec bash -c 'for file; do > "$file"; done' - {} +
although I don't like that solution.

How to chop off part of a file extension using find -exec

I want to use "find" to rename a bunch of files, with the rename simply being the removal of part of the extension.
EXAMPLE:
abc.ext.DELAYED --> abc.ext
I've tried the following, but they simply aren't working:
find . -name *.DELAYED -execdir mv {} $(echo {} | sed 's:\.DELAYED::') \;
find . -name *.DELAYED -execdir mv {} $(echo {} | cut -f 1 -d".") \;
There are two problems with your commands.
The first problem is the * in the command. You need to enclose the pattern in quotes since otherwise bash would expand it as a glob expression before find runs: *.DELAYED would be replaced by all matching file names in the current folder.
The command should look like this:
find . -name '*.DELAYED' ...
The second problem is that command substitutions happen before the command gets executed, meaning
$(echo {})
is expanded to the literal {} up front; find then substitutes the file name into both occurrences, which leads to a command like
mv file1 file1
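You can watch this happen by prefixing the mv with echo (illustration only; the substitution runs once, in the current shell, before find starts):
find . -name '*.DELAYED' -exec echo mv {} $(echo {}) \;
# echoes, for each file: mv ./abc.ext.DELAYED ./abc.ext.DELAYED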
You can execute the command in a shell instead, and strip the suffix there. Note that a cut on "." would also mangle names like abc.ext.DELAYED, so bash's ${var%suffix} parameter expansion is used here to remove exactly the trailing .DELAYED:
... -execdir bash -c 'mv "$1" "${1%.DELAYED}"' bash {} \;
You have specifically tagged this question with "linux", so I assume that your distribution has the rename tool installed, which is bundled in the util-linux package.
This avoids command substitution issues and chaining multiple programs:
find . -name '*.DELAYED' -execdir rename .DELAYED '' {} \;
