find and copy so files while removing all but major version number - linux

I'm trying to use "find" to copy a bunch of shared objects. I'm almost there, but I would like to remove all version numbers except the major version.
An example would be somesharedobject.so.30.0.4 copied to somesharedobject.so.30.
find . -maxdepth 1 -type f -name '*.so.*' -exec cp '{}' test/'{}' \;
I'm guessing I'm going to have to pipe to xargs and sed, but I'm just hitting a mental block.
find . -maxdepth 1 -type f -name '*.so.*'|xargs -I '{}' cp '{}' test/'{}'

I think I'm just going to go with something like this:
find . -maxdepth 1 -type f -name '*.so.*' -exec cp '{}' test/'{}' \;
# strip the last two dot-separated components, e.g. foo.so.30.0.4 -> foo.so.30
for f in test/*.so.* ; do mv "$f" "${f%.*.*}" ; done
It seems to work OK in my tests.
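If you prefer a single pass, the copy and the rename can be combined (a sketch using the same parameter expansion; it assumes three-part versions like 30.0.4, as in the example):
find . -maxdepth 1 -type f -name '*.so.*' -exec sh -c 'f=${1##*/}; cp "$1" "test/${f%.*.*}"' _ {} \;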

I would write a function plus a script to make the job easy:
#!/bin/bash
specialised_copy(){
    # extract the version part alone, e.g. 30.0.4
    version="${1##*so.}"
    # keep only the major version number and use it for the copy
    cp "$1" "test/${1%%.so*}.so.${version%%.*}"
    # note: the folder test must exist, relative to where the script is run
}
export -f specialised_copy
find . -maxdepth 1 -type f -name '*.so.*' -exec bash -c 'specialised_copy "$1"' _ {} \;

Related

I want to get an output of the find command in shell script

I am trying to write a script that finds files older than 10 hours in the sub-directories named in "HS_client_list", and sends the output to a file "find.log".
#!/bin/bash
while IFS= read -r line; do
    echo Executing cd /moveit/$line
    cd /moveit/$line
    # Find files more than 600 minutes (10 hours) old.
    find $PWD -type f -iname "*.enc" -mmin +600 -execdir basename '{}' ';' | xargs ls > /home/infa91punv/find.log
done < HS_client_list
The script is able to cd to the folders from HS_client_list (this file contains the names of the sub-directories), but the find command is not working: the output file is empty. When I run the same find command directly in the terminal it works; from the script it doesn't.
You are overwriting the file in each iteration.
You can use xargs to run find on multiple directories; but you have to use an alternate replacement string so that xargs does not substitute into the {} of the -execdir command.
sed 's%^%/moveit/%' HS_client_list |
xargs -I '<>' find '<>' -type f -iname "*.enc" -mmin +600 -execdir basename {} \; > /home/infa91punv/find.log
The xargs ls did not seem to perform any useful functionality, so I took it out. Generally, don't use ls in scripts.
With GNU find, you could avoid the call to an external utility, and use the -printf predicate to print just the part of the path name that you care about.
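For example (a sketch using GNU find's %f format specifier, which prints the file name with the leading directories removed):
sed 's%^%/moveit/%' HS_client_list |
xargs -I '<>' find '<>' -type f -iname "*.enc" -mmin +600 -printf '%f\n' > /home/infa91punv/find.log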
For added efficiency, you could invoke a shell to collect the arguments:
sed 's%^%/moveit/%' HS_client_list |
xargs sh -c 'find "$@" -type f -iname "*.enc" -mmin +600 -execdir basename {} \;' _ >/home/infa91punv/find.log
This will run as many directories as possible in a single find invocation.
If you want to keep your loop, the solution is to put the redirection after done. I would still factor out the cd, and take care to quote the variable interpolation.
while IFS= read -r line; do
    find /moveit/"$line" -type f -iname "*.enc" -mmin +600 -execdir basename '{}' ';'
done < HS_client_list >/home/infa91punv/find.log

find command to find files and concatenate them

I am trying to find all the files of type *.gz and cat them into total.gz, and I think I am quite close on this.
This is the command I am using to list all *.gz files:
find /home/downloaded/. -maxdepth 3 -type d \( ! -name . \) \
-exec bash -c "ls -ltr '{}'" \;
How do I modify it so that it concatenates all of them and writes the result to ~/total.gz?
The directory structure under downloaded is as follows:
/downloaded/wllogs/303/07252014/SysteOut.gz
/downloaded/wllogs/301/07252014/SystemOut_13.gz
/downloaded/wllogs/302/07252014/SystemOut_14.gz
Use cat in -exec and redirect output of find:
find /home/downloaded/ -type f -name '*.gz' -exec cat {} \; > output
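To get the result in ~/total.gz as asked, redirect there; with + instead of \;, find passes many files to each cat invocation (concatenated gzip streams are themselves a valid gzip file):
find /home/downloaded/ -type f -name '*.gz' -exec cat {} + > ~/total.gz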
To just list the matched file names instead, use echo in -exec and redirect the output:
find /home/downloaded/ -name "*.gz" -exec echo {} \; > output

Linux find and delete files but redirect file names to be deleted

Is there a way to write the file names to a file before they are deleted, for reference later to check what has been deleted?
find <PATH> -type f -name "<filePattern>" -mtime +1 -delete
Just add a -print expression to the invocation of find:
find <PATH> -type f -name "<filePattern>" -mtime +1 -delete -print > log
I'm not sure if this prints the name before or after the file is unlinked, but it should not matter. I suspect -delete -print unlinks before it prints, while -print -delete will print before it unlinks.
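For example, to print each name before the unlink, put -print first (same placeholders as above):
find <PATH> -type f -name "<filePattern>" -mtime +1 -print -delete > log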
Like William said, you can use -print. However, instead of -print > log, you can also use the -fprint flag.
You'd want something like:
find <PATH> -type f -name "<filePattern>" -mtime +1 -fprint "<pathToLog>" -delete
For instance, I use this in a script:
find . -type d -name .~tmp~ -fprint /var/log/rsync-index-removal.log -delete
You can use -exec and rm -v:
find <PATH> -type f -name "<filePattern>" -mtime +1 -exec rm -v {} \;
rm -v will report what it is deleting.
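Since rm -v reports to stdout, redirect it to capture the log (a sketch with the same placeholders):
find <PATH> -type f -name "<filePattern>" -mtime +1 -exec rm -v {} \; > log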
With something like this you can execute multiple commands in the -exec action: log to a file, remove the file, and whatever else you need. Passing the name as a positional argument, rather than embedding {} in the shell string, keeps file names with special characters safe:
find <PATH> -type f -name "<filePattern>" -mtime +1 -exec sh -c 'echo "$1" >> mylog; rm -f -- "$1"' _ {} \;
From a shell script named removelogs.sh, run with sh removelogs.sh in a terminal. This is the text of the removelogs.sh file:
cd /var/log;
date >> /var/log/removedlogs.txt;
find . -maxdepth 4 -type f -name \*log.old -delete -print >> /var/log/removedlogs.txt
. - run at this location, so make sure you do not run this in the root folder!
-maxdepth - limit how deep find descends, to keep it under control
-type f - match only files
-name - match only your filtered names
-delete - delete said files
-print - send the result to stdout
>> - append to the file; a single > would overwrite it each time
Works for me on CentOS 7.

Using the find command in a Bash script to find integers

I need to find and archive files with a certain file name, e.g. ABC_000.jpg:
find ~/my-documents/ -iname "ABC_***.JPG" -type f -exec cp {} ~/my-documents/archive/ \;
However, I cannot seem to find a way to limit find to exactly three digits, as there are files named ABC_CBA.jpg that I do not want included.
Try this find:
find ~/my-documents/ -iname "ABC_[0-9][0-9][0-9].JPG" -type f -exec cp '{}' ~/my-documents/archive/ \;
EDIT: Or using regex:
find -E ~/my-documents/ -iregex ".*ABC_[0-9]{3}\.JPG" -type f -exec cp '{}' ~/my-documents/archive/ \;
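Note that -E is BSD/macOS find syntax; with GNU find the equivalent would be -regextype (a sketch):
find ~/my-documents/ -regextype posix-extended -iregex '.*ABC_[0-9]{3}\.JPG' -type f -exec cp '{}' ~/my-documents/archive/ \;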

Loop Over Directories, Process files & Rename New Files

I'm writing a script that loops over the sub-directories of a given directory, finds ".js" files, and compiles them with the Closure Compiler. I'm doing this with this command:
find ./js/ -type f -name "*.js" -exec java -jar compiler.jar --compilation_level SIMPLE_OPTIMIZATIONS --js '{}' --js_output_file '{}'.compiled \;
And then I remove the old ".js" files with:
find ./js/ -type f -name "*.js" | xargs rm -f
But I can't rename the files from "foo.js.compiled" back to "foo.js".
Please help. Thanks in advance.
Try:
for i in $(find . -type f -name "*.js.compiled"); do mv "$i" "${i%.*}"; done
(Note this assumes file names without whitespace; see the -print0 loop further down for a safer variant.)
You can do something like:
find . -name "*.js.compiled" -exec rename -v 's/\.compiled$//' {} +
Test:
$ find . -name "foo*"
./fil/foo.js.compiled
$ find . -name "*.js.compiled" -exec rename -v 's/\.compiled$//' {} +
'./fil/foo.js.compiled' renamed to './fil/foo.js'
$ find . -name "foo*"
./fil/foo.js
Use the following code:
find ./js/ -name "*.js.compiled" -print0 | while IFS= read -r -d '' filename; do
    mv "$filename" "${filename%.compiled}"   # strip the .compiled suffix
done
