How can I list all executables on my Red Hat Linux system which link to libssl? I can get close with:
find / -type f -perm /a+x -exec ldd {} \; | grep libssl
ldd shows me which libraries the executable links against, but the line that contains the library name does not also show the filename, so although I get a lot of matches with grep, I can't figure out how to back out the name of the executable from which the match occurred. Any help would be greatly appreciated.
find / -type f -perm /a+x -print0 |
while read -d $'\0' FILE; do
ldd "$FILE" | grep -q libssl && echo "$FILE"
done
I'm not sure, but maybe sudo lsof |grep libssl.so
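If you only care about programs that are currently running, that can be tightened up a little; for example (a rough sketch, since lsof only sees open files of running processes and prints the command name in its first column):
sudo lsof 2>/dev/null | grep libssl | awk '{print $1}' | sort -u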
find /usr/bin/ -type f -perm /a+x | while read i; do match=`ldd $i | grep libssl`; [[ $match ]] && echo $i; done
Instead of using -exec, pipe to a while loop and check for a match before you echo the file name. Optionally, you could fold the "ldd $i" call directly into the match check, using either ( ) grouping or a real if/then/fi block.
find / -type f -perm /a+x -xdev | while read filename ; do
if ldd "$filename" | grep -q "libssl" ; then
echo "$filename"
fi
done
The -xdev makes find stay on the same filesystem (i.e. it won't dive into /proc or /sys). Note: I constructed this on Mac OS X, where your -perm expression doesn't work, so I don't know whether it's correct. And instead of ldd I've used otool -L, but the result should be the same.
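For reference, a rough Mac OS X version of that loop might look like this (untested; it assumes BSD find's -perm +mode syntax and that otool is available):
find / -type f -perm +111 -xdev | while read filename ; do
if otool -L "$filename" | grep -q "libssl" ; then
echo "$filename"
fi
done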
I would like to find the newest sub directory in a directory and save the result to variable in bash.
Something like this:
ls -t /backups | head -1 > $BACKUPDIR
Can anyone help?
BACKUPDIR=$(ls -td /backups/*/ | head -1)
$(...) evaluates the statement in a subshell and returns the output.
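For instance, the same mechanism with a simpler command (purely illustrative):
today=$(date +%Y-%m-%d)
echo "$today"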
There is a simple solution to this using only ls:
BACKUPDIR=$(ls -td /backups/*/ | head -1)
-t orders by time (latest first)
-d only lists items from this folder
*/ only lists directories
head -1 returns the first item
I didn't know about */ until I found Listing only directories using ls in bash: An examination.
This is a pure Bash solution:
topdir=/backups
BACKUPDIR=
# Handle subdirectories beginning with '.', and empty $topdir
shopt -s dotglob nullglob
for file in "$topdir"/* ; do
[[ -L $file || ! -d $file ]] && continue
[[ -z $BACKUPDIR || $file -nt $BACKUPDIR ]] && BACKUPDIR=$file
done
printf 'BACKUPDIR=%q\n' "$BACKUPDIR"
It skips symlinks, including symlinks to directories, which may or may not be the right thing to do. It skips other non-directories. It handles directories whose names contain any characters, including newlines and leading dots.
Well, I think this solution is the most efficient:
path="/my/dir/structure/*"
backupdir=$(find $path -type d -prune | tail -n 1)
Explanation of why this is a little better:
We do not need sub-shells (aside from the one for getting the result into the bash variable).
We do not need a useless -exec ls -d at the end of the find command, it already prints the directory listing.
We can easily alter this, e.g. to exclude certain patterns. For example, if you want the second newest directory, because backup files are first written to a tmp dir in the same path:
backupdir=$(find $path -type d -prune -not -name "*temp_dir" | tail -n 1)
The above solution doesn't take into account files being written to and removed from the directory, which can result in the upper directory being returned instead of the newest subdirectory.
The other issue is that this solution assumes the directory contains only other directories and no files being written.
Let's say I create a file called "test.txt" and then run this command again:
echo "test" > test.txt
ls -t /backups | head -1
test.txt
The result is test.txt showing up instead of the last modified directory.
The proposed solution "works" but only in the best case scenario.
Assuming you have a maximum of 1 directory depth, a better solution is to use:
find /backups/* -type d -prune -exec ls -d {} \; |tail -1
Just swap the "/backups/" portion for your actual path.
If you want to avoid showing an absolute path in a bash script, you could always use something like this:
LOCALPATH=/backups
DIRECTORY=$(cd $LOCALPATH; find * -type d -prune -exec ls -d {} \; |tail -1)
With GNU find you can get list of directories with modification timestamps, sort that list and output the newest:
find . -mindepth 1 -maxdepth 1 -type d -printf "%T@\t%p\0" | sort -z -n | cut -z -f2- | tail -z -n1
or newline separated
find . -mindepth 1 -maxdepth 1 -type d -printf "%T@\t%p\n" | sort -n | cut -f2- | tail -n1
With POSIX find (that does not have -printf) you may, if you have it, run stat to get file modification timestamp:
find . -mindepth 1 -maxdepth 1 -type d -exec stat -c '%Y %n' {} \; | sort -n | cut -d' ' -f2- | tail -n1
Without stat, a pure-shell solution may be used by replacing the [[ bash extension with [, as in this answer.
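A rough sketch of that pure-shell fallback, adapted from the Bash loop earlier in this thread (note that -nt inside [ is an extension most shells provide, even though POSIX does not require it):
newest=
for d in /backups/*/; do
[ -d "$d" ] || continue
if [ -z "$newest" ] || [ "$d" -nt "$newest" ]; then
newest=$d
fi
done
printf '%s\n' "$newest"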
Your "something like this" was almost a hit:
BACKUPDIR=$(ls -t ./backups | head -1)
Combining what you wrote with what I have learned solved my problem too. Thank you for raising this question.
Note: I run the line above from Git Bash in a Windows environment, in a file called ./something.bash.
My goal is to write a shell script that uses "objdump -p" command to find all executable files that depend on the specified library in the specified directory. (OpenBSD).
I try something like this:
find $1 -perm -111 -print0 | xargs -r0 objdump -p | grep -l "NEEDED $2"
But this solution doesn't work because grep cannot figure out the filenames in which it found the given match. The difficulty is to determine the names of the executable files in which grep found the specified library.
Can anyone suggest a solution using the "objdump -p" command?
The trick is to execute a shell script rather than a single command, so that the file name can be re-used.
finddepend() {
# Arg 1: The directory where to find
# Arg 2: The library name
basedir=$1
libname=$2
find "$basedir" \
\( -perm -100 -o -perm -010 -o -perm -001 \) \
\( -type f -o -type l \) \
-exec sh -c '
# Arg 0: Is a dummy _ for this inline script
# Arg 1: The executable file path
# Arg 2: The library name
filepath=$1
libname=$2
objdump -p "$filepath" 2>/dev/null |
if grep -qF " NEEDED $libname"; then
printf %s\\n "${filepath##*/}"
fi
' _ {} "$libname" \;
}
Example usage:
finddepend /bin libselinux.so
mv
systemctl
tar
sed
udevadm
ls
mknod
systemd
mkdir
ss
dir
vdir
cp
systemd-hwdb
netstat
Why do you want to use objdump when you can use ldd (List Dynamic Dependencies)? objdump gives a complete summary, which you need to process just to get the information you're looking for, while ldd gives you only that information.
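For comparison, a rough ldd-based version of the function above might look like this (a sketch only; remember that ldd may run code from the inspected file, so use it only on trusted binaries):
finddepend_ldd() {
basedir=$1
libname=$2
find "$basedir" \( -perm -100 -o -perm -010 -o -perm -001 \) -type f -exec sh -c '
# Arg 1: the executable path, Arg 2: the library name
ldd "$1" 2>/dev/null | grep -q -- "$2" && printf "%s\n" "${1##*/}"
' _ {} "$libname" \;
}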
I would like to preface this by saying I am a complete noob with scripting. So I have a situation where I need to manually look for a phone number that could live in one of hundreds of files.
The logs live in the following directory.
/actlogs/sbclogger_archive
The log files are in directories numbered 01-31 inside of that directory, and all the files are gzipped.
Inside of those numbered directories are tons of files but the only ones I want to search are "sipd.logthenthedate.gz" and "sipmsg.logthenthedate.gz".
So I need to look in all the files in the following directory.
"/actlogs/sbclogger_archive"
Which has 31 directories labeled "01-31"
Then in each 01-31 there are hundreds of files; the only ones I want to look at are "sipd.logthenthedate.gz" and "sipmsg.logthenthedate.gz".
The script I am using is below, please let me know what I could do to make this work.
#!/bin/bash
read -p "Enter a phone number: " text
read -p "Enter directory of log file's, Hint it should be /actlogs/sbclogger_archive: " directory
#arr=( $(find $directory -type f -exec grep -l "$text" {} \; | sort -r) )
#find $directory -type f -exec grep -qe "$text" {} \; -exec bash -c '
file=$(find $directory -type f -name 'sipd.log*' -exec grep -qe "$text" {} \; -exec bash -c 'select f; do echo $f; break; done' find-sh {} +;)
if [ -z "$file" ]; then
echo "No matches found."
else
echo "select tool:"
tools=("nano" "less" "vim" "quit")
select tool in "${tools[@]}"
do
case $tool in
"quit")
break
;;
*)
$tool $file
break
;;
esac
done
fi
This would give you the list of files matching:
find \( -name 'sipd.log[0-9]*.gz' -o -name 'sipmsg.log[0-9]*.gz' \) \
-exec sh -c 'gunzip -c {}| grep -m1 -q 888333' \; -print
./18/sipd.log20200118.gz
./7/sipd.log20200107.gz
Note: -m1 tells grep to stop after the first match; since you need only the file name in this case, that's enough.
If you have zgrep, you can shorten it to:
find \( -name 'sipd.log[0-9]*.gz' -o -name 'sipmsg.log[0-9]*.gz' \) \
-exec zgrep -l '888333' {} \;
./18/sipd.log20200118.gz
./7/sipd.log20200107.gz
Also, some of the tools you are suggesting do not support gzip files (nano and some variants of less, for example), in which case you might need to decompress the file and compress it again when done.
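For example, a rough sketch of that decompress-then-recompress cycle for one of the matches above:
gunzip ./18/sipd.log20200118.gz    # leaves ./18/sipd.log20200118
nano ./18/sipd.log20200118
gzip ./18/sipd.log20200118         # compress it again when done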
And, you might want to consider a loop if you want to "quit". Feeding the file list to the tool doesn't make sense.
Note: AFAIK zgrep doesn't do recursive:
DESCRIPTION
Zgrep invokes grep on compressed or gzipped files. These grep options will cause zgrep to terminate with an error code: (-[drRzZ]|--di*|--exc*|--inc*|--rec*|--nu*). All other options specified are passed directly to grep. If no file is specified, then the standard input is decompressed if necessary and fed to grep. Otherwise the given files are uncompressed if necessary and fed to grep.
so zgrep -rl "$text" "$directory" or zgrep -rl --include 'sipd.log*.gz' "$text" {01..31} won't work unless you have a special zgrep
As you must unzip before using your tool, I would divide the problem into two parts.
First, I would collect the matching paths (looking under <directory> for the phone <text>), and then iterate over them to apply the tool (because some tools like vim or nano cannot be piped).
Try something like this:
#!/bin/bash
#...
# text/directory input stuff
#...
tmpdir=$(mktemp -d)
trap 'rm -rf ${tmpdir}' EXIT
while IFS= read -r file; do
unzipped=${tmpdir}/$(basename "${file}" .gz)
gunzip -c "${file}" > "${unzipped}"
${tool} "${unzipped}"
done < <(zgrep -lw "${text}" "${directory}"/{01..31}/{sipd.logthenthedate.gz,sipmsg.logthenthedate.gz} 2>/dev/null)
The loop above reads from a process substitution, the inverted form proposed by Charles Duffy, following this Bash FAQ.
If you prefer to iterate an array, you could build in this way:
# shellcheck disable=SC2207
files=( $(zgrep -lw "${text}" "${directory}"/{01..31}/{sipd.logthenthedate.gz,sipmsg.logthenthedate.gz} 2>/dev/null) )
for file in "${files[@]}"; do
# etc.
since in our particular case the files to match have no spaces in their names, the shellcheck warning is not so important (hence it is suppressed above).
BRs
Is there any sort option available in the find command to get the directory with the earliest access date/time?
find . -type d -printf "%A@ %p\n" | sort -n | tail -n 1 | cut -d " " -f 2-
If you prefer the filename without leading path, replace %p by %f.
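That is, applying the substitution just described:
find . -type d -printf "%A@ %f\n" | sort -n | tail -n 1 | cut -d " " -f 2-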
The below Linux command displays the access and modified time along with the size:
stat -f
find -type d -printf '%T+ %p\n' | sort | head -1
source
find -type d -printf '%T+ %p\n' | sort
This sounds more like a job for ls:
ls -ultd *|grep ^d
The problem with using find, at least on my system (cygwin/bash), is that find accesses the dirs, so all access-times result in current time, defeating your apparent purpose.
A simple shell script will also do:
unset -v oldest
for i in "$dir"/*; do
[ "$i" -ot "$oldest" -o "$oldest" = "" ] && oldest="$i"
done
Note: to find the oldest directory, use "$dir"/*/ above (thanks Cyrus) and -type d below with the find command.
In bash if you need a recursive solution, then you can rewrite it as a while loop with process substitution using find
unset -v oldest
while IFS= read -r i; do
[ "$i" -ot "$oldest" -o "$oldest" = "" ] && oldest="$i"
done < <(find "$dir" -type f)
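And, per the note above, the directory-only version of that recursive loop would look something like this (a sketch; you may also want -mindepth 1 on GNU/BSD find so "$dir" itself is excluded):
unset -v oldest
while IFS= read -r i; do
[ "$i" -ot "$oldest" -o "$oldest" = "" ] && oldest="$i"
done < <(find "$dir" -type d)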
I want to recursively search a directory tree and get the 10 most recently modified files.
For each one of these files, i want to create a symlink in my /home/mostrecent/ directory.
I know I could solve this with a scripting language, but I'm a bit miffed that I can't do it with a Linux command!
So far I have this:
find /home/myfiles -type f -printf '%TY-%Tm-%Td %TT %p\n' | sort | tail -n 10 | cut -c 32-
How do I create a symlink in /home/mostrecent for each one of these files, without using a scripting language?
Actually, bash is a scripting language, more than capable of doing that sort of stuff even from the command line :-)
Assuming that the command you posted works (and it seems to, based on my cursory testing), you can just do:
i=0
for f in $(CMD) ; do
ln -s $f $HOME/recent$i
((i++))
done
Or, as a one-liner:
i=0;for f in $(CMD);do ln -s $f $HOME/recent$i;((i++));done
This will create the files recent0 through recent9 in your home directory, which are symlinks to the most recent files.
Obviously, you should put your actual command where I've put the marker text CMD above. I've used the marker just so it formats nicely here on SO.
As Jan Hudec points out in a comment, that will only work for files without spaces, evil things in my opinion :-)
But, since people seem to use them, you can use the safer:
i=0
CMD | while read f; do
ln -s "$f" "$HOME/recent$i"
((i++))
done
And, again, the one-liner version:
i=0;CMD|while read f;do ln -s "$f" "$HOME/recent$i";((i++));done
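For instance, substituting the find pipeline from the question for CMD (a sketch; the cut offset depends on how wide the %TT timestamp is on your system):
i=0
find /home/myfiles -type f -printf '%TY-%Tm-%Td %TT %p\n' | sort | tail -n 10 | cut -c 32- | while read f; do
ln -s "$f" "$HOME/recent$i"
((i++))
done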
I solved this with sed.
All hail sed!
find /home/myfiles/ -type f -printf '%TY-%Tm-%Td %TT %p\n' | sort | tail -n 10 | cut -c 21- | sed -e "s/^/ln -s \"/" -e "s/$/\"/" -e "s/$/ \"\/home\/recent\/\"/" | sh
If i pipe sed to cat instead of sh, this is the output:
ln -s "/home/myfiles/1.simplest" "/home/recent/"
ln -s "/home/myfiles/2.with space" "/home/recent/"
ln -s "/home/myfiles/3.with'apostraphe" "/home/recent/"
ln -s "/home/myfiles/4.with'apostrophe space" "/home/recent/"
Thanks for your help.
Create symlinks to several file types modified in the last 24 hrs, with the same filenames (but a different path of course)
Thanks to Pax, Jan and Jon, with a little modification...
Make a 'recent' directory
mkdir ~/recent
Create 'getrecentfiles.sh' and add...
#!/usr/bin/bash
find $HOME -mtime 0 -name \*.txt -print -o \
-mtime 0 -name \*.pdf -print -o \
-mtime 0 -name \*.extensionname -print | while read f; do
ln -s "$f" "$HOME/recent/"
done
filters:
-mtime (n*24hrs) is time since last modified (n=1 shows only files modified between 24-48 hrs ago)
-o is the OR operator for multiple files (default is AND)
Make it executable, add it to your startup scripts, and make a shortcut to ~/recent on your desktop, to have the latest files you want on hand!
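For example, the last two steps might look like this (the script location and Desktop path are assumptions):
chmod +x ~/getrecentfiles.sh
ln -s ~/recent ~/Desktop/recent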