CMake Linking Path Substitute for Microsoft Azure Cross-Compiling - azure

I am porting the Microsoft Azure IoT SDK to OpenWrt (Atheros AR9330 rev 1, MIPS),
following the steps from https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/SDK_cross_compile_example.md and https://github.com/Azure/azure-iot-sdk-c/issues/58.
But I encountered a bug in the Azure CMake scripts:
libcurl is linked from the default path. For example,
in the file umqtt/samples/mqtt_client_sample/CMakeFiles/mqtt_client_sample.dir/link.txt:
.... -lcurl /home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libssl.so /home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcrypto.so -lpthread -lm -lrt -luuid -Wl,-rpath
It is obvious that libcurl and libuuid are picked up from the default system path instead of the target system's library path (while the OpenSSL path is the target's).
This bug has been reported to the Microsoft Azure team at https://github.com/Azure/iot-edge/issues/119, but it has not been fixed yet.
I found that if I substitute -lcurl and -luuid with the paths where those libraries actually exist (-lcurl -> /home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcurl.so, and likewise for -luuid), the compilation passes. But the manual substitution is toilsome work (there are a lot of link.txt files waiting to be modified), and it has to be done again for the next compilation.
I have tried to modify my platform file, mips_34kc.cmake, to add the lines mentioned in the last post of https://github.com/Azure/iot-edge/issues/119:
SET(CMAKE_EXE_LINKER_FLAGS "-L/home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcurl.so" CACHE STRING "" FORCE)
SET(MAKE_SHARED_LINKER_FLAGS "-L/home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcurl.so" CACHE STRING "" FORCE)
SET(CMAKE_C_FLAGS "-L/home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcurl.so" CACHE STRING "" FORCE)
But the link.txt files did not change.
I also tried to write a script that substitutes -lcurl with /home/gaiger/openwrt-cc/staging_dir/target-mips_34kc_uClibc-0.9.33.2/usr/lib/libcurl.so (using sed), but it only messed up the files, and I do not know how to write a script that seeks out the files recursively.
Could anyone give me a clue or help ? Thank you.

I have written a shell script to work around the bug.
#!/bin/bash
# Written by Gaiger Chen 撰也垓恪, to fix the Azure SDK error in the linking stage
echo "Backing up each link.txt as link.txt.bak in the same folder"
find -name link.txt -exec cp {} {}.bak -f \;
#find -name link.txt -exec rm {}.bak -f \;
#find . -ipath "*link.txt" -type f -exec cp {} {}.bak \;
#find . -ipath "*link.txt" -type f -exec rm {}.bak \;
FOUND_LINKINK_TXT=$(find -name link.txt)
OPENWRT_LIB_PATH=""
echo "$FOUND_LINKINK_TXT" | while read LINE_CONTENT
do
if [ -z "$OPENWRT_LIB_PATH" ]; then
OPENWRT_LIB_PATH=$(sed -rn 's/.* (.*)libssl.so .*/\1/p' "$LINE_CONTENT")
echo "$OPENWRT_LIB_PATH"
fi
echo fixing file: "$LINE_CONTENT".
sed -i "s|-lcurl|$OPENWRT_LIB_PATH/libcurl.so|g" "$LINE_CONTENT"
sed -i "s|-luuid|$OPENWRT_LIB_PATH/libuuid.so|" "$LINE_CONTENT"
done # while read LINE_CONTENT
FILE_NUM=$(echo "$FOUND_LINKINK_TXT" | wc -l)
echo "$FILE_NUM" files have been fixed.
More details can be found on my blog:
http://gaiger-programming.blogspot.tw/2017/07/build-and-exploit-microsoft-azure-sdk.html

Related

Find all executable files that depend on the specified library in the specified directory

My goal is to write a shell script that uses the "objdump -p" command to find all executable files in a specified directory that depend on a specified library (on OpenBSD).
I tried something like this:
find $1 -perm -111 -print0 | xargs -r0 objdump -p | grep -l "NEEDED $2"
But this solution doesn't work because grep only sees a single combined stream, so it cannot tell which file a match came from. The difficulty is determining the names of the executable files in which the specified library was found.
Can anyone suggest a solution using the "objdump -p" command?
The trick is to execute a shell script rather than a single command to be able to re-use the file name.
finddepend() {
    # Arg 1: The directory where to find
    # Arg 2: The library name
    basedir=$1
    libname=$2
    find "$basedir" \
        \( -perm -100 -o -perm -010 -o -perm -001 \) \
        \( -type f -o -type l \) \
        -exec sh -c '
            # Arg 0: Is a dummy _ for this inline script
            # Arg 1: The executable file path
            # Arg 2: The library name
            filepath=$1
            libname=$2
            objdump -p "$filepath" 2>/dev/null |
            if grep -qF " NEEDED $libname"; then
                printf %s\\n "${filepath##*/}"
            fi
        ' _ {} "$libname" \;
}
Example usage:
finddepend /bin libselinux.so
mv
systemctl
tar
sed
udevadm
ls
mknod
systemd
mkdir
ss
dir
vdir
cp
systemd-hwdb
netstat
Why do you want to use objdump when you can use ldd (List Dynamic Dependencies)? objdump gives a complete summary, which you then have to process just to extract the information you're looking for, while ldd gives you only that information.
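A minimal sketch of that ldd-based approach, mirroring the finddepend helper above (ldddepend is a hypothetical name):
ldddepend() {
    # Arg 1: the directory to search, Arg 2: the library name
    find "$1" -type f -exec sh -c '
        ldd "$1" 2>/dev/null | grep -q "$2" && printf "%s\n" "$1"
    ' _ {} "$2" \;
}
# example: ldddepend /bin libselinux.so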

find command: delete everything but one folder

I have this command:
find ~/Desktop/testrm -mindepth 1 -path ~/Desktop/testrm/.snapshot -o -mtime +2 -prune -exec rm -rf {} +
I want it to work as is, but it must avoid removing a specific directory ($ROOT_DIR/$DATA_DIR).
It must remove the files inside that directory but not the directory itself.
The r flag of rm is needed because the command also has to delete other directories.
-prune is not suitable since it would exclude the directory's contents and subdirectories as well.
You can exclude individual paths using the short circuiting behavior of -o (like you already did with ~/Desktop/testrm/.snapshot).
However, for each excluded path you also have to exclude all of its parent directories. Otherwise you would delete a/b/c by deleting a/b/ or a/ with rm -rf.
In the following script, the function orParents generates a part of the find command. Example:
find $(orParents a/b/c) ... would run
find -path a/b/c -o -path a/b -o -path a -o ....
#! /usr/bin/env bash
orParents() {
    p="$1"
    while
        printf -- '-path %q -o ' "$p"
        p=$(dirname "$p")
        [ "$p" != . ] && [ "$p" != / ]
    do :; done
}
find ~/Desktop/testrm -mindepth 1 \
    $(orParents "$ROOT_DIR/$DATA_DIR") -path ~/Desktop/testrm/.snapshot -o \
    -mtime +2 -prune -exec rm -rf {} +
Warning: You have to make sure that $ROOT_DIR/$DATA_DIR does not end with a / and does not contain glob characters like *, ?, and [].
Spaces are OK, as printf %q escapes them correctly. However, find -path interprets its argument as a glob pattern on top of that. We could add a second layer of quoting, maybe something like printf %q "$(sed 's/[][*?\]/\\&/g' <<< "$p")", but I'm not so sure about how exactly find -path interprets its argument.
Alternatively, you could write a script isParentOf and do ...
find ... -exec isParentOf "$ROOT_DIR/$DATA_DIR" {} \; -o ...
... to exclude $ROOT_DIR/$DATA_DIR and all of its parents. This is probably safer and more portable, but slower and a hassle to set up (find -exec bash -c ... and so on) if you don't want to add a script file to your path.
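A minimal sketch of such an isParentOf script (a hypothetical helper: it succeeds when its second argument equals the protected path passed as the first argument, or one of that path's parent directories):
#!/bin/sh
# isParentOf PROTECTED CANDIDATE
# exit 0 if CANDIDATE is PROTECTED itself or one of its parent directories
protected=$1
candidate=$2
p=$protected
while :; do
    [ "$candidate" = "$p" ] && exit 0
    parent=$(dirname "$p")
    [ "$parent" = "$p" ] && break    # reached / or .
    p=$parent
done
exit 1
With that script on the PATH, the -exec isParentOf "$ROOT_DIR/$DATA_DIR" {} \; test short-circuits the -o chain for the protected directory and its ancestors, so they never reach the rm -rf branch.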

Binary operator expected

I have a simple "binary operator expected" problem, but I can't resolve it. Can anyone tell me why this shell script does not work?
set -o nounset -o pipefail -o errexit
if [ -e /root/mom/*.php ]; then
find /root/mom/*.php -exec gpg --clearsign {} \;
else
echo "Hello world"
fi
If you want to do something for all php files in a directory, just use find:
find /root/mom -name "*.php" -exec gpg --clearsign {} \;
Note that find takes a list of directories to search in, not a list of plain files. There's no need to check whether the files exist before using it; it's not an error if none match. The "binary operator expected" error itself comes from the glob /root/mom/*.php expanding to more than one file name, so [ -e ... ] receives several arguments and expects a binary operator between them.
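If you really do want an existence check first, a sketch using a bash array with nullglob avoids the multi-word expansion that causes the error (nullglob makes an unmatched pattern expand to nothing):
#!/bin/bash
set -o nounset -o pipefail -o errexit
shopt -s nullglob                # unmatched globs expand to nothing
files=(/root/mom/*.php)
if (( ${#files[@]} > 0 )); then
    for f in "${files[@]}"; do
        gpg --clearsign "$f"     # sign each file, as in the original script
    done
else
    echo "Hello world"
fi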

Find multiple files and rename them in Linux

I have files like a_dbg.txt, b_dbg.txt ... on a SUSE 10 system. I want to write a bash shell script that renames these files by removing "_dbg" from them.
Google suggested the rename command, so I executed rename _dbg.txt .txt *dbg* in CURRENT_FOLDER.
My actual CURRENT_FOLDER contains the below files.
CURRENT_FOLDER/a_dbg.txt
CURRENT_FOLDER/b_dbg.txt
CURRENT_FOLDER/XX/c_dbg.txt
CURRENT_FOLDER/YY/d_dbg.txt
After executing the rename command,
CURRENT_FOLDER/a.txt
CURRENT_FOLDER/b.txt
CURRENT_FOLDER/XX/c_dbg.txt
CURRENT_FOLDER/YY/d_dbg.txt
It's not working recursively. How can I make this command rename files in all subdirectories? Like XX and YY, I will have many subdirectories whose names are unpredictable, and CURRENT_FOLDER will also contain some other files.
You can use find to find all matching files recursively:
find . -iname "*dbg*" -exec rename _dbg.txt .txt '{}' \;
EDIT: what are the '{}' and \; for?
The -exec argument makes find execute rename for every matching file found. '{}' will be replaced with the path name of the file. The last token, \;, is there only to mark the end of the -exec expression.
All that is described nicely in the man page for find:
-exec utility [argument ...] ;
True if the program named utility returns a zero value as its
exit status. Optional arguments may be passed to the utility.
The expression must be terminated by a semicolon (``;''). If you
invoke find from a shell you may need to quote the semicolon if
the shell would otherwise treat it as a control operator. If the
string ``{}'' appears anywhere in the utility name or the arguments
it is replaced by the pathname of the current file.
Utility will be executed from the directory from which find was
executed. Utility and arguments are not subject to the further
expansion of shell patterns and constructs.
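To make the '{}' placeholder and the escaped semicolon concrete, here is the same kind of invocation as a dry run (assuming the Perl-based rename, whose -n flag only prints what would be renamed):
find . -name '*_dbg.txt' -exec rename -n 's/_dbg\.txt$/.txt/' {} \;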
For renaming recursively, I use the following command:
find -iname \*.* | rename -v "s/ /-/g"
A small script I wrote to rename all files with the .txt extension to the .cpp extension under /tmp and its subdirectories recursively:
#!/bin/bash
for file in $(find /tmp -name '*.txt')
do
    mv "$file" "$(echo "$file" | sed -r 's|\.txt$|.cpp|')"
done
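A sketch of the same idea that also survives spaces and newlines in file names, using a null-delimited read and parameter expansion instead of sed:
#!/bin/bash
# rename *.txt to *.cpp under /tmp, handling unusual file names
find /tmp -type f -name '*.txt' -print0 |
while IFS= read -r -d '' f; do
    mv -- "$f" "${f%.txt}.cpp"
done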
with bash:
shopt -s globstar nullglob
rename _dbg.txt .txt **/*dbg*
find -execdir rename also works for non-suffix replacements on basenames
https://stackoverflow.com/a/16541670/895245 works directly only for suffixes, but this will work for arbitrary regex replacements on basenames:
PATH=/usr/bin find . -depth -execdir rename 's/_dbg.txt$/_.txt/' '{}' \;
or to affect files only:
PATH=/usr/bin find . -type f -execdir rename 's/_dbg.txt$/_.txt/' '{}' \;
-execdir first cds into the file's directory before executing, so rename operates only on the basename.
Tested on Ubuntu 20.04, find 4.7.0, rename 1.10.
Convenient and safer helper for it
find-rename-regex() (
    set -eu
    find_and_replace="$1"
    PATH="$(echo "$PATH" | sed -E 's/(^|:)[^\/][^:]*//g')" \
        find . -depth -execdir rename "${2:--n}" "s/${find_and_replace}" '{}' \;
)
GitHub upstream.
Sample usage to replace spaces ' ' with hyphens '-'.
Dry run that shows what would be renamed to what without actually doing it:
find-rename-regex ' /-/g'
Do the replace:
find-rename-regex ' /-/g' -v
Command explanation
The awesome -execdir option does a cd into the directory before executing the rename command, unlike -exec.
-depth ensures that the renaming happens first on children, and then on parents, to prevent potential problems with missing parent directories.
-execdir is required because rename does not play well with non-basename input paths, e.g. the following fails:
rename 's/findme/replaceme/g' acc/acc
The PATH hacking is required because -execdir has one very annoying drawback: find is extremely opinionated and refuses to do anything with -execdir if you have any relative paths in your PATH environment variable, e.g. ./node_modules/.bin, failing with:
find: The relative path ‘./node_modules/.bin’ is included in the PATH environment variable, which is insecure in combination with the -execdir action of find. Please remove that entry from $PATH
See also: https://askubuntu.com/questions/621132/why-using-the-execdir-action-is-insecure-for-directory-which-is-in-the-path/1109378#1109378
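As a quick illustration of what that sed expression does to PATH (the ./node_modules/.bin entry is just an example):
echo "/usr/bin:./node_modules/.bin:/usr/local/bin" | sed -E 's/(^|:)[^\/][^:]*//g'
# prints: /usr/bin:/usr/local/bin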
-execdir is a GNU find extension to POSIX. rename is Perl based and comes from the rename package.
Rename lookahead workaround
If your input paths don't come from find, or if you've had enough of the relative path annoyance, we can use some Perl lookahead to safely rename directories as in:
git ls-files | sort -r | xargs rename 's/findme(?!.*\/)\/?$/replaceme/g'
I haven't found a convenient analogue for -execdir with xargs: https://superuser.com/questions/893890/xargs-change-working-directory-to-file-path-before-executing/915686
The sort -r is required to ensure that files are processed before their respective directories (analogous to find -depth): in reverse order, longer paths come before shorter ones with the same prefix.
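A quick way to see that ordering, using the example names from the regex above:
printf '%s\n' findme findme/findme | sort -r
# findme/findme
# findme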
Tested in Ubuntu 18.10.
The script above can be written in one line:
find /tmp -name "*.txt" -exec bash -c 'mv "$0" "$(echo "$0" | sed -r "s|\.txt$|.cpp|")"' '{}' \;
If you just want to rename and don't mind using an external tool, then you can use rnm. The command would be:
#on current folder
rnm -dp -1 -fo -ssf '_dbg' -rs '/_dbg//' *
-dp -1 will make it recursive to all subdirectories.
-fo implies file only mode.
-ssf '_dbg' searches for files with _dbg in the filename.
-rs '/_dbg//' replaces _dbg with empty string.
You can run the above command with the path of the CURRENT_FOLDER too:
rnm -dp -1 -fo -ssf '_dbg' -rs '/_dbg//' /path/to/the/directory
You can use the command below (--no-act makes no changes; it only shows what would be renamed):
rename --no-act 's/\.html$/\.php/' *.html */*.html
This command worked for me. Remember first to install the Perl rename package:
find -iname \*.* | grep oldname | rename -v "s/oldname/newname/g"
To expand on the excellent answer by @CiroSantilliПутлерКапут六四事: do not match files in the find that we don't have to rename.
I have found this to improve performance significantly on Cygwin.
Please feel free to correct my ineffective bash coding.
FIND_STRING="ZZZZ"
REPLACE_STRING="YYYY"
FIND_PARAMS="-type d"
find-rename-regex() (
    set -eu
    find_and_replace="${1}/${2}/g"
    echo "${find_and_replace}"
    find_params="${3}"
    mode="${4}"
    if [ "${mode}" = 'real' ]; then
        PATH="$(echo "$PATH" | sed -E 's/(^|:)[^\/][^:]*//g')" \
            find . -depth -name "*${1}*" ${find_params} -execdir rename -v "s/${find_and_replace}" '{}' \;
    elif [ "${mode}" = 'dryrun' ]; then
        echo "${mode}"
        PATH="$(echo "$PATH" | sed -E 's/(^|:)[^\/][^:]*//g')" \
            find . -depth -name "*${1}*" ${find_params} -execdir rename -n "s/${find_and_replace}" '{}' \;
    fi
)
find-rename-regex "${FIND_STRING}" "${REPLACE_STRING}" "${FIND_PARAMS}" "dryrun"
# find-rename-regex "${FIND_STRING}" "${REPLACE_STRING}" "${FIND_PARAMS}" "real"
In case anyone is comfortable with fd and rnr, the command is:
fd -t f -x rnr '_dbg.txt' '.txt'
The rnr-only command is:
rnr -f -r '_dbg.txt' '.txt' *
rnr has the benefit of being able to undo the command.
On Ubuntu (after installing rename), this simpler solution worked the best for me. This replaces space with underscore, but can be modified as needed.
find . -depth | rename -d -v -n "s/ /_/g"
The -depth flag is telling find to traverse the depth of a directory first, which is good because I want to rename the leaf nodes first.
The -d flag on rename tells it to only rename the filename component of the path. I don't know how general the behavior is but on my installation (Ubuntu 20.04), it could be the file or the directory as long as it is the leaf node of the path.
I recommend the -n (no action) flag first along with -v, so you can see what would get renamed and how.
Using the two flags together, it renames all the files in a directory first and then the directory itself, working backwards, which is exactly what I needed.
A classic solution:
for f in $(find . -name "*dbg*"); do mv "$f" "$(echo "$f" | sed 's/_dbg//')"; done
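A sketch of a whitespace-safe variant of the same idea, using find -exec with a small inline script and parameter expansion (restricted to the _dbg.txt suffix from the question):
find . -type f -name '*_dbg.txt' -exec sh -c 'mv -- "$1" "${1%_dbg.txt}.txt"' _ {} \;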

Determine which executables link to a specific shared library on Linux

How can I list all executables on my Red Hat Linux system which link to libssl? I can get close with:
find / -type f -perm /a+x -exec ldd {} \; | grep libssl
ldd shows me which libraries the executable links with, but the line that contains the library name does not also show the executable's filename, so although I get a lot of matches with grep, I can't figure out how to back out the name of the executable from which the match occurred. Any help would be greatly appreciated.
find / -type f -perm /a+x -print0 |
while read -d $'\0' FILE; do
ldd "$FILE" | grep -q libssl && echo "$FILE"
done
I'm not sure, but maybe: sudo lsof | grep libssl.so
find /usr/bin/ -type f -perm /a+x | while read i; do match=`ldd $i | grep libssl`; [[ $match ]] && echo $i; done
Instead of using -exec, pipe to a while loop and check for a match before echoing the file name. Optionally, you could move the "ldd $i" call into the check itself, using either () or a real if/then/fi block.
find / -type f -perm /a+x -xdev | while read filename ; do
    if ldd "$filename" | grep -q "libssl" ; then
        echo "$filename"
    fi
done
The -xdev makes find stay on the same filesystem (i.e. it won't dive into /proc or /sys). Note: I constructed this on Mac OS X, where your -perm expression doesn't work, so I don't know whether it's correct. And instead of ldd I used otool -L, but the result should be the same.
