getting dependent modules (shared objects) for a binary - linux

I have a binary file on Linux, tclsh8.4, which depends on certain tcl*.so files.
Is there a way to get this information from the binary file itself?
The tcl*.so files on which tclsh8.4 depends are in another directory with limited permissions. What should I do to the binary file so that it uses the same .so files from some other location?
Would just copying the .so files over to the same location work?

Use ldd for this.
Copying the shared objects over would not work, since the Linux loader only looks for shared objects in the standard locations and the directories listed in /etc/ld.so.conf. You would need to set $LD_LIBRARY_PATH to tell the loader where to find the extra shared objects.
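A minimal sketch of both steps, assuming the shared objects live under /opt/tcl/lib (substitute the real directory on your system):
ldd ./tclsh8.4                                          # lists the .so files the binary needs and where they currently resolve
export LD_LIBRARY_PATH=/opt/tcl/lib:$LD_LIBRARY_PATH    # prepend the extra directory for this session
./tclsh8.4                                              # the loader now also searches /opt/tcl/lib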

To see the dependencies of a dynamic .so file you can use the ldd command. To get information about an executable file, check the readelf command.
If you need to check the dependencies of multiple .so files, you can use the following script:
#!/bin/bash
# dependencies.sh
# Usage: dependencies.sh <path>  (prints the dependencies of every .so found under <path>)
if [ $# -ne 1 ]
then
    echo 'You need to specify the path'
    exit 1
fi
path=$1
# find prints one file name per line; read them one at a time so each ldd call gets a single file
find "$path" -name '*.so' | while read -r file
do
    echo "== $file =="
    ldd "$file"
done
exit 0
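For the readelf route mentioned at the start of this answer, a quick sketch: the dynamic section of an ELF file records its direct dependencies as NEEDED entries, so
readelf -d ./tclsh8.4 | grep NEEDED    # prints only the direct (DT_NEEDED) dependencies
shows what the binary itself asks for, while ldd also resolves the transitive dependencies.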
I hope it helps.

Related

Should PATH contain directories or full paths to binaries?

I am trying to set up a correct PATH, but I'm wondering what it should contain. If I have
/usr/bin/ls
/usr/local/bin/ls
and I want to prefer the one in /usr/local/bin, which of the following should I use?
PATH=/usr/local/bin/ls:/usr/bin/ls
PATH=/usr/local/bin:/usr/bin
or something else entirely?
This is not per se a suitable question for Stack Overflow.
I expect this to be closed as General Computing or Too Broad;
but the answer is frequently needed by beginners, so I hope this won't be deleted.
PATH works only with directories, not with single files
From the POSIX standard (emphasis mine)
PATH
This variable shall represent the sequence of path prefixes that certain functions and utilities apply in searching for an executable file known only by a filename. The prefixes shall be separated by a colon ( ':' ). [...] The list shall be searched from beginning to end, applying the filename to each prefix, until an executable file with the specified name and appropriate execution permissions is found.
When you type ls into your shell and your PATH is set to /usr/local/bin/ls:/usr/bin/ls, your shell will …
… look for an executable with the path /usr/local/bin/ls/ls (note the double ls at the end).
As that path does not exist on your system, your shell will proceed to look for an executable with the path /usr/bin/ls/ls (double ls again). That path doesn't exist either.
Having failed to find an executable via any of the entries in PATH, your shell will print something like bash: ls: command not found.
So, what did we learn? Paths listed by PATH have to be directories. You cannot list single files. Therefore the correct answer in your case is PATH=/usr/local/bin:/usr/bin.
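A quick way to check which copy wins once PATH is set that way (a sketch; try it in a throwaway shell, since the assignment replaces your whole PATH):
PATH=/usr/local/bin:/usr/bin
command -v ls     # prints /usr/local/bin/ls if that file exists and is executable,
                  # otherwise the lookup falls through to /usr/bin/ls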
Where things get interesting
Imagine the following situation. You have two versions of program c1 and two versions of program c2. The different versions are stored in the directories /a/ and /b/.
/a/c1
/a/c2
/b/c1
/b/c2
How can we set PATH to prefer /a/c1 over /b/c1 but at the same time /b/c2 over /a/c2?
Sadly, there is no way to achieve this directly as we can only specify directories in PATH. We have to move/rename some files or create symlinks and use the symlinks inside the paths. One possible solution:
mkdir /c
ln -s /a/c1 /c/c1
ln -s /b/c2 /c/c2
export PATH=/c:/a:/b
The trailing :/a:/b is not really necessary here. I included them under the assumption that /a and /b contain more executables than just c1 and c2.
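To check that the symlink directory wins as intended, a quick sketch:
command -v c1     # prints /c/c1, which the symlink resolves to /a/c1
command -v c2     # prints /c/c2, which the symlink resolves to /b/c2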
Indeed, as you can easily find out through experimentation, the variable PATH should already contain a list of directories which are consulted in that order. In fact, you should already find that you have /usr/local/bin and /usr/bin in the default PATH, usually in this order (though perhaps with other directories between them, and certainly with more directories around them).
To inspect your current PATH value, try
echo "$PATH"
or for a slightly more human-readable rendering
echo "${PATH//:/$'\n'}" # bash only
or
echo "$PATH" | tr ':' '\012' # any POSIX system
If you managed to set your PATH to an invalid value (which would cause simple commands like ls and cat to no longer be found and produce command not found errors), you can try
PATH=/usr/local/bin:/usr/bin:/bin
to hopefully restore at least the essential functionality, so that you can use cp or a simple system editor to get back to the original, safe, system default PATH.

how to fix "cannot open shared object file" in Cshell script?

I am working on a Linux system and have a Fortran executable, e.g. a.exe, which runs successfully when executed directly. I want to execute this a.exe inside a C shell script, but it always reports the error "error while loading shared libraries: libnetcdff.so.6: cannot open shared object file: No such file or directory".
When I do 'ldd a.exe', it reports the library dependencies of the executable:
libnetcdff.so.6 => /met5/ZR_LOCAL_LIBS/lib/libnetcdff.so.6 (0x00002ab536656000)
The library does exist, and I also have its path set in $LD_LIBRARY_PATH.
a.exe needs two inputs, $INFILE1 and $INFILE2, and will generate its output at $OUTFILE.
It can be executed by hand by typing ./a.exe and providing the paths of $INFILE1 and $INFILE2; however, when I write a simple C shell script of the form:
#!/bin/csh
#
setenv BASE $PWD
setenv PROGNAME a.exe
cd $BASE
setenv INFILE1 $BASE/agtsc_ave_2017.nc
setenv INFILE2 $BASE/agtsc_ave_2029.nc
setenv OUTFILE $BASE/emis_pct_2029_relative_to_2017.nc
if ( -e $OUTFILE ) rm -f $OUTFILE
$BASE/$PROGNAME
it will report the error as:
a.exe: error while loading shared libraries: libnetcdff.so.6: cannot open shared object file: No such file or directory
I have no idea how to debug this. Can anyone help me fix it? Thanks a lot!
I think I found the problem. I was using somebody else's .cshrc, and a $path issue in it prevented the shell script from finding the corresponding libraries. I deleted the old .cshrc file, created a new one suited to my own setup, and the problem is gone. Thanks.
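If the script has to be robust against whatever the caller's environment or a stray .cshrc does, another option is to set the library path explicitly inside the script itself. A minimal sketch in csh syntax, using the library location reported by ldd above:
#!/bin/csh
# make sure the loader can find libnetcdff.so.6 regardless of the caller's environment
if ( $?LD_LIBRARY_PATH ) then
    setenv LD_LIBRARY_PATH /met5/ZR_LOCAL_LIBS/lib:${LD_LIBRARY_PATH}
else
    setenv LD_LIBRARY_PATH /met5/ZR_LOCAL_LIBS/lib
endif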

Libraries paths defined by Master bash script, but having to run it in every terminal session, how to make more efficient?

I have built a set of libraries, and many of my Fortran programs will use them. This creates a problem: if I ever need to change the location of the libraries, I will have to update the path directories individually in each make file.
How is this usually overcome? I have planned instead to have each make file read a path from a single master path file in the home or root directory (this file's location will never change). Within this file is the path for each library, and if any path changes only this file needs to be updated.
So I wrote a bash script file, called Master_Library_Paths:
export Library1_Name={Library1_Name_Path}
echo $Library1_Name
export Library2_Name={Library2_Name_Path}
echo $Library2_Name
export Library3_Name={Library3_Name_Path}
echo $Library3_Name
And placed it in my home directory. Then in the make files, I have a line:
$(shell . {Path for Master_Library_Paths} ) \
And load the libraries:
-I$(Library1_Name)
-I$(Library2_Name)
-I$(Library3_Name)
This works great if I run ./Master_Library_Paths in the terminal session first and then go to the directory to compile the program; however, that is quite time-consuming. How can I fix it so that these variables Library1_Name, Library2_Name, etc. are known throughout the system?
New system-wide library search paths can be added in /etc/ld.so.conf or /etc/ld.so.conf.d/.
Or, for environment variables like these, in /etc/profile.d/.
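As a sketch of the /etc/profile.d/ approach (the file name and paths below are made up; adjust them to your layout): a small script dropped there is sourced by login shells, so the variables are set in every new session without running Master_Library_Paths by hand.
# /etc/profile.d/master_library_paths.sh   (hypothetical name)
export Library1_Name=/opt/libs/library1
export Library2_Name=/opt/libs/library2
export Library3_Name=/opt/libs/library3
The make files can then keep using -I$(Library1_Name) and so on, because make imports exported environment variables as make variables.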

Doubt regarding executable files in linux

I have a program written in C, which is named computeWeight.c, and to compile it I use the following code:
chaitu#ubuntu:~$ gcc -Wall -o computeWeight computeWeight.c
//to execute it:
chaitu#ubuntu:~$ ./computeWeight
Is there any mechanism by which I can directly use it as mentioned below?
chaitu#ubuntu:~$ computeWeight
Should I be changing any permissions on the executable to get this?
You need to add "." to your path. Some people regard this as dangerous, though. See for instance http://www.arsc.edu/support/policy/dotinpath.html .
The $PATH variable defines the places where Linux will look for executables (try typing echo $PATH in a terminal). You need to put that file in one of those places. One way is to add a bin folder in your home directory, put the executable file there, and add this line (which adds the bin directory in your home folder to the search path) to your .cshrc file so that it is applied in every shell:
set PATH = ($PATH $HOME/bin)
With that said I don't think typing ./ is that bad.
export PATH=$PATH:.
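If you go the ~/bin route in bash rather than csh, a sketch of the equivalent setup (assuming the compiled binary is simply copied into $HOME/bin):
mkdir -p ~/bin
cp computeWeight ~/bin/
export PATH="$HOME/bin:$PATH"    # put this line in ~/.bashrc to make it permanent
computeWeight                    # now runs without the leading ./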

Move/copy files/folder in linux/solaris using only bash built-ins

There was a situation where somebody moved the whole root directory into a subdirectory on a remote system, so all the system tools like cp, mv, etc. didn't work anymore. We still had an active session, but couldn't find a way to copy/move the files back using only bash built-ins.
Does somebody know of a way to achieve this?
I even thought about copying the cp or mv binary into the current directory with
while read -r LINE; do echo "$LINE"; done
and then redirecting this to a file, but it didn't work. I guess that's because of all the special non-printable characters in a binary file, which can't be copied/displayed using echo.
thanks.
/newroot/lib/ld-linux.so.2 --library-path /newroot/lib \
/newroot/bin/mv /newroot/* /
(Similar for Solaris, but I think the dynamic linker is named ld.so.1 or something along those lines.)
Or, if your shell is sh-like (not csh-like),
LD_LIBRARY_PATH=/newroot/lib /newroot/bin/mv /newroot/* /
If you have prepared by installing sash beforehand, that shell is statically linked and has a copy built in (-cp).
Otherwise LD_LIBRARY_PATH=/copied/to/path/lib /copied/to/path/bin/cp might work?
I think it might have a problem with not having ld.so in the expected place.
Here's a reasonable ghetto replacement for cp. You'll want echo -E if the file ends with a new line (like most text files), echo -nE if it doesn't (like most binaries).
echo -nE "`< in.file`" > out.file
Old thread, but I made exactly the same stupid mistake. /lib64 was moved to /lib64.bak remotely and everything stopped working.
This was an x86_64 install, so ephemient's solution did not work as written:
# /lib64.bak/ld-linux.so.2 --library-path /lib64.bak/ /bin/mv /lib64.bak/ /lib64
/bin/mv: error while loading shared libraries: /bin/mv: wrong ELF class: ELFCLASS64
In that case, a different ld-linux had to be used:
# /lib64.bak/ld-linux-x86-64.so.2 --library-path /lib64.bak/ /bin/mv /lib64.bak/ /lib64
Now the system is salvaged. Thanks ephemient!
/subdir/bin/mv /subdir /
or am I missing something in your explanation?
If you have access to another machine, one solution is to download and compile a BusyBox binary. It is a single binary that contains most of the common tools you need to restore your system. This might not work if your system is remote, though.
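Assuming you manage to get a statically linked busybox binary onto the machine and it is executable, say at /tmp/busybox (the location is hypothetical), its applets can be invoked directly with no installation step; a sketch, reusing the /newroot name from the answer above:
/tmp/busybox mv /newroot/* /     # every common tool is available as "busybox <applet> ..."
/tmp/busybox ls /                # for example, check that the root directory is back in place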
