I just installed CMake on a Linux server, building it from source in a new folder after downloading it with wget http://www.cmake.org/files/v2.8/cmake-2.8.3.tar.gz. The compilation worked, but the cmake command is not recognized from other paths. Should I copy the entire contents of the cmake-2.8.3 folder to /usr/local/bin? Or is it only the contents of the bin folder that need to be copied?
Thanks
On Linux and other Unix-based systems, a common arrangement is to install packages to /opt and add relevant entries to the PATH environment variable to make them available. This is intended for packages not provided by the native package manager or distribution. By choosing an appropriate directory structure, this can be done in a way that also allows different versions to be installed simultaneously; the user picks the one they want by adding the relevant directory to the PATH.
For the specific case of CMake asked about in the question, you can use a directory structure like /opt/cmake/<version> and then add the relevant /opt/cmake/<version>/bin directory to your PATH (e.g. /opt/cmake/3.8.2/bin for the 3.8.2 CMake release). You can even just download the official pre-built CMake tarballs, unpack them and move the top level directory into the /opt/cmake area as the particular version you downloaded. I've used this successfully on Linux, MacOS and Solaris, as I'm sure have many others.
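For example, a minimal sketch for the 3.8.2 release on x86_64 Linux (the tarball name, and therefore the URL, varies with version and platform):
wget https://cmake.org/files/v3.8/cmake-3.8.2-Linux-x86_64.tar.gz
tar xzf cmake-3.8.2-Linux-x86_64.tar.gz
sudo mkdir -p /opt/cmake
sudo mv cmake-3.8.2-Linux-x86_64 /opt/cmake/3.8.2
export PATH=/opt/cmake/3.8.2/bin:$PATH    # add this line to ~/.bashrc to make it permanent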
Note that once you've run CMake on a particular source tree, the cmake executable doesn't need to be on the PATH any more. If cmake needs to be re-run, the build will do so itself; it records the full path to the cmake executable in its own cache, so the PATH isn't even consulted (this is essential in ensuring the same version of CMake continues to be used for all builds regardless of the PATH, since the PATH can change between login sessions, etc.). You would only need cmake on your PATH if you intend to invoke cmake manually, or for the first time you run it on a source tree, but in both of those cases you can always just use the full path to the cmake executable if you prefer.
I should also add that the entire set of files provided in the CMake package are required, not just the bin directory. CMake makes extensive use of files in its other directories, such as the various modules it comes with. If you are building CMake from source, you may want to build the package target so you get a relocatable tarball or similar which will contain everything that should be included when you provide a CMake package on your system.
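For example, a rough sketch when building from the unpacked CMake source tree (the package target is provided via CPack; the exact archive name it produces varies by version and platform):
./bootstrap
make
make package    # produces a relocatable archive containing bin/, the modules, etc.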
After the build, use 'sudo make install'. This will make sure the correct libraries and binaries are copied to their proper places.
Usually this will install the binary to /usr/local/bin.
Make sure your PATH variable includes that directory.
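A sketch of that flow from the unpacked source directory (CMake's bootstrap script defaults to a /usr/local prefix unless you pass --prefix):
./bootstrap
make
sudo make install
which cmake    # should now report /usr/local/bin/cmake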
sudo make install did not copy to /usr/local/bin/ for some reason, so I copied the contents of CMake's bin/ directory to /usr/local/bin and it worked:
cp -a bin/. /usr/local/bin/
I've created packages previously by using a Makefile, running "dh_make --createorig", adjusting the files in the generated debian folder, and finally using the debuild command to generate the .deb. That workflow is simple and works for me, but I was told to adjust it a little so that the project can be built from the sources without requiring the orig files, and I'm unsure how to do that. According to this (https://askubuntu.com/questions/17508/how-to-have-debian-packaging-generate-two-packages-given-an-upstream-source-arch) and this structure (http://bazaar.launchpad.net/~andrewsomething/imagination/debian/files) there must be a way. In my case I would have a folder with the sources and all of that, and then a debian folder (generated with dh_make), but I'm unsure how to stop the debuild command from asking for the .orig files, or whether I should be using some other command for this.
Sorry for the superlong question, I think I provided all the relevant information, but I can clarify if anything is fuzzy.
The difference is in the version number in the file debian/changelog.
If you use 1.2.3-1, it implies Debian revision 1 of an upstream package 1.2.3, for which the build programs (dpkg-buildpackage and whichever wrappers sit on top of it) assume an .orig.tar.gz to exist.
But if you use 1.2.3, it will consider the package 'Debian native', and the archive is just a .tar.gz rather than an .orig.tar.gz.
Now the choice should not be driven by your convenience alone. If this has an upstream source, use the first scheme; if not, the second can be fine. Among the packages I maintain I have both, but far more of the former.
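As an illustration (the package name mypkg is made up), only the version in the first line of debian/changelog differs:
mypkg (1.2.3-1) unstable; urgency=medium    <-- non-native: dpkg-source expects ../mypkg_1.2.3.orig.tar.gz
mypkg (1.2.3) unstable; urgency=medium      <-- native: dpkg-source builds a plain ../mypkg_1.2.3.tar.gz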
If you want to create a debian directory directly in the source package (i.e. you're packaging your own work, rather than an upstream release), you could use the --native option to dh_make.
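A minimal sketch (mypackage and the version are placeholders), run from inside the source directory:
cd mypackage-1.2.3/
dh_make --native -p mypackage_1.2.3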
I think the question was asked differently; it was fairly clear that the project has an upstream source, and that's probably not a good reason to change its format to native.
I currently package an upstream Python project, and this exact same question came to my mind. Why isn't there any dh_* hook to override in order to generate the orig tarball on the fly, so you don't get bothered by:
This package has a Debian revision number but there does not seem to be
an appropriate original tar file or .orig directory in the parent directory;
For a start, I added a Makefile to the project:
# Makefile
VERSION:=$(shell dpkg-parsechangelog -S Version | sed -rne 's,([^-\+]+)+(\+dfsg)*.*,\1,pi')
UPSTREAM_PACKAGE:=click_${VERSION}.orig.tar.gz

dpkg:
	tar cafv ../${UPSTREAM_PACKAGE} . --exclude debian --exclude .git
	debuild -uc -us

clean:
	rm -f ../${UPSTREAM_PACKAGE}
	debuild clean
So a simple "make clean dpkg" was all it took to build the package.
Now I think the question remains: if someone has a bright idea for how to insert the tar operation into debian/rules, so that I could just call debuild -uc -us and have it magically create the orig tarball, that would be awesome :)
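One possibility I haven't verified (a sketch only, assuming the dh sequencer is in use): build the tarball from the clean step, since dpkg-buildpackage runs debian/rules clean before dpkg-source goes looking for the orig tarball. The click_ name comes from the Makefile above, and the version extraction is a simpler variant of its sed; recipe lines are indented with a tab:
#!/usr/bin/make -f
VERSION := $(shell dpkg-parsechangelog -S Version | sed -e 's/-[^-]*$$//')

%:
	dh $@

override_dh_auto_clean:
	tar -caf ../click_$(VERSION).orig.tar.gz . --exclude=debian --exclude=.git
	dh_auto_clean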
I'm zipping a pre-built (no source/object files) binary application for distribution. The binary application requires a couple of libraries not included by default. The only way I seem to be able to get the application to start on the end user's machine is by including a run.sh that sets the library path to the current directory:
export LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH
./MyApp.out
However, I'd really like to allow the user to just unzip the zip and doubleclick MyApp.out (without the shell script). Can I edit MyApp.out to search the current directory for the library? I've done something similar on OSX using install_name_tool, but that tool isn't available here.
You want to set the rpath. See this answer. So link using
gcc yourobjects*.o -L/some/lib/dir/ -lsome -Wl,-rpath,.
But you might even want to use -Wl,-rpath,$PWD or perhaps -Wl,-rpath,'$ORIGIN'. See this.
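For instance, a sketch of the $ORIGIN variant, which makes the binary search the directory it was loaded from (main.o and libmylib are placeholders):
gcc main.o -L. -lmylib -Wl,-rpath,'$ORIGIN' -o MyApp.out
# the single quotes stop the shell from expanding $ORIGIN; in a Makefile write $$ORIGIN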
You could also (and this should work for a pre-built executable) configure your /etc/ld.so.conf by adding a line there with an absolute path (of the directory containing the lib), then running ldconfig -v ... See ldconfig(8)
I would suggest adding /usr/local/lib into /etc/ld.so.conf and making a symlink from /usr/local/lib/libfoo.so to e.g. $HOME/libfoo.so etc... (then run ldconfig ...). I don't think adding a user specific directory to /etc/ld.so.conf is reasonable ...
PS. What you really want is to package your application (e.g. as a *.deb package for Debian or Ubuntu, or an *.rpm for Fedora or Redhat). Package management systems handle dependencies!
I have a debian package that I built that contains a tar ball of the files, a control file, and a postinst file. Its built using dpkg-deb and it installs properly using dpkg.
The modification I would like to make is to have the installation directory of the files be determined at runtime, based on an environment variable that will be set when dpkg -i is run on the .deb file. I echo out the environment variable in the postinst script and I can see that it's set properly.
My questions:
1) Is it possible to dynamically determine the installation directory at runtime?
2) If it's possible, how would I go about it? I have read about the rules file and the mypackage.install files, but I don't know if either of these would allow me to accomplish this.
I could hack it by copying the files to the target location in the postinst script, but I would prefer to do it the right way if possible.
Thanks in advance!
So this is what I found out about this problem over the past couple of weeks.
With prepackaged binaries you can't build a Debian package whose destination directory is determined dynamically at runtime. I believe this might be possible when installing a package built from source, where you can set the install directory using configure, but since these are embedded Ubuntu machines that don't have make, I didn't pursue that option. I did work out a non-traditional method (hack) for installing that did work. Since Debian packages simply contain a tar ball relative to /, build your package relative to a directory under /tmp. In the postinst script you can then determine where to copy the files from the archive into a permanent location.
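A rough sketch of the postinst side of that hack (MYAPP_INSTALL_DIR is the environment variable from the question, and /tmp/myapp-staging is a made-up staging path shipped inside the .deb):
#!/bin/sh
set -e
# the tree given to dpkg-deb contains tmp/myapp-staging/..., so the payload
# lands under /tmp at unpack time and is relocated here
mkdir -p "$MYAPP_INSTALL_DIR"
cp -a /tmp/myapp-staging/. "$MYAPP_INSTALL_DIR/"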
I expected that after rebooting, and the automatic deletion of the subdirectory under /tmp, dpkg might not know that the package existed. This wasn't a problem: when I ran 'dpkg -l myapp' it showed as still installed, and updating the package using dpkg/apt-get also worked without a hitch.
What I did find is that if you attempted to remove the package using 'dpkg -r myapp', dpkg would try to remove /tmp, which wasn't good. However, /tmp isn't easily removed, so it never succeeded. Plus, in our situation we never remove packages but instead simply upgrade them.
I eventually had to abandon the universal package because code differences in the sources forced a recompile per platform, but otherwise I would have left it this way, and it did work.
I tried using --instdir to change the install directory of the package, and it does relocate the files, but dpkg fails since its own database can't be found relative to the new instdir; using --instdir is sort of like a chroot. I also tried --admindir and --root in various combinations to see if I could keep the dpkg database relative to / but relocate the installed files, but they didn't work. I guess rpm has a relocate option that works, but Ubuntu's dpkg does not.
You can also write a script that runs dpkg-deb six times with a different environment each time, generating six different packages. When you make a modification, you simply run your script, all six packages get generated, and you can install them on your machines, avoiding postinst hacking!
Why not install to a standard location and simply use a postinst script to create symbolic links to the desired location? This is much cleaner, and shouldn't break anything in dpkg -i.
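A sketch of that symlink approach in postinst (again, /opt/myapp and MYAPP_INSTALL_DIR are assumed names, not anything the package defines):
#!/bin/sh
set -e
# the package payload is installed to the fixed path /opt/myapp; link it into
# the directory chosen via the environment at install time
ln -sfn /opt/myapp "$MYAPP_INSTALL_DIR/myapp"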
The program is part of the Xenomai test suite, cross-compiled on a Linux PC with a Linux+Xenomai ARM toolchain.
# echo $LD_LIBRARY_PATH
/lib
# ls /lib
ld-2.3.3.so libdl-2.3.3.so libpthread-0.10.so
ld-linux.so.2 libdl.so.2 libpthread.so.0
libc-2.3.3.so libgcc_s.so libpthread_rt.so
libc.so.6 libgcc_s.so.1 libstdc++.so.6
libcrypt-2.3.3.so libm-2.3.3.so libstdc++.so.6.0.9
libcrypt.so.1 libm.so.6
# ./clocktest
./clocktest: error while loading shared libraries: libpthread_rt.so.1: cannot open shared object file: No such file or directory
Is the .1 at the end part of the filename? What does that mean anyway?
Your library is a dynamic library.
You need to tell the operating system where it can locate it at runtime.
To do so, follow these easy steps:
Find where the library is placed if you don't know it.
sudo find / -name the_name_of_the_file.so
Check for the existence of the dynamic library path environment variable (LD_LIBRARY_PATH):
echo $LD_LIBRARY_PATH
If there is nothing to be displayed, add a default path value (or not, if you wish):
LD_LIBRARY_PATH=/usr/local/lib
We add the desired path, export it and try the application.
Note that the path should be the directory where the library file lives, not the file itself. So if path.so.something is in /my_library/path.so.something, it should be:
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/my_library/
export LD_LIBRARY_PATH
./my_app
Here are a few solutions you can try:
ldconfig
As AbiusX pointed out: If you have just now installed the library, you may simply need to run ldconfig.
sudo ldconfig
ldconfig creates the necessary links and cache to the most recent shared libraries found in the directories specified on the command line, in the file /etc/ld.so.conf, and in the trusted directories (/lib and /usr/lib).
Usually your package manager will take care of this when you install a new library, but not always, and it won't hurt to run ldconfig even if that is not your issue.
Dev package or wrong version
If that doesn't work, I would also check out Paul's suggestion and look for a "-dev" version of the library. Many libraries are split into dev and non-dev packages. You can use this command to look for it:
apt-cache search <libraryname>
This can also help if you simply have the wrong version of the library installed. Some libraries are published in different versions simultaneously, for example, Python.
Library location
If you are sure that the right package is installed, and ldconfig didn't find it, it may just be in a nonstandard directory. By default, ldconfig looks in /lib, /usr/lib, and directories listed in /etc/ld.so.conf and $LD_LIBRARY_PATH. If your library is somewhere else, you can either add the directory on its own line in /etc/ld.so.conf, append the library's path to $LD_LIBRARY_PATH, or move the library into /usr/lib. Then run ldconfig.
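For example, a minimal sketch assuming the library sits in /opt/mylibs (a made-up path); on most distributions /etc/ld.so.conf pulls in /etc/ld.so.conf.d/*.conf, so a drop-in file works as well as editing /etc/ld.so.conf directly:
echo /opt/mylibs | sudo tee /etc/ld.so.conf.d/mylibs.conf
sudo ldconfig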
To find out where the library is, try this:
sudo find / -iname '*libraryname*.so*'
(Replace libraryname with the name of your library; the quotes keep the shell from expanding the wildcards.)
If you go the $LD_LIBRARY_PATH route, you'll want to put that into your ~/.bashrc file so it will run every time you log in:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/library
Update
While what I write below is true as a general answer about shared libraries, I think the most frequent cause of these sorts of message is that you've installed a package, but not installed the -dev version of that package.
Well, it's not lying - there is no libpthread_rt.so.1 in that listing. You probably need to re-configure and re-build it so that it depends on the library you have, or install whatever provides libpthread_rt.so.1.
Generally, the numbers after the .so are version numbers, and you'll often find that they are symlinks to each other, so if you have version 1.0 of libfoo.so, you'll have a real file libfoo.so.1.0, and symlinks libfoo.so and libfoo.so.1 pointing to libfoo.so.1.0. And if you install version 1.1 without removing the other one, you'll have a libfoo.so.1.1, and libfoo.so.1 and libfoo.so will now point to the new one, but any code that requires that exact version can use the libfoo.so.1.0 file. Code that just relies on the version 1 API, but doesn't care if it's 1.0 or 1.1, will specify libfoo.so.1. As orip pointed out in the comments, this is explained well here.
In your case, you might get away with symlinking libpthread_rt.so.1 to libpthread_rt.so. No guarantees that it won't break your code and eat your TV dinners, though.
You need to ensure that you specify the library path during linking when you compile your .c file:
gcc -I/usr/local/include xxx.c -o xxx -L/usr/local/lib -Wl,-R/usr/local/lib
The -Wl,-R part tells the resulting binary to also look for the library in /usr/local/lib at runtime before trying to use the one in /usr/lib/.
Try adding LD_LIBRARY_PATH, which indicates library search paths, to your ~/.bashrc file:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path_to_your_library
It works!
The linux.org reference page explains the mechanics, but doesn't explain any of the motivation behind it :-(
For that, see Sun Linker and Libraries Guide
In addition, note that "external versioning" is largely obsolete on Linux, because symbol versioning (a GNU extension) allows multiple incompatible versions of the same function to be present in a single library. This extension is how glibc has kept the same external version, libc.so.6, for the last 10 years.
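You can see the versioned symbols on a typical glibc system (the library path below is the Debian/Ubuntu x86_64 location and will differ elsewhere):
objdump -T /lib/x86_64-linux-gnu/libc.so.6 | grep GLIBC_ | head
# each symbol carries a version tag such as GLIBC_2.2.5, so several
# incompatible implementations coexist inside the single libc.so.6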
cd /home/<user_name>/
sudo vi .bash_profile
Add these lines at the end:
LD_LIBRARY_PATH=/usr/local/lib:<any other paths you want>
export LD_LIBRARY_PATH
Another possible solution depending on your situation.
If you know that libpthread_rt.so.1 is the same as libpthread_rt.so, then you can create a symlink with:
ln -s /lib/libpthread_rt.so /lib/libpthread_rt.so.1
Then ls -l /lib should now show the symlink and what it points to.
I had a similar error and it wasn't fixed by setting LD_LIBRARY_PATH in ~/.bashrc.
What solved my issue was adding a .conf file and loading it.
Open a terminal, switch to root (su), and run:
gedit /etc/ld.so.conf.d/myapp.conf
Add your library path in this file and save (e.g. /usr/local/lib).
You must run the following command to activate the path:
ldconfig
Verify Your New Library Path:
ldconfig -v | less
If this shows your library files, then you are good to go.
running:
sudo ldconfig
was enough to fix my issue.
I had this error when running my application with Eclipse CDT on Linux x86.
To fix this:
In Eclipse:
Run as -> Run configurations -> Environment
Set the path
LD_LIBRARY_PATH=/my_lib_directory_path
I wanted to add: if your libraries are in a non-standard path, run ldconfig followed by that path.
For instance I had to run:
sudo ldconfig /opt/intel/oneapi/mkl/2021.2.0/lib/intel64
to make R compile against Intel MKL.
All I had to do was run:
sudo apt-get install libfontconfig1
I was in the folder located at /usr/lib/x86_64-linux-gnu and it worked perfectly.
Try to install lib32z1:
sudo apt-get install lib32z1
If you are running your application on Microsoft Windows, the path to dynamic libraries (.dll) needs to be defined in the PATH environment variable.
If you are running your application on UNIX, the path to dynamic libraries (.so) needs to be defined in the LD_LIBRARY_PATH environment variable.
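For example, on a UNIX-like system (bash; ./libs and ./your_app are placeholders for wherever your .so files and binary live):
export LD_LIBRARY_PATH="$PWD/libs:$LD_LIBRARY_PATH"
./your_app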
The error occurs because the system cannot locate the library file mentioned. Take the following steps:
Running locate libpthread_rt.so.1 will list the path of all the files with that name. Let's suppose a path is /home/user/loc.
Copy the path and run cd /home/USERNAME. Replace USERNAME with the name of the currently active user with which you want to run the file.
Run vi .bash_profile and, at the end of the LD_LIBRARY_PATH value, just before the final ., add /lib:/home/user/loc: (using the path found above). Save the file.
Close terminal and restart the application. It should run.
I use Ubuntu 18.04.
Installing the corresponding -dev package worked for me:
sudo apt install libgconf2-dev
Before installing the above package, I was getting the below error:
turtl: error while loading shared libraries: libgconf-2.so.4: cannot open shared object file: No such file or directory
I got this error and I think it has the same cause as yours:
error while loading shared libraries: libnw.so: cannot open shared object file: No such file or directory
Try this. Fix permissions on files:
sudo su
cd /opt/Popcorn (or wherever it is)
chmod -R 555 * (755 if not ok)
chown -R root:root *
A similar problem can be found here.
I've tried the mentioned solution and it actually works.
The solutions in the previous answers may work, but the following is an easy way to fix it: reinstall the package libwbclient.
On Fedora:
dnf reinstall libwbclient
You can read about libraries here:
https://domiyanyue.medium.com/c-development-tutorial-4-static-and-dynamic-libraries-7b537656163e