I have a library that I originally developed for Linux. I am now in the process of porting it to Cygwin. I have noticed that every library on my Cygwin system is installed like this:
DLL (cygfoo.dll) installed to /usr/bin mode 755
Static archive (libfoo.a) installed to /usr/lib mode 644
Import library (libfoo.dll.a) installed to /usr/lib mode 755
The first two make perfect sense to me. DLLs are executables so they should be mode 755. Static archives are not executables, so they are mode 644. The third one, however, seems odd to me. Import libraries are in fact static archives, not executables (ar -t libfoo.dll.a lists the contents of the archive). Shouldn't they also be installed mode 644?
Why is it the convention on Cygwin to install import libraries with mode 755?
This goes back to 2000:
On NTFS partitions, NT/W2K require the execute permission for DLLs to
allow loading a DLL on process startup.
That's no problem unless a person using ntsec gets a tar archive
packed by a person not using ntsec or packing on a FAT partition.
Since Cygwin fakes the execute permission only for the suffixes "exe",
"bat", "com", DLLs are treated as non executable by the stat() call
when ntsec isn't set.
When a person using ntsec unpacks that tar archive, the start of an
application which requires one of the DLLs from the archive will fail
with the Windows message
"The application failed to initialize properly (0xc0000022)"
which isn't that meaningful for most of the users.
To solve that problem we would have to do a simple step. Fake execute
permissions for DLLs when ntsec isn't set or the file system doesn't
support ACLs (FAT/FAT32).
Here: http://cygwin.com/ml/cygwin-developers/2000-10/msg00044.html
The only explanation that occurs to me is that the installer looks for the ".dll" string in the filename to activate the executable attribute (x) of the copied files... it should have been looking for /.+\.dll$/ (.dll only at the end).
It is understandable that the impedance mismatch between OSes/filesystems forces the installer/copier to adopt a strategy for deciding attribute values during the install operation (it is easier than having to map an attribute list onto each copied file... and on Windows it only has to look for .exe, .com and .dll).
To confirm this, rename your "libfoo.dll.a" to "libfoo.dxx.a" and test it...
This is a Windows requirement: since the .dll file contains code that will be executed, it requires the executable bit to be set.
Remove permission to execute the file, and Windows won't let any of the code inside it be executed, even if the process doing the executing is separate.
Side note: it's a common misconception that there's no +x bit for Windows. That's technically true; Windows doesn't use POSIX rwx permissions, although Cygwin does try to provide an interface that resembles them. Windows does use ACLs (Access Control Lists) for permissions, however, and these do have a concept of "execute permissions", which is what Cygwin maps down to its +x bit.
There's a long discussion on the Cygwin mailing list about this, if you want sources/further reading.
It seems it's just a lazy hack that results in this behavior for filenames containing ".dll". See hasanyasin's answer for the reasoning behind the "fix".
Related
I am curious to find out who handles the paths in cygwin.
For instance if I do the following, it works:
cd C:\
However when I do:
$ pwd
/cygdrive/c
Who is responsible for the discrepancy here?
The reason I am curious is that "cd C:", amongst other tools, accepts Windows paths, but when it comes to displaying them they show something different.
If I include the Cygwin bin folder in my PATH (under regular cmd), then I know it all works as in cmd, so what is it that's causing this conversion? Is it bash/the shell?
I am curious to find out who handles the paths in cygwin.
Largely, a small bit of C++ code in winsup/cygwin/path.cc. A particularly important function is normalize_posix_path. This code ends up compiled in the DLL cygwin1.dll which is used by all Cygwin applications.
All paths in Cygwin are "POSIX" paths resolved by the Cygwin DLL itself. The normalize_posix_path function recognizes the presence of certain syntax in paths (drive letter names and backslashes), and arranges for them to be treated as "Win32 paths".
That is the reason why you can feed c:/Users/... to a Cygwin program, as an alternative to /cygdrive/c/Users/....
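Cygwin's actual detection lives in the C++ code of path.cc, but the heuristic can be sketched in a few lines of shell. This is only an illustration of the idea, not the real Cygwin code:

```shell
#!/bin/sh
# Toy illustration (not Cygwin's actual code): treat a path as "Win32-ish"
# if it starts with a drive letter and colon, or contains a backslash --
# roughly the syntax normalize_posix_path looks for.
looks_win32() {
    case "$1" in
        [A-Za-z]:*) return 0 ;;   # drive-letter prefix, e.g. c:/Users
        *\\*)       return 0 ;;   # contains a backslash
        *)          return 1 ;;
    esac
}

for p in 'c:/Users/me' 'C:\Temp' '/cygdrive/c/Users'; do
    if looks_win32 "$p"; then
        printf '%s -> treated as Win32\n' "$p"
    else
        printf '%s -> treated as POSIX\n' "$p"
    fi
done
```

The first two test paths get classified as Win32, the last as POSIX, which matches the behavior described above.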
$ pwd
/cygdrive/c
How this works is that Cygwin maintains a native Win32 current working directory, and so it knows perfectly well that this is C:\. However, this native information is mapped backwards to a POSIX path. Cygwin does this by scanning through its mount table, where it sees that C:\ is "mounted" as /cygdrive/c. (The pwd utility is just reporting what Cygwin's implementation of the POSIX getcwd function returns; the backwards mapping happens inside that function.)

Note that since Cygwin is operating in a virtual POSIX namespace created by its mount table, that space contains abstract places which have no Win32 native counterpart. For instance, you can cd to the /dev directory, where you have entries like tty. This has no native location, and so getcwd will just report the POSIX path.

Cygwin tries to keep the Cygwin-internal current working directory in sync with the Win32 one when there is a correspondence; it does so without using the SetCurrentDirectory Win32 function, and without maintaining the concept that Windows drives have individual current working directories.
If I include the Cygwin bin folder in my PATH (under regular cmd), then I know it all works as in cmd, so what is it that's causing this conversion? Is it bash/the shell?
Actually, it does not all work as in cmd! Though Cygwin programs understand "Win32-ish" paths, the support is not complete. You can pass a path like D:file.txt to a truly native Windows program. It resolves via the current directory associated with the D drive, which could be D:\bob\documents, in which case the path stands for D:\bob\documents\file.txt. If there is no such directory then it stands for D:\file.txt. A Cygwin program will not understand this drive-relative path. In fact, D:file.txt won't even be recognized as a drive-letter reference (as of Cygwin 2.5.2). This is because the colon is not followed by a directory separator character (backslash or slash).
This should be really simple, but I'm having trouble. I want to include some shared Qt libraries with my application in the installation folder so the user doesn't have to download Qt separately. On Windows this seemed to work fine, but Ubuntu complains about not being able to find the Qt libraries when they are in the same folder as the application.
How do I add the installation directory to shared library search path?
I was able to add the installation directory to the shared library search path by adding the following lines to the .pro file, which set the rpath of the binary to $ORIGIN (the install folder). I needed to add the location of the Qt libs on my current machine (/usr/lib/qt5.5 and /usr/lib/qt5.5/lib) so that the project would build in QtCreator.
unix:!macx {
# suppress the default RPATH if you wish
QMAKE_LFLAGS_RPATH=
# add your own with quoting gyrations to make sure $ORIGIN gets to the command line unexpanded
QMAKE_LFLAGS += "-Wl,-rpath,\'\$$ORIGIN\':/usr/lib/qt5.5:/usr/lib/qt5.5/lib"
}
(The unix:!macx line makes it apply to linux only)
Windows, Linux and OSX behave quite differently. Windows is easiest: dump all the DLLs in the application dir. OSX is next, and Linux is the most difficult.
Linux has certain search paths for locating shared objects. These search paths mainly cover system libraries and possibly some user libraries. Since you do not want to mess around with your users' system files, the preferred approach is to keep the shared objects in the application dir. This is possible, but you have to tell Linux to look in that directory. You can do this by setting the environment variable LD_LIBRARY_PATH, e.g. from a script. See my answer.
In short: This question is basically about telling Linux to load the development version of the .so file for executables in the dev directory and the installed .so file for others.
In long: Imagine a shared library, let's call it libasdf.so. And imagine the following directories:
/home/user/asdf/lib: libasdf.so
/home/user/asdf/test: ... perform_test
/opt/asdf/lib: libasdf.so
/home/user/jkl: ... use_asdf
In other words, you have a development directory for your library (/home/user/asdf) and you have an installed copy of its previous stable version (/opt/asdf) and some other programs using it (/home/user/jkl).
My question is, how can I tell Linux to load /home/user/asdf/lib/libasdf.so when executing /home/user/asdf/test/perform_test, and to load /opt/asdf/lib/libasdf.so when executing /home/user/jkl/use_asdf? Note that, even though I specify the directory with -L during linking, Linux uses other methods (for example /etc/ld.so.conf and $LD_LIBRARY_PATH) to find the .so file.
The reason I need such a thing is that, of course the executables in the development directory need to link with the latest version of the library, while the other programs, would want to use the stable version.
Putting ../lib in the library path doesn't seem like a secure idea, not to mention not completely correct since you can't run the test from a different directory.
One solution I thought about is to have perform_test link with libasdf-dev.so and upon install, copy libasdf-dev.so as libasdf.so and have others link with that. This solution has one problem though. Imagine the following additional directory:
/home/user/asdf/tool: ... use_asdf_too
Which gets installed to:
/opt/asdf/bin: use_asdf_too
In my solution, it is unknown what use_asdf_too should be linked against. If linked against libasdf.so, it wouldn't work properly if invoked from the dev directory and if linked against libasdf-dev.so, it wouldn't work properly if invoked from the installed location.
What can I do? How is this managed by other people?
Installed shared objects usually don't just end with ".so". Usually they also include their soname, such as libasdf.so.42.1. The .so file for development is typically a symlink to the fully-versioned filename. The linker will look for the .so file and resolve it to the full filename, and the loader will then load the fully-versioned library instead.
I use LD_LIBRARY_PATH to set the path of a certain user library for an application. But if I set capabilities on this application
sudo setcap CAP_NET_BIND_SERVICE=eip myapplication
then LD_LIBRARY_PATH seems to be ignored. When I launch the program, Linux complains that it cannot find a certain shared library.
I guess that there's some kind of protection kicking in, to prevent applications with extended rights from being hijacked. Is there a workaround?
As already stated in other answers, this behavior is intended. There is some kind of workaround if you can compile (or at least link) the application yourself. Then you can pass -Wl,-rpath <yourDynamicLibraryPath> to gcc or -rpath <yourDynamicLibraryPath> to ld and you won't have to specify LD_LIBRARY_PATH at all on execution.
The solution to this problem on linux is as follows:
go to directory
$cd /etc/ld.so.conf.d/
create a new file
$touch xyz.conf
open this file using any editor
$vi xyz.conf
Add your dynamic library paths to this file, one per line. For example, if your path is as follows:
/home/xyz/libs1:/home/xyz/libs2/:/home/xyz/libs3/
then there should be three entries in this file as follows:
/home/xyz/libs1/
/home/xyz/libs2/
/home/xyz/libs3/
Then save this file and execute the following command:
$ldconfig
All of the above operations need to be performed as root.
The man page for sudo explains:
Note that the dynamic linker on most operating systems will remove
variables that can control dynamic linking from the environment of
setuid executables, including sudo. Depending on the operating system
this may include _RLD*, DYLD_*, LD_*, LDR_*, LIBPATH, SHLIB_PATH, and
others. These type of variables are removed from the environment
before sudo even begins execution and, as such, it is not possible for
sudo to preserve them.
As this link explains, the actual mechanism for doing this is in glibc. If the UID does not match the EUID (which is the case for any setuid program, including sudo), then all "unsecure environment variables" are removed. Thus, a program with elevated privileges runs without alteration.
Yes, it's disabled for security reasons.
An alternative to consider is to "correct" a poorly compiled ELF shared library and/or executable using patchelf to set the rpath.
https://nixos.org/patchelf.html
ld.so.conf is not always a sure bet. It will work if whatever you are running was compiled properly. In my case, with a particular specially packaged vendor's Apache product, it was compiled so poorly that they did not even use unique .so filenames, so these conflicted with .so filenames from RPMs in the base RHEL repositories that provided some pretty critical, commonly used libraries. Patching the rpath was the only option to isolate how they were used. Pointing ld.so.conf at the shared objects in the vendor's lib path would have blown away a lot of stuff system-wide, including yum, along with glibc shared library failures.
I have an application that relies on Qt, GDCM, and VTK, with the main build environment being Qt. All of these libraries are cross-platform and compile on Windows, Mac, and Linux. I need to deploy the application to Linux after deploying on Windows. The versions of vtk and gdcm I'm using are trunk versions from git (about a month old), more recent than what I can get apt-get on Ubuntu 11.04, which is my current (and only) Linux deployment target.
What is the accepted method for deploying an application that relies on these kinds of libraries?
Should I be statically linking here, to avoid LD_LIBRARY_PATH? I see conflicting reports on LD_LIBRARY_PATH; tutorials like this one suggest that it's the 'right way' to modify the library path to use shared libraries through system reboots. Others suggest that I should never set LD_LIBRARY_PATH. In the default version of GDCM, the installation already puts libraries into the /usr/local/lib directory, so those libraries get seen when I run ldd <my program>. VTK, on the other hand, puts its libraries into /usr/local/lib/vtk-5.9, which is not part of the LD_LIBRARY_PATH on most users' machines, and so is not found unless some change is made to the system. Copying the VTK files into '/usr/local/lib' does not allow 'ldd' to see the files.
So, how can I make my application see VTK to use the libraries?
On windows, deploying the dlls is very straightforward, because I can just include them in the installer, and the application finds them because they are in the local directory. That approach does not work in Linux, so I was going to have the users install Qt, GDCM, and VTK from whatever appropriate source and use the default locations, and then have the application point to those default locations. However, since VTK is putting things into a non-standard location, should I also expect users to modify LD_LIBRARY_PATH? Should I include the specific versions of the libraries that I want and then figure out how to make the executable look in the local directory for those libraries and ignore the ones it finds in the library path?
Every "serious" commercial application I have ever seen uses LD_LIBRARY_PATH. They invariably include a shell script that looks something like this:
#!/bin/sh
here="${0%/*}" # or you can use `dirname "$0"`
LD_LIBRARY_PATH="$here"/lib:"$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH
exec "$0".bin "$@"
They name this script something like .wrapper and create a directory tree that looks like this:
.wrapper
lib/ (directory full of .so files)
app1 -> .wrapper (symlink)
app1.bin (executable)
app2 -> .wrapper (symlink)
app2.bin (executable)
Now you can copy this whole tree to wherever you want, and you can run "/path/to/tree/app1" or "/path/to/tree/app2 --with --some --arguments" and it will work. So will putting /path/to/tree in your PATH.
Incidentally, this is also how Firefox and Chrome do it, more or less.
Whoever told you not to use LD_LIBRARY_PATH is full of it, IMHO.
Which system libraries you want to put in lib depends on which Linux versions you want to officially support.
Do not even think about static linking. The glibc developers do not like it, they do not care about supporting it, and they somehow manage to break it a little harder with every release.
Good luck.
In general, you're best off depending on the 'normal' versions of the libraries for whatever distribution you're targetting (and saying you don't support dists that don't support recent enough versions of the lib), but if you REALLY need to depend on a bleeding edge version of some shared lib, you can link your app with -Wl,-rpath,'$ORIGIN' and then install a copy of the exact version you want in the same directory as your executable.
Note that if you use make, you'll need $$ in the makefile to get a single $ into the argument that is actually sent to the linker. The single quotes are needed so the shell doesn't munge things...
Well, there are two options for deploying Linux application.
The correct way:
make a package for your app and for the libraries, if they are so special, that they can't be installed from standard repositories
There are two major package formats. RPM and DEB.
The easy way:
make a self-extracting file that installs the "Windows way" into /opt.
You can have libraries in the same directory as the executable, it's just not the preferred way.
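The "easy way" can be sketched with a plain shell stub plus an appended tarball. All names and paths here are made up; real installers usually use a tool like makeself instead of rolling this by hand:

```shell
# Toy self-extracting installer: a shell stub with a tarball appended.
cd /tmp && rm -rf selfx && mkdir -p selfx && cd selfx
mkdir -p payload/myapp && echo 'hello' > payload/myapp/README
tar czf payload.tgz -C payload myapp

cat > stub.sh <<'EOF'
#!/bin/sh
# Unpack everything after the __PAYLOAD__ marker into the target dir.
dest="${1:-/opt}"
mkdir -p "$dest"
marker=$(awk '/^__PAYLOAD__$/ { print NR + 1; exit }' "$0")
tail -n +"$marker" "$0" | tar xzf - -C "$dest"
exit 0
__PAYLOAD__
EOF

cat stub.sh payload.tgz > install.sh && chmod +x install.sh
./install.sh /tmp/selfx/dest     # install into a scratch dir, not /opt
cat /tmp/selfx/dest/myapp/README # prints "hello"
```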