How to build ncurses statically - ncurses

Since Arch Linux does not provide any static libraries for ncurses, I need to make it myself. However, I don't see any option in the configure script that says anything about static linking. How do I make a static ncurses library?

It's in configure but it's well-hidden:
Options to Specify the Libraries Built/Used:
--disable-libtool-version enable to use libtool's incompatible naming scheme
--with-libtool generate libraries with libtool
--with-shared generate shared-libraries
--with-normal generate normal-libraries (default)
--with-debug generate debug-libraries (default)
I suppose ncurses is old enough that static libraries are "normal" and shared libraries are the new hotness! Anyhow, if you build with the defaults, or explicitly specify --with-normal, you should get static libraries (libncurses.a, libpanel.a, etc.).
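For example, a minimal sketch of such a build (the prefix is illustrative, and a wide-character build produces libncursesw.a and friends instead):
$ ./configure --prefix=/usr/local --with-normal
$ make
$ ls lib/*.a
lib/libform.a  lib/libmenu.a  lib/libncurses.a  lib/libpanel.a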

Related

Conan library with both static and dynamic libraries

I have a project which uses libusb as a conan dependency. For most compilations (Windows and Linux), only using the static library was enough, but for cross-compiling this project from Linux to OSX it requires both the .dylib and .a files. When I run conan install with the dependencies, if I set the shared attribute to true, it attaches --enable-shared --disable-static to the configure process and if I set it to false, it sets --disable-shared --enable-static.
Is there any way in Conan to directly influence the configure command? I have already tried doing so manually, and that ensures that both files are created during the compilation of the library.
I originally developed that Conan package, and since then the community improved it a lot.
The short answer is no. Why? Conan packages were designed to separate shared libraries from static libraries for all projects. Each package has a specific package ID, and the two kinds of libraries are not mixed. This is a matter of package design, not a rule enforced by Conan itself.
If your project uses both kinds of libraries, I would say something is really wrong and should be fixed, instead of looking for a package workaround that could cost more time than fixing your real problem.
But if you don't find a solution, there is a hack: you can use the deploy generator to download both libraries to a folder and configure your project to consume them from that folder.
Influencing Conan to use the same package reference with different options is not allowed by its design. Another option would be forking the original project, removing those options, and adding a new option "both", where both are present. Keep in mind that "both" won't be accepted as an official option.
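As a rough illustration of the deploy-generator hack (the reference, version, and folder names here are hypothetical; adjust them to your recipe):
$ conan install libusb/1.0.23@ -o libusb:shared=True -g deploy --install-folder deploy_shared
$ conan install libusb/1.0.23@ -o libusb:shared=False -g deploy --install-folder deploy_static
Each install drops the prebuilt package contents into the given folder, so the cross-compile can pick up the .dylib files from deploy_shared and the .a files from deploy_static.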

Convert Static Lib to Dynamic

I have a libsomething.a file which is a static library with all dependencies included.
I need to be able to import this in Python as it is a Python C library. According to this, it is not possible to use a static library as a CPython library.
How can I take my .a file and make it a .so, keeping all static dependencies baked in?
Background: I am using Crowbar to build a CPython shared library which can be called from Python in AWS Lambda. Until now, it has worked flawlessly, but as soon as I added in dependencies which require OpenSSL, I get linker problems when running the code in Lambda.
The issue here is that the Amazon Linux image that is used to execute code has an ancient OpenSSL version. I have recreated the runtime environment, but the issue is that the old version of OpenSSL no longer exists in Amazon's yum repository. This means that installing openssl-devel pulls down OpenSSL 1.0.2k, where in the runtime the version of OpenSSL provided is 1.0.1.
This results in linking failing at runtime in Lambda. Therefore, I need a way to build a (mostly) statically linked shared library. The only shared libraries I want my SO to link from are libc and the kernel, with everything else statically compiled in.
In the Lambda execution environment, LD_LIBRARY_PATH is set to /usr/lib64:/lib64:./lib, so anything in the lib folder will be loaded, but only as a last resort, and if I link against OpenSSL, I get the wrong version every time.
In Rust, I have the option of producing liblambda.a or liblambda.so, a static or a shared library. I'm assuming the way forward is to produce a *.a and then convert it into a shared library that links only against glibc and kernel dependencies.
No, you cannot do that conversion from static library to shared one (at least not in practice). Read How To Write Shared Libraries by Drepper.
One of the main reasons is that shared libraries want (in fact, nearly need) position-independent code, which static libraries usually don't have.
However, on Linux most libraries are free software. So why don't you recompile your library from source code into a shared library?
(you might perhaps recompile that specific version of OpenSSL from source)
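A rough sketch of that route (version, flags, and prefix are illustrative): build OpenSSL as position-independent static archives, then let the wrapper's link step absorb them into the shared library:
$ ./config no-shared -fPIC --prefix=$PWD/openssl-static
$ make && make install
Because libcrypto.a and libssl.a were compiled as PIC, they can be folded into liblambda.so, which then depends only on glibc at run time.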

ICU files needed during run time

In order to understand ICU and its APIs, I wrote a sample program and the libraries this code would link against are -licuuc and -licui18n. The libraries were available because the libicu-devel.x86_64 package was installed on the test system.
In my quest to understand how to integrate the ICU library with my application, which is targeted at a CentOS platform, I stumbled across this page, which says:
For simple use of ICU's predefined data, this section on data management can safely be skipped. The data is built into a library that is loaded along with the rest of ICU. No specific action or setup is required of either the application program or the execution environment.
This indicates that if the application has no intention of adding its own data, the data available in the libraries can be used. On my test system where ICU is installed, these are the files:
$ sudo find . -name "*icu*"
./opt/rbt_boost/include/boost/regex/icu.hpp
./lib64/libicui18n.so.42
./lib64/libicui18n.so.42.1
./lib64/libicuuc.so.42.1
./lib64/libicuuc.so.42
./usr/lib64/libicui18n.so.42
./usr/lib64/libicule.so
./usr/lib64/libicuio.so.42
./usr/lib64/libicutu.so
./usr/lib64/libiculx.so.42.1
./usr/lib64/pkgconfig/icu.pc
./usr/lib64/libicui18n.so
./usr/lib64/libicui18n.so.42.1
./usr/lib64/libicule.so.42.1
./usr/lib64/libicuuc.so.42.1
./usr/lib64/libiculx.so
./usr/lib64/libicuuc.so.42
./usr/lib64/libicuio.so.42.1
./usr/lib64/icu
./usr/lib64/libicudata.so.42
./usr/lib64/libicule.so.42
./usr/lib64/libicutu.so.42.1
./usr/lib64/libicuio.so
./usr/lib64/libicudata.so
./usr/lib64/libicudata.so.42.1
./usr/lib64/libiculx.so.42
./usr/lib64/libicutu.so.42
./usr/lib64/libicuuc.so
./usr/bin/icu-config
./usr/share/icu
./usr/share/man/man1/icu-config.1.gz
./var/lib/yum/yumdb/l/e59bf24facac0acba1622a5180d0e2a22dda69c8-libicu-devel-4.2.1-9.1.el6_2-x86_64
./var/lib/yum/yumdb/l/7062f72703a5afbf894d617b94db3d4769fe643d-libicu-4.2.1-9.1.el6_2-x86_64
Questions:
Which of these ICU libraries (and files) should be packaged with the application for ICU data to be available at run time? As mentioned earlier, I used libicui18n and libicuuc libraries for linking, so these need to be present.
Aside from the above two libraries, libicudata, going by the name, seems to be the obvious candidate. Correct?
Is there a static version of libicui18n and libicuuc libraries available for use or does one have to build it?
In general, what is the process followed for integrating ICU with a product?
Thanks!
ICU always needs to link against its data library.
Here's a very general discussion about which libraries you need.
ICU has to be built with the --enable-static option to allow static linkage.
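A hedged sketch of such a build from the ICU source tree (platform name, prefix, and extra flags are illustrative):
$ cd icu/source
$ ./runConfigureICU Linux --enable-static --disable-shared --prefix=$HOME/icu-static
$ make && make install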
Ideally you will want to use pkg-config to manage your linkage against ICU.
If you are on CentOS, rather than linking statically (with its headaches), you might consider just compiling against the libicu-devel package (using pkg-config as mentioned above); at run time your users can then simply install the appropriate libicu package.
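For instance, using the icu.pc seen in the listing above (newer ICU releases split this into icu-uc and icu-i18n, so the package name may differ on your system):
$ pkg-config --libs icu
$ g++ myapp.cpp $(pkg-config --cflags --libs icu) -o myapp
The first command just shows which -l flags the installed ICU expects; the second compiles a hypothetical myapp.cpp against it.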

Is it possible to compile a portable executable on Linux based on yum or rpm?

Usually one rpm depends on many other packages or libs. This is not easy for massive deployment without internet access.
Since yum can automatically resolve dependencies, is it possible to build a portable executable, so that we can copy it to other machines with the same OS?
If you want a known collection of RPMs to install, yum offers a downloadonly plugin. With that, you should be able to collect all the associated RPMs in one shot to install what you wanted on a disconnected machine.
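A hedged example of that approach (the package name is illustrative; on older systems the flag comes from the yum-plugin-downloadonly package):
$ yum install --downloadonly --downloaddir=/tmp/rpms somepackage
# copy /tmp/rpms to the offline machine, then install without a network:
$ rpm -ivh /tmp/rpms/*.rpm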
The general way to build a binary without runtime library dependencies is to build it statically, i.e. using the -static argument to gcc, which links in static versions of the required libraries so that they're included in the resulting executable. This doesn't bundle in any data-file dependencies or external executables (i.e. libexec-style helpers), but simpler applications often don't need them.
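For example, assuming the static (.a) versions of every required library are installed:
$ gcc -static -o myprog myprog.c
$ ldd myprog
	not a dynamic executable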
For more complex needs (where data files are involved, or elements of the dependency chain can't be linked in for one reason or another), consider using AppImageKit -- which bundles an application and its dependency chain into a runnable ISO. See docs/links at PortableLinuxApps.org.
In neither of these cases does rpm or yum have anything to do with it. It's certainly possible to build an RPM that packages static executables, but that's a matter of changing the %build section of the spec file such that it passes -static to gcc, not of doing anything RPM-specific.
To be clear, by the way -- there are compelling reasons why we don't use static libraries all the time!
Using shared libraries means that applying a security update to a library only means replacing the library itself, not recompiling all applications using it.
Using shared libraries is more memory-efficient, since the single shared copy of the library in memory can be used by multiple applications.
Using shared libraries means your executables don't need to include full copies of all the libraries they use, making them much smaller.

Proper way to make and use Rust shared libraries?

I am working on bindings for a cpp library.
To do this I wrote a C API wrapper for the library and compiled it to a shared lib (.so file).
My question is, how do I then use and integrate this file into cargo without forcing the user to install it? Currently I build the cpp via a Makefile called from the build variable in Cargo.toml, but I am unsure what to do with the compiled lib.
For testing, I can either use rpath or LD_LIBRARY_PATH to point the executable to the right location, but this will not work when distributing a library.
How are people managing this?
First of all, determine whether you really need a shared library. It's not clear from your question, but if you compiled your own wrapper into a shared library, that's probably unnecessary - you can compile your code into a static library and link it directly into your executable.
Moreover, you can try to link that third-party library statically too. I don't think this should be hard. And yes, you need to use the build command in the manifest to do all of this now.
However, if you still need to use a shared library and you don't want the end user to install it herself (which is strange, because that's the point of shared libraries), you have to distribute it manually. For example, you can write a makefile which assembles an archive that your users can extract and use. For your program to find the library correctly, you will either have to ask the user to install this archive into the system root directory (e.g. /usr on Linux; then the shared library will be located automatically), or you will have to write a small shell script wrapper around your executable which locates the shared library and sets the appropriate LD_LIBRARY_PATH.
I'd go for the first path. Usually all major platforms provide means to create installation packages (deb/rpm/pkg.tar.xz/whatever on Linux, brew on Mac, Windows installers on Windows; on Windows you can also just put your shared library in the same directory as the executable and it will work). You just have to create packages for the platforms your users work on; then your program will be installed in the correct directories and your shared library will be resolved automatically.
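A minimal sketch of the shell-script wrapper mentioned above, assuming a layout where the real binary lives in bin/ and the bundled .so in lib/ next to it (all names are hypothetical):
#!/bin/sh
# myapp: locate the bundled shared library, then run the real executable
HERE="$(dirname "$(readlink -f "$0")")"
export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$HERE/bin/myapp-real" "$@"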
