Bundling an SCons-based source - scons

We are using SCons for all our build needs, and we would like to distribute a library as open source.
Since most software uses ./configure, make, and make install as its build mechanism, we were wondering how we should bundle our library.
We have the following solutions:
Just bundle it the way it is, requiring SCons to build.
Add a dummy configure script and Makefile that just call scons (see the sketch below).
Add autoconf and a Makefile.
How is it perceived for users to get software that requires Python and SCons to build?
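For option 2, the wrapper could be as small as the following sketch of a Makefile that just forwards the usual targets to scons (this assumes scons is on the user's PATH, and recipe lines must be indented with tabs):

all:
	scons

install:
	scons install

clean:
	scons -c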

I think it depends largely on your target audience (i.e., users who can easily install scons if they don't have it, or ones who can't), but if you are distributing source at all, then presumably your users are happy compiling things, and they can install scons too (and Python, if for some obscure reason they don't have it already).
Also, if you are worried about people not being able to build it, you should probably be distributing a binary package anyway.

If your library is cross-platform and can be compiled on Windows too, then using scons is the right choice.

Another option would be to include the scons-local version in the package. This reduces the dependencies to just Python.
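For example (a sketch; the version number is illustrative), the scons-local tarball unpacks into the top of the source tree and ships a scons.py entry script, so a user needs nothing beyond Python:

tar xzf scons-local-4.4.0.tar.gz
python scons.py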

Related

Conan library with both static and dynamic libraries

I have a project which uses libusb as a Conan dependency. For most builds (Windows and Linux), using only the static library was enough, but cross-compiling this project from Linux to OSX requires both the .dylib and .a files. When I run conan install with the dependencies, setting the shared attribute to true attaches --enable-shared --disable-static to the configure process, and setting it to false attaches --disable-shared --enable-static.
Is there any way in Conan to directly influence the configure command? (I already tried it out manually, and that ensures both files are created during the compilation of the library.)
I originally developed that Conan package, and since then the community has improved it a lot.
The short answer is no. Why? Conan packages were designed to separate shared libraries from static libraries for all projects. Each package has a specific package ID, and the two kinds of libraries are not mixed. This is a design decision of the package, not a rule enforced by Conan.
If your project uses both libraries, I would say something is really wrong and should be fixed, instead of looking for a package workaround which could cost more time than fixing the real problem.
But if you don't find a solution, there is a hack: you can use the deploy generator to download both libraries to a folder and configure your project to consume them from that folder.
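A sketch of that hack with Conan 1.x syntax (the libusb version and the folder names are illustrative):

conan install libusb/1.0.23@ -o libusb:shared=True -g deploy -if deps/shared
conan install libusb/1.0.23@ -o libusb:shared=False -g deploy -if deps/static

Each run copies the resolved package's files into the given install folder, so afterwards the .dylib files sit under deps/shared and the .a files under deps/static, and the project's build can be pointed at both folders.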
Influencing Conan to use the same package reference but different options is not allowed by its design. Another option would be forking the original project, removing those options, and adding a new option "both", where both libraries are present. Keep in mind "both" won't be accepted as an official option.

How do I build Nim library packages?

I've created a nimble library package as per the documentation. When I try to build it using nimble build I get the following error.
Error: Nothing to build. Did you specify a module to build using the bin key in your .nimble file?
I can do this, and it does fix the error, but according to the documentation, adding the bin key to the .nimble file turns my package into a binary package.
Other things I have tried:
Use nimble install: this does not appear to verify that my code will actually compile, and it will happily install anything to the local package directory (I added a C# class to my .nim file, for example, and it was still installed successfully).
Use nimble c: this works, but I have to pass in the path to the nim file I want to compile, and the binDir entry in the .nimble file is ignored, so the output is placed in the same directory as the file being built. This complicates the development cycle because I have to manually clean up after the compiler.
Use the compiler directly: this is pretty much the same as the previous option, with the same flaws.
I guess I could also create a separate .nim file and import my library after it is installed, but this is a big overhead for just wanting to verify that a package in the early stages of development will actually compile.
I just want to be able to verify that the source code in my library package is syntactically correct and will compile. How is this meant to be done for library packages?
From your provided link to the nimble package manager documentation, I have the feeling that
https://github.com/nim-lang/nimble#tests
is what you are looking for. But I have never used the test command, so I am not sure; I still do my tests manually, and I read the nimble docs maybe four years ago and cannot really remember them. There is currently a lot of package-manager-related work going on: there is a new, alternative package manager called nimph, and from a forum thread I think I read that nimble is going to change and improve as well. Maybe you should consider subscribing to the Nim forum; that is the place where the bright Nim devs are. Well, at least a few of them.
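If it helps, the setup described there is small (a sketch; mylib stands for your library's module name): nimble test compiles and runs every tests/t*.nim file, and compiling such a file also proves the library itself compiles:

mkdir tests
echo "import mylib" > tests/tcompile.nim
nimble test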

Is there a way to install a Haskell executable as a dependency?

I found myself writing Haskell commands based upon other commands provided by other Haskell packages, but I could not find a way to install an executable as a dependency.
As far as I could see, Cabal and Stack provide ways for a package to depend on a library, but not on an executable.
If I want to build upon the functionality already provided by another executable, the only way I know is to ask the users to install that other package as well. That also means that I cannot assume the executable is there or that its version is the right one.
So is there a way for a Haskell package to depend on an executable provided by another package?

Is it possible to compile a portable executable on Linux based on yum or rpm?

Usually one rpm depends on many other packages or libraries. This is not easy for mass deployment without internet access.
Since yum can automatically resolve dependencies, is it possible to build a portable executable, so that we can copy it to other machines with the same OS?
If you want a known collection of RPMs to install, yum offers a downloadonly plugin. With that, you should be able to collect all the associated RPMs in one shot to install what you wanted on a disconnected machine.
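For example (the package name is illustrative, and the plugin must be installed):

yum install --downloadonly --downloaddir=/tmp/rpms mypackage

That resolves the full dependency chain and drops all the needed .rpm files into /tmp/rpms; on the disconnected machine you can then run rpm -ivh /tmp/rpms/*.rpm.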
The general way to build a binary without runtime library dependencies is to build it statically, i.e., using the -static argument to gcc, which links static versions of the required libraries into the resulting executable. This doesn't bundle in any data-file dependencies or external executables (i.e., libexec-style helpers), but simpler applications often don't need them.
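A minimal sketch:

gcc -static -o myapp main.c
ldd myapp    # reports "not a dynamic executable"

The resulting binary carries its own copies of the libraries it needs, so it runs on another machine of the same architecture even if those libraries aren't installed there.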
For more complex needs (where data files are involved, or elements of the dependency chain can't be linked in for one reason or another), consider using AppImageKit, which bundles an application and its dependency chain into a runnable ISO. See docs/links at PortableLinuxApps.org.
In neither of these cases does rpm or yum have anything to do with it. It's certainly possible to build an RPM that packages static executables, but that's a matter of changing the %build section of the spec file such that it passes -static to gcc, not of doing anything RPM-specific.
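A sketch of that change (myapp and its build line are illustrative; the rest of the spec file stays ordinary rpm packaging):

%build
gcc -static -o myapp myapp.c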
To be clear, by the way: there are compelling reasons why we don't use static libraries all the time!
Using shared libraries means that applying a security update to a library only means replacing the library itself, not recompiling all applications using it.
Using shared libraries is more memory-efficient, since the single shared copy of the library in memory can be used by multiple applications.
Using shared libraries means your executables don't need to include full copies of all the libraries they use, making them much smaller.

Best practice: deploying dependencies on Linux

What is the best practice for deploying dependencies on Linux when shipping your own application?
Some SO posts recommend including all dependencies in the package (utilizing LD_LIBRARY_PATH), while other posts recommend shipping only the binary and using the "dependency" feature of the DEB/RPM packages instead. I tried the second approach, but immediately ran into the problem that one dependency (libicu52) doesn't seem to be available in certain Linux distributions yet. For example, in my OpenSuse test installation, only libicu51 is available in the package manager.
I initially thought that the whole idea of the packaging system is to avoid duplicate .so files in the system. But does it really work (see above), or should I rather ship all dependencies with my app to make sure that it runs on all distributions?
For a custom application which "does not care" about distribution-specific packaging, versioning, upgrades, etc., I would recommend redistributing the dependencies manually.
You can use the RPATH linker option: by setting its value to $ORIGIN, you tell the linker to search for libraries in a directory relative to the binary itself, without the need to pre-set LD_LIBRARY_PATH before execution:
gcc -Wl,-rpath,'$ORIGIN/../lib'
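A slightly fuller sketch of what that gives you (myapp and libfoo are placeholder names):

myapp/bin/myapp        the executable
myapp/lib/libfoo.so    a bundled dependency

gcc main.c -Llib -lfoo -Wl,-rpath,'$ORIGIN/../lib' -o bin/myapp
readelf -d bin/myapp | grep -iE 'rpath|runpath'    # verify the embedded search path

The single quotes keep the shell from expanding $ORIGIN; the dynamic loader substitutes it at run time with the directory containing the binary, so the whole myapp directory can be copied anywhere and still run.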
