I built ZeroMQ and Sodium from source and have them installed properly on my development machine, which is just a Pi 2. I have one other machine that I want to install them to properly as well. Is there a proper way to do this other than just copying .a and .so files around?
So, there are different ways of handling this particular issue.
If you're installing all your built-from-source packages into a dedicated tree (maybe /usr/local, or /opt/mypackages) then simply copying files around is a fine solution, using something like rsync. Particularly since you only have two machines, anything more complicated may not be worth the effort.
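For the copy-files-around route, a one-liner like this is usually all it takes (the hostname and paths are just placeholders for your setup, and the remote user needs write access to the destination):

rsync -avz /usr/local/ otherpi:/usr/local/    # mirror the whole install tree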
If you're trying to install ZeroMQ and Sodium alongside system-managed files (in, e.g., /usr/lib and /usr/bin)...don't do that. That is, don't try to mix "things installed by packages" with "things installed from source", because that way lies sadness and doom.
That said, a more manageable way of distributing these files would be to build custom packages and then set up a local apt repository, so that you can just apt install the packages on your systems. There are various guides out there for doing this if you want to go down this route. It's a good skill to have in general, especially if you ever want to share your tools with someone else (because it makes it easy for them to install any necessary dependencies).
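As a rough sketch of what that looks like, assuming dpkg-dev is installed on the build host and you have already built .deb files (the package names below are placeholders):

mkdir -p /srv/repo && cp *.deb /srv/repo/ && cd /srv/repo
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
# each client adds a line like "deb [trusted=yes] http://buildhost/repo ./"
# to its sources.list, and then:
sudo apt-get update && sudo apt-get install libzmq libsodium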
I am looking for a good way to install an application I developed, with all its dependencies, in a fancy way. Currently I have a big makefile that downloads, unpacks, compiles and installs all dependencies. This, however, is a little tedious: there are quite a few dependencies, and the makefile is getting larger and larger, which will eventually make it hard to maintain. Therefore I am looking for a packaging tool with the following features:
It should be a lightweight package manager which is very easy to install (or even installs itself and afterwards all my dependencies)
The destination of the installed binaries, libraries etc. should be customizable
The installation process of each dependency should be easily configurable
It should be possible to include self-written scripts that get executed at a specific point during the installation process (in order to manipulate makefiles, flags, etc.)
No admin rights should be necessary since all clients that install my application will not have admin rights and are not able to use an already installed package manager
I do not know if this kind of software exists. I myself don't have much experience with packaging tools.
Thanks in advance for any links, hints, or suggestions!
opkg is something that's based on ipkg (now defunct) and originally dpkg. It's used in embedded systems. Lightweight for sure.
ports from CRUX Linux (www.crux.nu)?
A quick search returns InstallJammer. I would propose making debs and rpms and tarballs and sticking with the standard installation process (root privileges and such), but if you can't do that, then, well, you can't.
I'm sure you know how suspicious it would look to the user.
I'm developing an application using Python 3. What is the best practice to use third party libraries for development process and end-user distribution? Note that I'm working within these constraints:
Developers in the team should have the exact same version of the libraries.
An ideal solution would work on both Windows and Linux.
I would like to avoid making the user install software before using our own; that is, they shouldn't have to install product A and product B before using ours.
You could use setuptools to create egg files for your libraries, assuming they aren't available in egg form already. You could then bundle the eggs alongside your software, which would need to either install them, or ensure that they were on the import path.
This has some complexities, e.g. if your libraries have C extensions, then your eggs become platform-specific, but in my experience this is the most widely-accepted means of 'bundling' stuff in Python.
I have to say that this remains one of Python's weaknesses, though; the third-party ecosystem is certainly aimed at developers rather than end-users.
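A minimal sketch of the egg-bundling workflow described above, assuming a standard setup.py layout (the library and version names are illustrative); note that a pure-Python .egg is a zip file that Python can import from directly:

python setup.py bdist_egg                        # produces dist/yourlib-1.0-py3.9.egg
PYTHONPATH=dist/yourlib-1.0-py3.9.egg python yourapp.py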
There are no best practices, but there are a few different tracks people follow. With regard to commercial product distribution there are the following:
Manage Your Own Package Server
With regard to your development process, it is typical to have your dev boxes update from a local package server. That allows you to "freeze" the dependency list (i.e. just stop getting upstream updates) so that everyone is on the same version. You can update at particular times and have the developers update as well, keeping everyone in lockstep.
For customer installs you usually write an install script. You can collect all the packages and install your libraries along with everything else at the same time. There can be issues with trying to install a new Python, or even any standard library, because the customer may already depend on a different version. Usually you can install in a sandbox to separate your packages from the system's packages. This is more of a problem on Linux than Windows.
Toolchain
The other option is to create a toolchain for each supported OS. A toolchain is all the dependencies (up to, but not including base OS libs like glibc). This toolchain gets packaged up and distributed for both developers AND customers. Best practice for a toolchain is:
rename the executable to prevent confusion (e.g. python -> pkg_python).
don't install in .../bin directories, to prevent accidental usage (e.g. on Linux you can install under .../libexec; /opt is also used, although personally I detest it).
install your libs in the correct location under lib/python/site-packages so you don't have to use PYTHONPATH.
Distribute the source .py files for the executables so the install script can relocate them appropriately.
The package format should be an OS native package (RedHat -> RPM, Debian -> DEB, Win -> MSI)
For developers, use pip with a requirements file (see the sketch below).
For end users, specify requirements in setup.py.
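A minimal sketch of the developer side (the package names and versions are placeholders; the point is that every pin is exact, so the whole team resolves to identical versions):

cat > requirements.txt <<'EOF'
requests==2.31.0
lxml==4.9.3
EOF
pip install -r requirements.txt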
I just finished porting an application from Windows into Linux.
I have to create an installer of the application.
The application is not open source => I should distribute the application's binaries (executable file, couple .so files, help files and images).
I found several methods to do it:
- RPM and DEB packages;
- installer in .sh files;
- Autopackage.
I don't like the first method (RPM and DEB packages) because I don't want to maintain different packages for different Linux distros.
What is the best way to distribute a binary application for Linux?
Having been through this a couple of times with commercial products, I think the very best answer is to use the native installer for each supported platform. Anything else produces an unpleasant experience for the end-user, and in practice you have to test on every platform you want to support anyway, so it's not really a significant burden to maintain packages for each. The idea that you can create a binary that can "just work" on every platform out there, including some you've never even heard of, just really doesn't work all that well.
My recommendation is that you pick a platform or two to support initially (Red Hat and Ubuntu would be my suggestions) and then let user demand drive the creation of additional installation packages. Perhaps make it known that you're willing to support additional platforms, for a modest fee that covers your time and effort in packaging and testing on that platform. If a platform proves to be very different, you may need to charge more for ongoing support.
Oh, and I cannot overemphasize the value of virtual machines for scenarios like this. You need to build VMs for each platform you support, and perhaps multiple VMs per platform to make it easy to test different configurations.
There were a lot of good answers (mine included :)) here, although those are more about binary compatibility (which you do need to worry about).
For the installer I would recommend Autopackage (we successfully released several versions of our software with it); they did the "installer.sh" part already, and more (desktop integration, for example).
You have to be careful and test your upgrade scenarios and such, depending on how complex your package structure is, but it is pretty neat overall. I fixed a few bugs with dependency handling in 1.2.6, so it should be fine.
UPDATE: The original question was deleted, so reposting full answer here, ignore all references to autopackage, that was merged into Listaller, not sure if relevant parts survived.
For standard libraries (like crypto++, pthreads, etc) that are likely to be available in a distribution -- link dynamically and tell users to get them from their distro repository. Or link statically if it is feasible.
For weird libraries that you must control version of (if you want to deploy Qt4 app on territory of enemy gnomes for example), compile them yourself and install into a private spot only your app knows about.
Never install private libs into standard places unless you can be sure to not interfere with package systems of all distros you support. (and that they can't interfere with you either).
Use rpath instead of LD_LIBRARY_PATH, and set it properly for all your binaries and all the shared libraries that reference each other. You can set the rpath on your binary to "$ORIGIN:$ORIGIN/../lib:/opt/my/private/libs" and have the linker search those places before any standard paths. (You may have to set a linker flag for $ORIGIN to work, I think.) Make sure to set the rpath on your libs too: for example, QtGui needs QtCore, and if the user happens to have a standard package with a different version installed, you absolutely don't want it picked up (exe -> ../lib/QtGui.so (4.4.3) -> /usr/local/lib/QtCore.so (4.4.2) -- a sure way to die early).
If you compile with any rpath, you can change it later with chrpath, making it possible to tweak the install location as part of post-processing or the install script.
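A sketch of what that looks like in practice (the file and library names are illustrative):

gcc -o myapp main.o -Llib -lfoo -Wl,-rpath,'$ORIGIN/../lib'   # bake a relative rpath at link time
chrpath -l myapp                                              # inspect the current rpath
chrpath -r '$ORIGIN/../lib:/opt/my/private/libs' myapp        # rewrite it during install post-processing
# caveat: chrpath can only write an rpath no longer than the one already in
# the binary, so pad the rpath at link time if you plan to rewrite it later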
Maintain binary compatibility. glibc is pretty much static for your users, so you should link against some sufficiently old version; 2.3 is a safe bet. You can use APBuild -- a gcc wrapper that enforces a glibc version and does a few other binary-compatibility tricks, so you don't have to compile all your apps on a really old distro.
If you link to anything statically, it generally will have to be rebuilt with APBuild too, otherwise it is bound to drag in newer glibc symbols. All the .so's you install privately will naturally have to be built with it as well. Sometimes you have to patch third-party libs to use older symbols. (I had to patch Ruby to return real permissions instead of effective ones, since there is no such function in older glibc. Still not sure if I broke anything :)).
For integration with desktop environments (file associations, MIME types, icons, start menu entries, etc.) use xdg-utils. Beware though: like everything on Linux, they don't really like spaces in filenames :). Make sure to test those things on each target distro -- xdg implementations are riddled with bugs and quirks.
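The calls typically look something like this in an install script (the file names are illustrative):

xdg-desktop-menu install --novendor myapp.desktop                  # start menu entry
xdg-mime install --novendor myapp-mime.xml                         # file associations / MIME types
xdg-icon-resource install --novendor --size 48 myapp.png myapp     # application icon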
For the actual install you can either provide a variety of native packages (rpm, deb and a few more), or roll your own installer, or find an installer that works on all distros, bypassing native package managers. We successfully used Autopackage (same people who made APBuild) for that.
It's possible to install an RPM on Debian and a DEB on RHEL (e.g. by converting with alien).
If you are going to statically link this program, or dynamically link only with libraries that you will be distributing in the package, then it doesn't much matter how you distribute it. The simplest way is tar.gz and that would work.
OTOH if it is dynamically linked with system libraries, and particularly if it has dependencies on dynamic libraries that will be shared with the client's other applications, then you kind of need to do either RPM, DEB, or both.
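If you do go the tar.gz route, it really is a single command (the archive name is illustrative):

tar czf myapp-1.0-linux-x86_64.tar.gz myapp-1.0/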
You may want to try out InstallBuilder. It is cross-platform (runs on Windows, Linux, Mac OS X, Solaris and nearly any other Unix platform out there). It is used by Intel, Motorola, GitHub, MySQL, Nokia/Trolltech and many other companies, so you will be in good company :) In addition to binary installers, it can also create cross-distro RPMs and DEB packages.
InstallBuilder is commercial, but we offer free licenses for open source programs and very significant discounts for mISVs or solo-developers, just drop us a line.
Create a .tar.bz2 archive with the binary, then publish a feed for it, like this:
<?xml version="1.0" ?>
<interface uri="http://mysite/myprog.xml"
           xmlns="http://zero-install.sourceforge.net/2004/injector/interface">
  <name>MyProgram</name>
  <summary>what it does</summary>
  <description>A longer description goes here.</description>
  <implementation main='bin/myprog'
                  id="sha1new=THEDIGEST"
                  version='1.0'>
    <archive href='http://mysite/myprogram-1.0.tar.bz2'
             size='10000'/>
  </implementation>
</interface>
Sign it with your GPG key. You can use the tools on 0install.net to calculate the digest and add the GPG signature for you in the correct format.
Then, put it on your web-site at the address in the uri attribute. Any user on most Linux distributions (e.g. Ubuntu, Fedora, Debian, Gentoo, ArchLinux, etc) can then install and run your program with:
0launch http://mysite/myprog.xml
Their system will also check for updates periodically. There are various GUIs for the different desktop environments, but the command-line will work everywhere.
Also look at some of the existing feeds for inspiration.
Let me mention an additional possibility, although I am not aware of its current status: the Loki installer. Loki was a company that ported video games to Linux. It went under in 2002, but the installer is still available.
InstallShield is also available for Linux. No idea on its status, though.
Although many people are proposing you go with tar.gz, please don't. I assume you want to provide a pleasant installation experience for your users. A tar.gz is one of the most low-level, low-quality, low-usability choices you can make. It works everywhere because it does basically nothing, as you know.
The guys at freedesktop.org and the LSB are quite clear on where to put stuff. What you need is a friendly program to do that. Autopackage imho has the numbers (I love it), but despite its age, I haven't seen a single program out there distributed as an autopackage.
Evaluate it carefully, but don't skip the chance of being part of the momentum in favour of it, just because it's not popular. If it works for you, and it works for your users, everything else does not matter.
There is no best way (universally speaking).
tar.gz the binaries; that should work.
Today, I would also look at Snapcraft and Flatpak which are embraced by some popular distributions. I explored other options and it is what ended up working best for me. Flatpak in particular also helped me learn about standard Linux desktop conventions to follow.
You may also want to look at AppImage (https://appimage.org/). The concept is that it produces a single binary file that the user downloads, sets executable, and runs directly; no installation necessary, no dependencies to install (since the app image typically includes all the dependencies except basic stuff like glibc). This makes for a really great user experience!
Some downsides:
The image may be large, since it probably includes all files/libraries/... the app depends on.
As the image creator, you're responsible for security updates to any of the libraries you add into your image.
An AppImage is great for a user-run application that's pretty isolated from anything else on the system (i.e. daemons, system configuration, etc.). But if your app relies on things like udev integration, desktop file installation, or D-Bus registration, this isn't easy: the app's files aren't available when the app isn't running (making udev rules hard), and there is by definition no installer that gets run (making desktop file installation hard).
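For what it's worth, the user-facing workflow described above really is this short (the URL and file name are placeholders):

wget https://example.com/MyApp-x86_64.AppImage
chmod +x MyApp-x86_64.AppImage
./MyApp-x86_64.AppImage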
I've also looked into this at work and I'd have to agree there really isn't a "best way". If your application is being distributed as source then I'd go with the make/configure methods packaged up in a tar.gz. That seems fairly universal in the Linux world.
A good way to get an idea of what to do is to look at larger organizations and see how they distribute their binaries.
Have you struggled with Linux deployment before?
I need to deploy an application into a dedicated Linux box with no outside network access.
The configuration should be as simple as possible, robust against different configurations (missing libraries and build tools), and preferably automatic. Another difficulty I need to consider is that I need to connect to an Oracle database.
What would you recommend as the best way for deployment? I have some ideas, but not sure which is the best.
I can use Java
I will need to install a JDK, and that solves mostly everything
Another big problem is that the code we presently have in Java is poorly written and slow.
I'm not sure if I need to install Instant Client to connect to Oracle under Linux
I can use C (I do have the source code for a very well-written LGPL program)
And use dpkg to deploy
The Linux box is most likely a Ubuntu server, but I'm not sure which version is installed
I can't use apt-get, but I can copy all the packages I need
I know I can use dpkg -s to check which packages they are, but I'm really not sure if I might miss dependencies.
I guess I will need build-essential and pcap or the like
And use static linking
I configured it with ./configure LDFLAGS=-static with no errors and it works on my computer now
I have chrooted into this directory and run it without problems; does this mean it is okay?
I really need to test this on a new Linux box to make sure
And use Statifier
I browsed Stack Overflow and found this app, but haven't tried it out yet.
Seems like people have used it with mixed success.
And create a build environment and make
I have no confidence that this is going to work
Using C leaves some problems
But the program is incomplete; I have to process this data, preferably not in C.
I have to install Instant Client, which is difficult to deploy
I can use Perl
I can't use CPAN
I have already downloaded the libraries, so maybe I could just copy them onto the deployed machine; I am not sure how or whether that works
Perl is slow
I have to install Instant Client anyway
Please share your similar experience.
C with static linking solves a lot of the portability problems at the expense of a larger executable. To make sure that everything is truly getting statically linked and not secretly depending on any outside libraries, run ldd on your executable and make sure it isn't dynamically loading anything. Note that this won't be 100% portable among various Linux machines, because Oracle Instant Client has some dependencies on kernel versions, but it should work on any reasonably new kernel.
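The check itself is quick (the binary name is illustrative); a fully static binary gives output like this:

ldd ./myapp      # should print "not a dynamic executable"
file ./myapp     # should report "statically linked"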
Edit: If the box has LAN access and just no internet access, why not run your own apt repository on the local network? You could even create a .deb for your application and put it on the same server; then on that machine you just need to execute apt-get install myApplication and it will pull down your app and any not-yet-installed dependencies as well. Setting up an apt mirror is actually pretty easy, and this would be pretty slick. If network access is missing altogether, you can still create an install DVD that has all the debs, including your app, and set up apt-get to pull from there.
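And if even the LAN is off the table, a directory of .deb files carried over on a DVD still works (the path is illustrative, and this assumes every dependency .deb is present in the directory):

sudo dpkg -i /media/dvd/debs/*.deb   # installs everything in the directory; rerun if ordering complaints appear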