Set up standalone Cygwin applications

I want to set up a minimal set of Cygwin applications (ls, diff, patch, find, grep) so that they run on a machine without the full Cygwin install.
I am assuming all I need are the relevant *.exe and *.dll files. So far this is what I have, and it works, but I was wondering whether there are any issues I might run into down the road.

Not really, but you might want to look at UnxUtils, which has some advantages over Cygwin for the sort of application you're describing:
1. It does not depend on an external DLL.
2. The executables use msvcrt.dll rather than cygwin1.dll, so they play nicely with native Windows paths. There is no disconnect between /cygdrive paths and the native paths used by the rest of the system.
3. Because of (2), it integrates much more cleanly into CMD or .bat files when you have occasion to use them.
UnxUtils is quite good for deploying functionality like sed to Windows machines, because you can just drop sed.exe into an application directory without worrying about registering DLLs or other installation complexities. CMD.exe pipes and redirects well enough to use these in batch files, and the utilities do not mind \r\n line terminators.
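As a sketch of how little is involved, a batch file can call a dropped-in sed.exe directly and let CMD handle the redirection (the file and variable names here are made up):

rem Fill in a server name using the bundled sed.exe sitting next to this script
sed "s/localhost/%SERVER%/" config.template > config.ini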

There is also the GnuWin32 project. I use that and Cygwin, so sometimes I have a hard time telling what kind of environment I'm working in... not that that's a bad thing!

One issue I can see is licensing. You may need to research under what conditions you are allowed to redistribute the binaries. (It may be as simple as including a statement in a README file about where to obtain the source.)
Another issue is Cygwin updates. When new binaries are released, how will you keep end users up to date?
A third potential problem would be configuration files that an application would need. No doubt this would be easy enough to figure out in testing, however.
Have you considered MinGW? It would seem to fit your purposes better than Cygwin.

Related

Does Go have OS-specific packages that cannot be used on another OS?

I cannot call myself an absolute beginner in Go, and I'm definitely not an expert.
Today I noticed something VERY confusing. I was experimenting with Unix sockets and Windows named pipes, and from my research there are two packages that support Windows named pipes:
https://github.com/natefinch/npipe
https://github.com/Microsoft/go-winio (I could not find ANY documentation, help, or how-to-use guidance whatsoever for this package)
My OS is Linux, and I decided to give it a go: go get the package(s) and write the code, to test later on a Windows machine. But to my surprise, at least in VSCode, those packages are not recognized by the tooling.
When I look at npipe, for example, I see it only has npipe_windows.go, which, if I'm not mistaken, is supposed to be picked up automatically on Windows.
So, I think there is the concept of OS-specific packages in Go, right? And if so, does it mean that I cannot use, for example, VSCode's go tools to code against Windows packages on Linux?
It would be extremely inconvenient, in my opinion, to have to switch systems in order to write something that works on both Linux and Windows... although I guess that only applies if we're developing on Linux; developing on Windows would cover both.
But to me it doesn't make sense NOT to be able to develop something on Linux, the best environment to develop on IMHO (except for Apple-related code, of course).
Am I missing something here?
Thank you
So, I think my question actually concerns the tooling more than the Go language itself. gopls is the tool VSCode uses if you choose to use the Go language server.
As the README says, it's in alpha and not stable, and there are known issues listed on the repo, which seem to be the source of my confusion.
I think the main issue that's related to what I'm seeing is:
x/tools/gopls: does not handle build tags
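A common workaround, independent of gopls, is to let the regular Go toolchain type-check the OS-specific code by cross-compiling. As a sketch:

# From Linux, compile and vet the code as if targeting Windows:
GOOS=windows go build ./...
GOOS=windows go vet ./...

If I remember correctly, VSCode's Go extension can be pointed the same way via its go.toolsEnvVars setting, so the tooling analyses the code with GOOS=windows.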

Tools to help manage sets of multiple versions of executables on Linux?

We are in a networked Linux environment, and what I'm looking for is a FOSS or generic system-level method for managing which versions of executables and libraries get used, per session. The executables would preferably be installed on the network. They will be in-house tools and installs of commercial packages like Houdini, Maya and Nuke.
We need this because we'd prefer to have multiple versions of the software installed and available to the artists, with an easy way to select which version to use. As an added benefit, I'd like to be able to track the version of software used to generate a given output as metadata. I've worked at studios that did this successfully, but I was not 100% up to speed on how it was achieved. Every executable in a given set was assigned a single uber-version for the set; that way, the studio's "approved packages" of tools were all collapsed into a single package of tools known to work together.
Due to the way they install, some programs make setting this up easy (it's as simple as adding their install directories to $PATH). Other programs don't make it quite so easy. I'm particularly worried about how to handle the libraries a program might install. What's needed is a generic access method I can use to wrap everything into a clean front end.
Does anyone know of such a system available in the wild or am I going to have to implement it from scratch? Google hasn't been very helpful in finding a solution.
Thanks!
Check out the "modules" system at http://modules.sourceforge.net/; it's quite widely used in HPC.
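To give a feel for it, a session with Environment Modules looks roughly like this (the package names and versions are made up):

module avail              # list the versions installed on the network
module load houdini/19.5  # select a version for this session only
module list               # show what is currently loaded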
There is also eselect. I have only used it on Funtoo (an offspring of Gentoo), but it seems to do what you need. It is also written entirely in Bash, so it should be quite possible to port it to other distros.

How can I know if my executable will also run on other computers (Linux)?

I have an executable I want to be able to distribute and run in other Linux systems. Is there a way to be reasonably sure if this will work, without access to the final runtime environment?
For example, I am concerned my executable could be using a dynamic library that is only present on my development machine.
Supply any relevant shared libraries with the executable, and set $LD_LIBRARY_PATH in a shell script that invokes the executable to tell the linker where to find the shared libraries. See the ld.so(8) man page for more details, and be sure to follow the appropriate licenses.
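A minimal sketch of such a wrapper, assuming a layout of myapp/bin/myapp for the executable and myapp/lib for the bundled libraries:

#!/bin/sh
# Point the dynamic linker at the libraries shipped next to the executable.
HERE=$(dirname "$(readlink -f "$0")")
LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
exec "$HERE/bin/myapp" "$@"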
Take a look at the Debian (.deb) and Red Hat (.rpm) packaging systems. These give you an installer for your package. They aren't all that difficult, and you can declare dependencies on other packages that provide the required shared objects.
The packaging tools can usually pull in missing libraries and the like through those dependencies. They will also help you place your binaries in such a way that you don't need to set LD_LIBRARY_PATH or put a shell-script front end on your executable.
They aren't that difficult to learn, either. Spend a day playing with each and you'll get a passable installer package.
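For a sense of scale, a bare-bones .deb needs little more than a control file and dpkg-deb (all names and the dependency list here are illustrative):

mkdir -p myapp_1.0/DEBIAN myapp_1.0/usr/bin
cat > myapp_1.0/DEBIAN/control <<'EOF'
Package: myapp
Version: 1.0
Architecture: amd64
Maintainer: You <you@example.com>
Depends: libc6
Description: Example package for myapp
EOF
cp myapp myapp_1.0/usr/bin/
dpkg-deb --build myapp_1.0   # produces myapp_1.0.deb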
Is there a way to be reasonably sure if this will work, without access to the final runtime environment?
Someone's environment could be a different architecture from yours, even if it stays Linux. Therefore, the only sure way to get your program to the widest audience is to ship source code.
Couldn't statically linking everything into a super-massive black hole^W^W binary do the trick?

Linux vs Solaris - Compiling software

Background:
At work I'm used to working on Solaris 10. We have sysadmins who know what they're doing and can help out if required.
I've compiled things like apache, perl and mod_perl from source without any problems.
I've been given a Red Hat server to play with and am hitting problems. The sysadmins are out sick at the moment.
I keep hitting problems regarding LD_LIBRARY_PATH when building software. At the moment for test purposes I am compiling to my home directory, as I don't have root, or permissions to install anywhere else.
I plan on having an area under /opt for us to install into, like we do on Solaris, but I'll need our sysadmin around to create that for us.
My .bashrc had nothing for LD_LIBRARY_PATH, so I've been appending things to it to get stuff built (e.g. ffmpeg from source). I've been reading about this, and apparently this isn't the way to go; it's not reliable or something. I don't have access to ldconfig (permission denied).
Now the questions:
What is the best way to build applications under Linux so that they won't break? Creating entries under /etc/ld.so.conf.d/?
Can anyone give a brief overview of what LD_LIBRARY_PATH actually does?
From the ld.so(8) man page:
LD_LIBRARY_PATH
A colon-separated list of directories in which to search for ELF
libraries at execution-time. Similar to the PATH environment
variable.
But honestly, find an admin. Become one if need be. Oh, and build packages.
LD_LIBRARY_PATH makes it possible for individual users or individual processes to add locations to the library search path on a fine-grained basis. /etc/ld.so.conf should be used for system-wide library path settings, i.e. when deploying your application. (Better yet, you could package it as an rpm/deb and deploy it through your distribution's usual package channels.)
Typically a user might use LD_LIBRARY_PATH to force a program to pick up a different version of a library. Normally this is useful for favouring debugging or instrumented versions of libraries, but you can also use it to inject your own code into third-party code. (It can also be used for malicious purposes: if you can alter someone's bash profile, you can trick them into executing your code without realising it.)
Some applications also set LD_LIBRARY_PATH when they install "private" libraries in non-default locations, i.e. so they won't be used for normal dynamic linking but still exist. For scenarios like that, though, I'd be inclined to prefer dlopen() and friends.
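To make the distinction concrete (all paths are illustrative):

# System-wide: register a library directory and refresh the cache (needs root):
echo /opt/myapp/lib | sudo tee /etc/ld.so.conf.d/myapp.conf
sudo ldconfig

# Per-process: run one program against a debug build of a library,
# leaving the rest of the system alone:
LD_LIBRARY_PATH=$HOME/debug-libs ./myprog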
Setting LD_LIBRARY_PATH is considered harmful because (amongst other reasons):
Your program is dynamically linked based on your LD_LIBRARY_PATH. This means it could link against a particular version of a library that happened to be on your LD_LIBRARY_PATH, e.g. /home/user/lib/libtheora.so. This can cause a lot of confusion if someone else tries to run it without your LD_LIBRARY_PATH and it ends up linking against the default version, e.g. /usr/lib/libtheora.so.
It is searched in preference to the default system link path. This means that if you end up with a dodgy libc on your LD_LIBRARY_PATH, it could do bad things like compromising your account.
As ignacio said, use packages wherever you can. This avoids library nightmares.

What’s the best way to distribute a binary application for Linux?

I just finished porting an application from Windows to Linux.
I have to create an installer for the application.
The application is not open source, so I have to distribute the application's binaries (an executable file, a couple of .so files, help files and images).
I found several methods to do it:
- RPM and DEB packages;
- an installer shipped as a .sh file;
- Autopackage.
I don't like the first method (RPM and DEB packages) because I don't want to maintain different packages for different Linux distros.
What is the best way to distribute a binary application for Linux?
Having been through this a couple of times with commercial products, I think the very best answer is to use the native installer for each supported platform. Anything else produces an unpleasant experience for the end-user, and in practice you have to test on every platform you want to support anyway, so it's not really a significant burden to maintain packages for each. The idea that you can create a binary that can "just work" on every platform out there, including some you've never even heard of, just really doesn't work all that well.
My recommendation is that you pick a platform or two to support initially (Red Hat and Ubuntu would be my suggestions) and then let user demand drive the creation of additional installation packages. Perhaps make it known that you're willing to support additional platforms, for a modest fee that covers your time and effort in packaging and testing on that platform. If a platform proves to be very different, you may need to charge more for ongoing support.
Oh, and I cannot overemphasize the value of virtual machines for scenarios like this. You need to build VMs for each platform you support, and perhaps multiple VMs per platform to make it easy to test different configurations.
There were a lot of good answers (mine included :)) here, although that is more about binary compatibility (which you do need to worry about).
For the installer I would recommend Autopackage (we successfully released several versions of our software with it); they have already done the "installer.sh" part and more (desktop integration, for example).
You have to be careful and test your upgrade scenarios and such, depending on how complex your package structure is, but it is pretty neat overall. I fixed a few bugs with dependency handling in 1.2.6, so it should be fine.
UPDATE: the original question was deleted, so I'm reposting the full answer here. Ignore all references to Autopackage; it was merged into Listaller, and I'm not sure whether the relevant parts survived.
For standard libraries (like crypto++, pthreads, etc) that are likely to be available in a distribution -- link dynamically and tell users to get them from their distro repository. Or link statically if it is feasible.
For weird libraries whose version you must control (if you want to deploy a Qt4 app in the territory of enemy gnomes, for example), compile them yourself and install them into a private spot only your app knows about.
Never install private libs into standard places unless you can be sure not to interfere with the package systems of all the distros you support (and that they can't interfere with you either).
Use rpath instead of LD_LIBRARY_PATH, and set it properly for all your binaries and all the shared libraries that reference each other. You can set the rpath on your binary to "$ORIGIN:$ORIGIN/../lib:/opt/my/private/libs" and have the linker search those places before any standard paths. (I think you have to set a linker flag for $ORIGIN to work.) Make sure to set the rpath on your libs too: for example, QtGui needs QtCore, and if the user happens to have a standard package with a different version installed, you absolutely don't want it picked up (exe -> ../lib/QtGui.so (4.4.3) -> /usr/local/lib/QtCore.so (4.4.2) is a sure way to die early).
If you compile with any rpath at all, you can change it later with chrpath, making it possible to tweak the install location as part of post-processing or an install script.
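A sketch of both steps; the flags and paths are illustrative, and note the single quotes that stop the shell from expanding $ORIGIN:

# Link with a relative rpath so the binary finds its bundled libraries:
gcc -o myapp main.o -Llib -lfoo -Wl,-rpath,'$ORIGIN/../lib' -Wl,-z,origin
# Inspect or rewrite the rpath later, e.g. from an install script
# (chrpath cannot make the rpath longer than the one compiled in):
chrpath -l myapp
chrpath -r '/opt/myapp/lib' myapp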
Maintain binary compatibility. glibc is pretty much fixed for your users, so you should link against a sufficiently old version; 2.3 is a safe bet. You can use APBuild, a gcc wrapper that enforces a glibc version and does a few other binary-compatibility tricks, so you don't have to compile all your apps on a really old distro.
If you link to anything statically, it generally has to be rebuilt with APBuild too, otherwise it is bound to drag in newer glibc symbols. All the .so files you install privately naturally have to be built with it as well. Sometimes you have to patch third-party libs to use older symbols. (I had to patch Ruby to return real permissions instead of effective ones, since the needed functions don't exist in older glibc. Still not sure if I broke anything :).)
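One way to check what a finished binary actually requires is to dump its versioned dynamic symbols (the binary name is illustrative):

# List the glibc symbol versions the binary references, oldest to newest:
objdump -T myapp | grep -o 'GLIBC_[0-9.]*' | sort -Vu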
For integration with desktop environments (file associations, MIME types, icons, start menu entries, etc.) use xdg-utils. Beware though: like everything on Linux, they don't really like spaces in filenames :). Make sure to test those things on each target distro; xdg implementations are riddled with bugs and quirks.
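Typical calls look like this; the vendor prefix and file names are made up, and xdg-desktop-menu insists on a vendor prefix unless you pass --novendor:

xdg-icon-resource install --size 48 myvendor-myapp.png myvendor-myapp
xdg-desktop-menu install myvendor-myapp.desktop
xdg-mime install myvendor-myapp-mime.xml
xdg-mime default myvendor-myapp.desktop application/x-myapp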
For the actual install you can either provide a variety of native packages (rpm, deb and a few more), roll your own installer, or find an installer that works on all distros, bypassing native package managers. We successfully used Autopackage (same people who made APBuild) for that.
It's possible to install RPM on Debian and APT on RHEL.
If you are going to statically link this program, or dynamically link only with libraries that you will be distributing in the package, then it doesn't much matter how you distribute it. The simplest way is tar.gz and that would work.
OTOH, if it is dynamically linked with system libraries, and particularly if it has dependencies on dynamic libraries that will be shared with the client's other applications, then you kind of need to do either RPM, DEB, or both.
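Either way, it is worth checking what the binary actually expects the target system to provide before choosing; for example:

# Show the shared libraries the dynamic linker will look for:
ldd ./myapp
# Anything reported as "not found" must be shipped with the app
# or declared as a package dependency.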
You may want to try out InstallBuilder. It is cross-platform (it runs on Windows, Linux, Mac OS X, Solaris and nearly any other Unix platform out there). It is used by Intel, Motorola, GitHub, MySQL, Nokia/Trolltech and many other companies, so you will be in good company :). In addition to binary installers, it can also create cross-distro RPM and DEB packages.
InstallBuilder is commercial, but we offer free licenses for open-source programs and very significant discounts for mISVs and solo developers; just drop us a line.
Create a .tar.bz2 archive with the binary, then publish a feed for it, like this:
<?xml version="1.0" ?>
<interface uri="http://mysite/myprog.xml"
           xmlns="http://zero-install.sourceforge.net/2004/injector/interface">
  <name>MyProgram</name>
  <summary>what it does</summary>
  <description>A longer description goes here.</description>
  <implementation main="bin/myprog"
                  id="sha1new=THEDIGEST"
                  version="1.0">
    <archive href="http://mysite/myprogram-1.0.tar.bz2"
             size="10000"/>
  </implementation>
</interface>
Sign it with your GPG key. You can use the tools on 0install.net to calculate the digest and add the GPG signature for you in the correct format.
Then put it on your web site at the address in the uri attribute. Any user on most Linux distributions (Ubuntu, Fedora, Debian, Gentoo, Arch Linux, etc.) can then install and run your program with:
0launch http://mysite/myprog.xml
Their system will also check for updates periodically. There are various GUIs for the different desktop environments, but the command-line will work everywhere.
Also look at some of the existing feeds for inspiration.
Let me mention an additional possibility, although I am not aware of its current status: the Loki installer. Loki was a company that ported video games to Linux. It went under in 2002, but the installer is still available.
InstallShield is also available for Linux. No idea on its status, though.
Although many people are proposing that you go with tar.gz, please don't. I assume you want to give your users a pleasant installation experience. A tar.gz is one of the most low-level, low-quality, low-usability choices you can make. It works everywhere because it does basically nothing, as you know.
The guys at freedesktop.org and the LSB are quite clear on where to put stuff. What you need is a friendly program to do that. Autopackage IMHO has the numbers (I love it), but despite its age, I haven't seen a single program out there distributed as an autopackage.
Evaluate it carefully, but don't skip the chance of being part of the momentum in favour of it, just because it's not popular. If it works for you, and it works for your users, everything else does not matter.
There is no best way (universally speaking).
tar.gz the binaries, that should work.
Today, I would also look at Snapcraft and Flatpak which are embraced by some popular distributions. I explored other options and it is what ended up working best for me. Flatpak in particular also helped me learn about standard Linux desktop conventions to follow.
You may also want to look at AppImage (https://appimage.org/). The concept is that it produces a single binary file that the user downloads, makes executable, and runs directly; no installation is necessary and there are no dependencies to install (since the AppImage typically includes all the dependencies except basic stuff like glibc). This makes for a really great user experience!
Some downsides:
The image may be large, since it probably includes all the files/libraries/... the app depends on.
As the image creator, you're responsible for security updates to any of the libraries you bundle into your image.
An AppImage is great for a user-run application that's pretty isolated from anything else on the system (daemons, system configuration, etc.), but if your app relies on things like udev integration, desktop file installation, or D-Bus registration, this isn't easy: the app's files aren't available when the app isn't running (making udev rules hard), and there is by definition no installer that gets run (making desktop file installation hard).
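For a user-run application that fits those constraints, though, the whole "installation" reduces to this (the file name is illustrative):

chmod +x MyApp-x86_64.AppImage
./MyApp-x86_64.AppImage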
I've also looked into this at work, and I have to agree there really isn't a "best way". If your application is distributed as source, I'd go with the make/configure methods packaged up in a tar.gz. That seems fairly universal in the Linux world.
A good way to get an idea of what to do is to look at larger organizations and see how they distribute their binaries.
