How are portable apps called by the system when needed (e.g. file associations)? - linux

There are a few implementations of portable apps on Linux, but it seems that all Mac OS X apps are portable. Since Mac OS X completely embraces this model, I'm assuming it already has a solution to this problem.
Since Windows "installs" apps by putting files all over the place and changing things in the registry, file associations can be made easily. But let's say I just downloaded MPlayer for Mac OS X (or whatever), and I want all my movies to open in MPlayer. Then I decide to move MPlayer's app bundle (hey, it's portable, right?). Will the association break? Or is that not how it's done on OS X at all?
How would one implement portable apps on Linux? Should it be similar to OS X's model? I know this is a very open-ended question, but any suggestions are appreciated.

OS X's Launch Services database keeps track of document bindings in several ways—generally it does its best to try to match an application even if you've moved it.
You can run lsregister -dump (lsregister is /System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister) to see what the Launch Services database says about a binding. For example, if I bind text files to open with TextWrangler, I see:
handler id: 3124
content type: public.plain-text
options:
all roles: com.barebones.textwrangler (0x3ea30180)
public.plain-text is a Uniform Type Identifier (which maps to one or more file extensions, MIME types, etc., and may have subtypes) representing plain text, and com.barebones.textwrangler is the bundle ID of TextWrangler.
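Since the full support path is unwieldy, you can stash it in a shell variable and grep the dump for a particular binding; a quick sketch using the path quoted above:

lsregister=/System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister
"$lsregister" -dump | grep -B 2 -A 2 com.barebones.textwrangler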
I'm not aware of any Linux standard as robust as this for document binding. To do something like the Mac, there would first need to be a standard method for identifying applications regardless of their location or name (like the Java-package-style reverse-DNS identifiers used on the Mac), then a registry of type mappings and bindings followed by enough desktop environments to be useful, and finally some way of registering applications as they're installed.
You don't necessarily need separate files, like Info.plist in Mac application bundles, to store this information; even on Mac OS X you can embed the information in a binary section, which Launch Services indexes just fine. (Note that this is not a separate "fork" or extended attribute; it's like embedding debug information in an executable.) So perhaps some derivative of the .desktop format could be embedded. On the other end, you'd need a way of recognizing content. Ideally you'd even be able to do content sniffing, like the file(1) command, to identify a document's type; classic Mac OS did this with the Translation Manager, which permitted registration of converters from one format to another as well as sniffers.
UTIs and the Translation Manager handle(d) clipboard and drag-and-drop content as well as files on disk; unifying these format representations is pretty useful while you're at it.

Each file browser (e.g. Nautilus, Konqueror) would have to be configured to use its own file associations. Fortunately, the Free Desktop project has been working on standardizing file associations (among many other things). According to the Shared MIME-info Database description, no formal specification has been written yet, but the format is pretty much standardized.
The Free Desktop project also uses .desktop files to provide "portability" (maybe another word would fit better... perhaps "movable"?). If you move the executable outside of your PATH, you can update the .desktop file to point at the new location.
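As a sketch, a minimal .desktop file for the MPlayer scenario from the question might look like this (the paths and names are illustrative, not taken from any real package):

[Desktop Entry]
Type=Application
Name=MPlayer
Exec=/home/user/apps/MPlayer/mplayer %F
MimeType=video/mpeg;video/x-matroska;

The binding itself can then be set with xdg-mime default mplayer.desktop video/x-matroska, and if you later move the executable, only the Exec= line needs updating.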
Basically, there is a lot of ongoing work in the Linux community moving towards more user-friendly and developer-friendly (i.e. standardized) ways of accomplishing these goals. But things are not done yet.

Related

How to pack files into one executable file for Linux and Windows?

I'm creating a desktop app in Go with the Muon UI (using Ultralight instead of Chromium) and cross-building it for Linux and Windows. The app works fine so far, but it requires the Ultralight libraries (*.dll for Windows and *.so for Linux), and I want to distribute it as a single executable file per platform: one file for Linux bundling the main executable with only the *.so libraries, and one for Windows bundling the main executable with only the *.dll libraries. How can I do this?
Are there any CLI utilities for this (for use in GitLab CI inside Docker, for example)? Or can I do it from Go itself, e.g. with the embed package? Can I embed the libraries into the executable in a way that still lets it run?
Or can I use cgo to link the dynamic libraries statically into the binary?
The honest answer would be: "With great difficulty, lots of pain, blood and tears."
The somewhat longer answer is that a precompiled DLL/.so may contain slightly more than a mere static library. Is it possible to "convert" a DLL/.so into a static library? Somewhat. It boils down to dumping its contents into object files, reverting all the relocation entries, and possibly dealing with versioned symbols and weak symbols. No, there are no kitchen-sink utilities out there doing all of that for you at the executable-binary level.
If you can limit yourself to Linux, you may want to look into Flatpak. What it does is wrap everything up into a sort of "self-extracting archive" which, upon launch, transparently and invisibly unpacks itself into an in-situ temporary mount point (which you won't see from the rest of the system).
Now, one option would be to build all the dependencies of your program yourself, arranging for those builds to be created as static libraries. In that case you're no longer dealing with DLLs. However, some libraries do not want to be built for static linking, so your mileage may vary.
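If you do try the static route with cgo, the usual pattern is an externally linked build along these lines (a sketch only; it assumes static archives of Ultralight and its dependencies actually exist, which they may not):

CGO_ENABLED=1 go build -ldflags '-linkmode external -extldflags "-static"' -o myapp .

Fully static linking against glibc has its own caveats (name resolution via NSS, for one), so test the result carefully.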
Truth be told: why is distributing multiple files any issue at all? On Linux/*BSD you must ship separate icon and .desktop files anyway, so that your program shows up in the desktop application menus. Yes, it'd be nice if, instead of dealing with XDG desktop entry files, we had the option to place all of that information into a special read-only section – let's call it .xdgdata – with some well-known symbol names, so that we could have truly single-file distributable executables.
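Mechanically, at least, placing such data in a named read-only section is easy with GCC or Clang. A minimal sketch in C (the section name .xdgdata is just the hypothetical one from above; no desktop environment currently reads it):

#include <stdio.h>

/* Put the desktop entry text into a dedicated section of the binary.
   "used" keeps the compiler from discarding it as unreferenced. */
__attribute__((section(".xdgdata"), used))
static const char xdg_desktop_entry[] =
    "[Desktop Entry]\n"
    "Type=Application\n"
    "Name=MyApp\n";

int main(void) {
    puts(xdg_desktop_entry);
    return 0;
}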
My honest suggestion: don't sweat it. Just ship the whole bunch of files and don't worry too much about "how this looks".

Do any modern filesystems support arbitrary metadata handling?

My overall goal is to help people in my organisation find more relevant unstructured data. Files are currently stored on NT drives, multiple SharePoint vintages, and Linux disks, in various open and proprietary file formats; some support metadata, many don't.
I need tools that can interrogate files for metadata - some work needed here. I also need something that can crawl and index the metadata, and put it in context - again, further work needed.
However, having generated some metadata for a file, I'd like to attach it to the file inside the filesystem, so that it always remains with the file - not a hidden file or anything like that, but a deeper association within the file system. Do any file systems (preferably Linux) support this kind of feature?
Yes, those are called extended attributes, and most native filesystems on Linux support them.
You can manipulate these attributes with the getfattr/setfattr commands, and there's a corresponding C API (getxattr/setxattr and friends) for doing the same.
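A quick sketch of the C side, using the Linux getxattr/setxattr calls (user-defined attributes live under the user. namespace; the file name here is hypothetical):

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/xattr.h>

int main(void) {
    const char *path = "report.pdf";
    const char *value = "Quarterly sales, draft 3";

    /* attach a piece of metadata to the file */
    if (setxattr(path, "user.description", value, strlen(value), 0) != 0) {
        perror("setxattr");
        return 1;
    }

    /* read it back */
    char buf[256];
    ssize_t len = getxattr(path, "user.description", buf, sizeof buf - 1);
    if (len < 0) {
        perror("getxattr");
        return 1;
    }
    buf[len] = '\0';
    printf("user.description = %s\n", buf);
    return 0;
}

The shell equivalent is setfattr -n user.description -v "Quarterly sales, draft 3" report.pdf, followed by getfattr -n user.description report.pdf.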

Recommended FHS compliant application test/install workflow under Linux?

I'm in the process of switching to Linux for development, and I'm puzzled about how to maintain good FHS compliance in my programs.
For example, under Windows I know that all the resources (bitmaps, audio data, etc.) my program needs can be found with relative paths from the executable, so it's the same whether I'm running the program from my development directory or from an installation (under "Program Files", for example); the program will be able to locate all its files either way.
Now, under Linux, I see that usually the executable goes under /usr/local/bin and its resources under /usr/local/share. (And the truth is that I'm not even sure of this.)
For convenience reasons (such as version control) I'd like to have all the files pertaining to the project under a same path, say, for example, project/src for the source and project/data for resource files.
Is there any standard or recommended way to let me just rebuild the binary for testing and use the files in the project/data directory, while also being able to locate the files when they are under /usr/local/share?
I thought, for example, of setting a symlink under /usr/local/share pointing to my resources directory and then hardcoding that path inside my program, but that feels quite hackish and not very portable.
Also, I thought of running an install script that copies all the resources to /usr/local/share every time I change or add resources, but that doesn't feel like a good way to do it either.
Could anyone tell me or point me to where it tells how this issue is usually resolved?
Thanks!
For convenience reasons (such as version control) I'd like to have all the files pertaining to the project under a same path, say, for example, project/src for the source and project/data for resource files.
You can organize your source tree as you wish — it need not bear any resemblance to the FHS layout desired of installed software.
I see that usually the executable goes under /usr/local/bin and its resources on /usr/local/share. (And the truth is that I'm not even sure of this)
The standard prefix is /usr. /usr/local is for, well, "local installations" as the FHS spec reiterates.
Is there any standard or recommended way to let me just rebuild the binary for testing and use the files on the project/data directory
Definitely. Running, for example, ./configure --datadir=$PWD/share is the way to point your build at the data files from the source tree (substitute the proper path), and using something like -DDATADIR="'${datadir}'" in AM_CFLAGS makes the value known to the (presumably C) code. (All of that provided you are using autoconf/automake; similar options may be available in other build systems.)
This sort of hardcoding is what is used in practice, and it suffices. For a development build within your own working copy, having a hardcoded path should not be a problem, and final builds (those done by a packager) will simply use the standard FHS paths.
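For instance, here is a sketch of how the C code might consume that define (assuming the -DDATADIR flag above; the fallback makes an uninstalled build look in the source tree):

#include <stdio.h>

#ifndef DATADIR
#define DATADIR "data"   /* fallback when building straight from the source tree */
#endif

int main(void) {
    char path[4096];
    snprintf(path, sizeof path, "%s/%s", DATADIR, "blabla.conf");
    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }
    /* ... read the resource ... */
    fclose(f);
    return 0;
}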
You could just test a few locations. For example, first check if you have a data directory within the directory you're currently running the program from. If so, just go ahead and use it. If not, try /usr/local/share/yourproject/data, and so on.
For developing/testing, you can use the data directory within your project folder, and for deploying, use the stuff in /usr/local/share/. Of course, you can test for even more locations (e.g. /usr/share).
Basically the requirement for this method is that you have a function that builds the correct paths for all filesystem accesses. Instead of fopen("data/blabla.conf", "w") use something like fopen(path("blabla.conf"), "w"). path() will construct the correct path from the path determined using the directory tests when the program started. E.g. if the path was /usr/local/share/yourproject/data/, the string returned by path("blabla.conf") would be "/usr/local/share/yourproject/data/blabla.conf" - and there is your nice absolute path.
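A minimal sketch of that idea in C (the candidate directories and the yourproject name are placeholders, not a fixed convention):

#include <stddef.h>
#include <stdio.h>
#include <sys/stat.h>

static const char *data_root;   /* chosen once at startup */

/* Probe the candidate data directories in order; the first one that exists wins. */
static void init_data_root(void) {
    static const char *candidates[] = {
        "data",                                 /* running from the project tree */
        "/usr/local/share/yourproject/data",
        "/usr/share/yourproject/data",
    };
    struct stat st;
    for (size_t i = 0; i < sizeof candidates / sizeof candidates[0]; i++) {
        if (stat(candidates[i], &st) == 0 && S_ISDIR(st.st_mode)) {
            data_root = candidates[i];
            return;
        }
    }
    data_root = "data";   /* last resort: the in-tree layout */
}

/* Build the full path for a data file; note the static buffer. */
static const char *path(const char *file) {
    static char buf[4096];
    snprintf(buf, sizeof buf, "%s/%s", data_root, file);
    return buf;
}

int main(void) {
    init_data_root();
    printf("config lives at: %s\n", path("blabla.conf"));
    return 0;
}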
That's how I'd do it. HTH.
My preferred solution in cases like this is to use a configuration file, along with a command-line option that overrides its location.
For example, a configuration file for a fully deployed application named myapp could reside in /etc/myapp/settings.conf and a part of it could look like this:
...
confdir=/etc/myapp/
bindir=/usr/bin/
datadir=/usr/share/myapp/
docdir=/usr/share/doc/myapp/
...
Your application (or a launcher script) can parse this file to determine where to find the rest of the needed files.
I believe that you can reasonably assume in your code that the location of the configuration file is fixed under /etc/myapp - or any other location specified at compile time. Then you provide a command line option to allow that location to be overridden:
myapp --configfile=/opt/myapp/etc/settings.conf ...
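For illustration, a minimal sketch of that override in C using getopt_long (the default location would be fixed at compile time, as described above):

#include <getopt.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    const char *configfile = "/etc/myapp/settings.conf";   /* compile-time default */
    static const struct option opts[] = {
        { "configfile", required_argument, NULL, 'c' },
        { 0, 0, 0, 0 }
    };

    int c;
    while ((c = getopt_long(argc, argv, "c:", opts, NULL)) != -1) {
        if (c == 'c')
            configfile = optarg;
    }

    printf("reading settings from %s\n", configfile);
    /* parse configfile here to learn confdir, datadir, docdir, ... */
    return 0;
}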
It might also make sense to have options for some of the directory paths as well, so that the user can easily override any of the configuration file settings. This approach has a couple of advantages:
Your users can relocate the application very easily - just by moving the files, modifying the paths in the configuration file and then using e.g. a wrapper script to call the main application with the proper --configfile option.
You can easily support FHS, as well as any other scheme you need to.
While developing, you can have your testsuite use a specially crafted configuration file with the paths being wherever you need them to be.
Some people advocate probing the system at runtime to resolve issues like this. I usually suggest avoiding such solutions for at least the following reasons:
It makes your program non-deterministic. You can never tell at a first glance which configuration file it picks up - especially if you have multiple versions of the application on your system.
At any installation mix-up, a probing application will happily pick up whatever it finds and carry on - and so will the user, none the wiser. In my opinion, the application should look at one specific and well-documented location and abort with an informative message if it cannot find what it is looking for.
It's highly unlikely that you will always get everything right. There will always be unexpected rare environments or corner cases that the application will not handle.
Such behaviour is against the Unix philosophy. Even command shells read multiple locations only because every one of those locations can hold a file that should be parsed - not to pick whichever one happens to turn up first.
EDIT:
This method is not mandated by any formal standard that I know of, but it is the prevalent solution in the Unix world. Most major daemons (e.g. BIND, sendmail, postfix, INN, Apache) will look for a configuration file at a certain location, but will allow you to override that location and - through the file - any other path.
This is mostly to allow the system administrator to implement whatever scheme they want or to set up multiple concurrent installations, but it does help during testing as well. This flexibility is what makes it a best practice, if not a proper standard.

Is it or should it be possible to modify the GUI of an application after it's compiled?

I'm a Linux user, and I have been very hesitant to use Glade to design GUIs, since the xml files it produces can easily be modified. I know it doesn't sound like a major issue, but what if it's a commercial app that you just don't want people changing?
I use Mac OS X every once in a while, and I figured out that they use files called ".nib"s for GUIs. I think they're essentially the same type used in NeXTSTEP and OpenStep (there's even a Linux app which lets you edit these files). Anyway, these files are included in the application bundle, and according to some people, are completely editable. This person claims he even successfully edited Keynote's interface.
Now, why would that be possible? Is it completely okay for the end user to change the interface? Or is it better to have the GUI directly in the compiled application code, like traditional GTK apps?
OS X nib files are one option; the other is to do things programmatically. On Android, XML files can define the GUI, or program code can do it. In Windows WPF, the UI is defined in XAML, an XML-based language. Firefox/Mozilla? XUL, another XML-based UI language.
Most modern GUI toolkits offer both of these options, and some support only UIs defined in files.
But even binaries are modifiable. With a good binary reverse-engineering tool, everything is wide open. The only way to be really certain is to do what Apple did with iOS and run only signed code; the entire bundle is signed by a key and can't be run if modified.
This isn't a problem for almost anyone, though. Why do you care if the UI is modified? The underlying code isn't, so functionality can't be added or changed.
As a corollary (and a little off-topic), something that you might have a valid concern about is stuff a little more like this.
I don't really see a problem with it. If a user messes up his UI, then it's his problem. Think of it like moddable games. Users always loved them, and in the end, most games benefit from it. There is usually nothing secret about an application's user interface. If there is, you could always do some sort of encryption.
As others have said, you can also add checksums if you just want to disallow editing.
The XML specifies little more than what the interface looks like. Without the compiled-in event-handling code, it's pretty much useless. My opinion is that customers change it at their own risk, and you might actually get some free, useful improvements out of their hacks.
If you're really paranoid about people changing it, you could always add an MD5-digest verification step or something when you load the XML, or compile the XML string into a header file, but that defeats many of the benefits.
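For completeness, such a verification step is only a few lines. A hedged sketch using OpenSSL's EVP interface with SHA-256 instead of MD5 (the expected digest is a placeholder you would generate and bake in at build time):

#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical digest of the shipped UI definition, baked in at build time. */
static const unsigned char expected[32] = { 0 /* ... the real 32 bytes ... */ };

/* Returns 1 if the loaded UI definition matches the expected digest. */
int ui_file_is_untouched(const char *data, size_t len) {
    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int mdlen = 0;
    if (!EVP_Digest(data, len, md, &mdlen, EVP_sha256(), NULL))
        return 0;
    return mdlen == sizeof expected && memcmp(md, expected, sizeof expected) == 0;
}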
The theming engine can make substantial-looking changes to your GUI, as can tools like Parasite. Updating the Glade layout — at their own risk — is much safer than either of those.
What's wrong with users customizing the UI anyway?

Specifying different platform specific package at compile time in Ada (GNAT)

I'm still new to the Ada programming world so forgive me if this question is obvious.
I am looking at developing an application (in Ada, using the features in the 2005 revision) that reads from the serial port and basically performs manipulation of the strings and numbers it receives from an external device.
My intention is to use Florist and the POSIX terminal interfaces to do all the serial work on Linux first... I'll get to Windows/macOS/etc. some other time, but I want to leave that option open.
I would like to follow Ada best practices in whatever I do with this. So instead of a hack like conditional compilation under C (which I know Ada does not have anyway), I would like to find out how you are supposed to specify a change of package files from the command line (with gnatmake, for example).
The only thing I can think of right now is that I could name all the platform packages exactly the same (i.e. package Serial.Connector with the same filenames), place them in different folders in the project archive, and then upon compilation specify the directories/libraries to look in with the -I argument, changing the directory names for different platforms.
This is the way I was shown for GCC using C/C++... is this still the best way with Ada using GNAT?
Thanks,
-Josh
That's a perfectly acceptable way of handling this kind of situation. If at all possible you should have a common package specification (or specifications if more than one is appropriate), with all the platform-specific stuff strictly confined to the corresponding package body variations.
(If you did want to go down the preprocessor path, there's a GNAT preprocessor called gnatprep that can be used, but I don't like conditional compilation either, so I'd recommend staying with the separate subdirectories approach.)
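For illustration, a minimal GNAT project file sketch of the subdirectory approach, assuming per-platform directories like src/linux and an external variable named OS (both names are hypothetical):

project Serial_App is
   type OS_Type is ("linux", "windows", "macos");
   OS : OS_Type := external ("OS", "linux");
   --  common code in src/common, one Serial.Connector body per platform
   for Source_Dirs use ("src/common", "src/" & OS);
end Serial_App;

Building with gnatmake -P serial_app.gpr -XOS=windows (or with plain -I arguments, if you prefer to avoid project files) then selects the matching package body.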
You could use the GNAT project file package Naming. An extract from a real example, where I wanted to choose between two versions of a package in the same directory (one with debug additions), is:
...
type Debug_Code is ("no", "yes");
Debug : Debug_Code := External ("DEBUG", "no");
...
package Naming is
   case Debug is
      when "yes" =>
         for Spec ("BC.Support.Managed_Storage")
           use "bc-support-managed_storage.ads-debug";
         for Body ("BC.Support.Managed_Storage")
           use "bc-support-managed_storage.adb-debug";
      when "no" =>
         null;
   end case;
end Naming;
To select the special naming, either set the environment variable DEBUG to yes or build with gnatmake -XDEBUG=yes.
Yes, the generally accepted way to handle this in Ada is to do it with different files, selected by your build system. GNU make is about as multi-platform as it gets, and can allow you to build different files (with different names and/or directories and everything) under different configurations.
As a matter of fact, I find this a superior way (over #ifdefs) to do it in C as well.
