What is the difference between .cfg and .conf in Linux?

There seem to be many configuration files in Linux; some files have the extension .cfg and some have .conf.
I'm a little confused: what is the difference between .cfg and .conf?

There's no particular meaning. Both are short for "configuration". There's no real standard for what configuration files should be called.
Apparently the authors of some programs preferred .conf, and others preferred .cfg.
If you need to create a configuration file for a particular program, you just have to use the name that program expects.

Related

How to pack files into one executable file for Linux and Windows?

I'm creating a desktop app in Golang with the Muon UI (using Ultralight instead of Chromium) and cross-building my app for Linux and Windows. For now the app works fine, but it requires the Ultralight libraries (*.dll for Windows and *.so for Linux). I want to distribute my app as a single executable file. How can I create two executable files? The first file, for Linux, should include the main Linux executable and only the *.so libraries; the second should include the main Windows executable and only the *.dll libraries. How can I do this?
Are there any CLI utilities for this (for use in GitLab CI inside Docker, for example)? Or maybe I can do this via Golang (for example using the embed package: can I embed the libraries into the exe file so that it can still run)?
Or can I use cgo to link the dynamic libs statically into the binary file?
The honest answer would be: "With great difficulty, lots of pain, blood and tears."
The somewhat longer answer is that a precompiled DLL/.so may contain slightly more than a mere static library. Is it possible to "convert" a DLL/.so into a static library? Somewhat. It boils down to dumping its contents into object files, reverting all the relocation entries, and possibly dealing with versioned symbols and weak symbols. No, there are no kitchen-sink utilities out there doing all of that for you at the executable-binary level.
If you can limit yourself to Linux, you may want to look into Flatpak. What it does is wrap everything up into a sort of "self-extracting archive", which upon launch transparently and invisibly unpacks itself onto an in-situ temporary mount point (which you won't see from the rest of the system).
Now, one option would be to build all the dependencies of your program yourself, arranging for those builds to be created as static libraries. In that case you're no longer dealing with DLLs. However, some libraries do not want to be built for static linking, so your mileage may vary.
Truth be told: why is distributing multiple files any issue at all? On Linux/*BSD you must ship separate icon and .desktop files anyway, so that your program shows up in the desktop application menus. Yes, it'd be nice if, instead of dealing with XDG desktop entry files, we had the option to place all of that information into a special read-only section (let's call it .xdgdata) with some well-known symbol names, so that we could have truly single-file distributable executables.
My honest suggestion: Don't sweat about it. Just ship the whole bunch of files and don't worry too much about "how this looks".
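For completeness, the embed route mentioned in the question is mechanically possible: you can compile the library bytes into the Go binary with go:embed and write them back out at startup. It does not remove the dynamic-linking pain described above, though; the loader still has to find the unpacked file (for example via an rpath of $ORIGIN or LD_LIBRARY_PATH). A hypothetical sketch, where the library file name and its location next to the executable are assumptions:

package main

import (
	_ "embed"
	"fmt"
	"os"
	"path/filepath"
)

// The file name below is an assumption: a Linux build would embed the .so,
// a Windows build (behind a build tag) would embed the .dll instead.
//go:embed libUltralight.so
var ultralightLib []byte

// unpackLib writes the embedded library next to the executable, unless it is
// already there, and returns the directory it lives in.
func unpackLib() (string, error) {
	exe, err := os.Executable()
	if err != nil {
		return "", err
	}
	dir := filepath.Dir(exe)
	target := filepath.Join(dir, "libUltralight.so")
	if _, err := os.Stat(target); os.IsNotExist(err) {
		if err := os.WriteFile(target, ultralightLib, 0o755); err != nil {
			return "", err
		}
	}
	return dir, nil
}

func main() {
	dir, err := unpackLib()
	if err != nil {
		fmt.Fprintln(os.Stderr, "unpacking embedded library failed:", err)
		os.Exit(1)
	}
	fmt.Println("Ultralight library unpacked to", dir)
	// The real application would initialise the Muon/Ultralight UI here.
}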

Is there a data: URI-like construct for paths on Linux?

If you "open" an URI like data:text/html,<p>test</p>, the opened "file" contains <p>test</p>.
Is there a corresponding approach to apply this principle to Linux paths?
Example:
I want a path to a "virtual file" that "contains" example-data, ideally without actually creating this file.
So I'm basically looking for something to put in place of some_special_path_results_in, so that opening /some_special_path_results_in/example-data yields a "file" that just "contains" example-data.
You can use process substitution in bash; the <(...) construct expands to a file name such as /dev/fd/63 that the command can open and read from.
some_command <(printf '%s' '<p>test</p>')
I want a path to a "virtual file" that "contains" example-data, ideally without actually creating this file.
Maybe you should consider using tmpfs.
On Linux, creating a file is a very common and basic operation.
Why can't you create some "temporary" file? Or use a FUSE filesystem?
You could technically write your own kernel module providing a new file system.
Be aware that files are mostly inode(7)-s (see stat(2)). They do have some meta data (but no MIME types). And any process can try to open(2) a file (sometimes, two processes are opening or accessing the same file). See path_resolution(7) and credentials(7).
Maybe you want pipe(7), fifo(7) or unix(7) sockets.
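To make the fifo(7) suggestion concrete, here is a hypothetical Go sketch: it creates a named pipe, feeds it example-data from a goroutine, and any process that opens that path reads the data, even though no regular file containing it ever exists on disk. The /tmp location is just an assumption for the example.

package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	path := "/tmp/example-fifo" // assumed location
	if err := syscall.Mkfifo(path, 0o600); err != nil && !os.IsExist(err) {
		panic(err)
	}
	go func() {
		// Opening a FIFO for writing blocks until a reader shows up.
		w, err := os.OpenFile(path, os.O_WRONLY, 0)
		if err != nil {
			panic(err)
		}
		defer w.Close()
		fmt.Fprint(w, "example-data")
	}()
	// Simulate the consumer; in practice this would simply be another
	// program doing open("/tmp/example-fifo") and reading it.
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	fmt.Printf("read %q\n", data)
}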
Read also Advanced Linux Programming, syscalls(2), and a good textbook on operating systems, and see the KernelNewbies, Linux From Scratch, and Linux BootPrompt websites.
Technically, Linux is open source: you are allowed to download, study, improve, and recompile its source code (the kernel is free software, as are GNU libc, GCC, etc.).
PS. Take into account legal software licensing considerations. Ask your lawyer to explain the GPL licenses to you.

Best practice for paths defined in cross platform config files

I'm building a node application that has config files that are to be edited by users of the application, and they have file paths in them.
These config files will be used in Windows, Linux and MacOSX. For example, a project using this application might be developed in both Windows and MacOSX, but the config files are the same.
What is the best practice for the format of the paths in this case?
I have a couple of possibilities:
Force POSIX paths in the config files, and convert them to the current platform when handling them. The problem here is that Windows users might not like having paths in a format different from the one they are used to.
Allow both formats in the config files and convert them to the current platform when parsing. I'm afraid this might lead to parsing issues. It might also lead to weird, mixed config files.
I think there's a lot of software out there that had the same dilemma, so I'm wondering if there's some best practice out there.
Thank you for your help!
Edit: paths are only going to be relative, no absolute paths. If someone puts an absolute path, then that config can't be used in different OSs anyway, so it's ok.
Converting between / and \ on the fly should not be a big issue; dealing with the root of the path is where things get problematic.
I would suggest that you add support for some kind of string expansion so that users can specify paths relative to known locations. Something like this perhaps:
imagedir=%config%/myimages
fallbackimagedir=%appdir%/images
You should also try to deal with full paths.
I would suggest that you convert Windows drive letter paths like c:\foo to /c/foo when reading the config on POSIX systems. This is easy since the string length is the same but it does assume that /c has been set up to be something useful.
Reading a POSIX path on Windows is trickier, when given a path that begins with / you have to prefix it with something. What that something is, is up to you, perhaps the drive letter root of the drive where your application is installed.
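Here is a sketch of both ideas, placeholder expansion and slash/drive-letter conversion, written in Go; the question concerns a Node app, but the logic is the same in any language, and the %config%/%appdir% names simply follow the example above rather than any standard.

package main

import (
	"fmt"
	"path/filepath"
	"runtime"
	"strings"
)

// expand replaces %name% placeholders with directories the application knows.
func expand(p string, vars map[string]string) string {
	for name, dir := range vars {
		p = strings.ReplaceAll(p, "%"+name+"%", dir)
	}
	return p
}

// normalise rewrites a config-file path for the current platform.
func normalise(p string) string {
	if runtime.GOOS == "windows" {
		// Forward slashes become backslashes; a bare leading "/" would
		// still need an anchor of your choosing (e.g. the install drive).
		return filepath.FromSlash(p)
	}
	// POSIX: map a drive-letter path such as c:\foo to /c/foo ...
	if len(p) >= 2 && p[1] == ':' {
		p = "/" + strings.ToLower(p[:1]) + p[2:]
	}
	// ... and use forward slashes throughout.
	return strings.ReplaceAll(p, `\`, "/")
}

func main() {
	vars := map[string]string{"config": "/etc/myapp", "appdir": "/opt/myapp"}
	fmt.Println(normalise(expand("%config%/myimages", vars)))
	fmt.Println(normalise(`c:\foo\images`))
}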

Is it possible for the same file to exist in more than one directory?

Just a simple question, born out of learning about file systems:
Is it possible for a single file to simultaneously exist in two or more directories?
I'd like to know if this is possible in Linux as well as Windows.
Yes, you can do this with either hard or soft links (and maybe on Windows with shortcuts; I'm not sure about that). Note this is different from making a copy of the file! In both cases, you only store the file once, unlike when you make a copy.
In the case of hard links, the same file (on disk) is referenced in two different places. You cannot distinguish between the 'original' and the 'new one'. If you delete one of them, the other is unaffected; the file is only actually deleted when the last "reference" to it is removed. An important detail is that, because of the way hard links work, you cannot create them for directories.
Soft links, also referred to as symbolic links, are a bit like shortcuts in Windows, but at a lower level. If you open one for reading or writing, you'll actually operate on the target file, but you can still distinguish between the file itself and the soft link that points to it.
In Windows, soft links are fairly uncommon, but they are supported (I don't know the filesystem APIs offhand, but the built-in mklink command creates them, much like ln -s on Unix).
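For illustration, both kinds of link can be created with Go's standard library (the same operations the ln and ln -s commands perform); note that creating symlinks on Windows may require elevated privileges or developer mode.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Start from one regular file.
	if err := os.WriteFile("original.txt", []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}
	// Hard link: a second directory entry for the very same inode.
	if err := os.Link("original.txt", "hard.txt"); err != nil {
		panic(err)
	}
	// Soft (symbolic) link: a small file that merely points at a path.
	if err := os.Symlink("original.txt", "soft.txt"); err != nil {
		panic(err)
	}
	// Removing the original leaves the hard link intact, but the symlink
	// now dangles.
	os.Remove("original.txt")
	if data, err := os.ReadFile("hard.txt"); err == nil {
		fmt.Printf("hard link still readable: %q\n", data)
	}
	if _, err := os.ReadFile("soft.txt"); err != nil {
		fmt.Println("symlink is now dangling:", err)
	}
}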

Recommended FHS compliant application test/install workflow under Linux?

I'm in the process of switching to Linux for development, and I'm puzzled about how to maintain good FHS compliance in my programs.
For example, under Windows I know that all the resources (bitmaps, audio data, etc.) my program needs can be found via paths relative to the executable, so whether I'm running the program from my development directory or from an installation (under "Program Files", for example), it will be able to locate all of its files.
Now, under Linux, I see that usually the executable goes under /usr/local/bin and its resources under /usr/local/share. (And the truth is that I'm not even sure of this.)
For convenience reasons (such as version control) I'd like to have all the files pertaining to the project under a same path, say, for example, project/src for the source and project/data for resource files.
Is there any standard or recommended way to let me just rebuild the binary for testing and use the files on the project/data directory, while also being able to locate the files when they are under /usr/local/share?
I thought, for example, of setting up a symlink under /usr/local/share pointing to my resources dir and then just hardcoding that path inside my program, but I feel it's quite hackish and not very portable.
I also thought of running an install script that copies all the resources to /usr/local/share every time I change or add resources, but that doesn't feel like a good way to do it either.
Could anyone tell me, or point me to a resource that explains, how this issue is usually resolved?
Thanks!
For convenience reasons (such as version control) I'd like to have all the files pertaining to the project under a same path, say, for example, project/src for the source and project/data for resource files.
You can organize your source tree as you wish — it need not bear any resemblance to the FHS layout desired of installed software.
I see that usually the executable goes under /usr/local/bin and its resources on /usr/local/share. (And the truth is that I'm not even sure of this)
The standard prefix is /usr. /usr/local is for, well, "local installations" as the FHS spec reiterates.
Is there any standard or recommended way to let me just rebuild the binary for testing and use the files on the project/data directory
Definitely. Running ./configure --datadir=$PWD/share, for example, is the way to point your build at the data files from the source tree (substitute the proper path), and using something like -DDATADIR="'${datadir}'" in AM_CFLAGS makes the value known to the (presumably C) code. (All of that provided you are using autoconf/automake; similar options may be available in other build systems.)
This sort of hardcoding is what is used in practice, and it suffices. For a development build within your own working copy, having a hardcoded path should not be a problem, and final builds (those done by a packager) will simply use the standard FHS paths.
You could just test a few locations. For example, first check if you have a data directory within the directory you're currently running the program from. If so, just go ahead and use it. If not, try /usr/local/share/yourproject/data, and so on.
For developing/testing, you can use the data directory within your project folder, and for deploying, use the stuff in /usr/local/share/. Of course, you can test for even more locations (e.g. /usr/share).
Basically the requirement for this method is that you have a function that builds the correct paths for all filesystem accesses. Instead of fopen("data/blabla.conf", "w") use something like fopen(path("blabla.conf"), "w"). path() will construct the correct path from the path determined using the directory tests when the program started. E.g. if the path was /usr/local/share/yourproject/data/, the string returned by path("blabla.conf") would be "/usr/local/share/yourproject/data/blabla.conf" - and there is your nice absolute path.
That's how I'd do it. HTH.
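Here is the probing idea sketched in Go (the fopen example above is C, but the approach is language-agnostic); the candidate directories are assumptions to adjust to wherever your build actually installs its data.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

var dataDir string

// initDataDir picks the first candidate directory that exists.
func initDataDir() error {
	candidates := []string{
		"data",                               // running from the source tree
		"/usr/local/share/yourproject/data",  // local install
		"/usr/share/yourproject/data",        // distro package
	}
	for _, dir := range candidates {
		if info, err := os.Stat(dir); err == nil && info.IsDir() {
			dataDir = dir
			return nil
		}
	}
	return fmt.Errorf("no data directory found")
}

// path is the helper described above: it turns a resource name into a path
// under whichever data directory was detected at startup.
func path(name string) string {
	return filepath.Join(dataDir, name)
}

func main() {
	if err := initDataDir(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("using", path("blabla.conf"))
}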
My preferred solution in cases like this is to use a configuration file, along with a command-line option that overrides its location.
For example, a configuration file for a fully deployed application named myapp could reside in /etc/myapp/settings.conf and a part of it could look like this:
...
confdir=/etc/myapp/
bindir=/usr/bin/
datadir=/usr/share/myapp/
docdir=/usr/share/doc/myapp/
...
Your application (or a launcher script) can parse this file to determine where to find the rest of the needed files.
I believe that you can reasonably assume in your code that the location of the configuration file is fixed under /etc/myapp - or any other location specified at compile time. Then you provide a command line option to allow that location to be overridden:
myapp --configfile=/opt/myapp/etc/settings.conf ...
It might also make sense to have options for some of the directory paths as well, so that the user can easily override any of the configuration file settings. This approach has a couple of advantages:
Your users can relocate the application very easily - just by moving the files, modifying the paths in the configuration file and then using e.g. a wrapper script to call the main application with the proper --configfile option.
You can easily support FHS, as well as any other scheme you need to.
While developing, you can have your testsuite use a specially crafted configuration file with the paths being wherever you need them to be.
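As a minimal sketch of this pattern in Go, assuming the simple key=value format shown above (myapp's default /etc/myapp/settings.conf path is just the example from this answer):

package main

import (
	"bufio"
	"flag"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Invoked as: myapp --configfile=/opt/myapp/etc/settings.conf
	configFile := flag.String("configfile", "/etc/myapp/settings.conf",
		"location of the settings file (overrides the compiled-in default)")
	flag.Parse()

	f, err := os.Open(*configFile)
	if err != nil {
		// Abort loudly rather than guessing, as recommended above.
		fmt.Fprintln(os.Stderr, "cannot read configuration:", err)
		os.Exit(1)
	}
	defer f.Close()

	paths := map[string]string{}
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if key, value, ok := strings.Cut(line, "="); ok {
			paths[strings.TrimSpace(key)] = strings.TrimSpace(value)
		}
	}
	fmt.Println("datadir is", paths["datadir"])
}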
Some people advocate probing the system at runtime to resolve issues like this. I usually suggest avoiding such solutions for at least the following reasons:
It makes your program non-deterministic. You can never tell at a first glance which configuration file it picks up - especially if you have multiple versions of the application on your system.
If an installation is ever mixed up, a probing application will happily keep running with whatever it finds first, and so will the user, unaware that anything is wrong. In my opinion, the application should look at one specific and well-documented location and abort with an informative message if it cannot find what it is looking for.
It's highly unlikely that you will always get everything right. There will always be unexpected rare environments or corner cases that the application will not handle.
Such behaviour is against the Unix philosophy. Even command shells that read multiple locations (e.g. /etc/profile and ~/.profile) do so because every one of those locations can hold a file that should be parsed, not because they are probing for a single configuration file.
EDIT:
This method is not mandated by any formal standard that I know of, but it is the prevalent solution in the Unix world. Most major daemons (e.g. BIND, sendmail, postfix, INN, Apache) will look for a configuration file at a certain location, but will allow you to override that location and - through the file - any other path.
This is mostly to allow the system administrator to implement whatever scheme they want or to set up multiple concurrent installations, but it does help during testing as well. This flexibility is what makes it a best practice, if not a proper standard.
