Coming from a Windows background here.
Is it acceptable practice for GUI Linux applications to store their data files (not user-specific) at hard-coded locations (e.g. /etc/myapp/stuff)? I couldn't find any syscalls that would return the preferred directory for app data. Is there a convention out there as to what goes where?
/opt/appname/stuff according to the Linux Filesystem Hierarchy Standard
Your distribution's packaging system likely provides ways to handle common installation paths. What distribution are you using?
Generally speaking, yes there is a convention. On most Linux systems, application configuration files are typically located at /etc/appname/. You'll want to consult the LSB (Linux Standard Base) and the Linux FHS (Filesystem Hierarchy Standard) for their respective recommendations.
Also, if you are targeting your application at a specific Linux distro, then that distro vendor probably has their own recommendations as far as packaging and related conventions are concerned. You'll want to look at your distro vendor's developer pages for more information.
Configuration files for processes with elevated privileges are generally stored in /etc. Data files for processes with elevated privileges (web server, mail server, chat server, etc.) are generally stored in /var. And that's where consistency ends. The usual pattern is to pick the base location (/etc or /var), create an appname subdirectory for your app, and continue from there as necessary.
If you're not a system daemon with elevated privileges, your only consistent choice is a dot directory in the launching user's home directory. The Free Desktop (XDG) Base Directory Specification uses ~/.config for per-user configuration, ~/.local/share for per-user data, and ~/.cache for replaceable cached or generated data you need to save.
Looking at my Home Directory, a few key dot directories I have are:
~/.cache
~/.config
~/.irssi
~/.maildir
~/.mozilla
~/.kde
~/.ssh
~/.vnc
[edit]
While not a syscall, the XDG specifications I reference are at http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html
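For example, a per-user application would resolve those directories with the fallbacks the spec defines; here is a minimal shell sketch (myapp is just a placeholder name):
# Per-user config, cache and data locations, falling back to the spec's defaults when unset
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myapp"
cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}/myapp"
data_dir="${XDG_DATA_HOME:-$HOME/.local/share}/myapp"
mkdir -p "$config_dir" "$cache_dir" "$data_dir"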
There are certain conventions.
System-wide, readable/editable (text-based) configuration files go in /etc/appname/.
System-wide, per-machine binary data files that change (e.g. binary databases) go in /var/*/appname/ - /var/cache/appname/, /var/spool/appname/ and /var/lib/appname/ are the most common.
System-wide binary data files that could notionally be shared between machines (e.g. graphics and sound files) go in /usr/share/appname/.
The full paths that Unix/Linux/GNU applications use to store config files and other data are usually set when an application is configured prior to compilation. These paths then get hard-coded into the compiled binary (you can see examples of this by running strings(1) over some existing executables).
That is, these types of paths are build-time configurable, not run-time configurable by default. Many apps will support command line options to specify where a configuration file is, and that configuration file will usually contain paths for other application resources. This allows an application to run with minimal configuration (built-in paths) but also allows a site to customise the paths completely.
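As a sketch of what that looks like in practice, assuming an autoconf-style build (the binary inspected below is just an example):
# Paths are chosen at configure time and baked into the binary
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
make && sudo make install
# You can spot such hard-coded paths in existing executables
strings /usr/sbin/sshd | grep '^/etc/'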
Under Linux, only the basic services (opening a file, doing networking and interprocess communication etc) are provided as system calls. The rest is done using libraries.
If you are coding a GUI application, you should look into your toolkit's documentation to see if it provides a mechanism for managing defaults. Both KDE and GNOME have one, for instance.
I have an account on a set of machines which have separate processors, memory, and storage but share the same home directory. So, for instance, if I ssh into either machine, they will both source the same bashrc file.
The sysadmin does not install all of the software I wish to use, so I have compiled some from source and stored it in bin, lib, etc. directories in my home directory, and changed my PATH and LD_LIBRARY_PATH variables to include these. Each machine, at least up until recently, had a different operating system (or at least version) installed, and I was told that code compiled on one machine would not necessarily give the same result on the other. Therefore my (very hacky) solution was the following:
Create two directories in $HOME: ~/server1home and ~/server2home, each with its own set of bin, lib, etc. directories containing separately compiled libraries.
Edit my .bashrc to check which server I am on and set the path variables to look in the correct directories for binaries and libraries for the server.
Recently we moved buildings and the servers were rebooted, and I believe they both run the same OS now. Most of my setup was broken by the reboot, so I have to redo it. In principle, I don't need anything different on each machine; they could be identical apart from differences in the number of processors and amount of memory available to run code. They don't have the same hardware, as far as I'm aware, so I still don't know if they can safely run the same binaries. Is such a setup safe for code that needs to be numerically precise?
Alternatively, I would redo my hack differently this time. I had a lot of dotfiles that still ended up going into $HOME, rather than my serverXhome directories and the situation was always a little messy. I want to know if it's possible to redefine $HOME on login, based on hostname and then have nothing but the two serverXhome directories inside the shared $HOME, with everything duplicated inside each of these new home directories. Is this possible to set up without administrative privileges? I imagine I could make a .profile script that runs on login and changes $HOME to point at the right directory and then sources the new .bashrc within that directory to set all the rest of the environment variables. Is this the correct way of going about this or are there some pitfalls to be wary of?
TL;DR: I have the same home directory for two separate machines. Do binary libraries and executables compiled on one machine run safely on the other? Otherwise, is there a strategy to redefine $HOME on each machine to point to a subdirectory of the shared home directory and have separate binaries for each?
PS: I'm not sure if this is more relevant in superuser stackexchange or not. Please let me know if I'd have better luck posting there.
If the two machines have the same processor architecture, in general compiled binaries should work on both.
Now there are a number of factors that come into play, and the result will depend hugely on the type of programs that you want to run, but in general, if the same shared libraries are installed, your programs will work identically.
On different distributions, or different versions of a given distribution, it is likely that the set of installed libraries, or their versions, will differ, which means that your programs will work on the machine on which they were built, but probably not on another one.
If you can control how they are built, you can rebuild your applications to have static linkage instead of dynamic, which means that they will embed all the libraries they need when built, resulting in a much bigger executable, but providing a much improved compatibility.
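If you control the build, a minimal sketch of what static linking looks like with gcc (file names are illustrative):
# Link statically so the required libraries are embedded in the executable
gcc -static -o mytool mytool.c
# A statically linked binary reports "not a dynamic executable" here
ldd ./mytool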
If the above doesn't work and you need to use a different set of programs for each machine, I would recommend leaving the $HOME environment variable alone, and only change your $PATH depending on the machine you are on.
You can have a short snippet in your .bashrc like so:
# Prepend per-host bin and lib directories; the ${VAR:+...} form avoids a stray ":" when LD_LIBRARY_PATH is unset
export PATH="$HOME/$(hostname)/bin:$PATH"
export LD_LIBRARY_PATH="$HOME/$(hostname)/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
Then all you need is a folder bearing the machine's hostname for each machine you can connect to. If several machines share the same operating system and architecture, making symbolic links will save you some space.
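For example, with two hosts named server1 and server2 (names are illustrative) that turn out to run the same OS and architecture:
mkdir -p "$HOME/server1/bin" "$HOME/server1/lib"
# server2 can reuse server1's binaries, so just point its directory at server1's
ln -s "$HOME/server1" "$HOME/server2"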
Are the settings or configuration specifics of a printer on a *nix system using CUPS stored in a file? My assumption is yes, as *nix systems seem to use files for everything, as opposed to a registry system as Windows does. If so, where are such files located? Can their file permissions be modified, and if so, what could cause such a thing to occur in a non-manual way?
This question relates to one of my other questions in helping to explore a single, individual theory toward an answer there, but is decidedly separate.
Check /etc/cups; for printers the file is printers.conf.
They can have their permissions modified, since they usually belong to the lp group, not a single user. Check cron jobs, system updates and any other CUPS interface that your distribution provides.
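A quick way to inspect the file and its current permissions (exact ownership and mode vary by distribution):
ls -l /etc/cups/printers.conf
# cupsd rewrites this file itself when printers change, so also check when it was last modified
stat /etc/cups/printers.conf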
The project I am currently working on has a mix of legacy software and new development. The new dev work is being done on Linux and we have created a large domain on the Linux side. However, all of the legacy software must remain on Windows...
I haven't found any documentation indicating a mixed domain is possible although I can't see why the node managers or servers would have a problem communicating.
Can I add a Windows managed server to my Linux domain? Has anyone ever tried this? I can leave the domains separate if need be (although management won't be happy) but I was tasked with consolidating everything into a single domain.
If you don't have an exact answer, any links to documentation would be appreciated.
I do not have practical experience with running such a mixed-OS domain, but I do not see why it should not conceptually work.
WebLogic runs on Java, so that should work on both platforms.
The only problem that you may experience is that if the domain was created for a particular OS, its startup scripts will either be .sh for Linux or e.g. .cmd for Windows. In this case, you will probably need to get startup scripts for the particular OS and slightly modify them to match your target domain.
WebLogic is supported on both platforms, and startup scripts are provided for both Windows and Linux.
The protocol the servers use to communicate is not, as far as I know, platform specific, so there's no reason for this not to work.
There doesn't seem to be any documentation on this however, so you need to just go for it.
We've got this up and running... it wasn't all that bad. Here's what we did:
Create a domain on Linux (NFS)
Add Weblogic .cmd start/stop scripts into <domain home>/bin folder
On Windows side:
Create a symlink under C: to the NFS domain location
mklink /D folder_name \\OUR-NFS01\path\to\domain
Update nodemanager.properties and nodemanager.domains to use the symlink path
Update nodemanager.properties to use our startManagedWebLogic.cmd for the start script
Update all of the .cmd files to reference the symlink path to the domain (e.g. DOMAIN_HOME)
Make sure in nodemanager.properties and .cmd files we reference the correct Windows JAVA_HOME location
Make sure any paths in the admin console (e.g. log file location) for the Windows managed server also reference the symlink path
That was it. Once we had the Windows nodemanager up and running we were able to start a managed server on the Windows host.
Side note: we had issues running the Node Manager as a Windows service when using mapped network drives. The service would not always see the mapped drive. That is why we chose to use a symlink instead (and it seems cleaner to me anyway).
The most recent WebLogic documentation is quite clear on this. A domain can mix hardware, operating system and JVM as long as all of them are supported:
Hardware, Operating System, and JVM Platform Compatibility
Oracle does recommend using homogeneous clusters, as managed servers are expected to be equivalent to each other; if this is not the case, it may negatively impact load balancing and performance (see the above link).
For some unfortunate reasons, I have to convert a proprietary and binary library from a one-user per workstation to a multi-user per workstation setup.
Current setup: a user uses a program linked against a library. This library reads a system-wide configuration file (using a hard-coded path, i.e. /usr/local/thelib/main.conf) which itself contains paths to several working directories. The working directories in turn contain a bunch of user data files.
Desired outcome: being able to manage several users on the same workstation. Of course, a user shall not be able to read or alter any other user's data through the library, which should be taken care of by Unix permissions if I manage to feed the library a different working directory for each user.
The library might be used by several users at the same time, so symlinking the configuration file in /usr/local at runtime is not an option.
I was thinking of using FUSE to provide different content for the file /usr/local/thelib/main.conf depending on an environment variable or the current Unix user. The environment variable would then be used as a switch inside the code producing the configuration file.
I'm comfortable using Python, Perl or C.
The workstation is running an up-to-date GNU/Linux Debian or Ubuntu distribution with a pretty recent kernel.
So. What do you think :
would you use FUSE ?
would you produce another kind of wrapper (using chroot(2) was suggested below by janneb)?
use something else allowed by Linux ?
I know I could probably produce something functional, but I'd like the community's advice since I don't want to reinvent the wheel right now.
Thanks.
Florian
You could use LD_PRELOAD to load a small stub that intercepts open() calls and opens ~/.main.conf instead (assuming the program is dynamically linked). Then, in your application startup routine, check that LD_PRELOAD is set to the correct value and, if not, restart the app with the correct environment.
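A rough sketch of how such a stub would be built and activated (confredir.c and theapp are placeholder names; inside the stub you would use dlsym(RTLD_NEXT, "open") to forward every path except the hard-coded /usr/local/thelib/main.conf):
# Build the interception stub as a shared object
gcc -shared -fPIC -o "$HOME/lib/confredir.so" confredir.c -ldl
# Launch the application with the stub preloaded for this user only
LD_PRELOAD="$HOME/lib/confredir.so" theapp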
A simple way would be for the app to call chroot() before calling the library init function(s). E.g. if you chroot into $HOME/theapp, then each user can have a private config file of their own in $HOME/theapp/usr/local/thelib/main.conf as well as private working dirs somewhere under $HOME/theapp.
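A sketch of the per-user tree that approach assumes (theapp is a placeholder; the application itself still has to call chroot("$HOME/theapp") before the library reads its config, which normally requires root privileges):
# Replicate the hard-coded path inside each user's private tree
mkdir -p "$HOME/theapp/usr/local/thelib"
cp /usr/local/thelib/main.conf "$HOME/theapp/usr/local/thelib/"
# Edit the copy so its working-directory paths point somewhere under $HOME/theapp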
I have a question. I am doing a project related to the system restore concept in Linux. There I am planning to perform application-wise rollback in case of failure. Is there any way to figure out all the files used by an application on the system?
OK, I will make it a little clearer. For instance, consider the Firefox application. When it is installed, many files are written from the .deb file to folders like /etc, /usr, /opt, etc. In Windows all the files are installed in one folder under Program Files, while in Linux they are not. So is there any way to figure out which files belong to a piece of software?
Thanks.
Well, this can cover several things.
If you mean: which files are provided by the installation of your application? Then the answer is: use decent package management, provide your software as an rpm/deb/... package, and the package manager will take care of the rest.
If you mean: which libraries are referenced by your application? Then you can use ldd; this will tell you which dynamic libraries are used when executing the application.
If you mean: which files is my application actively using? Then take a look at the output of lsof (lsof = list open files), or alternatively ls /proc/<PID>/fd/; this will show all file descriptors open by your application (files, sockets, pipes, ttys, ...).
Or you could use all of the above.
One thing you can't track (unless you log this yourself) is which files have been created by your application during its lifetime.
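For example (firefox is just an illustrative package/process name; substitute your distribution's equivalent):
# Files installed by a package
dpkg -L firefox    # Debian/Ubuntu
rpm -ql firefox    # RPM-based distros
# Dynamic libraries an executable links against
ldd /usr/bin/firefox
# Files currently held open by processes whose command name starts with "firefox"
lsof -c firefox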
Determining all the files installed along with the app depends on the package manager. All the ones I've dealt with (apt, pacman) have had this capability.
To determine all the files currently open by an application, use lsof.
Well, that depends ...
Most Linux systems have some kind of package management software, like aptitude in Debian and Ubuntu. There, you have information about what belongs to a package, and you might be able to use that information. That does not cover files created during the runtime of apps, though.
If you are using an RPM-based distro,
# rpm -Uvh --repackage pkg-1-1.i386.rpm
will repackage the old files and upgrade in a transaction so you can later roll back if something went wrong. To roll back to yesterday's state, for example:
# rpm -Uvh --rollback yesterday
See this article for other examples.