I'm writing a program for Linux that stores its data and settings in the home directory (e.g. /home/username/.program-name/stuff.xml). The data can take up 100 MB or more.
I've always wondered what should happen with the data and the settings when the system admin removes the program. Should I then delete these files from every (!) home directory, or should I just leave them alone? Leaving hundreds of MB in the home directories seems quite wasteful...
I don't think you should remove user data, since the program could be installed again in the future, or the user might want to move their data to another machine where the program is installed.
Anyway, this kind of thing is usually handled by some removal script (it can be make uninstall; more often it's an uninstallation script run by your package manager). Different distributions have different policies. Some package managers have an option to specify whether to remove logs, configuration files (from /etc) and so on. None touches files in user homes, as far as I know.
What happens if the home directories are shared between multiple workstations (i.e. NFS-mounted)? If you remove the program from one of those workstations and then go blasting the files out of every home directory, you'll probably really annoy the people who are still using the program on other workstations.
I wrote a script that runs several executable updates from a shared network folder. Several separate machines must run these updates.
I would like to archive these updates once they have run. The dilemma, as you may see, is that if the first machine runs an update and archives the executable,
the rest of the connected machines won't run it, because it will no longer be in the working directory. Any ideas?
It took me a while to understand what you meant by "archiving"; you probably mean moving the files to another folder on the network share. Also, the title should definitely be changed; I accidentally marked it as OK in the Review Triage system.
You probably want to assign an ID to each machine, then have each of them create a new file once it finishes the installation (e.g. an empty finished1.txt for the PC with ID 1, finished2.txt for PC 2, etc.). Then one "master" PC should periodically scan for such files and, once it finds all the ones it expects, delete/move/archive the installers. It may be a good idea to add timeout handling to the script on the master PC, so that if one of the PCs gets stuck, you get notified in some way.
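A minimal sketch of that idea in shell, run on the master PC (the mount point, machine IDs, file names and installer glob are all just example placeholders):

#!/bin/sh
# Wait until every worker machine has dropped its marker file, then archive
# the installers and reset the markers for the next round.
SHARE=/mnt/updates          # assumed mount point of the shared network folder
WORKERS="1 2 3"             # IDs of the machines expected to finish
TIMEOUT=3600                # seconds to wait before giving up and notifying

elapsed=0
while :; do
    missing=0
    for id in $WORKERS; do
        [ -f "$SHARE/finished$id.txt" ] || missing=1
    done
    if [ "$missing" -eq 0 ]; then
        mkdir -p "$SHARE/archive"
        mv "$SHARE"/*.exe "$SHARE/archive"/    # adjust the glob to your installer names
        rm -f "$SHARE"/finished*.txt           # reset the markers
        break
    fi
    if [ "$elapsed" -ge "$TIMEOUT" ]; then
        echo "update did not finish on all machines" >&2   # notify however you prefer
        break
    fi
    sleep 60
    elapsed=$((elapsed + 60))
done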
I have an account on a set of machines which have separate processors, memory, and storage but share the same home directory. So, for instance, if I ssh into either machine, they will both source the same bashrc file.
The sysadmin does not install all of the software I wish to use, so I have compiled some from source and store it in bin, lib, etc. directories in my home directory, changing my PATH and LD_LIBRARY_PATH variables to include these. Each machine, at least up until recently, had a different operating system (or at least a different version) installed, and I was told that code compiled on one machine would not necessarily give the same result on the other. Therefore my (very hacky) solution was the following:
Create two directories in $HOME, ~/server1home and ~/server2home, each with its own set of bin, lib, etc. directories containing separately compiled libraries.
Edit my .bashrc to check which server I am on and set the path variables to look in the correct directories for binaries and libraries for the server.
Lately, we moved buildings and the servers were rebooted, and I believe they both run the same OS now. Most of my setup was broken by the reboot, so I have to redo it. In principle, I don't need anything different on each machine; they could be identical, apart from the fact that there are more processors and memory to run code on. They don't have the same hardware, as far as I'm aware, so I still don't know whether they can safely run the same binaries. Is such a setup safe for running code that needs to be numerically precise?
Alternatively, I would redo my hack differently this time. A lot of dotfiles still ended up going into $HOME rather than my serverXhome directories, and the situation was always a little messy. I want to know if it's possible to redefine $HOME on login based on hostname, and then have nothing but the two serverXhome directories inside the shared $HOME, with everything duplicated inside each of these new home directories. Is this possible to set up without administrative privileges? I imagine I could make a .profile script that runs on login, changes $HOME to point at the right directory, and then sources the new .bashrc within that directory to set the rest of the environment variables. Is this the correct way of going about this, or are there some pitfalls to be wary of?
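Roughly, I imagine something like this in the shared ~/.profile (the hostnames and serverXhome names are just my placeholders):

# Repoint $HOME at a per-host subdirectory of the shared home.
case "$HOME" in
    */server?home) ;;                                   # already redirected, do nothing
    *)
        case "$(hostname -s)" in
            server1) export HOME="$HOME/server1home" ;;
            server2) export HOME="$HOME/server2home" ;;
        esac
        cd "$HOME"
        [ -f "$HOME/.bashrc" ] && . "$HOME/.bashrc"     # pick up the per-host environment
        ;;
esac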
TL;DR: I have the same home directory for two separate machines. Do binary libraries and executables compiled on one machine run safely on the other? Otherwise, is there a strategy to redefine $HOME on each machine to point to a subdirectory of the shared home directory and have separate binaries for each?
PS: I'm not sure whether this would be more relevant on Super User. Please let me know if I'd have better luck posting there.
If the two machines have the same processor architecture, in general compiled binaries should work on both.
Now, there are a number of factors that come into play, and the result will depend hugely on the type of programs you want to run, but in general, if the same shared libraries are installed, your programs will work identically.
On different distributions, or different versions of a given distribution, it is likely that the set of installed libraries, or their versions, will differ, which means that your programs will work on the machine on which they were built, but probably not on the other one.
If you can control how they are built, you can rebuild your applications with static linkage instead of dynamic linkage: they will then embed all the libraries they need at build time, resulting in much bigger executables but much better compatibility.
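For example, with gcc the difference is roughly one flag (the program and library names here are only placeholders):

# Dynamic linking (default): the binary needs compatible shared libraries at run time.
gcc -o myprog myprog.c -lm

# Static linking: the library code is embedded into the (much bigger) binary,
# so it depends far less on what is installed on the target machine.
gcc -static -o myprog myprog.c -lm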
If the above doesn't work and you need to use a different set of programs for each machine, I would recommend leaving the $HOME environment variable alone and only changing your $PATH depending on the machine you are on.
You can have a short snippet in your .bashrc like so:
# Prepend the per-host bin and lib directories, e.g. ~/server1/bin on server1
export PATH="$HOME/$(hostname)/bin:$PATH"
export LD_LIBRARY_PATH="$HOME/$(hostname)/lib:$LD_LIBRARY_PATH"
Then all you need is a folder bearing the machine's hostname for each machine you can connect to. If several machines share the same operating system and architecture, making symbolic links will save you some space.
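As a rough sketch, setting that up could look like this (the hostnames are placeholders):

# One directory per host, matching what the .bashrc snippet above expects
mkdir -p ~/server1/bin ~/server1/lib
mkdir -p ~/server2/bin ~/server2/lib

# If server2 and server3 run the same OS and architecture,
# a symbolic link lets them share one set of binaries
ln -s ~/server2 ~/server3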
I was recently in a situation where the Software Center on my Ubuntu installation was not starting. When I tried to launch it from the console, I found that Python was unable to find Gtk, although I hadn't removed it:
from gi.repository import Gtk,Gobject
ImportError: cannot import name Gtk
I came across a closely related question on Stack Overflow (I am unable to provide a link to the question as of now). The accepted solution (which also worked for me) was to remove the duplicate installation of Gtk from /usr/local, as GObject was present in that directory but Gtk was not.
So I removed it, launched software-center again, and it worked.
While I am happy that the problem is solved, I would like to know if removing files from /usr/local can cause severe problems.
Also, echo $PATH on my console gives:
/home/rahul/.local/bin:/home/rahul/.local/bin:/home/rahul/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/rahul/.local/bin:/home/rahul/.local/bin
which shows that /usr/local/bin is searched before /usr/bin. Should $PATH be modified so that the order of lookup is reversed? If yes, how?
You actually don't want to change that.
/usr/local is a path that, according to the Filesystem Hierarchy Standard, is dedicated to data specific to this host. Let's go back in time a bit: when computers were expensive, to use one you had to go to some lab, where many identical UNIX(-like) workstations were found. Since disk space was also expensive, and the machines were all identical and had more or less the same purpose (think of a university), they had /usr mounted from a remote file server (most likely via the NFS protocol), which was the only machine with disks big enough to hold all the applications usable from the workstations. This also allowed for ease of administration: adding a new application or upgrading another to a newer version could be done just once on the file server, and all machines would "see" the change instantly. This is why this scheme persisted even after bigger disks became inexpensive.
Now imagine that, for whatever reason, a single workstation needed a different version of an application, or maybe a new application was bought with only a few licenses and could thus be run only on selected machines: how do you handle this situation? This is why /usr/local was born, so that single machines could somehow override network-wide data with local data. For this to work, of course, /usr/local must point to a local partition, and things in that path must come before things in /usr in all search paths.
Nowadays, Linux machines are very often stand-alone, so you might think this scheme no longer makes sense, but you would be wrong: modern Linux distributions have package management systems which, more or less, play the role of the above-mentioned central file server. What if you need a different version of an application than what is available in the Ubuntu repository? You could install it manually, but if you put it in /usr, it could be overwritten by an update performed by the package management system. So you put it in /usr/local instead, as this path is usually guaranteed not to be altered in any way by the package manager. Again, it should be clear that in this case you want anything in /usr/local to be found before anything in /usr.
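If you want to check which copy of a command wins under the current lookup order, something like this will show every match along $PATH:

# List every match for a command, in the order the shell searches for them
type -a python

# Print the search order itself, one directory per line
echo "$PATH" | tr ':' '\n'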
Hope you get the idea :).
I've got a few processes that talk to each other through named pipes. Currently, I'm creating all my pipes locally, and keeping the applications in the same working directory. At some point, it's assumed that these programs can (and will) be run from different directories. I need to create these pipes I'm using in a known location, so all of the different applications will be able to find the pipes they need.
I'm new to working on Linux and am not familiar with the filesystem structure. In Windows, I'd use something like the AppData folder to keep these pipes. I'm not sure what the equivalent is in Linux.
The /tmp directory looks like it could probably work just fine. I've read in a few places that it's cleared on system shutdown (and that's fine, I have no problem re-creating the pipes when I start back up), but I've seen a few other people say they're losing files while the system is up, as if it's cleaned periodically, which I don't want to happen while my applications are using those pipes!
Is there a place better suited for application-specific stores? Or would /tmp be the place I'd want to keep these (since they are, after all, temporary)?
I've seen SaltStack using /var/run. The only problem is that you need root access to write into that directory, but let's say that you are going to run your process as a system daemon. SaltStack creates /var/run/salt at the installation time and changes the owner to salt so that later on it can be used without root privileges.
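The install-time step described above amounts to something like this, run once as root (the salt user and directory name are SaltStack's; the exact command is just an illustration):

# Create the runtime directory and hand it over to the service user,
# so the daemon can create its sockets/pipes there without root privileges.
install -d -o salt -g salt -m 0755 /var/run/salt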
I also checked the Filesystem Hierarchy Standard and, even though it's not that important here, even it says:
System programs that maintain transient UNIX-domain sockets must place them in this directory.
Since named pipes are something very similar, I would go the same way.
On newer Linux distros with systemd, /run/user/<userid> (created by pam_systemd during login if it doesn't already exist) can be used for opening sockets and putting .pid files there, instead of /var/run, where only root has access. Also note that /var/run is a symlink to /run, so /var/run/user/<userid> can also be used. For more info check out this thread. The idea is that system daemons should have a /var/run/<daemon name>/ directory created during installation with proper permissions and put their sockets/pid files in there, while daemons run by the user (such as pulseaudio) should use /run/user/<userid>/. Another option is /tmp or /var/tmp.
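As a sketch, a user-level process could create its pipes like this (the application and pipe names are placeholders):

# Prefer the per-user runtime directory set up by pam_systemd; fall back to /tmp.
RUNDIR="${XDG_RUNTIME_DIR:-/tmp}/myapp"
mkdir -p "$RUNDIR"
[ -p "$RUNDIR/control.pipe" ] || mkfifo "$RUNDIR/control.pipe"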
I have a desktop computer and a notebook on which I modify the same files with the same programs. I want to automatically synchronize the changes I make on either of them.
I'm wondering if there is some already-written script that does this job or, if there isn't, what commands I could use to compare the timestamps of the files of interest on both computers via ssh and replace the older ones with the newer ones.
Example:
I modify the /home/text.txt file on the notebook and, before shutting it off, I want to execute a script that automatically saves the text.txt file to my desktop computer's /home/text.txt, because one is newer than the other.
The bi-directional synchronizer unison comes to mind, and it doesn't require internet access (although it does require a network connection between your two systems).
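An invocation might look roughly like this, assuming the desktop is reachable over ssh under the name desktop and the files live under /home/user (both names are placeholders):

# Propagate the newer version of each file in either direction between the two roots
unison /home/user ssh://desktop//home/user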
The easiest solution would be to use Dropbox.
Install it on both, create an account, and login. Problem solved.