Best version control system for managing home directories - Linux

I have 3 Linux machines, and want some way to keep the dotfiles in their home directories in sync. Some files, like .vimrc, are the same across all 3 machines, and some are unique to each machine.
I've used SVN before, but all the buzz about DVCSs makes me think I should try one - is there a particular one that would work best with this? Or should I stick with SVN?

I've had this problem for years, and I don't think version control is necessarily the right way to go. I've had good success with the Unison file synchronizer, which is designed for the express purpose of maintaining consistent home directories on two machines. I'm currently managing seven replicas with Unison; the details are a bit tricky, but it is a great tool, and if you start with two replicas you will be extremely pleased.
The key difference between Unison and a VCS is that Unison is willing to delay dealing with conflicts that have to be merged. Plus, it gets all the defaults right. And it is fast: I use it daily, over a DSL line, to synchronize about 40GB of data.
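To give a feel for it, a minimal Unison profile might look like this (the host name and file list are examples, not from the original answer):

    # ~/.unison/dotfiles.prf
    root = /home/you
    root = ssh://other-machine//home/you

    # only synchronize the dotfiles you care about
    path = .vimrc
    path = .bashrc

    # skip editor temp files
    ignore = Name *.swp

Running unison dotfiles then syncs just those paths between the two roots.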

Any DVCS would likely work fine. My favorite is Bazaar. It would be easiest to keep your config files in .config, version that, and then symlink as appropriate.
A benefit of DVCS is that you can version the per-machine config files as well, without interfering with versioning global configs.
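A rough sketch of that setup, assuming Bazaar and a couple of illustrative file names:

    cd ~/.config
    bzr init
    bzr add vimrc bashrc
    bzr commit -m "initial import of dotfiles"

    # point the real locations at the versioned copies
    ln -s ~/.config/vimrc  ~/.vimrc
    ln -s ~/.config/bashrc ~/.bashrc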

I've had the same problem, and built a tool on top of Subversion that adds permission, ownership, and SELinux context tracking, keeps the .svn directories out of the actually versioned trees, and adds a concept of layers so you can, for example, track all your config related to development and check it out only on the machines you use for developing.
This has helped me organize my settings much better across the 50+ machines I log into.
Here's the project page. It's still a little rough around the edges, but we also use it at work to version system configuration for our 60+ servers.
In general, any version control system that uses metadata files inside the tree to track things is going to cause you pain at some point when actually using it.

Version control software isn't really great for home directories. Worse, some software doesn't really like the .svn folders or starts to interpret their contents. You could of course try to fix this with some very complex mirroring setup, but that's hard.

Here's a Mozilla developer who's tried to do this: Version controlling my home dir. There are a couple of suggestions in the comments.

git or Mercurial's cheap branching would work great for this situation. I started with Mercurial because it is simpler, but subsequently moved to git.

One way to handle this very flexibly is to have a build directory under revision control, rather than trying to svn your actual home directory (which has its own issues).
Inside this you keep a structure like:
/home/you/code/dotfiles
/home/you/code/dotfiles/dotbashrc
/home/you/code/dotfiles/dotemacs
...
/home/you/code/dotfiles/makefile
and the makefile can contain logic for specializing files (or not); see the sketch below.
This might be heavier than you need, but if your actual setup is complex (I've done this across 3 or 4 different unices at a time) then it's worth doing something like this.
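Assuming the layout above, the specialization step could be a small shell script invoked from the makefile (the per-host file names are hypothetical):

    #!/bin/sh
    # install the shared dotfiles, then append a per-machine fragment when present
    cd "$HOME/code/dotfiles"
    cp dotbashrc "$HOME/.bashrc"
    cp dotemacs  "$HOME/.emacs"

    host=$(hostname -s)
    if [ -f "dotbashrc.$host" ]; then
        cat "dotbashrc.$host" >> "$HOME/.bashrc"
    fi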

I use git for this. So far, I have been able to keep the home directories on several machines synchronized, with no need for branching and merging. Instead, I use git rebase. Conflicts so far have been few and far between and easy to resolve.
I keep files that need to have separate contents out of revision control by putting them into .gitignore.
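That rebase-based cycle looks roughly like this (the remote and branch names are examples):

    git pull --rebase origin master   # replay local commits on top of the remote ones
    # fix any conflicts, then: git rebase --continue
    git push origin master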
I keep configuration files for the following tools in git:
various shells
emacs and applications, e.g.:
  gnus
  BBDB
  emacs-w3m
mutt
screen
various utilities and scripts
I keep notes and such in a subdirectory which has its own git repository.

I would suggest looking into etckeeper if you haven't already. It's designed for versioning configuration files in /etc using a version control system:
etckeeper is a collection of tools to let /etc be stored in a git, mercurial, darcs, or bzr repository. It hooks into apt (and other package managers including yum and pacman-g2) to automatically commit changes made to /etc during package upgrades. It tracks file metadata that revision control systems do not normally support, but that is important for /etc, such as the permissions of /etc/shadow. It's quite modular and configurable, while also being simple to use if you understand the basics of working with revision control.
Although it's designed for /etc I think it would probably also work well (perhaps with some adaptation) for home directories since the basic needs are the same.
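Its standard usage on /etc is just a couple of commands, for example:

    sudo etckeeper init                      # put /etc under version control
    sudo etckeeper commit "initial import"   # record the current state

Pointing it at a home directory instead is where the adaptation mentioned above would come in.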

I know this is an old thread, but I found it while searching for some dotfiles.
My current system uses Subversion. The key thing I did was check out the working copy into ~/.svnhome/ (in hindsight I should have called it .dotfiles or something more generic). I then create symlinks from home to the files I actually use on that computer. For example, my .procmail and .spamassassin folders are only needed on the mail server, so I don't link those on my home server.
The only file that differs between machines is .bashrc, which has some extra lines on my mac for MacPorts. So at the bottom of .bashrc I have it check whether .bashrc_local exists and source it.
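That check can be as simple as the following (a sketch using the file name mentioned above):

    # at the bottom of ~/.bashrc
    if [ -f "$HOME/.bashrc_local" ]; then
        . "$HOME/.bashrc_local"   # machine-specific settings, e.g. MacPorts paths
    fi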
This is the last remaining thing I have using Subversion (everything else is using git, aside from work). The benefit of svn is that, because it's not a DVCS, I don't have to worry about accidentally committing on one server and forgetting to push.
I have considered moving it to git so I could create branches. Using the above example, I would have a branch for my mail server that adds the .procmail and .spamassassin folders but keeps them out of the master branch. But the current system has worked fine for years--since before git even existed--and I don't have any particular motivation to change it now.

Related

Moving a Gitolite (3) server that is using Git Annex

I'm currently in the process of moving a gitolite (3) installation between two servers. Thankfully, this process is pretty well documented on the main project website. However, my repositories make pretty active use of git-annex, which stores data in various remotes as well as on the server itself.
Now, I'm not an expert on git-annex, but I know it works a bit differently from "regular" git, so is there anything one should keep in mind when moving this kind of installation, or does it work just as outlined in the gitolite documentation above?
After quite a bit of research, I couldn't find any details on how this should be done for a git-annex enabled repository, so I decided to simply try it out. Apparently, the steps as written work just fine, even for git-annex content. That said, be cautious as you're moving stuff. Once the new server is ready to take over, make sure the old one is disabled; I don't think git-annex likes finding two identical remotes.
As a minor anecdote: I accidentally forgot to chown/chmod the repositories, but re-running step 6 and onwards fixed that without any issues whatsoever.
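If the old server does have to stay online for a while, one way to keep clients from treating it as a live copy is to mark it dead (the remote name here is hypothetical):

    git annex dead old-server     # tell git-annex this copy is gone for good
    git remote remove old-server  # and drop the ordinary git remote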

Monitor changes done by small program

I need to install small programs I do not fully trust.
Therefore I would like to monitor all files for changes - whether such a program places files it is not supposed to, or edits others.
As I want to monitor all folders and files, I thought about using something similar to rsync - but is there an alternative that only watches for changes?
Does this approach guarantee that I catch everything the software changes? Or are there some kind of "registry entries" or configuration changes I could miss?
Thanks a lot!
I would suggest you use some kind of sandbox (probably the most straightforward way nowadays is to use Docker).
You could use Git to track all the changes that are made inside the sandbox/container:
1. Initialize a git repo in the root dir.
2. Add all files and commit them as the base version.
3. Execute the install script you do not trust.
Running git status afterwards will show you all the changes that were made during installation.
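Concretely, the whole flow might look like this (the image name and script path are examples):

    docker run -it --rm -v "$PWD/installer.sh:/tmp/installer.sh:ro" debian bash

    # ...then, inside the container:
    apt-get update && apt-get install -y git
    cd /
    git init
    printf 'proc/\nsys/\ndev/\nrun/\n' > .gitignore   # skip virtual filesystems
    git add -A && git commit -qm "base version"

    sh /tmp/installer.sh
    git status   # every file the installer created, changed, or deleted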

How to automatically merge on upload

I am searching for a solution to automatically merge files on upload.
To be more precise: we are working in small groups doing web development, working remotely on the same folder on our Debian server. The problem is that we often have situations where up to 3 people need to write to the same PHP file; at the moment we try to coordinate who is allowed to work on it when.
So my idea was to find a Subversion-like solution that just merges every time we save the file via sshfs.
You should use version control. Here are some options. Which one you should use depends on a variety of factors.
Mercurial
Git
Subversion
You can then have the server your site is on pull from the repository.
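With git, for instance, a common pattern is a bare repository on the server plus a post-receive hook that checks the pushed code out into the web root (the paths are examples):

    #!/bin/sh
    # hooks/post-receive in the bare repository
    GIT_WORK_TREE=/var/www/site git checkout -f master

Each developer then works in their own clone and pushes when done, and the version control system handles the merging instead of sshfs.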

Drupal: do you use SVN for websites development?

Do you use Subversion while developing a website with Drupal?
I'm not talking about module development, but website development (i.e. adding hook functions, modifying template files, etc.)
thanks
Yes.
Anything that's got any kind of ongoing development or is going to change over time should be version controlled.
Even if you're just doing a very small project, the value of having a version history is indisputable, and being able to make changes without worrying about overwriting someone else's updates is priceless.
Yes, it's good to keep an SVN repository synced with your local instance. For that purpose you can use Eclipse.
Yes, but we are moving to git in the near future because it offers a better feature set (distributed SCM ftw) and more options for managing our code base (git submodules, stashing, better hook integration, better merging support, rebasing, and so much more). For the time being we've got our repos set up like so:
/trunk
/branches/6.x/1.x/core
/branches/6.x/1.x/sitename.domain.edu
/branches/6.x/1.x/sitename2.domain.edu
/branches/6.x/1.1.x/core
/branches/6.x/1.1.x/sitename.domain.edu
...
/tags/6.x/1.x/core
/tags/6.x/1.x/sitename.domain.edu
/tags/6.x/1.x/sitename2.domain.edu
/tags/6.x/1.1.x/core
/tags/6.x/1.1.x/sitename.domain.edu
...
Each branch is an svn copy of the trunk repo (where we do most of our development) and each tag is an svn copy of its corresponding branch. The core branch is the primary distro that we distribute to all of our sites sharing the university's look and feel, and each subsite is a site with special modules, a custom theme, or other functionality that isn't part of the primary distro. This layout makes moving between Drupal releases a lot easier, but you can occasionally run into problems merging. You also run into performance issues as the repo grows, which is part of the reasoning behind moving to git.
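For reference, creating such a branch or tag is a single cheap server-side copy in Subversion (the repository URL is a placeholder):

    svn copy -m "branch for 6.x-1.x core" \
        http://svn.example.edu/repo/trunk \
        http://svn.example.edu/repo/branches/6.x/1.x/core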
Yes. Version control is critical. Distributed version control systems such as Git, Mercurial, and Bazaar are particularly nice, and let you start committing immediately, without the need to push those changes to a central server.
My Drupal workflow: use Mercurial and its sub-repositories feature to create independent repositories for 1) Drupal + contributed modules, 2) theme, and 3) custom modules. That way, I can clone from a single URL, get my entire project, and be able to track changes to each distinct piece independently.
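Mercurial subrepos are declared in a .hgsub file at the top of the outer repository, roughly like this (the paths and URLs are invented for illustration):

    drupal = https://hg.example.org/drupal-plus-contrib
    sites/all/themes/mytheme = https://hg.example.org/mytheme
    sites/all/modules/custom = https://hg.example.org/custom-modules

Cloning the outer repository then pulls in all three pieces while each keeps its own history.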

Manage source under git and svn simultaneously - does it make sense?

This is maybe unusual so let me set the scene:
We have an SVN repo containing our project history - an embedded system based on Linux. The SVN repo contains Linux kernel, U-Boot, busybox etc. sources and all our in-house apps, filesystem and such.
The Linux kernel we have is old and crusty and I am working on porting to the mainline, which is under active development for our platform(s). I am doing the kernel-side work under git and trading patches with "The Community".
I could get things working and take a snapshot of the kernel sources and dump it into SVN, but I'd like to keep the ability to get updates, have local branches and manage patches with git. I could keep two copies of the kernel, one managed by each SCM, but that would be a bit messy. There are also risks of developing and testing using kernel sources managed under git, and forgetting to put those changes into SVN resulting in broken SVN versions where the non-kernel sources are out of sync.
Migrating the entire project to git isn't an option. Managing just the kernel source with git and having a bunch of glue scripts and stored hashes in SVN is possible but it's nicer to have a unified history / diffing ability from SVN for the whole project.
What I'm considering is trying to manage the kernel sources under both SVN and git simultaneously, in the same directory.
As a kernel dev I'd mostly use git and do an SVN commit for internal use when things look good. For other internal users they would be able to get the entire, consistent sources with one SVN checkout, see a unified history, and they could make changes to the kernel sources under SVN. Later I or another git-using person can SVN update to those changes and commit them to git as appropriate.
Some fiddling around getting git to ignore .svn files and vice versa will have to be done. Also, I'm not quite sure how one would take a plain SVN checkout and tell git to start managing the kernel subtree as well, but I'm sure git has some obscure swiss-army-knife options to do it.
So that's my idea du jour. It means most co-workers don't have to worry about git, and we can quietly ignore git and fork away later as needed.
The question here really is, has anyone done something like this, how did it work out, or what alternate solutions did you come up with?
I've done this regularly, and it works great.
The only major thing I needed to do was add the .git folder to the subversion ignore list, and the .svn/ folders to the .gitignore file.
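Those two steps, concretely (run at the top of the working copy):

    svn propset svn:ignore '.git' .   # keep git's metadata out of svn
    echo '.svn/' >> .gitignore        # and svn's metadata out of git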
