I'm currently in the process of moving a gitolite (3) installation between two
servers. Thankfully, this process is pretty well
documented on the main
project website. However, my repositories make pretty active use of
git-annex, which stores data in various
remotes as well as on the server itself.
Now, I'm not an expert on git-annex, but I know it works a bit differently from
"regular" git, so is there anything one should keep in mind when moving this
kind of installation or does it work just as outlined in the gitolite
documentation above?
After quite a bit of research, I couldn't find any details on how this should
be done on a git-annex enabled repository, so I decided to simply try it
out. Apparently, the steps as they are written work just fine, even for
git-annex content. That said, be cautious as you're moving things. Once the new
server is ready to take over, make sure the old one is disabled; I don't think
git-annex likes finding two identical remotes.
As a minor anecdote: I accidentally forgot to chown/chmod the repositories, but
re-running step 6 and onwards fixed that without any issues whatsoever.
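For reference, the fix amounted to something like the following; the user, group, and path are assumptions that depend on how your gitolite host is set up:

    # restore ownership and sane permissions on the hosted repositories
    chown -R git:git /home/git/repositories
    chmod -R u+rwX /home/git/repositories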
The company I work for uses Clearcase, even though it was EOL'd on the platforms in which we run it years ago. It is ancient and fragile tech, but one thing it does have is a multisite support that allows for the synchronization of air-gapped repos. Because of security issues, we use secure USB sticks to copy packets and take them to the other side, then apply them with scripts.
Developers and DevOps people want to make a business case to migrate to GitLab, but I cannot find any mention of a feature in GitLab that would allow me to do this easily. There's something about bundles, but the info I have found is years old and it doesn't seem like too many people are using it.
Does GitLab not support this - simple synchronization of one repo to another over an air gap using some sort of secure media? If it doesn't, it's no wonder so many teams are still using ClearCase.
While not exactly easy, air-gap updates of a Git repository are possible through the git bundle command.
It produces one file (with all the history, or only the latest commits for an incremental update) that you can:
copy and distribute easily (it is just one file, after all)
clone or pull from (!) - see the sketch below
This is not tied to GitLab, and can be applied to any Git repository.
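As a minimal sketch of the round trip (file and branch names here are placeholders):

    # on the connected side: pack the whole repository into one file
    git bundle create repo.bundle --all

    # ...or an incremental bundle with only the commits since the last sync
    git bundle create update.bundle lastsync..master

    # carry the file across the air gap, then on the isolated side:
    git clone repo.bundle repo        # initial import
    cd repo
    git pull ../update.bundle master  # later incremental updates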
Beyond that, I have written before about migrating from ClearCase to Git, and I usually:
do not import the full history, only major labels or UCM baselines
split VOBs per project, each project being one Git repository
revisit what was versioned in VOBs: some large files/binaries might need to be .gitignore'd in the new Git repository.
You would not "migrate views": they are just workspaces (static or dynamic). A simple clone of a repository is enough to recreate such a workspace (a static one, here).
So I'm using Visual Studio 2012, and whenever I create a new project I always leave out the "Add to source control" option, mainly because I don't know what it is, what its purpose is, or whether it would be beneficial for me to use it.
I'm developing a rather large library on my own, and I plan on making it open source. Is this what source control is for? Reading up online did not help me, unfortunately; I'm still desperately confused. Is source control the same as version control, by the way?
Some clarification would be greatly appreciated.
Thanks!
Alex
Source control (otherwise known as revision control: http://en.wikipedia.org/wiki/Revision_control) is a way in which you can store and manage changes and revisions to your code-base.
Google these: SVN, Git, and Mercurial (tools), plus Google Code, GitHub, and Bitbucket (hosting services).
I personally never start a project (personal or otherwise) without it. Also, I like Bitbucket (www.bitbucket.com) for managing my projects. It's free for a single user, and they support Git and Mercurial.
Quite simply, source control is a repository where your source code is stored. Its purpose is to provide a storage spot that is separate from the copy you are currently working with, so you can make changes locally and then submit them back to your source control repository.
Think of it as being like a library - you can get code out, and you can return (modified) code to it. It also gets more complex than that - you can use source control to handle multiple versions of the same source, and to merge changes from one copy to another. Additionally, most (if not all) source control systems provide a mechanism for tracing changes to code (i.e. audit tracking) and a means to "roll back" or revert to a previous version.
There are a variety of source control systems, some of which you install locally and some which you install centrally and access remotely. In all cases the underlying philosophies are mostly the same, although there are some differences around terminology and implementation of features. Source control is considered to be a system that is separate to your editor, your editor then provides a means to specify which source control provider to use and the location of the source control repository.
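To make that concrete, here is roughly what the basic cycle looks like with Git, one popular source control tool (the other systems work similarly):

    git init                        # turn the current folder into a repository
    git add .                       # tell the system which files to track
    git commit -m "First version"   # record a snapshot of the code
    # ...edit some files, then:
    git diff                        # see exactly what changed
    git commit -am "Describe the change"
    git log --oneline               # the audit trail of every revision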
I have found this:
Git Source Control Provider is a plug-in that integrates Git with
Visual Studio
According to this, it seems that source control = version control.
You were asked this when starting a project because it's much easier to set up before you have many messy pieces. GitHub is a hosting service built around Git, a form of version control - an organized place for code to grow.
Version control really shines if you're collaboratively working on a project with many different pieces to it. Source or version control is the process of taking the many different parts that are being worked on simultaneously and bringing them together as a final project.
If you're working on your own, it may still be beneficial to practice with, as it still holds value in documenting your work and in bringing modular pieces together for a final project.
This wonderful post explains many of the benefits of using version control for your pet projects.
Well, there are a number of good reasons:
1. It’s good to be in the habit. Sure, you may be working alone. But in
the future you may not be. Or your “weekend hobby project” might turn
into a popular project with many developers. If anything like that
happens, being in the habit of using source code control will stand
you in good stead.
2. It protects your code. Since your code is stored
on a server apart from your development machine, you have a backup.
And then, you can even backup the code on the server. Sure, you can
zip it all up any time you want, but you don’t get all the other
benefits I’m listing here.
3. It can save your butt. Sometimes, you
might accidentally delete something. You might make mistakes and change
code that you didn’t want changed. You might start off on some crazy
idea when you are feeling a bit saucy, and then regret it. Source
control can save you from all of these by making it a piece of cake to
revert to any previous state. It’s like a really powerful “undo”
feature.
"Source control" refers to the concept of storing all files that make up the source of an application in an (usually online) repository - where one can manage with fine detail exactly what has changed in the source code between versions, manage issues, involve other people with the project, and generally provide a platform to collaboratively manage a project.
The term is usually synonymous with version control, although you can have a repository and not explicitly name versions. Strictly speaking, "versioning" a project refers to labeling its major releases with a hierarchical numbering scheme (1.2.4, etc.).
There are several different tools that implement different kinds of source control. For example, Git is a source control tool that lets you manage a project. GitHub is a website where repositories managed with Git are usually hosted. Mercurial is yet another source control tool.
Source control can be complicated to understand at first, and usually requires some time spent studying tutorials. A lot of the concepts are unfamiliar to solo programmers who have only worked on small projects. Changes to source code are managed through commits, and different branches of the project can be worked on by multiple people. Changes from separate branches can be merged, and the project can be forked by another user entirely (at least for Git).
Making your library open source is a good idea, and it's not too hard to give it a home on GitHub. I would look into some resources to find out how to set it up.
Wikipedia says:
Revision control, also known as version control and source control
(and an aspect of software configuration management), is the
management of changes to documents, computer programs, large web
sites, and other collections of information. Changes are usually
identified by a number or letter code, termed the "revision number",
"revision level", or simply "revision". For example, an initial set of
files is "revision 1". When the first change is made, the resulting
set is "revision 2", and so on. Each revision is associated with a
timestamp and the person making the change. Revisions can be compared,
restored, and with some types of files, merged.
In simpler terms, source control allows multiple people to work on the same project without everyone clobbering everyone else's changes. Changes to the same file are merged together, and conflicts can be resolved.
It also allows you to keep version history. You can roll back a file to the way it existed at any point in time, as well as maintain different branches of the program; a way to organize groups of changes.
As Si puts it, source (or version) control will save your life 20 times over.
Source control keeps a detailed history of your code. This enables you and any collaborators to:
Review the history of any code
See changes between two versions
Revert any code you just goofed up
Avoid losing any code
Discover which changes caused bugs / crashes
Blame whoever messes up your code
Keep an up-to-date backup of all of your code
Even if you're not working collaboratively, source control is so useful when you accidentally delete your code. I speak from experience.
I would recommend using Git or Mercurial, as they allow branching and offer a more isolated workspace for collaborators to experiment without fear of harming any code.
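For example, with Git, trying out a risky idea in an isolated branch might look like this (the branch name is made up):

    git checkout -b crazy-idea   # experiment on a separate branch
    # ...hack and commit freely...
    git checkout master          # the stable code is untouched
    git merge crazy-idea         # keep the experiment, or...
    git branch -D crazy-idea     # ...throw it away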
For online collaboration, Github and Bitbucket are great and integrate well with Git and Mercurial, respectively. They also allow you to track collaborators' changes, choose and merge changes into a central repository, and keep a detailed list of issues.
Do you use Subversion while developing a website with Drupal?
I'm not talking about module development, but website development (i.e. adding hook functions, modifying template files, etc.)
Thanks
Yes.
Anything that's got any kind of ongoing development or is going to change over time should be version controlled.
Even if you're just doing a very small project, the value of having a version history is indisputable, and being able to make changes without worrying about overwriting someone else's updates is priceless.
Yes, it's good to keep an SVN repository synced with your local instance. For that purpose you can use Eclipse.
Yes, but we are moving to git in the near future because it offers a better feature set (distributed SCM ftw) and more options for managing our code base (git submodules, stashing, better hook integration, better merging support, rebasing, and so much more). For the time being we've got our repos set up like so:
/trunk
/branches/6.x/1.x/core
/branches/6.x/1.x/sitename.domain.edu
/branches/6.x/1.x/sitename2.domain.edu
/branches/6.x/1.1.x/core
/branches/6.x/1.1.x/sitename.domain.edu
...
/tags/6.x/1.x/core
/tags/6.x/1.x/sitename.domain.edu
/tags/6.x/1.x/sitename2.domain.edu
/tags/6.x/1.1.x/core
/tags/6.x/1.1.x/sitename.domain.edu
...
Each branch is an svn copy of the trunk repo (where we do most of our development), and each tag is an svn copy of its corresponding branch. The core branch is the primary distro that we distribute to all of our sites that share the university's look and feel, and each subsite is a site with special modules, a custom theme, or other functionality that isn't part of the primary distro. It makes moving between Drupal releases a lot easier, but you can occasionally run into problems merging. You also run into performance issues when the repo starts to grow, which is part of the reasoning behind moving to git.
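For anyone unfamiliar with the mechanics, each branch and tag above is created with a plain svn copy; a sketch with placeholder URLs:

    # a branch is a cheap server-side copy of trunk
    svn copy http://svn.example.edu/repo/trunk \
             http://svn.example.edu/repo/branches/6.x/1.x/core \
             -m "Branch core off trunk"

    # a tag is the same operation against its branch
    svn copy http://svn.example.edu/repo/branches/6.x/1.x/core \
             http://svn.example.edu/repo/tags/6.x/1.x/core \
             -m "Tag the 6.x/1.x core release"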
Yes. Version control is critical. Distributed version control systems such as Git, Mercurial, and Bazaar are particularly nice, and let you start committing immediately, without the need to push those changes to a central server.
My Drupal workflow: use Mercurial and its sub-repositories feature to create independent repositories for 1) Drupal + contributed modules, 2) theme, and 3) custom modules. That way, I can clone from a single URL, get my entire project, and be able to track changes to each distinct piece independently.
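A minimal sketch of that setup (the URLs are made up; Mercurial reads the mapping from a versioned .hgsub file at the root of the main repository):

    # clone each piece into place inside the main repository
    hg clone https://hg.example.com/drupal-plus-contrib drupal
    hg clone https://hg.example.com/mytheme theme
    hg clone https://hg.example.com/custom-modules modules

    # record the mapping, then commit it like any other file
    cat > .hgsub <<'EOF'
    drupal = https://hg.example.com/drupal-plus-contrib
    theme = https://hg.example.com/mytheme
    modules = https://hg.example.com/custom-modules
    EOF
    hg add .hgsub
    hg commit -m "Declare subrepositories"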
This is somewhat related to my security question here. Is it a bad idea to use an hg / mercurial repository for a live website? If so, why?
Furthermore, we have dev, test and production installations of our website, like dev.example.com, test.example.com and www.example.com. If it's a bad idea to use a repository for a live/production website, would it be OK to use an hg repository for the dev and test sites?
I'm also concerned about ease of deployment. We have technical and less technical co-workers who will be working with the site. The technical people (software engineers) won't have any problem working with the command line or TortoiseHG. I'm more concerned about the less technical people (web designers). They won't be comfortable working on the command line, and may even find TortoiseHG daunting. These co-workers mostly upload .css files and images to the server. I'd like for these files (at least the .css files) to be under version control, but I want this to be as transparent as possible for the non technical team members.
What's the best way to achieve this?
Edit:
Our 'site' is actually a multi-site CMS setup with a main repository and several subrepositories. Mock-up of the repository structure:
/root [main repository containing core files and subrepositories]
/modules [modules subrepository]
/sites/global [subrepository for global .css and .php files]
/sites/site1 [site1 subrepository]
...
/sites/siteN [siteN subrepository]
Software engineers would work in the root, modules and sites/global repositories. Less technical people (web designers) would work only in the site1 ... siteN subrepositories.
Yes, it is a bad idea.
Do not have your repository as your website. It means that things which are checked in but not working will immediately be available, and that accidental check-ins (they happen) will be reflected live as well (e.g. documents that don't belong there).
I have actually addressed this concept (source control as deployment) with a tool I've written; a few other companies are addressing the topic now as well, so you'll see more of it. Mine is for SVN at the moment, so it's not particularly relevant here; I mention it only to show that I've considered this before. (It works on a working copy rather than a repository, but in that scenario the answer is the same: it's better to have a non-versioned "free" area as the website directory, and to automate, via user action, the copying of the versioned data into that directory.)
Many folks keep their sites in repositories, and so long as you don't have people live-editing the live site, you're fine. Have a staging/dev area where your non-revision-control folks make their changes, and then have someone more RCS-friendly do the commit-pull-merge-push cycle periodically.
So long as it's the conscious action of a judging human doing the staging-area -> production-repo push you're fine. You can even put a hook into the production clone that automatically does a 'hg update' of the working directory within that production clone, so that 'push' is all it takes to deploy.
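The hook itself is one line of configuration in the production clone; something along these lines, run once on the server:

    # append to the production clone's .hg/hgrc: after every push that adds
    # changesets, update the working directory the web server serves
    cat >> .hg/hgrc <<'EOF'
    [hooks]
    changegroup = hg update
    EOF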
That said, I think you're underestimating either your web team or TortoiseHg; they can get this.
Personally (I'm a team of 1), I quite like the idea of using source control as a live website - more so with hg than with svn.
The way I see it, you can load an entire site (add/remove files) with a single command,
much easier than "ftp/ssh this, delete that", etc.
If you are using Apache (and probably IIS as well), you can make a simple .htaccess file that will block all .hg files (or .svn if you are using svn); a sketch follows.
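Using mod_alias's RedirectMatch (test this against your own setup):

    # .htaccess in the site root: answer 404 for any path that touches
    # VCS metadata directories
    cat > .htaccess <<'EOF'
    RedirectMatch 404 /\.(hg|svn|git)(/|$)
    EOF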
My preferred structure is:
the development site is on the local machine, running directly out of a repository (no security is really required here; do what you like, commit as required)
the staging/test machine is a separate box or VM running a recent copy of the live database
(I have a script to push committed changes to the staging server and run tests)
the live machine
(open an ssh connection, push changes to the live server, test again; it can all be scripted reasonably easily, google for examples)
Because of the push/pull nature of hg, you can commit changes and test without the danger of pushing a broken build to the live website. Like you say in your comments, only specific people should have permission to push a version to the live site. (If it fails, you should easily be able to revert to the previous version via source control.)
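The push-to-live step can be as small as this sketch (the host, paths, and test script are all hypothetical):

    #!/bin/sh
    # push reviewed changesets to the live clone, then update its
    # working directory and re-run the tests over ssh
    hg push ssh://deploy@www.example.com//var/www/site &&
      ssh deploy@www.example.com \
        'cd /var/www/site && hg update && ./run-smoke-tests.sh'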
Why not have a repo also be an active web server (for dev or test/QA environment anyway)?
Here's what I am trying to implement:
Developers have local test environments in which they can build and test their code
Developers make a clone of the dev environment on their local dev machine
Developers commit as often as they want to their local repo
When a chunk of work is done and tested, the developer pushes the working changesets to the dev repo
Changes would be merged and tested on Dev, then pushed to Test/QA, and so on.
BTW, we're using Mercurial. I believe this model would only work using a distributed source code management tool.
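In concrete terms, the flow I'm picturing looks something like this (the repository URLs and paths are hypothetical):

    # developer: clone the dev repo, then work and commit locally
    hg clone https://hg.example.com/dev work
    cd work
    hg commit -m "Work in progress"   # as often as you like
    hg push                           # publish tested changesets to dev

    # promotion: QA pulls the merged, tested state from dev
    hg pull -R /repos/qa https://hg.example.com/dev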
I have 3 Linux machines, and want some way to keep the dotfiles in their home directories in sync. Some files, like .vimrc, are the same across all 3 machines, and some are unique to each machine.
I've used SVN before, but all the buzz about DVCSs makes me think I should try one - is there a particular one that would work best with this? Or should I stick with SVN?
I've had this problem for years, and I don't think version control is necessarily the right way to go. I've had good success with the Unison file synchronizer, which is designed for the express purpose of maintaining consistent home directories on two machines. I'm currently managing seven replicas with Unison; the details are a bit tricky, but it is a great tool, and if you start with two replicas you will be extremely pleased.
The key difference between Unison and a VCS is that Unison is willing to delay dealing with conflicts that have to be merged. Plus it gets all the defaults right. And it is fast: I use it daily, over a DSL line, to synchronize about 40GB of data.
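Day-to-day use is a single command; a sketch with made-up hostnames (Unison also supports named profiles for anything more involved):

    # reconcile the local home directory with the copy on another machine;
    # -batch applies all non-conflicting changes and skips the conflicts
    unison /home/me ssh://otherbox//home/me -batch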
Any DVCS would likely work fine. My favorite is Bazaar. It would be easiest to keep your config files in .config, version that, and then symlink as appropriate.
A benefit of DVCS is that you can version the per-machine config files as well, without interfering with versioning global configs.
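A minimal sketch of that layout, assuming Bazaar and illustrative paths:

    # version the real files in one place...
    cd ~/.config/dotfiles
    bzr init
    bzr add
    bzr commit -m "Initial import of dotfiles"

    # ...and symlink them into the locations programs expect
    ln -s ~/.config/dotfiles/vimrc ~/.vimrc
    ln -s ~/.config/dotfiles/bashrc ~/.bashrc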
I've had the same problem, and built a tool on top of Subversion that adds permission, ownership, and SELinux-context tracking, keeps the .svn directories out of the versioned trees themselves, and adds a concept of layers, so you can, for example, track all of your config related to development and then check it out only on the machines you use for developing.
This has helped me organize my settings much better across the 50+ machines I log into.
Here's the project page. It's still a little rough around the edges, but we also use it at work to version system configuration for our 60+ servers.
In general, any version control system that uses some sort of in-tree metadata files to track things is going to cause you pain when actually using it.
Version control software isn't really great for home directories. Worse, some software doesn't really like the .svn folders or starts to interpret their contents. You could of course try to fix this with some very complex mirroring setup, but that's hard.
Here's a Mozilla developer who's tried to do this: Version controlling my home dir; there are a couple of suggestions in the comments.
Git's or Mercurial's cheap branching would work great for this situation. I started with Mercurial, because it is simpler, but have subsequently moved to git.
One way to handle this very flexibly is to have a build directory under revision control, rather than trying to svn your actual home directory (which has its own issues).
So inside it you keep a structure like:
/home/you/code/dotfiles
/home/you/code/dotfiles/dotbashrc
/home/you/code/dotfiles/dotemacs
...
/home/you/code/dotfiles/makefile
and the makefile can contain logic for specializing files (or not) - a sketch follows below.
This might be heavier than you need, but if your actual setup is complex (I've done this across 3 or 4 different Unices at a time), then it's worth doing something like this.
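Written here as plain shell rather than make (the file names and hostname convention are made up), the specialization logic can be as small as:

    #!/bin/sh
    # link each shared dotfile into $HOME (dotbashrc -> ~/.bashrc, ...)
    for f in dotbashrc dotemacs; do
        ln -sf "$HOME/code/dotfiles/$f" "$HOME/.${f#dot}"
    done

    # overlay a machine-specific variant if one exists for this host
    local_rc="$HOME/code/dotfiles/dotbashrc.$(hostname)"
    [ -f "$local_rc" ] && ln -sf "$local_rc" "$HOME/.bashrc_local"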
I use git for this. So far, I have been able to keep the home directories on several machines synchronized, with no need for branching and merging. Instead, I use git rebase. Conflicts so far have been few and far between and easy to resolve.
I keep files that need to have separate contents out of revision control by putting them into .gitignore.
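The sync itself is just a pull-with-rebase followed by a push; roughly this, assuming the home directory is the repository and the usual origin/master names:

    cd ~
    git pull --rebase origin master   # replay local commits on top of the remote's
    git push origin master            # publish the combined history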
I keep configuration files for the following tools in git:
various shells
emacs and applications, i.e.
gnus
BBDB
emacs-w3m
mutt
screen
various utilities and scripts
I keep notes and such in a subdirectory which has its own git repository.
I would suggest looking into etckeeper if you haven't already. It's designed for versioning configuration files in /etc using a version control system:
etckeeper is a collection of tools to
let /etc be stored in a git,
mercurial, darcs, or bzr repository.
It hooks into apt (and other package
managers including yum and pacman-g2)
to automatically commit changes made
to /etc during package upgrades. It
tracks file metadata that revision
control systems do not normally
support, but that is important for
/etc, such as the permissions of
/etc/shadow. It's quite modular and
configurable, while also being simple
to use if you understand the basics of
working with revision control.
Although it's designed for /etc I think it would probably also work well (perhaps with some adaptation) for home directories since the basic needs are the same.
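Getting started is short; roughly this on a Debian-flavored system (the package manager and defaults vary by distro):

    apt-get install etckeeper     # uses git by default
    etckeeper init                # put /etc under version control
    etckeeper commit "Initial import of /etc"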
I know this is an old thread, but I found it while searching for dotfile setups.
My current system uses Subversion. The key thing I did was check out the working copy into ~/.svnhome/ (in hindsight I should have called it .dotfiles or something more generic). I then create symlinks into home for the files I actually use on that computer. For example, my .procmail and .spamassassin folders are only needed on the mail server, so I don't link those on my home server.
The only file that differs between machines is .bashrc, which has some extra lines on my Mac for MacPorts. So at the bottom of .bashrc I have it check whether .bashrc_local exists and source it; see the snippet below.
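That check is just a couple of lines at the end of .bashrc:

    # pull in machine-specific settings when present
    if [ -f "$HOME/.bashrc_local" ]; then
        . "$HOME/.bashrc_local"
    fi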
This is the last remaining thing I have in Subversion (everything else outside of work is using git). The benefit of svn is that, because it's not a DVCS, I don't have to worry about accidentally committing on one server and forgetting to push.
I have considered moving it to git so I could create branches. Using the above example, I would have a branch for my mail server that adds the .procmail and .spamassassin folders, without having those in the master branch. But the current system has worked fine for years - since before git even existed - and I don't have any particular motivation to change it now.