How to maintain two Gitolite repositories?

As a newbie to Gitolite, I would like to know how, as the gitolite-admin, to handle or maintain two Gitolite instances set up on two servers at different locations.
The setup is:
- On VM 'A' at location X we have one Gitolite instance, and
- on VM 'B' at location Y we have another Gitolite instance.
The repository is the same on both instances, say "projectrepo.git", but a different set of users will be committing actively at each of the two locations. An additional requirement is that both repositories should stay in sync.
Please advise.

Related

How to use multiple different puppet masters from one puppet agent?

One puppet agent needs to contact several different puppet masters.
Reason: there are different groups that create different and independent sets of manifests.
Possible groups and their tasks
Application Vendor: configuration of application
Security: hardening
Operations: routing tables, monitoring tools
Each of these groups should run its own puppet master - the data (manifests and associated data) should be strictly separated. If possible, one group should not even be able to see or access the manifests of the others (we are using MAC on the puppet agent OSes).
Thoughts and ideas that all failed:
Using (only) Hiera is not flexible enough - we still need different manifests.
r10k: supports more than one environment, but each environment can only access one set of manifests.
Multiple identical puppet servers behind e.g. DNS round robin: this is the other way round - we need different puppet masters.
Some ways that might be possible but...
Running multiple instances of the puppet agent. That 'feels' strange. Advantage: the access rights can be limited as needed (e.g. the application puppet agent can run under the application user).
Patching Puppet so that it can handle more than one puppet master. Disadvantage: might be a fair amount of work.
Using other mechanisms to split responsibility. Example: use different git repositories, create one puppet master, and have that master pull all the different repositories and serve the combined manifests.
My questions:
Is there a straightforward way to implement this requirement with Puppet?
If not, is there a best practice for how to do this?
While I think what you are trying to do here is better tackled by incorporating all of your modules and data onto a single master (and utilizing environments would effectively be the same situation - different masters providing different sets of modules/data), this can be achieved by implementing a standard multi-master infrastructure: one CA master for certificate signing, plus multiple compile masters whose certs are signed by that CA master and which forward certificate traffic to it, with each master configured however you need. You then end up having to specify which master you want to check in to on each run (via a cron job or some other approach), and one check-in can potentially change settings set by another (which rather undermines the hardening/security idea).
I would urge you to think harder about how to combine your various concerns (separate git repos, with access control, for each division's Hiera data and modules) so that a central master can serve your needs; access to that master would then be the only way to get data/modules from everywhere.
This type of setup will be complex to implement, but the end result will be more reliable and maintainable. Puppet Inc. may even be able to offer consulting to help you get it right.
There are likely other approaches too, just fyi.
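As a rough illustration of that multi-master layout, the agent side of such an infrastructure might look like this (hostnames are assumptions, not something from the question):
# /etc/puppetlabs/puppet/puppet.conf on an agent
[agent]
ca_server = puppet-ca.example.com          # the single CA master that signed all certs
server    = puppet-compile01.example.com   # default compile master for this agent
A cron job (or manual run) would then override the master per check-in, e.g. puppet agent -t --server=puppet-compile02.example.com.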
I've often found it convenient to multi-home a puppet agent for development purposes, because with a local puppet server you can instantly test manifest changes - there's no requirement to commit, push and r10k deploy environment like there is if you're just using directory environments and a single (remote) puppet server.
I've found the best way to do that is to just vary the path configuration (otherwise you run into problems with e.g. the CA certs failing to verify against the other server) - a form of your "running multiple instances of puppet agents" suggestion. (I still run them all privileged, so they can all use apt package {} etc.)
For Puppet 3, I'd do this by varying the libdir with --libdir (because the ssldir was under the libdir), but now (Puppet 4+) it looks more sensible to vary the --confdir. So, for example:
$ sudo puppet agent -t # Runs against main puppet server
$ sudo puppet agent -t \
--server=puppet.dev.example.com \
--confdir=/etc/puppetlabs/puppet-dev # Runs against dev puppet server
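If you don't want to pass --server every time, the dev confdir can carry its own puppet.conf pointing at the dev server (same hostname and path as in the example above):
$ sudo mkdir -p /etc/puppetlabs/puppet-dev
$ printf '[agent]\nserver = puppet.dev.example.com\n' | sudo tee /etc/puppetlabs/puppet-dev/puppet.conf
$ sudo puppet agent -t --confdir=/etc/puppetlabs/puppet-dev # Dev server now picked up from that puppet.conf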

Gitolite cloning from the slave end

In my project I have two Gitolite instances, installed on 2 servers, say location A & location B.
Repository at location A is the master and repo at B is the slave.
My requirement: my project team members at location B should be able to clone directly from the slave Gitolite instance (at location B).
Is it possible? If so, how?
(I'm concerned here with a single repository, say TEST, at both locations.)
You need to make sure that:
the access rules for that git repo are duplicated between the gitolite-admin/conf/gitolite.conf files of the two Gitolite servers;
the users have their public SSH keys registered in the gitolite-admin/keydir folders of both servers.
Those users should then be able to clone that single repo from B.
You might want to consider hooks or mirroring to keep the repos synchronized between the two servers, though.
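A minimal sketch of what "duplicated" means here, using the question's TEST repo and two hypothetical users alice and bob:
# gitolite-admin/conf/gitolite.conf -- kept identical on server A and server B
repo TEST
    RW+     =   alice bob
# gitolite-admin/keydir/ -- the same public keys present on both servers
#   alice.pub
#   bob.pub
Once each gitolite-admin repo has been pushed, a user at location B clones with something like git clone git@serverB:TEST (assuming the hosting user is git).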

Restricting remote communication in Git

I have Git installed on our company's Linux server. All the developers work on the same server. Recently we had to move to Git hosted on another server. Before we create a Git repository we create SSH keys, then start ssh-agent, and finally add the private key using ssh-add.
My problem: I created a Git repository on the Linux machine, set up my keys and everything, and also pushed to the remote Git server. But if some other developer also has his key added, he can also perform a git push on my local repository.
Is there any way I can restrict push by other developers on the same Linux machine?
If you want to prevent others from pushing to your personal development machine, set up a firewall. If you want to prevent people from pushing to the remote server, remove their keys, or add per-IP firewall rules (so that they can still use SSH). At least that's what I'd do, since Git itself doesn't offer any access control facilities and leaves that to the OS/networking layer.
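If you do go the per-IP firewall route, a sketch with iptables might look like this (the allowed address is purely illustrative):
# Allow SSH (and therefore git-over-SSH) only from one trusted host, drop the rest
iptables -A INPUT -p tcp --dport 22 -s 10.0.0.15 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP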
In any case, my opinion is that rather than setting up some security facilities, you should trust your coworkers not to screw things up. After all, it's not some public repository - it's a company, where screw ups (intentional or not) should be dealt with accordingly.

Using Git with multiple servers

I have a set of servers with code bases on them; let's call them p1, p2, p3. I have a development server, d1, which I use to store the code. Each p server is different, with a different code base.
I'm trying to figure out how to manage the git repos correctly so that each of the "p" servers keeps the "d1" server up to date.
Here's what I did:
Created a git repo on p1 and made an initial commit.
Created a --bare clone of the repo and scp'd it to the d1 server.
Repeated this for all servers.
My d1 server now has a /git/ folder with subfolders p1, p2, p3; each of these has the usual contents:
HEAD, branches, config, description, hooks, info, objects, refs.
I can clone these repos to another machine or folder and I get to see the actual files, which is exactly what I wanted.
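For reference, those steps correspond roughly to the following commands (the /srv/code path and the /git target directory are placeholders):
# On p1: create the repo and make an initial commit
cd /srv/code
git init
git add -A && git commit -m "initial commit"
# Make a bare copy and ship it to d1
git clone --bare /srv/code /tmp/p1.git
scp -r /tmp/p1.git d1:/git/p1
# Point p1's working repo at the copy on d1 for later pushes/fetches
git remote add d1 d1:/git/p1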
OK, so here is my problem:
How do I keep the p1 repo up to date when someone clones the d1 copy and commits to it?
Do I need to run git fetch on p1,
or should I have people change p1 and then git push to d1?
You can implement mirroring with gitolite to keep a central server up to date with the latest code from the others.
From http://gitolite.com/gitolite/mirroring.html:
Mirroring is simple: you have one "master" server and one or more
"slave" servers. The slaves get updates only from the master; to the
rest of the world they are at best read-only.
In the following pictures, each box (A, B, C, ...) is a repo. The
master server for a repo is colored red, slaves are green. The user
pushes to the repo on the master server (red), and the master server
-- once the user's push succeeds -- then does a git push --mirror to the slaves. The arrows show this mirror push.
The first picture shows what gitolite mirroring used to be like a long
time ago (before v2.1, actually). There is exactly one master server;
all the rest are slaves. Each slave mirrors all the repos that the
master carries, no more and no less.
This is simple to understand and manage, and might actually be fine
for many small sites. The mirrors are more "hot standby" than anything
else.
But when you have 4000+ developers on 500 repos using 25 servers in 9
cities, that single server tends to become a wee bit stressed.
Especially when you realise that many projects have highly localised
development teams. For example, if most developers for a project are
in city X, with perhaps a few in city Y, then having the master server
in city Z is... suboptimal :-)
And so, for about 3 years now, gitolite could do this:
You can easily see the differences in this scenario, but here's a more
complete description of what gitolite can do:
Different masters and sets of slaves for different repos.
This lets you do things like:
- Use the server closest to most of its developers as the master for that repo.
- Mirror a repo to only some of the servers.
- Have repos that are purely local to a server (not mirrored at all).
- Push to a slave on demand or via cron (helps deal with bandwidth or connectivity constraints).
All this is possible whether or not the gitolite-admin repo is mirrored -- that is, whether or not all servers have the exact same gitolite-admin repo.
Pushes to a slave can be transparently forwarded to the real master.
Your developers need not worry about where a repo's master is -- they
just write to their local mirror for all repos, even if their local
mirror is only a slave for some.
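For a concrete flavour, a gitolite v3 mirroring setup looks roughly like this; the server names (serverA, serverB) and the @developers group are placeholders, and the exact rc/option names should be checked against the mirroring page linked above:
# ~/.gitolite.rc on each server: give the server its own name
HOSTNAME => 'serverA',
# gitolite-admin/conf/gitolite.conf -- kept identical on all servers
repo projectrepo
    option mirror.master     =   serverA
    option mirror.slaves     =   serverB
    option mirror.redirectOK =   all         # lets pushes to a slave be forwarded to the master
    RW+                      =   @developers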

How to clone a Git repo from a VM?

I am currently developing inside a virtual Ubuntu box with Git, and I need to clone this repo to another CentOS VM. I don't know how to describe the git repo's location using the user@server:/path.git syntax.
Can anyone point me in the right direction? Thanks!
Can you ping one VM from the other? If so, then you should be able to SSH to the IP you can ping.
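In that case the user@server:/path.git form is just the SSH user, the reachable IP, and the repo's path on the Ubuntu VM; for example (user, IP, and path are assumptions):
git clone dev@192.168.56.101:/home/dev/myproject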
If you cannot ping, then perhaps you have a host which is reachable from both VMs. You could create a server repo there. For instance, github.com or bitbucket.org or many others might be a suitable third-party host. Perhaps you could install a proxy (Squid or dante-socks or something similar) to allow the VMs to talk to each other.
If you have email connectivity, perhaps you could mail git bundles back and forth instead of using normal live git connections. There are many ways to do this, but we really need to know more about the networking and communications environment of these VMs to say more.
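A sketch of the git bundle route (file name and ref selection assumed):
# On the Ubuntu VM: pack every ref into a single file
git bundle create myproject.bundle --all
# Mail or copy myproject.bundle to the CentOS VM, then clone from it there
git clone myproject.bundle myproject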
