Synchronizing two git clones without push access - linux

There's a server which hosts a git repo. I have two computers which I code from and have created a git clone on both. I frequently pull to get updates and new files. However, I cannot push to this repo (I can only pull).
I would like the files to be in sync across these two devices so I can pick up where I left off on the other. How can I accomplish this (without creating another repo)?

You cannot synchronize the two clones through the server's repository if you do not have push rights to it, but there are other routes:

If one of your computers can access the other over ssh then you can push or pull directly between them.
If you don't have direct network access then you could in principle use Git's features for sending patches over email, but this will likely be more inconvenient than just setting up a third repository.
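For the SSH case, a minimal sketch (the hostname, user, and path are assumptions, not from the question):

    # on computer A: add computer B's clone as a second remote
    git remote add other ssh://user@computer-b/home/user/project
    # fetch B's work and merge it into the local branch
    git fetch other
    git merge other/master

Pulling in each direction is simpler than pushing here: pushing into the currently checked-out branch of a non-bare clone is refused by default, whereas fetch/merge works against any clone.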

Related

Is it possible to have multiple gitlab-runners all execute the same jobs?

I'm hoping to leverage GitLab CI/CD / gitlab-runner to keep custom code up to date on a fleet of servers.
The desired effect is that when a commit is made against a certain project in GitLab, several servers automatically pull those changes down.
Is it possible to leverage gitlab-runners this way, so that every runner registered with the project executes the contents of the .gitlab-ci.yml file? Or is there a better tool to accomplish this?
I could use Ansible to push updated files down to each server, but I was looking for something simpler, something inherent in GitLab.
Edit: Alternative Solution
I decided to go the route of pre- and post-hook files in my repos as described here:
https://gist.github.com/noelboss/3fe13927025b89757f8fb12e9066f2fa
Basically I will be designating a primary server as the main source for code pushes into the master repo, and have defined my entire fleet as remote repos in .git/config there. Using post-receive hooks inside bare repos on all of my servers, I can then copy my code into the proper execution path (a sketch of the idea follows below).
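A minimal version of that setup, in the spirit of the linked gist (all hostnames and paths here are hypothetical):

    # on the primary server: register each fleet member's bare repo as a remote
    git remote add web1 ssh://deploy@web1/srv/repos/app.git
    git push web1 master

    # on each fleet server: hooks/post-receive inside the bare repo
    #!/bin/sh
    # check the pushed code out into the execution path
    GIT_WORK_TREE=/var/www/app git checkout -f master

Remember to make the hook executable (chmod +x); the gist walks through the same pattern in more detail.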
@ahxn81 Runners aren't really intended to be used in the pull fashion you describe. The Ansible push method you proposed is more in line with a typical deploy flow. I can see why you might prefer the simplicity of the pull method over pushing via script. These days a fleet of servers is often run on Kubernetes or Docker Swarm, which can simplify deployment after an initial setup headache.

How to maintain a single code base on a multi server node app

I have multiple servers in a load-balanced environment running the same Node application. I want the code on these servers to be the same everywhere. I currently maintain a git repo for the code on these servers, but have to manually SSH into each of them and pull from the repo. Is there any simple way I can push the code onto all the servers?
Straightforward solutions that came in my mind:
1) Use a cron job on the servers that does the work you're doing manually, i.e., git pull (needs Linux), or
2) use git hooks to trigger the pull on the other servers. With this solution you need the list of servers to trigger the update. Hooks are basically scripts that are executed before/after events like commit, push, etc.
Sketches of both options follow below.
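For option 1, a minimal cron entry (the path, user, and interval are assumptions):

    # /etc/cron.d/node-app-sync: pull the latest code every 5 minutes
    */5 * * * * deploy cd /srv/node-app && git pull --ff-only origin master

For option 2, a post-receive hook on the central repo could fan the update out (hostnames hypothetical):

    #!/bin/sh
    # runs on the central repo after each push; triggers a pull on every app server
    for host in app1 app2 app3; do
        ssh deploy@"$host" 'cd /srv/node-app && git pull --ff-only origin master'
    done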
It looks like your question is really how to deploy a Node.js app to multiple servers.

Restricting remote communication in Git

I have Git installed on our company's Linux server. All the developers work on the same server. Recently we had to move to Git, which is hosted on some other server. Before we create a Git repository we create SSH keys, then start ssh-agent, and finally add the private key using ssh-add.
My problem: I created a Git repository on the Linux machine, set up my keys and everything, and pushed to the remote Git server. But if some other developer also has his key added, he can likewise perform a git push on my local repository.
Is there any way I can restrict push by other developers on the same Linux machine?
If you want to prevent others from pushing to your personal development machine, set up a firewall. If you want to prevent people from pushing to the remote server, remove their keys, or add per-IP firewall rules (so that they can still use SSH). At least that's what I'd do, since it looks like git itself doesn't offer any access control facilities and leaves that to the OS/networking layer.
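If you go the per-IP firewall route, the rules might look something like this (the addresses are assumptions; adapt to your setup):

    # allow SSH only from a trusted workstation, drop it for everyone else
    iptables -A INPUT -p tcp --dport 22 -s 192.0.2.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP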
In any case, my opinion is that rather than setting up security facilities, you should trust your coworkers not to screw things up. After all, it's not some public repository -- it's a company, where screw-ups (intentional or not) should be dealt with accordingly.

Using Git with multiple servers

I have a set of servers with code bases on them; let's call them p1, p2, p3. I have a development server, d1, which I use to store the code. Each p server is different, with a different code base.
I'm trying to figure out how to manage the git repos correctly so that each of the "p" servers keeps the "d1" server up to date.
Here's what I did:
Created a git repo on p1 and made an initial commit.
Created a --bare clone of the repo and scp'd it to the d1 server.
Repeated this for all servers.
My d1 server now has a /git/ folder with subfolders p1, p2, p3; each of these has the normal contents: HEAD, branches, config, description, hooks, info, objects, refs.
I can clone these repos to another machine or folder and I get to see the actual files, which is exactly what I wanted (a sketch of the setup follows below).
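Spelled out as commands, the setup above is roughly (the source path is hypothetical):

    # on p1: create the repo and make the initial commit
    cd /path/to/codebase
    git init
    git add -A && git commit -m "initial commit"
    # bare-clone it and copy the clone to d1
    git clone --bare . /tmp/p1.git
    scp -r /tmp/p1.git d1:/git/p1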
OK, so here is my problem: how do I keep the p1 repo up to date when someone clones the d1 copy and commits to it?
Do I need to run git fetch on p1, or should I have people change p1 and then git push to d1?
You can implement mirroring with gitolite to keep a central server up to date with the latest code from the others.
From http://gitolite.com/gitolite/mirroring.html:
Mirroring is simple: you have one "master" server and one or more "slave" servers. The slaves get updates only from the master; to the rest of the world they are at best read-only.
In the diagrams on that page, each box (A, B, C, ...) is a repo. The master server for a repo is colored red, slaves are green. The user pushes to the repo on the master server (red), and the master server -- once the user's push succeeds -- then does a git push --mirror to the slaves. The arrows show this mirror push.
The first diagram shows what gitolite mirroring used to be like a long time ago (before v2.1, actually). There is exactly one master server; all the rest are slaves. Each slave mirrors all the repos that the master carries, no more and no less.
This is simple to understand and manage, and might actually be fine for many small sites. The mirrors are more "hot standby" than anything else.
But when you have 4000+ developers on 500 repos using 25 servers in 9 cities, that single server tends to become a wee bit stressed. Especially when you realise that many projects have highly localised development teams. For example, if most developers for a project are in city X, with perhaps a few in city Y, then having the master server in city Z is... suboptimal :-)
And so, for about 3 years now, gitolite has been able to do more. Here's a more complete description of what it can do:
Different masters and sets of slaves for different repos. This lets you do things like:
- Use the server closest to most of its developers as the master for that repo.
- Mirror a repo to only some of the servers.
- Have repos that are purely local to a server (not mirrored at all).
- Push to a slave on demand or via cron (helps deal with bandwidth or connectivity constraints).
All this is possible whether or not the gitolite-admin repo is mirrored -- that is, whether all servers have the exact same gitolite-admin repo or not.
Pushes to a slave can be transparently forwarded to the real master. Your developers need not worry about where a repo's master is -- they just write to their local mirror for all repos, even if their local mirror is only a slave for some.
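The underlying operation gitolite automates is just an ordinary mirror push; done by hand it looks like this (hostname and path are assumptions):

    # on the master, after a user's push succeeds:
    git push --mirror ssh://git@slave1/srv/repos/A.git

--mirror pushes all refs and deletes any refs on the slave that no longer exist on the master, so the slave ends up an exact copy.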

Git slow when cloning to Samba shares

We are deploying a new development platform.
We have a really complicated environment that we cannot reproduce on developers' computers, so people cannot clone the Git repository onto their own machines.
Instead, they clone the repository into a mapped network drive (a Samba share) that is the DocumentRoot of a per-developer website on our servers.
Each developer has their own share and DocumentRoot/website, so they cannot impact other people this way.
Developers have Linux or Windows as Operating system.
We are using a 1 Gbit/s connection, and Git is really slow compared to local use.
Our repository size is ~900 MB.
git status on the Samba share takes about 3 minutes, which is unusable.
We tried some Samba tuning, but it's still really slow.
Does anyone have an idea?
Thank you for your time.
Emmanuel.
I believe git status works by simply looking for changes in your repository: it examines all of the files and checks which ones have changed. When you execute this against a Samba share, or any other share, it has to do that inspection over the network connection.
I don't have any intimate knowledge of the git implementation, but it essentially boils down to:
Examine all files in the directory.
Repeat for every directory.
So instead of a handful of cheap local stat calls, git ends up doing a network round trip for every single file's metadata, and with a ~900 MB repository that's going to be slow even with a fast connection.
Have you considered the following workflow instead?
Have every developer clone to their local machine.
Do work on the local machine.
Push changes to their share when they need to deploy / test / debug.
This would avoid the use of git on the actual share and eliminate the problem; a sketch follows below.
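One way to wire that up (the paths are assumptions; the updateInstead setting needs Git 2.3+):

    # one-time, in the repository on the share: let pushes update the checked-out tree
    git -C /mnt/z/www-site config receive.denyCurrentBranch updateInstead

    # on the developer's machine: clone locally and work there
    git clone /mnt/z/www-site ~/work/site
    cd ~/work/site
    # ...edit and commit locally, then deploy to the share:
    git push origin master

With receive.denyCurrentBranch=updateInstead, a push to the share's current branch also updates its working tree, so the DocumentRoot reflects the pushed code.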
