How do I create a Git repo out of dev and live versions of an existing site on my webserver? - linux

Some info:
Running Apache 2.2.3 on CentOS 5.6 64bit (dunno if Apache version is relevant here)
Git is installed
It's a VPS, so I have all the access I need
I have an existing website (located at /home/username/public_html/) and a dev version of the site (at /home/username/dev/) both on the same machine.
I work directly on the dev version (actually working local, but using Sublime Text 2's SFTP plugin to automatically upload on save, so for all intents and purposes, I can be considered to be working directly on the files at /home/username/dev/).
What I would like to do is use Git to push changes from dev to live. Currently I am having to keep track of which files are edited and then FTP or copy them over manually. I want to be able to revert changes if something doesn't go right, things like that.
However, I have never used Git (or any kind of version control) before. I've been googling Git tutorials, but everything I have found assumes something different from my actual setup. The main thing is that most tutorials assume brand new projects. I am terrified of messing up my live site by trying to set up a repo for it.
Anyhow, my question: how can I use my existing folders to create a repo where I can push from /home/username/dev/ to /home/username/public_html/ (or pull, vice versa)?

I ended up following this tutorial and it does exactly what I wanted.

Assuming that your live version came from some version of your dev version you could get everything into a git repository with
make a copy of your live code
$ cp -axf /home/username/public_html/ git_repo
putting everything in live under git version control
$ cd git_repo
$ git init
$ git add .
$ git commit -m 'Initial import from live version'
now copy over all files from the dev version and put them on a branch dev in git
$ rm -r *
$ cp -a /home/username/dev/* .
$ git checkout -b dev
$ git add .
$ git commit -m 'Initial import from dev version'
After that your live code will be on the master branch in the repository and the dev code in branch dev.
If, when you are done, you always move everything from dev to live, you can achieve this in git by merging your dev branch into master (which holds your live code).
Your live version could be a clone of that repository from the master branch, and deploying would be just a
$ git pull
there.
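Put together, a deployment round might look like this (a sketch, assuming the branch names and paths used above):
merge the finished dev work into master when it is ready to go live
$ git checkout master
$ git merge dev
then, in the live clone, deploy by pulling
$ cd /home/username/public_html/
$ git pull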

Related

Git Update Files on Repo Change

I have a VPS running a Node.js / React app.
I want to update the files in the vps each time I push data to the git(hub).
I found out, using this answer, that I can add some hooks in git, executing commands on "post-receive".
What I didn't quite understand:
Why did he init another git? Couldn't he have done this in the .git directory and created the hooks/post-receive file?
Why git checkout -f? If the goal is to update local files, so nodemon / create-react-app restarts the server / app, why not execute a git pull instead?
Is there a better way of doing this?
In the recommended answer there, nobody is using GitHub and there is no other Git repository yet. So the answer to your question:
Couldn't he have done this in the .git directory and create the hooks/post-receive file?
is: No, there was no .git directory in the first place. The target machine had nothing at all, no Git repository, no working tree, etc. The git init --bare created the Git repository (the ".git directory").
The git checkout -f is a poor-man's / low-quality implementation of push to deploy. A receiving repository is normally "bare", and this one is no exception.
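For illustration, a minimal post-receive hook of the kind being discussed might look like this (the deployment path and branch name are assumptions):
#!/bin/sh
# hooks/post-receive in the bare repository on the server, made executable
GIT_WORK_TREE=/var/www/myapp git checkout -f master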
why not execute a git pull instead?
That would require creating a third Git repository, which would have been an option.
"Better" is in the eye of the beholder. There are many ways of doing this, each with its own pluses and minuses. See also Deploy a project using Git push, which notes that since Git 2.3, receive.denyCurrentBranch = updateInstead is available; it was not available prior to 2015 (and in 2015, many people had older versions of Git installed).
Note further that if you're using GitHub as a hosting system, this changes a number of variables. The questions and answers you and I have linked here are aimed at those not using GitHub.

repo init: stop always checking for the latest repo

Is it possible to stop repo from verifying/downloading a newer version of repo from the internet? For example:
test $ repo init -u git#1.1.1.1/test/iot_manifest.git
Get https://gerrit.googlesource.com/git-repo/clone.bundle
Get https://gerrit.googlesource.com/git-repo
remote: Counting objects: 1, done
remote: Finding sources: 100% (36/36)
...
You can't. The repo installed on your computer (the one in your $PATH) is not the full version of repo; it is only a launcher.
From the repo documentation on source.android.com:
Repo comes in two parts: One is a launcher script you install, and it communicates with the second part, the full Repo tool included in a source code checkout.
When you run repo init for the first time, it gets the full repo and stores it in the .repo/repo directory. Every time you run repo init again in a brand new repository, the full repo will be downloaded again into .repo/repo.
One thing, though: you can stop fetching the clone.bundle with repo init --no-clone-bundle.
Get repo from your own computer (without internet)
You can use a local version of repo; you need internet access at least once to get the git-repo code. After that you can use this locally stored version in place of the remote one on Google's servers.
cd workspace
git clone https://gerrit.googlesource.com/git-repo
mkdir repo_init_no_internet && cd repo_init_no_internet
repo init --repo-url=/home/<user>/workspace/git-repo
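Combining this with the flag above, a full invocation might look like the following sketch (the manifest URL is a placeholder for your own):
cd repo_init_no_internet
repo init -u <manifest-url> --repo-url=/home/<user>/workspace/git-repo --no-clone-bundle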

What is wrong with my git 1.8.4.2-1?

I have an old Synology DS-106j server where I tried to install git using ipkg command. The installation went smoothly, but git failed to work correctly. I am currently learning how to use git, so I don't know if it is a bug from git with the version I am using or something else is wrong.
What I did was create a new local repository with a specified name, add a new file, commit it, and got an error:
NAS_SERVER> git init Test
Initialized empty Git repository in /root/Test/.git/
NAS_SERVER> ls
Packages.gz git_1.8.4.2-1_powerpc.ipk
Test
NAS_SERVER> cd Test
NAS_SERVER> git status
# On branch master
#
# Initial commit
#
nothing to commit (create/copy files and use "git add" to track)
NAS_SERVER> touch Test.cs
NAS_SERVER> ls
Test.cs
NAS_SERVER> git add *
NAS_SERVER> git status
# On branch master
#
# Initial commit
#
# Changes to be committed:
# (use "git rm --cached <file>..." to unstage)
#
# new file: Test.cs
#
NAS_SERVER> git commit -m "Test"
fatal: 57e2b8c52efba71d84c56bf6f37581686b9061a3 is not a valid object
I thought... maybe I did something wrong, so I used git on Windows and tried a push. Still an error. I transferred the whole repository to the server and checked the status. It seemed fine. I tried a commit. Still the same result. What's worse is that I can't update the git version without having to compile it, which I don't even know how to do. Any suggestions as to what might be wrong?
If your goal is to push into a git repo located on the Synology disk(s) for backup purposes, I'd recommend a different approach which avoids having to install a rather old git version on the Synology box itself (which could lead to problems if/when using a newer git version on the Windows machine).
Export a Samba share from the Synology, mount it on Windows and use the Windows git to create the backup repo (maybe even a bare repo, possibly group-shared if you plan to share work with other people). Then push from your working repo into this backup repo, all on the Windows box. In this scenario the Synology box doesn't need git installed; it just serves files (i.e. its original job).
I'm using such a setup, but with a Linux machine instead of a Windows one and with the bare repo on the Synology disks exported via NFS instead of Samba.
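As a sketch of that setup, assuming the Synology share is mounted as drive Z: on the Windows machine (drive letter and paths are assumptions):
# on the Windows machine, create the bare backup repository on the mounted share
git init --bare Z:/git/myproject.git
# in your working repository, add it as a remote and push everything to it
git remote add backup Z:/git/myproject.git
git push backup --all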

Export a repository from a Gitlab server to another Gitlab server

Just installed a fresh new GitLab 6.8 on a brand new high-performance server.
Before resigning myself to losing my repositories' history (comments, issues, etc.), does anyone know of a way to export repository data from one GitLab server to another GitLab server?
I just failed to find anything in the documentation about exporting/migrating the whole project's data (not just the git repository and its wiki).
For GitLab versions >= 8.9 (released in June 2016) you can use the built-in export and import project feature.
Please note that for existing installations of GitLab, this option has to be enabled in the application settings (URL: /admin/application_settings) under 'Import sources'. You have to be a GitLab admin user to enable and use the import functionality.
Here is the feature complete documentation: https://gitlab.com/help/user/project/settings/import_export.md
I have actually done this recently; we were upgrading our instance of GitLab and needed to save and import repositories into the new installation.
First, create a bundle of the checked-out repository. For example, say you checked out a repository we will call myrepository.
To check out the repository, use git clone (let's assume your repository is under the root account and the IP address is 192.168.1.1):
git clone http://192.168.1.1/root/myrepository.git (or match your environment)
Now this step is somewhat important; you need to change into the working directory that has the .git folder of your checked out repository.
cd myrepository
Next, you create a bundle file:
git bundle create myrepository.bundle --all
Import the bundle file into the new instance of GitLab.
Create a new 'myrepository' on the GitLab GUI interface.
Clone the empty repository; let's say this new GitLab has the IP address 192.168.1.2:
git clone http://192.168.1.2/root/myrepository.git (or match your environment)
You will get warnings that you cloned an empty repository. This is normal.
Change into the working directory of your checked out repository and do a git pull:
cd myrepository
git pull /path/to/myrepository.bundle
This will pull the repository into your clone. Next you can do a git add, git commit and git push.
This should work assuming you have the GitLab server settings set up correctly; you may have issues such as needing to add a client_max_body_size parameter in your nginx.conf file and also a 'git config --global http.postBuffer' to push large files.
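For reference, those two tweaks look roughly like this (the values are assumptions to adjust to your repository size):
# in nginx.conf, inside the server block that fronts GitLab
client_max_body_size 100m;
# on the machine you push from
git config --global http.postBuffer 524288000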
Another way to do this is to make patch files of each commit and then deploy them:
This involves doing a 'git format-patch -C 0badil..68elid -o patch_directory_path', referencing the range of all your commits and having them written to an output directory; this should give you one patch file per commit. Next, you would git clone the new empty repository, change into the working directory of the clone, and apply the patches to the new repository using 'git am patch_directory_path'.
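Spelled out as commands, that alternative looks roughly like this (the commit range and directory are placeholders):
# in the old repository: one patch file per commit in the range
git format-patch <first-commit>..<last-commit> -o /tmp/patches
# in a clone of the new, empty repository:
git am /tmp/patches/*.patch
git push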
For GitLab versions < 8.9, without built-in export/import feature, I recommend a great tool from Marcus Chmelar, gitlab-migrator. I used it successfully many times with older GitLab versions so you should too. Just be aware of its limitations.
For the repos themselves, you can use git bundle: that will generate one file, which is easy to copy around.
(as I described in "Backup a Local Git Repository")
But another way is simply to git clone --mirror your repos from the first server on a local workstation, and git push --mirror to the new server.
This is what GitHub details in its help page "Duplicating a repository".
In both cases, you need first to declare those repos on the new GitLab server, in order for them to be initialized, and ready to receive commits.
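A minimal sketch of the mirror approach, reusing the example addresses from the answer above (adjust to your environment):
git clone --mirror http://192.168.1.1/root/myrepository.git
cd myrepository.git
git push --mirror http://192.168.1.2/root/myrepository.git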
But for the rest... not easily. There is a feature request pending:
(Update August 2016, 2 years later: GitLab 8.9 has that feature implemented)
(for GitLab versions older than 8.9, see and upvote Greg Dubicki's answer)
I agree that issues are the main thing to make exportable first.
They are stored in the database. Storing them in git is not an option. Maybe export them as a formatted file (SQL, YAML or something else).
This blog post illustrates the export of a MySQL database.
Use mysqldump to create a dump of the old database, then create a new database on the new server and import this.
On old:
mysqldump gitlab | gzip > gitlab.sql.gz
On new:
gunzip < gitlab.sql.gz | mysql gitlab
Run the db migrate command to make sure the schema is updated to the latest version.
sudo -u gitlab -H bundle exec rake db:migrate RAILS_ENV=production

Git - Syncing a Github repo with a local one?

First off, forgive me if this is a duplicate question. I don't know anything but the basic terminology, and it's difficult to find an answer just using laymen's terms.
I made a project, and I made a repository on Github. I've been able to work with that and upload stuff to it for some time, on Windows. The Github Windows application is nice, but I wish there was a GUI for the Linux git.
I want to be able to download the source for this project, and be able to edit it on my Linux machine, and be able to do git commit -m 'durrhurr' and have it upload it to the master repository.
Forgive me if you've already done most of this:
The first step is to set up your SSH keys if you are trying to go through SSH; if you are going through HTTPS you can skip this step. Detailed instructions are provided at https://help.github.com/articles/generating-ssh-keys
The next step is to make a local clone of the repository. Using the command line it will be git clone <url>. You should be able to find the URL on your GitHub page.
After that you should be able to commit and push over the command line using git commit -am "commit message" and git push
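Put together, a first session might look like this (the repository URL and commit message are placeholders):
git clone git@github.com:username/project.git
cd project
# edit files, then commit the tracked changes and push them
git commit -am "describe your change"
git push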
You can use SmartGit for a GUI for git on Linux: http://www.syntevo.com/smartgit/index.html
But learning git first on the command line is generally a good idea:
Below are some basic examples assuming you are only working from the master branch:
Example for starting a local repo based on what you have from github:
git clone https://github.com/sampson-chen/sack.git
To see the status of the repo, do:
git status
Example for syncing your local repo to more recent changes on github:
git pull
Example for adding new or modified files to a "stage" for commit
git add /path/file1 /path/file2
Think of the stage as the files that you explicitly tell git to keep track of for revision control. git will see all the files in the repo (and changes to tracked files), but it will only do work on the files that you add to a stage to be committed.
Example for committing the files in your "stage"
git commit
Example for pushing your local repo (whatever you have committed to your local repo) to github
git push
What you need to do is clone your git repository. From the terminal, cd to the directory you want the project in and do:
git clone https://github.com/[username]/[repository].git
Remember not to use sudo as you will mess up the remote permissions.
You then need to commit any changes locally, i.e. your git commit -m "message", and then you can do:
git push
This will update the remote repository.
Lastly if you need to update your local project cd to the required directory and then:
git pull
To start working on the project in Linux, clone the repo to the Linux machine. Add your SSH public key to GitHub. Add your username and email to the git config.
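For example (the name and email are placeholders):
git config --global user.name "Your Name"
git config --global user.email "you@example.com"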
For GUI you can use gitg.
PS: Get used to the git CLI; it is worth spending time on it.