Say I have two web servers, one local development and one live.
Under SVN I would checkout the website files to my local webserver's public_html directory and also to the live webserver's public_html directory. I would then work on the files directly on the local server and commit any changes to the central repository.
When I'm ready for those changes to go live on the live server, I would SSH in and perform an SVN update.
Essentially I have two working copies, one on live and one locally, though other users may also have working copies on their local machines. But there will only ever be one working copy on the live server. The reason for this is so that we can just perform SVN update on live server every time we want changes to be published.
How can a similar workflow be accomplished using Git?
To model your current workflow almost exactly, do the following (a command sketch follows the list):
Set up a git repo.
Clone the repo on the server and locally.
Work locally
git push to the git repo
ssh to server
git pull.
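A minimal sketch of those steps, assuming the central repo is a bare repository reachable over SSH (the hostnames and paths below are placeholders):

# once, on the machine hosting the central repo
git init --bare /srv/git/site.git

# locally: clone, work, push
git clone ssh://you@gitserver/srv/git/site.git
cd site
git add -A && git commit -m "Describe the change"
git push origin master

# on the live server: clone once into public_html, then publish with a pull
git clone ssh://you@gitserver/srv/git/site.git /var/www/public_html
cd /var/www/public_html && git pull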
Another way to do it would be to set up a "production" branch in git, have a cron job that periodically pulls this branch on the server, and then just merge and push to the "production" branch any time you want to publish your changes. It sounds like you need a more concrete branching strategy.
See: the Git Flow branching model and the git flow CLI tool.
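For example, the git flow CLI wraps the usual branch operations (a sketch; the branch names are the tool's defaults):

git flow init                       # sets up master/develop and the branch-name prefixes
git flow feature start my-feature   # branches off develop
git flow feature finish my-feature  # merges back into develop and deletes the branch
git flow release start 1.0.0        # cuts a release branch from develop
git flow release finish 1.0.0       # merges into master, tags it, and merges back into develop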
Good luck! This is a very solvable problem with git.
You might find this useful: http://joemaller.com/990/a-web-focused-git-workflow/
In your local working copy:
git push ssh://you@yourserver/path/to/your/wc
will push the committed changes in your local repository to yourserver.
Having a setup that triggers an automatic pull, as leonbloy and codemac suggested, may seem like a good idea at first, but it tends to be very fragile. I suggest a different alternative:
http://toroid.org/ams/git-website-howto
I currently work on solutions/projects within a single Git repository in Visual Studio. My commits go to a local folder on the Visual Studio server, and then I use the command 'git push origin master' (after changing directory to my local folder/repository) to push the commits to a GitLab instance in my company's corporate space. The purpose of this is less about using branches and collaborative development (I am the only person who does any work on this), and more about having a way to roll back changes and keep a master copy off the server.
I now want a fresh copy of this Git repository so I can use it as a new baseline for an application migration. I will still continue to work on the existing repository too.
What is the best way to make a copy of the existing repository that I can treat as a totally separate thing, without accidentally messing up my existing config on the server? Should I do the clone from GitLab? Or clone locally and then push that up to the new space in my GitLab? Honestly, I'm a bit confused at this point about the proper model for this stuff.
Sounds like you'd like to fork the project: keep the existing repo and start a new, separate repo based on the old one.
Browse to your project in Gitlab
On the main repo screen, click "fork" in the top right
Select a new organisation, or keep the same one, as you'd like
The project is now forked!
From here, you can clone the fork to your local machine in a new folder. These are now separate projects, and code updates can be added, committed and pushed to the separate repos.
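From the command line, that last step looks something like this (the GitLab host, group, and project names are placeholders):

git clone git@gitlab.example.com:your-group/your-fork.git migration-baseline
cd migration-baseline
git remote -v    # origin now points at the fork, not the original project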
Right now I use scp-action to copy some files to one server.
How secure is this method?
What are the alternatives to using GitHub Actions? I was thinking of running a custom scp action and setting up a local runner on my side.
One other alternative is to configure a bare repository on your server and add it as a second remote in your local repository.
Now every time you want to deploy code to your server, you push to this remote. You then create a git hook on your server that fires post-receive (i.e. after your push has been received) and automatically executes a script that, for example, restarts a service.
Read more here
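A sketch of that setup, with placeholder hostnames and paths:

# on the server: create a bare repository to push to
ssh you@yourserver 'git init --bare /home/you/site.git'

# locally: add it as a second remote alongside origin, and deploy by pushing
git remote add production ssh://you@yourserver/home/you/site.git
git push production master

# the deploy logic then lives in an executable hook on the server, e.g.
# /home/you/site.git/hooks/post-receive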
For me, it is hard to choose between these two alternatives because I have some unanswered questions:
For GitHub Actions: how secure is the SSH key being run from a GitHub runner? And given that my codebase is huge, isn't it a bit overkill to scp all my files after a hotfix commit where I changed only one or two files?
For a git bare repo: would the size of the .git folder be a problem? And how do I secure my server so that it does not serve the .git folder?
I am looking for a way to easily deploy a nodejs app via a command line script.
I found one solution:
https://github.com/Skookum/nimbus
I also heard that the whole thing can be done with git and post commit hooks.
What would people recommend?
Edit: I am deploying it to my own box where I have root.
You have two options on a self-hosted setup.
Do it all yourself
This entails git post-receive hooks. In short, you set up your production box to host a copy of your repository, and on your local machine you set up a remote; let's call the remote production.
Now when you run git push production master on your local machine, the updates are sent and the server executes its post-receive hook, which runs whatever you wish.
Actions you may want are: checking out/writing the data in the repo to files/folders (the git repo on the server is stored as a bare repo); restarting your webserver; notifying you that there's been a deployment etc.
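A minimal post-receive hook along those lines; the paths, the service name, and the use of systemd are assumptions, so adapt them to your box:

#!/bin/sh
# hooks/post-receive in the bare repo on the server
GIT_WORK_TREE=/var/www/myapp git checkout -f master   # write the pushed code into the deploy directory
cd /var/www/myapp && npm install --production         # refresh dependencies
sudo systemctl restart myapp                          # restart the node service (assumes a systemd unit)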
I'd suggest reading up on it at http://git-scm.com/book/en/Customizing-Git-Git-Hooks and taking a look at a few tutorials; this one (http://ryanflorence.com/deploying-websites-with-a-tiny-git-hook/) looks pretty legit.
Use a service to manage it for you
http://www.deployhq.com/ is the only one that springs to mind, but I'm sure there are others.
Good Luck and Happy Hacking :)
There is a tool called shipit.js (https://github.com/shipitjs/shipit) which allows you to perform different deployment tasks like:
moving code from the repo to the server
restarting server
installing node_modules
etc.
You create a config file and then run npx shipit deploy, and all the tasks you specify are performed. In case of failure, it has a rollback mechanism.
There is a nice screencast about it: https://youtu.be/8PpBySjkWEM.
Just trying to write my first git hook. I have a remote origin, and when I push to it I would like a post-receive hook to fire which will SSH into my live server and do a git pull. Can this be done, and is it a good approach?
OK, I've got the hook firing and the live server is doing the git pull, but it says it's already up to date. Any ideas?
Yes, that can be done, and yes, in my opinion, it is good practice. Here is our use case:
I work in a small development group that maintains a few sites for different groups of people.
Each site has several environments: beta, which is a sandbox for all developers; staging, where we showcase changes to the content owners before going live; training, which is what our training dude uses to train new content managers; and live, where everyone goes to consume content.
We control deployment to all these environments via post-receive hooks based on branch names. We may have a 'hot fix' branch that doesn't get deployed anywhere, but when we merge it with, say, the 'beta' branch, it then gets auto-deployed to the beta server so we can test how our code interacts with the code of the other developers.
There are many ways you can do this. What we do is set up SSH keys so that the git server can SSH into your web server and do a git pull. This means you have to add the public key for git@gitserver to the authorized_keys file of git@webserver, and vice versa. Then, in the post-receive hook, you parse out the branch and write a 'case' statement for it, like so:
#!/bin/sh
# post-receive hook: each line on stdin is "<old-rev> <new-rev> <ref-name>"
read line

# keep the stock e-mail notification behaviour
echo "$line" | . /usr/share/doc/git-core/contrib/hooks/post-receive-email

# strip everything up to the last "/" to get the branch name from the ref
BRANCH=`echo $line | sed 's/.*\///g'`

case $BRANCH in
  "beta" )
    ssh git@beta "cd /var/www/your-web-folder-here; git pull"
    ;;
esac
Hope that helps.
That can certainly be done, but it isn't the best approach. When you use git pull, git will fetch and then merge into your working copy. If that merge can be guaranteed to always be a fast-forward, that may be fine, but otherwise you might end up with conflicts to resolve in the deployed code on your live server, which most likely will break it. Another problem with deploying with pull is that you can't simply move back to an earlier commit in the history, since the pull will just tell you that your branch is already up-to-date.
Furthermore, if you're pulling into a non-bare repository on your live server, you'll need to take steps to prevent data in your .git directory from being publicly accessible.
A similar but safer approach is to deploy via a hook in a bare repository on the live server which will use git checkout -f but with the working directory set to the directory that your code should be deployed to. You can find a step-by-step guide to setting up such a system here.
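The core of that approach is a post-receive hook in the bare repository that writes a named commit into the deploy directory, which also keeps the .git data out of the web root; and because you name the commit explicitly, rolling back is just checking out an older ref (the paths and tag below are placeholders):

# hooks/post-receive in the bare repository on the live server
GIT_WORK_TREE=/var/www/site git checkout -f master

# rolling back later from a shell on the server: point at an older commit or tag
git --git-dir=/srv/git/site.git --work-tree=/var/www/site checkout -f v1.4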
I have been going through documentation and such and have SVN working, but I want to put it on an existing directory. I imported that directory, so do I rename/delete the non-SVN directory and then check out from SVN to the non-SVN directory's location? I am just trying to understand how to get it to start posting to our website URL.
If so, is there any way to keep the current non-SVN directory and make it SVN, rather than import and overwrite?
Thanks, I am trying to understand SVN, but find a lot of the tutorials and such on the web to be confusing.
Yes, you have it exactly. Once your code has been added to the repository, you can get rid of or rename your original code directory. Then check out the project from the repository into the same location as your previous code and continue working from there.
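In commands, with a placeholder repository URL and paths, that looks roughly like:

mv public_html public_html.old                                # keep the original as a backup
svn checkout http://svnserver/repos/site/trunk public_html    # fresh working copy in its place
# once you've confirmed the working copy is complete, remove public_html.old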
UPDATE
To make it so that your website is updated from the repository, you actually need two working directories, and a repository.
Repository: The repository stores the code and changesets, but isn't directly accessible as a file system. Keep a backup!
Working directory 1: You develop and test your code from a working directory checked out from the repository. Commit changes back to the repository.
Working directory 2: Rename the code directory on your web server and check out a copy of the code to the web server in its place. Technically it is now a working directory, since it contains the .svn metadata directories, though you won't usually make changes here.
Make changes to your code from your development working directory (1) and commit them back to the repository. When you are satisfied that they are working correctly and have been properly tested, on the web server's code copy (2) do svn update (or if you're using Tortoise SVN on the web server, do an update). This will synchronize the server code with the current development version.
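Day to day, that cycle is just (hostnames and paths are placeholders):

# in your development working copy
svn commit -m "Describe the change"

# on the web server, or over SSH from your machine
ssh you@webserver 'cd /var/www/public_html && svn update'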
Subversion will not automatically push updates to your web server. You will need to pull them in with an update when you need to. It is possible to use what's called a "post-commit hook" to make Subversion execute a script when commits are made, and that script could update or export code to your production web server. However, you would need to write the script, and it's kind of an advanced usage of Subversion. I would recommend trying out the method I described, with a working copy on the web server, to get accustomed to the workflow before trying anything more complicated.
Addendum: If you really want to do this (and I don't really recommend it unless you test really well), a very easy method would be to schedule a cron job that runs svn update every couple of hours (or minutes) on your production site.
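If you do go that route, the cron entry can be a one-liner (the path is a placeholder; --non-interactive keeps it from hanging on prompts):

# run every hour, at minute 0
0 * * * * cd /var/www/public_html && svn update --non-interactive -q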
Don't forget that if you do happen to modify your code directly on the web server, you must commit it back to the repository from there, and do an update on your development working copy.