A convenient way to update a project on a server - node.js

There is a project (node.js, although that's not important) that is developed on a local machine and periodically transferred to a server.
In principle, I could simply erase the project folder on the server each time and replace it with a new one uploaded from the local machine.
The problem is that some folders (specifically node_modules) do not need to be overwritten. So I have to manually create an archive that excludes the unnecessary folders, and on the server I have to erase everything except those folders before replacing the rest.
How can I automate this procedure?
(The local machine runs Windows; the server runs Linux.)
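One direct way to automate exactly this replace-except-some-folders step is rsync with exclude rules (a minimal sketch; it assumes rsync is available on the Windows side, e.g. via WSL or cwRsync, and the host and paths are hypothetical):

    # Mirror the local project to the server, skipping node_modules, and
    # deleting server-side files that no longer exist locally; excluded
    # directories are protected from deletion by default.
    rsync -az --delete --exclude node_modules/ \
        ./myproject/ user@example.com:/var/www/myproject/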

You can pull the changes directly from your repo.
I do it like this:
I have different branches for different environments, like dev, stage and production.
I commit the changes to the branch and pull that branch on the server.
This way, you don't need to commit unnecessary stuff (like node_modules, credentials, etc.) to your repo.
You can also easily automate this using CI tools.
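On the server, each deploy then boils down to a few commands (a sketch; the path, remote, and branch names are hypothetical):

    cd /var/www/myapp              # deploy path on the server
    git fetch origin
    git checkout production        # the branch for this environment
    git pull origin production
    npm install --production      # refresh dependencies when package.json changed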

Related

Good practices for pulling from git repo into production server

I have a DigitalOcean VPS with Ubuntu and a few Laravel projects. For a project's initial setup I do a git clone to create a folder with my application files from my online repository.
I do all development work on my local machine, where I have two branches (master and develop). I merge develop into my local master, then push master to my online repository.
Now, back on my production server, when I want to bring all the changes into production, I do a git pull from origin. So far this has resulted in git telling me to stash my changes; why is this?
What would be the best approach to pull changes into the production server? Keep in mind that my production server has no working directory per se; all I do on my VPS is either clone or push upgrades into production.
You can take a look at CI/CD (continuous integration / continuous delivery) systems. GitLab, for example, offers a free-to-use plan for small teams.
You can create a pipeline with a manual deploy step (you have to press a button after the code is merged into the master branch) and use whatever tool you like to deploy your code (scp, rsync, ftp, sftp, etc.).
And the biggest benefit is that you can have multiple intermediate steps (even for the working branches) where you run unit tests, which prevents you from deploying failing builds whenever you merge non-working code.
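A minimal .gitlab-ci.yml along those lines might look like this (a sketch only; the test script and the rsync target are hypothetical placeholders):

    stages:
      - test
      - deploy

    test:
      stage: test
      script:
        - ./run-tests.sh            # whatever your test suite is

    deploy_production:
      stage: deploy
      script:
        - rsync -az --delete --exclude .git/ ./ user@example.com:/var/www/myapp/
      only:
        - master
      when: manual                  # the "press a button" step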
For the first problem, do a git status on production to see which files git sees as changed or added, and consider adding them to your .gitignore file (which itself should be a part of your repo). Laravel generally has good defaults for these, but you might have added things or deviated from them in the process of upgrading Laravel.
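For example, typical entries for a Laravel app look like this (a sketch; adjust to whatever git status actually reports on your server):

    # .gitignore entries commonly needed for a Laravel app
    /vendor
    /node_modules
    /public/storage
    .env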
For the deployment, the best practice is to have something that is consistent, reproducible, loggable, and revertible. For this, I would recommend choosing a deployment utility. These usually do pretty much the same thing:
You define deployment parameters in code, which you can commit as a part of your repo (not passwords, of course, but things like the server name, deploy path, and deploy tasks).
You initiate a deploy directly from your local computer.
The script/utility SSHes into your target server and pulls the latest code from the remote git repo (authorized via an SSH key forwarded into the server) into a 'release' folder.
The script does any additional tasks you define (composer install, npm run prod, systemctl restart php-fpm, soft-linking shared files like .env, etc.).
The script soft-links the document root to your new 'release' folder, which results in an essentially zero-downtime deployment (see the sketch below). If any of the previous steps fail, or you find a bug in the latest release, you just soft-link to the previous release folder and your site still works.
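In shell terms, the core of such a deploy is roughly this (a sketch only; the paths, repo URL, and tasks are hypothetical stand-ins for what these utilities let you configure):

    # Create a fresh timestamped release directory and clone into it
    APP=/var/www/myapp
    RELEASE=$APP/releases/$(date +%Y%m%d%H%M%S)
    git clone --depth 1 git@example.com:me/myapp.git "$RELEASE"
    # Link shared files and run the defined tasks
    ln -s "$APP/shared/.env" "$RELEASE/.env"
    (cd "$RELEASE" && composer install --no-dev && npm run prod)
    # Atomically repoint the document root at the new release; rolling
    # back is just pointing it at the previous release directory
    ln -sfn "$RELEASE" "$APP/current"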
Here are some solutions you can check out that all do this sort of thing:
Laravel Envoyer: A 1st-party (paid) service that allows you to deploy via a web-based GUI.
Laravel Envoy: A 1st-party (free) package that allows you to connect to your prod server and script deployment tasks. It's very bare-bones in that you have to write all of the commands yourself, but some may prefer that.
Capistrano: A popular, tried-and-tested (free) Ruby-based deployment utility.
Deployer: The (free) PHP equivalent of Capistrano. It's easier to use, has a lot of built-in tasks (including a Laravel recipe), and doesn't require Ruby.
Using these utilities does not rule out CI/CD, if you want to go that route. You can use these tools to define the CD step in your pipeline while still doing other steps beforehand.

How to update repository with built project?

I'm trying to set up GitLab CI/CD for an old client-side project that makes use of Grunt (https://github.com/yeoman/generator-angular).
Up to now the deployment worked like this:
run '$ grunt build' locally, which built the project and created files in a 'dist' folder in the root of the project
commit changes
changes pulled onto production server
After creating the .gitlab-ci.yml and making a commit, the GitLab CI/CD job passes, but the files in the 'dist' folder in the repository are not updated. If I define an artifact, I get the changed files in the download. However, I would prefer the files in the 'dist' folder in the repository to be updated so we can carry on with the same workflow, which suits us. Is this achievable?
I don't think committing to your repo from inside a pipeline is a good idea. Version control history wouldn't be as clear, and some people trigger a pipeline automatically whenever the repo is pushed to, so that could set off a loop of pipelines.
Instead, you might reorganize your environment to use Docker; there are numerous reasons for using Docker in professional and development environments. To name just a few: it would enable you to save the freshly built project into a registry and reuse it whenever needed, at exactly the version you require and with the desired /dist inside, so that you can easily run it in multiple places, scale it, manage it, etc.
If you changed to Docker, you wouldn't actually have to do a thing to keep dist persistent; just push the image to the registry after the build is done.
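In the simplest case that is just two extra commands at the end of the build job (a sketch; the registry path is hypothetical and assumes a Dockerfile that copies dist into the image):

    # Build an image containing the freshly built /dist, then publish it
    docker build -t registry.gitlab.com/group/project:latest .
    docker push registry.gitlab.com/group/project:latest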
But to actually answer your question:
There is a feature request for the very problem you asked about that has been open for a long time: here. As GitLab members state, there is currently no safe and professional way to do it, although you can push changes back, as one of the GitLab members (Kamil Trzciński) suggested:
git push http://gitlab.com/group/project.git HEAD:my-branch
Just put it in the script section of your .gitlab-ci.yml file.
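For example (a sketch; PUSH_TOKEN is a hypothetical CI/CD variable holding a token with write access to the repo, and [ci skip] in the commit message keeps the push from triggering another pipeline):

    build:
      script:
        - grunt build
        - git config user.email "ci@example.com"
        - git config user.name "CI"
        - git add dist
        - git commit -m "Update dist [ci skip]" || true    # no-op when nothing changed
        - git push "https://ci:${PUSH_TOKEN}@gitlab.com/group/project.git" HEAD:my-branch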
There are more hacky methods presented there, but be sure to acknowledge the risks that come with them (pipelines become more error-prone, and if configured the wrong way they might, for example, publish confidential information or trigger an infinite loop of pipelines).
I hope you found this useful.

How do I package my node project files and deploy them to my server?

I have a non-open-source node.js web application project that I want to deploy to a production server and/or a staging server. Is there a standard way to do this, or some tool that does it? I want to package all the files needed and exclude the files that are not needed, like the .git folder, the tests, and other files like Gruntfile, package.json and so on.
I could of course manually package the files in a tar.gz file and send them to the correct server. But I was hoping to find a more complete and configurable tool that can do it for me.
This might not be exactly what you're asking for, but I like git for automated deployment.
You could have branches like staging and production, which are checked out on the remote server.
You can set up a git hook like post-receive to update those remotely every time you merge changes into those branches.
Here's a tutorial: http://wekeroad.com/2011/09/17/deploying-a-site-with-git-hooks
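The core of such a post-receive hook is small (a sketch; the paths and the branch name are hypothetical, and it assumes a bare repository on the server that you push to):

    #!/bin/sh
    # hooks/post-receive in the bare repo on the server: check out the
    # pushed production branch into the live directory
    while read oldrev newrev ref; do
      if [ "$ref" = "refs/heads/production" ]; then
        git --work-tree=/var/www/site --git-dir=/home/deploy/site.git checkout -f production
      fi
    done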

SVN Setup Of Existing Directory

I have been going through documentation and such and have SVN working, but I want to put it on an existing directory. I imported that directory, so do I rename/delete the non-SVN directory and then check out the SVN copy to the non-SVN directory's location? I am just trying to understand how to get it to start publishing to our website URL.
If so, is there any way to keep the current non-SVN directory and make it SVN-controlled, rather than importing and overwriting?
Thanks. I am trying to understand SVN, but I find a lot of the tutorials and such on the web confusing.
Yes, you have it exactly right. Once your code has been added to the repository, you can get rid of or rename your original code directory. Then check out the project from the repository into the same location as your previous code and continue working from there.
UPDATE
To make it so that your website is updated from the repository, you actually need two working directories, and a repository.
Repository: The repository stores the code and changesets, but isn't directly accessible as a file system. Keep a backup!
Working directory 1: You develop and test your code from a working directory checked out from the repository. Commit changes back to the repository.
Working directory 2: Rename the code directory on your webserver, and check out a copy of the code to your web server in its place. Technically it is now a working directory, since it contains the .svn metadata directories, though you won't usually make changes here.
Make changes to your code from your development working directory (1) and commit them back to the repository. When you are satisfied that they are working correctly and have been properly tested, do svn update on the web server's code copy (2) (or, if you're using TortoiseSVN on the web server, do an update). This will synchronize the server code with the current development version.
Subversion will not automatically push updates to your web server; you will need to pull them in with an update when you need to. It is possible to use what's called a "post-commit hook" to make Subversion execute a script whenever a commit is made, and that script could update or export code to your production web server. However, you would need to write the script, and it's kind of an advanced usage of Subversion. I would recommend trying out the method I described, with a working copy on the web server, to get accustomed to the workflow before trying anything more complicated.
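For reference, such a post-commit hook can be very small (a sketch; the working-copy path is hypothetical, and the user the repository server runs as needs write access to it):

    #!/bin/sh
    # hooks/post-commit inside the repository. Subversion passes the
    # repository path and new revision as arguments; hooks run with an
    # empty environment, so use absolute paths.
    REPOS="$1"
    REV="$2"
    /usr/bin/svn update /var/www/html --non-interactive -q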
Addendum: If you really want to do this (and I don't really recommend it unless you test really well), a very easy method would be to schedule a cron job that does svn update every couple of hours (or minutes) on your production site.
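For example (a hypothetical crontab entry on the web server):

    # Update the production working copy every 2 hours
    0 */2 * * * svn update /var/www/html --non-interactive -q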
Don't forget that if you do happen to modify your code directly on the web server, you must commit it back to the repository from there, and do an update on your development working copy.

Using hg repository as web site

This is somewhat related to my security question here. Is it a bad idea to use an hg/Mercurial repository for a live website? If so, why?
Furthermore, we have dev, test and production installations of our website, like dev.example.com, test.example.com and www.example.com. If it's a bad idea to use a repository for a live/production website, would it be OK to use an hg repository for the dev and test sites?
I'm also concerned about ease of deployment. We have technical and less technical co-workers who will be working with the site. The technical people (software engineers) won't have any problem working with the command line or TortoiseHg. I'm more concerned about the less technical people (web designers). They won't be comfortable working on the command line and may even find TortoiseHg daunting. These co-workers mostly upload .css files and images to the server. I'd like these files (at least the .css files) to be under version control, but I want this to be as transparent as possible for the non-technical team members.
What's the best way to achieve this?
Edit:
Our 'site' is actually a multi-site CMS setup with a main repository and several subrepositories. Mock-up of the repository structure:
/root [main repository containing core files and subrepositories]
/modules [modules subrepository]
/sites/global [subrepository for global .css and .php files]
/sites/site1 [site1 subrepository]
...
/sites/siteN [siteN subrepository]
Software engineers would work in the root, modules and sites/global repositories. Less technical people (web designers) would work only in the site1 ... siteN subrepositories.
Yes, it is a bad idea.
Do not have your repository as your website. It means that things checked in but not working will immediately be available. And it means that accidental checkins (they happen) will be reflected live as well (e.g. documents that don't belong there, etc.).
I actually address this concept (source control as deployment) with a tool I've written; a few other companies are addressing this topic now as well, so you'll see it more. Mine is for SVN at the moment, so it's not particularly relevant; I mention it only to show that I've considered this previously. (Not with a repository, though, but with a working copy; in that scenario the answer is the same: it's better to have a non-versioned 'free' area as the website directory and automate, via a user action, the copying of the versioned data into that directory.)
Many folks keep their sites in repositories, and so long as you don't have people live-editing the live site, you're fine. Have a staging/dev area where your non-revision-control folks make their changes, and then have someone more RCS-friendly do the commit-pull-merge-push cycle periodically.
So long as the staging-area -> production-repo push is the conscious action of a judging human, you're fine. You can even put a hook into the production clone that automatically does an 'hg update' of the working directory within that production clone, so that a 'push' is all it takes to deploy.
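That hook is essentially a one-liner in the production clone's .hg/hgrc (a sketch of the standard pattern):

    [hooks]
    # after changesets arrive via push, update the working directory
    changegroup = hg update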
That said, I think you're underestimating either your web team or TortoiseHg; they can get this.
Personally (I'm a team of one), I quite like the idea of using source control as a live website, more so with hg than with svn.
The way I see it, you can load an entire site (add/remove files) with a single command,
which is much easier than 'ftp/ssh this, delete that', etc.
If you are using Apache (and probably IIS as well), you can make a simple .htaccess file that will block all .hg files (or .svn if you are using svn).
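For example (a sketch for Apache; RedirectMatch from mod_alias answers 404 for anything under the VCS metadata directories):

    # .htaccess: hide version-control metadata from web clients
    RedirectMatch 404 /\.(hg|svn|git)(/|$)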
My preferred structure is:
the development site is on my local machine, running directly out of a repository (no security is really required here; do what you like, commit as required)
the staging/test machine is a separate box or VM running a recent copy of the live database
(I have a script to push committed changes to the staging server and run tests; see the sketch below)
the live machine
(open an ssh connection, push changes to the live server, test again; this can all be scripted reasonably easily, google for examples)
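The staging push script can be as simple as this (a sketch; the host, path, and test command are hypothetical):

    # Push committed changes to the staging clone, update its working
    # directory, and run the test suite there
    hg push ssh://deploy@staging.example.com//srv/www/site
    ssh deploy@staging.example.com 'cd /srv/www/site && hg update && ./run-tests.sh'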
Because of the push/pull nature of hg, you can commit changes and test without the danger of pushing a broken build to the live website. Like you say in your comments, only specific people should have permission to push a version to the live site. (If it fails, you should easily be able to revert to the previous version via source control.)
Why not have a repo also be an active web server (for a dev or test/QA environment, anyway)?
Here's what I am trying to implement:
Developers have local test environments in which they can build and test their code
Developers make a clone of the dev environment on their local dev machine
Developers commit as often as they want to their local repo
When a chunk of work is done and tested, the developer pushes the working changesets to the dev repo
Changes would be merged and tested on Dev, then pushed to Test/QA, and so on.
BTW, we're using Mercurial. I believe this model would only work with a distributed source code management tool.
