Angular 4 Production Deployment Workflow - node.js

I had a lengthy discussion with a colleague about how we should deploy our Angular 4 app to our production server.
I would like practical advice and guidance on this issue from the community, if possible.
Premise 1
At the production server:
git pull
npm install
{set up production configuration}
ng build --prod --aot
- Build and compilation happen on the production server.
- The production server's hardware specs need to support the build process.
- Additional space is required on the hosting server to house node_modules.
- The git repo's master branch does not contain compiled code, and is therefore a "clean" source repo.
Premise 2
At the production server:
git pull
- Building and compiling the production code on the local development workstation will be faster.
- The git repo's master branch keeps a snapshot of the compiled code for deployment.
- The production server can remain a 128MB-RAM machine with limited space, since it only has to serve HTML, JS, and CSS.
- Deployment to another server, when required for recovery or scaling, is faster, since it is only a git pull.

The best way to do it is to build and compile on the local development workstation, and deploy only the output of the build.
The git repo's master branch needs to contain the source code, not just the compiled and built output.
You can deploy to the production server using a method other than git pull, but if you insist on using it, you can init a new repo in the /build output folder and pull that repo to the production server.
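A minimal sketch of that flow, assuming Angular CLI's default dist/ output folder and a hypothetical separate deploy repo (the remote URL is a placeholder):

```sh
# on the development workstation
git pull && npm install
ng build --prod --aot                  # compiled output lands in dist/

# publish only the build output to a separate deploy repo
cd dist
git init
git add -A
git commit -m "release $(date +%Y%m%d%H%M)"
git push --force git@example.com:app-deploy.git master

# on the production server, deployment is then just:
git pull origin master
```

Since dist/ is regenerated on every build, the deploy repo carries release snapshots rather than meaningful history, hence the force push.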

If you really can't afford a build farm yet, and if it enables testing or other activities, then sure: try the workflow you describe and see for yourself, but it's definitely not a good long-term practice.
If you're eventually going to use CI in your workflow, I would suggest just starting to set it up instead of wasting time and money on something temporary. Moreover, five minutes isn't a big deal, trust me.
As a side note: if you had spent that time trying your suggestion instead of writing your SO question and debating with your colleague, you probably would have figured out the answer yourself.

Related

What workflow should I use to deploy a NodeJS app to a fixed server?

I work at a tiny company where deployment is mainly done by pulling master to a production server and running several scripts. We use PM2, which has some simple deployment features, but I'm not sure how to arrange the moving parts to make everyone happy.
What I want to accomplish is having one fire-and-forget command to run on my dev machine that will result in everything being where it's supposed to be. What my boss wants to accomplish is to at no point have to do any lengthy step on the server to minimize downtime - so no building and no NPM install. To this end he wants builds and node_modules in Git and I think that's an abomination before the gods, so I'm trying to figure out how to avoid those.
Things that are a no-go for now: building and deploying Docker images, CI/CD. We don't have the infrastructure set up for those, and I'm more interested in using tools we already have and getting rid of human intervention in those. (Their popularity unfortunately also makes it hard to research best practices for more legacy environments.)
I'm not very familiar with PM2 and most of my experience is in environments where deployment was somebody else's job, so what I'm looking for is an outline of what gets done to what where provided by somebody that actually knows what they're talking about.
My current rough idea is:
Switch the project to Yarn 2 and use its zero-install capability to have dependencies in Git, but sane.
In a Docker container on the dev machine, do a clean checkout, install, and build. (This is to get rid of "works on my machine" issues and avoid inadvertently checking in macOS binaries.)
Push this to a release branch - these are the only ones where build outputs and such are allowed.
Then use PM2 to pull this specific branch and reload on the target machine.
Is this something that looks workable? Am I missing something? Is it possible to somehow avoid the release branches?
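For the PM2 piece specifically, its built-in deploy feature already covers "pull a specific branch and reload on the target machine". A minimal sketch, assuming a hypothetical ecosystem.config.js whose deploy.production section points at your repo, a path on the server, and ref: 'origin/release', with a post-deploy command (with Yarn 2 zero-install that can be as little as pm2 reload ecosystem.config.js):

```sh
# one-time: create the releases/current/shared layout on the target machine
pm2 deploy ecosystem.config.js production setup

# fire-and-forget from the dev machine: fetch the release branch,
# check it out on the server, and run the post-deploy hook
pm2 deploy ecosystem.config.js production

# if a release turns out to be bad, step back to the previous one
pm2 deploy ecosystem.config.js production revert 1
```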

Good practices for pulling from git repo into production server

I have a DigitalOcean VPS with Ubuntu and a few Laravel projects. For each project's initial setup I do a git clone to create a folder with my application files from my online repository.
I do all development work on my local machine, where I have two branches (master and develop); I merge develop into my local master, then push master to my online repository.
Now, back on my production server, when I want to bring the changes into production I do a git pull from origin. So far this has resulted in git telling me to stash my changes. Why is this?
What would be the best approach to pull changes into the production server? Keep in mind that my production server has no working directory per se; all I do on my VPS is either clone or pull upgrades into production.
You can take a look at CI/CD (continuous integration / continuous delivery) systems. GitLab, for example, offers a free plan for small teams.
You can create a pipeline with a manual deploy step (you have to press a button after the code is merged to the master branch) and use whatever tool you like to deploy your code (scp, rsync, ftp, sftp, etc.).
The biggest benefit is that you can have multiple intermediate steps (even for the working branches) where you run unit tests, which prevents you from deploying failing builds whenever you merge non-working code.
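As a sketch of what such a manual deploy step might actually run, here is a hypothetical rsync-over-SSH command (the user, host, paths, and key variable are placeholders):

```sh
# push the built artifacts from the CI workspace to the web root;
# --delete removes files on the server that no longer exist in the build
rsync -az --delete \
  -e "ssh -i $DEPLOY_KEY" \
  dist/ deploy@example.com:/var/www/app/
```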
For the first problem, do a git status on production to see which files git sees as changed or added, and consider adding them to your .gitignore file (which should itself be part of your repo). Laravel generally has good defaults for these, but you might have added things or deviated from them in the process of upgrading Laravel.
For the deployment, the best practice is to have something that is consistent, reproducible, loggable, and revertible. For this, I would recommend choosing a deployment utility. These all do pretty much the same thing:
1. You define deployment parameters in code, which you can commit as part of your repo (not passwords, of course, but things like the server name, deploy path, and deploy tasks).
2. You initiate a deploy directly from your local computer.
3. The script/utility SSHes into your target server and pulls the latest code from the remote git repo (authorized via an SSH key forwarded into the server) into a 'release' folder.
4. The script runs any additional tasks you define (composer install, npm run prod, systemctl restart php-fpm, soft-linking shared files like .env, etc.).
5. The script soft-links the document root to the new 'release' folder, which gives you an essentially zero-downtime deployment. If any of the previous steps fail, or you find a bug in the latest release, you just soft-link back to the previous release folder and your site still works (see the sketch below).
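Stripped of any particular tool, the release-and-symlink flow those steps describe looks roughly like this in shell (all paths and the repo URL are hypothetical):

```sh
# each deploy gets its own timestamped release folder
RELEASE=/var/www/app/releases/$(date +%Y%m%d%H%M%S)
git clone --depth 1 git@example.com:app.git "$RELEASE"
cd "$RELEASE"
composer install --no-dev
npm run prod
ln -sfn /var/www/app/shared/.env "$RELEASE/.env"   # soft-link shared files

# atomically point the document root at the new release
ln -sfn "$RELEASE" /var/www/app/current
sudo systemctl restart php-fpm

# rollback = re-point the 'current' symlink at the previous release folder
```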
Here are some solutions you can check out that all do this sort of thing:
- Laravel Envoyer: a first-party (paid) service that lets you deploy via a web-based GUI.
- Laravel Envoy: a first-party (free) package that lets you connect to your prod server and script deployment tasks. It's very bare-bones in that you have to write all of the commands yourself, but some may prefer that.
- Capistrano: a (free) tried-and-tested, popular Ruby-based deployment utility.
- Deployer: the (free) PHP equivalent of Capistrano. It's easier to use, has a lot of built-in tasks (including a Laravel recipe), and doesn't require Ruby.
Using these utilities is not necessarily exclusive of doing CI/CD if you want to go that route. You can use these tools to define the CD step in your pipeline while still doing other steps beforehand.

How to deploy Go program from windows to CentOS server

I have a Go package running on Windows and it is working fine, but now I'm at the stage where I would like to test it on a production CentOS 6.5 server.
What is the best practice to deploy this from Windows to CentOS?
Would I have to use my Git repo to distribute it to the Linux operating system, compile it there, and then deploy the binary to the server?
Also, I have multiple files, so would go build *.go suffice, or are there better options for compilation?
What is the best practice to deploy this from Windows to CentOS?
As far as best practices go, I would recommend using continuous integration. You can set up Jenkins, or there are some cloud options out there: codeship.io, travis-ci.org, drone.io, wercker.com, etc. Some of them have free plans available.
Basically, you'd commit your code to git and push it out to GitHub (or Bitbucket if you want free private repos). The continuous integration server will be notified whenever you push changes, and will build, test, and create a release tar archive of your project. You can then take the resulting tar and download it to your CentOS box. On 6.5 you'll need to create an init.d script to keep your program up and running. You can see an example here (the System V script).
CentOS 7 uses systemd now, which would be slightly easier to set up.
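A bare-bones sketch of such an init.d wrapper for CentOS 6.5 (the binary name, log, and PID paths are hypothetical):

```sh
#!/bin/sh
# /etc/init.d/myapp  (minimal SysV init script)
# chkconfig: 2345 95 05
case "$1" in
  start)
    # launch in the background and remember the PID
    /usr/local/bin/myapp >> /var/log/myapp.log 2>&1 &
    echo $! > /var/run/myapp.pid
    ;;
  stop)
    kill "$(cat /var/run/myapp.pid)" && rm -f /var/run/myapp.pid
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
```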
Taking this one step further, it's also possible to set up continuous deployment, in which the download, extraction, and installation are also automated. Depending on your project, it may or may not make sense to set up continuous deployment (auto-pushing to production might be a little too automatic). You can find an example in wercker here.
Although there is an up-front cost to setting up continuous integration, if this is a project that other people will contribute to, or one that you intend to work on long-term, the cost will definitely be worth it. (Future you will be grateful when you come back to this project six months from now, change one line of code, and don't have to remember all the manual steps it took to deploy.)
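On the compilation question: go build compiles every .go file in the package directory, so go build *.go is unnecessary, and Go can cross-compile a Linux binary straight from Windows. A minimal sketch (the binary name and host are hypothetical; the env-prefix syntax is for a bash-style shell, on Windows cmd use `set GOOS=linux` etc.):

```sh
# CGO_ENABLED=0 produces a static binary that runs on a bare CentOS box
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o myapp .

# copy the binary to the server and run it from there
scp myapp user@centos-box:/usr/local/bin/
```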

remote deploy scripts for nodejs?

I am looking for a way to easily deploy a nodejs app via a command line script.
I found one solution:
https://github.com/Skookum/nimbus
I also heard that the whole thing can be done with git and post-commit hooks.
What would people recommend?
Edit: I am deploying it to my own box where I have root.
You have two options in a self-hosted setup.
Do it all yourself
This entails git post-receive hooks. In short, you set up your production box to host a copy of your repository; on your local machine you set up a remote, let's call it production.
Now when you run git push production master on your local machine, the updates are sent and the server executes the post-receive hook, which runs whatever you wish.
Actions you may want include: checking out/writing the repo's data to files and folders (the git repo on the server is stored as a bare repo); restarting your web server; notifying you that there's been a deployment; etc.
I'd suggest reading up on it at http://git-scm.com/book/en/Customizing-Git-Git-Hooks and taking a look at a few tutorials; this one (http://ryanflorence.com/deploying-websites-with-a-tiny-git-hook/) looks pretty legit.
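A minimal post-receive hook along those lines (the work tree path, branch, and process manager are hypothetical):

```sh
#!/bin/sh
# hooks/post-receive inside the bare repo on the server
GIT_WORK_TREE=/var/www/app git checkout -f master
cd /var/www/app || exit 1
npm install --production   # refresh dependencies
pm2 reload app             # or however you restart your Node process
```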
Use a service to manage it for you. http://www.deployhq.com/ is the only one that springs to mind, but I'm sure there are others.
Good Luck and Happy Hacking :)
There is a tool called shipit.js (https://github.com/shipitjs/shipit) which allows you to perform different deployment tasks, like:
- moving code from the repo to the server
- restarting the server
- installing node_modules
- etc.
You create a config file and then run npx shipit <environment> deploy, and all the tasks you specify are performed. In case of failure, it has a rollback mechanism.
There is a nice screencast about it: https://youtu.be/8PpBySjkWEM.
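As a rough usage sketch, assuming a shipitfile.js that defines a hypothetical 'production' environment:

```sh
# install the CLI plus the standard deploy/rollback tasks
npm install --save-dev shipit-cli shipit-deploy

npx shipit production deploy     # run the configured tasks against 'production'
npx shipit production rollback   # re-point at the previous release on failure
```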

Deployment after CI builds

I'm pretty new to CI, so bear with me here. I have just set up an instance of TeamCity on a local machine, and I can clearly see the benefits.
The one thing we want to understand is how we can manage the deployment aspect of CI. What we really want to achieve are two builds:
1) We check in to our source repository, and the CI server notices the change, compiles the code, runs the tests, etc.
2) We manually trigger a build that compiles the code, copies it to a remote server, and updates its IIS mappings.
Now, the first build is pretty much wrapped up with TeamCity. But I assume that the deployment aspect is going to involve some scripting (NAnt, MSBuild, Rake, etc.); is this correct?
If this is the case, I can see that transferring files from the build machine to a remote server will be fine, but will we be able to update IIS mappings without being on the same network? For that matter, where is THE correct place to deploy a CI server: should it live on the same network as the apps we deploy?
Finally, we have been (rather unorthodoxly) using IronRuby to run Rake scripts as our build runner. This is simply because we like Rake, but if we were to look at NAnt/MSBuild, do they have any baked-in tasks that would simplify what we are trying to achieve?
Cheers, Chris.
We use MSBuild exclusively; it's just a choice, and I am sure NAnt and the others do things just as well. We only publish to a dev environment (for dev testing) and a stage environment (where QA actually tests). I would not suggest putting the production push on this, as the temptation to force builds might be too great for some people.
We also use some of the MSBuild Community Tasks.
