SVN to deployable folder - TortoiseSVN

I have an SVN repository on my server (Windows 7) and my application is running on Tomcat.
On the server, every time, I check out my application (which was committed from different machines) and then manually place the files in the webapps folder to deploy the app onto Tomcat.
Is there a way to link SVN with the webapps folder, so that whenever users commit their code it is deployed directly to the Tomcat webapps folder?

This question comes up often enough that it's in the FAQ.
Aside from doing it strictly via a hook script (which requires admin-level access to the repository, on the repository server), you can set up a Continuous Integration server to monitor the repository and perform your deployments that way.
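For the hook-script route, a rough sketch of a post-commit hook that exports the committed code into Tomcat's webapps folder is shown below. The repository layout (trunk/myapp), the webapps path, and the staging directory are all assumptions for illustration; on the Windows server described in the question the same steps would go into hooks\post-commit.bat instead.

#!/bin/sh
# post-commit hook: Subversion passes the repository path and revision number
REPOS="$1"
REV="$2"

# Export a clean copy (no .svn metadata) of the committed revision
svn export -r "$REV" "file://$REPOS/trunk/myapp" "/tmp/myapp-$REV" --force

# Swap it into Tomcat's webapps folder so Tomcat redeploys the app
rm -rf /var/lib/tomcat/webapps/myapp
mv "/tmp/myapp-$REV" /var/lib/tomcat/webapps/myapp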


Good practices for pulling from git repo into production server

I have a DigitalOcean VPS with Ubuntu and a few Laravel projects. For each project's initial setup I do a git clone to create a folder with my application files from my online repository.
I do all development work on my local machine, where I have two branches (master and develop); what I do is merge develop into my local master, then push master to my online repository.
Now, back on my production server, when I want to bring all the changes into production I do a git pull from origin. So far this has resulted in git telling me to stash my changes; why is this?
What would be the best approach to pull changes into the production server? Bear in mind that my production server has no working directory per se; all I do on my VPS is either clone or push upgrades into production.
You can take a look at CI/CD (continuous integration / continuous delivery) systems. GitLab, for example, offers a free-to-use plan for small teams.
You can create a pipeline with a manual deploy step (you have to press a button after the code is merged to the master branch) and use whatever tool you like to deploy your code (scp, rsync, ftp, sftp, etc.).
And the biggest benefit is that you can have multiple intermediate steps (even for the working branches) where you can run unit tests, which prevents you from uploading failing builds (whenever you merge non-working code).
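As a rough illustration, the command behind such a manual deploy step could be as simple as an rsync over SSH; the server name, user and paths here are placeholders, not anything from the question:

# Sync the checked-out release to the production server,
# skipping files that should never be overwritten there
rsync -az --delete \
    --exclude='.git' \
    --exclude='.env' \
    --exclude='storage/' \
    ./ deploy@example.com:/var/www/myapp/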
For the first problem, do a git status on production to see which files git sees as changed or added, and consider adding them to your .gitignore file (which should itself be part of your repo). Laravel generally has good defaults for these, but you might have added things or deviated from them in the process of upgrading Laravel.
For the deployment, the best practice is to have something that is consistent, reproducible, loggable, and revertible. For this, I would recommend choosing a deployment utility. These usually do pretty much the same thing:
You define deployment parameters in code, which you can commit as a part of your repo (not passwords, of course, but things like the server name, deploy path, and deploy tasks).
You initiate a deploy directly from your local computer.
The script/utility SSH's into your target server and pulls the latest code from the remote git repo (authorized via SSH key forwarded into the server) into a 'release' folder.
The script does any additional tasks you define (composer install, npm run prod, systemctl restart php-fpm, soft-linking shared files like .env, etc.).
The script soft-links the document root to your new 'release' folder, which results in an essentially zero-downtime deployment (a rough sketch of this pattern follows below). If any of the previous steps fail, or you find a bug in the latest release, you just soft-link back to the previous release folder and your site still works.
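A minimal shell sketch of that release-folder pattern, assuming an SSH session on the server and placeholder repository URL and paths (the utilities listed below wrap exactly this kind of sequence):

# Create a new timestamped release directory and clone the code into it
RELEASE=/var/www/myapp/releases/$(date +%Y%m%d%H%M%S)
git clone --depth 1 git@example.com:me/myapp.git "$RELEASE"

cd "$RELEASE"
composer install --no-dev --optimize-autoloader
ln -s /var/www/myapp/shared/.env .env    # shared config survives releases

# Atomically switch the document root to the new release;
# rolling back is just re-pointing 'current' at the previous release
ln -sfn "$RELEASE" /var/www/myapp/current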
Here are some solutions you can check out that all do this sort of thing:
Laravel Envoyer: A 1st-party (paid) service that allows you to deploy via a web-based GUI.
Laravel Envoy: A 1st-party (free) package that allows you to connect to your prod server and script deployment tasks. It's very bare-bones in that you have to write all of the commands yourself, but some may prefer that.
Capistrano: This is a (free) tried-and-tested, popular Ruby-based deployment utility.
Deployer: The (free) PHP equivalent of Capistrano. Easier to use, has a lot of built-in tasks (including a Laravel one), and doesn't require ruby.
Using these utilities does not rule out CI/CD if you want to go that route; you can use them to define the CD step in your pipeline while still doing other steps beforehand.

Running Kentico continuous integration on Azure app services

We have a new project in which we are trying to make use of the built-in continuous integration in Kentico for tracking changes to templates, page types, transformations etc.
We have managed to get this working locally between two instances of a Kentico database: making changes in one, syncing the changes through CI, and then restoring them to the second database using the Continuous Integration application that sits in the bin folder of the Kentico site.
The issue we are having is when it comes to deploying our changes to our dev and live environments.
Our sites are hosted as Azure App Services and we deploy to them using VSTS (Azure DevOps) build and release workflows. However, as these tasks run on an agent, any PowerShell script we try to run to trigger the CI application fails because it is not running in the site/server context.
My question is: has anyone managed to successfully run Kentico CI in the context of an Azure App Service? Alternatively, how can I trigger a PowerShell script on the site following a deployment?
Yes, I've got this running in Azure DevOps within the release pipeline itself. It's something that we're currently rolling out as a business where I work.
The key steps to getting this working for me were as follows:
I don't want to deploy the ContinuousIntegration.exe or the repository folders, so I need to create a second artefact set from source control (this is only possible at the moment with Azure Repos and GitHub to my knowledge).
Unzip your deployment package and copy the CMS folder to a working directory; this is where you're going to run CI. I did this because I need the built assemblies available.
From the repo artefact created in the first step, copy the ContinuousIntegration.exe and CI repository folders into the correct place in your unzipped working folder.
Ensure that the connection string in your unzipped folder actually works for the DB. If necessary, you may want to change your VS build options with regard to how the web.config is handled.
From here, you should be able to run CI in the new working folder against your target database.
In my instance, as I don't have CI running on the target site, everything is restored every time.
I'm in to process of writing this up in more detail, so I'll share here when I've done that.
Edit: I finally wrote this up in more detail: https://www.ridgeway.com/blog/web-development/using-kentico-12-mvc-with-azure-devops
We do, but without CI: VSTS + Git. We store virtual objects in the file system and use Git for version control. We have our own custom library that does import/export of the Kentico objects (the ones not controlled by Git). Essentially we have a JSON file, a "publishing manifest", where we specify which objects need to be exported (i.e. moved between environments).
There is a 'PowerShell on Target Machines' task from Microsoft; I guess you can look into that.
P.S. Also take a look at Three Ways to Manage Data in Kentico Using PowerShell.
Deploy your CI files to the Azure App Service, and then use an Azure WebJob to run ContinuousIntegration.exe.
If you place a file called KenticoCI.bat in the directory \App_Data\jobs\triggered\ContinuousIntegration, this will automatically create a web job that you can trigger:
KenticoCI.bat
rem # Takes the site offline by swapping in the App_Offline.htm file
cd D:\home\site\wwwroot
ren App_Offline.bak App_Offline.htm
rem # Run the Kentico CI integration (restore)
cd D:\home\site\wwwroot\bin
ContinuousIntegration.exe -r
rem # Removes the 'App_Offline.htm' file to bring the site back online
cd D:\home\site\wwwroot
ren App_Offline.htm App_Offline.bak

A convenient way to update a project on server

There is a project (Node.js, although that's not important) which is developed on the local machine and is periodically transferred to the server.
In principle, I could simply erase the project folder on the server each time and replace it with a new one uploaded from the local machine.
The thing is that some folders (specifically node_modules) don't need to be rewritten. So I have to manually create an archive that excludes the unnecessary folders, and on the server erase everything except those folders and only then replace the rest.
How can I automate this procedure?
(On the local machine - windows, on the server - Linux)
You can pull the changes directly from your repo.
I do it like this:
I have different branches for different environments like dev, stage and production.
I commit the changes to the branch and pull that on the server.
This way, you don't need to commit unnecessary stuff (like node_modules, credentials etc) to your repo.
You can also easily automate this using CI tools.
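Concretely, the update on the server can boil down to a couple of commands; the path, branch name and process manager below are assumptions for illustration:

# On the server: update the checkout from its environment branch
cd /var/www/myproject
git pull origin production

# Reinstall only runtime dependencies and restart the app
npm install --production
pm2 restart myproject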

Build web application before or after deployment?

Context
Web application project has a /build (or /dist) folder with front-end files generated during the build (by Gulp). This folder is not under source control (see, for example, React.js Starter Kit).
The server-side code doesn't require a bundling or compilation step, so the /src folder from your project can be deployed as it is (these source files are used to run the Node.js or ASP.NET vNext server).
Web application is deployed via Git (see Git-based deployment options in Heroku or Windows Azure as an example)
Questions
Is it better to build (bundle and minify) front-end files before or after deployment?
If before, you may end up having a separate repository (or branch) with the /build folder under source control alongside the rest of the project files. This repo is used solely for deployment purposes.
If after, the deployment time may increase: the npm modules used in the build process have to be downloaded, and the server's CPU may spike to 100% during the build, potentially harming your web application's responsiveness.
Is it better to build front-end files on the remote server before or after running KuduSync command?
If you deploy your web application to Windows Azure with Kudu, should the deployment script copy only the contents of the /build folder (with public, front-end files like .js, .html, .css) to /wwwroot? As opposed to copying all the project files (server-side source code and front-end bundles), which it does by default.
By default Azure's deployment script copies all the project files from the D:\home\site\repository folder to the D:\home\site\wwwroot folder, and then the Node.js app is started from there. Is that a necessary step? Why not start the Node.js (or ASP.NET vNext) app directly from D:\home\site\repository? And if the files do need to be copied to a separate folder, why are the source files placed in wwwroot; wouldn't it be better to copy them to another folder outside wwwroot?
I am not familiar with both Azure and Heroku so I can't give any ideas about those specific deployment options.
In my setup (4 dedicated servers, with 2 of them solely serving static files), the option to build the bundled and minified JavaScript files (for the front-end) and add all those files to the main repository has several advantages:
You only need to run the build once (either on your dev machine or on a staging server, whichever you prefer). This is particularly helpful when you have multiple static servers, since you don't have to run the build command on each of them. One might argue that you could use something like GlusterFS to synchronise files from one static server to all the others so that the build process only needs to run once; however, that kind of setup is a whole different story.
It makes your deployment process simple: just pull the new code and restart the server(s) if necessary (assuming that you have some mechanism to bump the static file version so that all your clients receive the latest version). A rough sketch of this appears at the end of this answer.
Avoid unnecessary dependencies on your production servers. This might sound weird to some people, but I just don't want to install any extra libraries on my production servers unless they are absolutely necessary. With the build process run locally on my dev machine, my production servers only have what they need to run the production code and nothing else.
However this approach also has some disadvantages:
When more than one developer on your team (accidentally) runs the build process and commits the output, you will end up with a long list of conflicts. However, this can be solved by simply running the build process again after you merge the changes from the other developers; this is more about the workflow.
Your repository will be bigger. I personally don't think this is a big issue considering the few extra MB of bundled and minified files. If your front-end JavaScript is big enough for this to be an issue, then that's another story.
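As a rough sketch of the workflow described above, with all names and paths as placeholders (and assuming your gulpfile defines a build task that writes to /build):

# On the dev (or staging) machine: build the bundle and commit it
gulp build
git add build/
git commit -m "Add bundled and minified front-end files"
git push origin master

# On each production server: pull the pre-built files and restart if needed
git pull origin master
systemctl restart myapp    # or whatever process manager you use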

Subversion Repository on Linux Dev

What's the best practice for setting up a Subversion repository on a Linux development machine? External users need to be able to access a specific repository, but nothing else on the machine. I know one answer is to set up a dedicated repository machine, but I'm looking for a single-machine solution: location of repositories, accounts, backup procedures.
One of the popular access methods for Subversion is via the Apache module. You can set different rights at the directory level to control access. See Choosing a Server Configuration and httpd, the Apache HTTP Server. For authentication, I recommend using an external authentication source like Microsoft AD via mod_auth_sspi.
If you need to mix and match rights, see my answer for How can I make only some folders show up for certain developers with SVN.
I work for an IT operations infrastructure automation company; we do this all the time.
Location of repository: We use "/srv/svn" by default to store all SVN repositories, unless a customer has a specific requirement, for example an existing repository might be stored on a ReadyNAS shared filesystem.
Accounts: All our customers use LDAP: usually an OpenLDAP server running on a master host, but sometimes Active Directory, because some customers have a Windows domain in their office, which we can configure as well. Developers get access to the "SCM" group (usually svn, git or devel) and the 'deploy' group. These groups only have permission to log in and perform SCM-related activities (i.e. write commits to the repo based on group ownership) or do application deployments to production.
Backup procedures: We use svnadmin hotcopy unless the customer already has something in place (usually svnadmin dump, heh).
# hotcopy works on one repository at a time; run it for each repo under /srv/svn, e.g.:
svnadmin hotcopy /srv/svn/myrepo /srv/svn_backups/myrepo-$(date +%Y%m%d)
For access to the repo, it's usually simple svn+ssh. Some customers already have an Apache setup, but not many; I recommend SSH. Developers push their public SSH keys out and all is well. There's little to no maintenance with LDAP user management (the only way to go).
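To automate those backups, here is a sketch of a nightly script that hotcopies every repository under /srv/svn into a dated folder (the paths match the defaults above; the script itself and its schedule are assumptions):

#!/bin/sh
# Nightly SVN backup: hotcopy each repository into a dated folder
DEST=/srv/svn_backups/$(date +%Y%m%d)
mkdir -p "$DEST"
for repo in /srv/svn/*; do
    svnadmin hotcopy "$repo" "$DEST/$(basename "$repo")"
done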
I'd recommend looking at the chapter on server configuration in the subversion book. It makes suggestions about which configuration is more appropriate for your use.
For what it's worth, setting up a repository using the standalone svn daemon is very straightforward. The annoying thing is managing user rights.
I have a blog posting that describes the steps necessary to set up and initiate a Linux-based Subversion server in order to maintain code repositories etc.
Basically the steps are as follows (a condensed command sketch follows the list):
Download the Subversion tarball.
Unzip and install Subversion.
Deal with any installation problems that arise when running ./configure.
Create the Subversion repository using svnadmin create.
Edit the repository configuration file using your text editor of choice.
Ditto the password file.
Import your code, projects etc into the repository using svn import.
Start the server as a daemon, e.g. svnserve -d. It is also possible to have it start automatically upon reboot.
Start using it with standard Subversion commands to, for example, check out, check in, back up, etc.
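Condensed into commands, and assuming a repository root of /srv/svn and a project folder called myproject (both placeholders), those steps look roughly like this:

# Create the repository and import an existing project into it
svnadmin create /srv/svn/myproject
svn import ./myproject file:///srv/svn/myproject/trunk -m "Initial import"

# Edit conf/svnserve.conf and conf/passwd to set access rights, then start the daemon
svnserve -d -r /srv/svn

# Check out a working copy over the svn protocol
svn checkout svn://localhost/myproject/trunk myproject-wc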
