Multiple checkout locations from Perforce with CruiseControl.Net

I'm employing a production-test server, that is, a test server in the production environment, which is only accessible from my company's network. This is for doing smoke tests and regression tests (for example, making sure that our 3rd-party web services are accessible) before we actually deploy the project to production (the project is a web site).
We use a Perforce source control server and CruiseControl.Net, and I'd like to configure CruiseControl to check out our production code to two different file system locations (on our build server) so it can build it twice with different build configurations, i.e. one build configuration for production and one for production-test. Then I'm going to robocopy the production-test build to the production-test server.
How do I specify multiple checkout directories for "production-test" and "production", without having to create two different branches for this?

Create two separate projects. Use the Configuration Preprocessor to extract all the common parts into a template and include it twice, changing only the project name, the working/artifacts directories, and the build configuration. That's the simplest and most bulletproof solution.
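A minimal sketch of what that ccnet.config could look like, using the preprocessor's define/expand mechanism. The depot view, directories, solution name, and project names below are placeholders, not your actual setup:

    <cruisecontrol xmlns:cb="urn:ccnet.config.builder">

      <!-- Template holding everything the two projects share.
           projectName and buildConfig are filled in where the template is expanded. -->
      <cb:define name="WebSiteProject">
        <project name="$(projectName)">
          <!-- A separate working directory per project = a separate checkout location -->
          <workingDirectory>C:\Builds\$(projectName)\work</workingDirectory>
          <artifactDirectory>C:\Builds\$(projectName)\artifacts</artifactDirectory>
          <sourcecontrol type="p4">
            <view>//depot/website/...</view>
            <!-- server/client/user settings omitted -->
          </sourcecontrol>
          <tasks>
            <msbuild>
              <projectFile>WebSite.sln</projectFile>
              <buildArgs>/p:Configuration=$(buildConfig)</buildArgs>
            </msbuild>
          </tasks>
        </project>
      </cb:define>

      <!-- Expand the template twice with different parameters -->
      <cb:WebSiteProject projectName="WebSite-Production" buildConfig="Release" />
      <cb:WebSiteProject projectName="WebSite-ProductionTest" buildConfig="ProductionTest" />

    </cruisecontrol>

Because each expanded project gets its own working directory, Perforce checks the same code out to two different locations and each build uses its own configuration; the production-test output can then be robocopied to the production-test server.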

Related

Good practices for pulling from git repo into production server

I have a DigitalOcean VPS with Ubuntu and a few Laravel projects. For each project's initial setup I do a git clone from my online repository to create a folder with my application files.
I do all development work on my local machine, where I have two branches (master and develop); I merge develop into my local master, then push master to the remote repository.
Now, back on my production server, when I want to bring in all the changes I do a git pull from origin. So far this has resulted in git telling me to stash my changes. Why is this?
What would be the best approach to pull changes into the production server? Bear in mind that my production server has no working directory per se; all I do on the VPS is either clone or pull upgrades into production.
You can take a look at CI/CD (continuous integration / continuous delivery) systems. GitLab, for example, offers a free plan for small teams.
You can create a pipeline with a manual deploy step (you have to press a button after the code is merged to the master branch) and use whatever tool you like to deploy your code (scp, rsync, FTP, SFTP, etc.).
The biggest benefit is that you can have multiple intermediate steps (even for the working branches) where you run unit tests, which prevents you from uploading failing builds whenever you merge non-working code.
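A minimal .gitlab-ci.yml sketch of that idea; the test command, server name, and deploy path are assumptions, not a recommendation of specific tools:

    stages:
      - test
      - deploy

    test:
      stage: test
      script:
        - composer install
        - php artisan test          # or phpunit, depending on your setup

    deploy_production:
      stage: deploy
      when: manual                  # the "press a button" step
      only:
        - master
      script:
        - rsync -az --delete --exclude='.git' ./ deploy@example.com:/var/www/app/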
For the first problem, do a git status on production to see which files git sees as changed or added, and consider adding them to your .gitignore file (which itself should be a part of your repo). Laravel generally has good defaults for these, but you might have added things or deviated from them in the process of upgrading Laravel.
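For example (the public/build path below is hypothetical; use whatever git status actually reports):

    # On the production server: list files git considers modified or untracked
    git status --short

    # If they are build artifacts or local state, ignore them and stop tracking
    # them without deleting the files on disk:
    echo "public/build/" >> .gitignore
    git rm -r --cached public/build/
    git commit -m "Stop tracking build output"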
For the deployment, the best practice is to have something that is consistent, reproducible, loggable, and revertable. For this, I would recommend choosing a deployment utility. These usually do pretty much the same thing:
You define deployment parameters in code, which you can commit as a part of your repo (not passwords, of course, but things like the server name, deploy path, and deploy tasks).
You initiate a deploy directly from your local computer.
The script/utility SSH's into your target server and pulls the latest code from the remote git repo (authorized via SSH key forwarded into the server) into a 'release' folder.
The script does any additional tasks you define (composer install, npm run prod, systemctl restart php-fpm, soft-linking shared files like .env, etc.)
The script soft-links the document root to your new 'release' folder, which results in an essentially zero-downtime deployment. If any of the previous steps fail, or you find a bug in the latest release, you just soft-link to the previous release folder and your site still works.
Here are some solutions you can check out that all do this sort of thing:
Laravel Envoyer: A 1st-party (paid) service that allows you to deploy via a web-based GUI.
Laravel Envoy: A 1st-party (free) package that allows you to connect to your prod server and script deployment tasks. It's very bare-bones in that you have to write all of the commands yourself, but some may prefer that.
Capistrano: A free, tried-and-tested, popular Ruby-based deployment utility.
Deployer: The free PHP equivalent of Capistrano. Easier to use, has a lot of built-in tasks (including a Laravel one), and doesn't require Ruby.
Using these utilities is not necessarily exclusive of doing CI/CD if you want to go that route. You can use these tools to define the CD step in your pipeline while still doing other steps beforehand.
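A bare-bones shell sketch of the release-folder flow these utilities implement, just to make the steps concrete; the host, repo URL, and paths are made up, and a real tool adds logging, rollback, and error handling on top of this:

    # Run these on the target server (or wrap them in `ssh deploy@example.com '...'`)
    APP_DIR=/var/www/myapp                                 # hypothetical deploy path
    RELEASE="$APP_DIR/releases/$(date +%Y%m%d%H%M%S)"

    git clone --depth 1 git@example.com:acme/myapp.git "$RELEASE"
    cd "$RELEASE"
    composer install --no-dev --optimize-autoloader
    npm ci && npm run prod
    ln -sfn "$APP_DIR/shared/.env" "$RELEASE/.env"         # shared config, kept out of git
    ln -sfn "$RELEASE" "$APP_DIR/current"                  # point the document root at the new release
    sudo systemctl restart php-fpm

Rolling back is then just re-pointing the current symlink at the previous release folder.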

Build Definition: Copying the build output to the server?

When configuring the build definition, I have the option of copying the build output to the server:
We are using the VSO hosted build controller and Azure's continuous integration build template to release to our development environment after every check-in.
Is there any reason why we need to have this value set? How could it ever be useful?
The "Copy build output to the server" option attaches the output of the build to the build record as a zip file that can be downloaded later.
In a situation where you don't care about the build output, because you have it set up to continuously deploy to Azure (or some other build-based deployment), you would not use this option.
If, however, you need to download the output, for example a Windows Store app that you need to publish to the Store manually, then you could use this to get the application.
In VSO you most likely don't have a drop server (unless you have invested in Azure heavily), so you have two choices:
Put the outputs in source control. This only works in TFVC, not Git, and it fills up your repo with large files.
Attach them to the build as a zip.
The second option is exactly where you would use this setting.

Build web application before or after deployment?

Context
The web application project has a /build (or /dist) folder with front-end files, generated during the build (by Gulp). This folder is not under source control (see, for example, React.js Starter Kit).
The server-side code doesn't require a bundling or compilation step, so the /src folder from your project can be deployed as is (these source files are used to run the Node.js or ASP.NET vNext server).
Web application is deployed via Git (see Git-based deployment options in Heroku or Windows Azure as an example)
Questions
Is it better to build (bundle and minify) front-end files before or after deployment?
If before, you may end up having a separate repository (or branch) with the /build folder under source control alongside the rest of the project files. This repo is used solely for deployment purposes.
If after, the deployment time may increase: additional npm modules used in the build process need to be downloaded, and the server's CPU may spike to 100% during the build, potentially harming your web application's responsiveness.
Is it better to build front-end files on the remote server before or after running the KuduSync command?
If you deploy your web application to Windows Azure with Kudu, should the deployment script copy only the contents of the /build folder (with public, front-end files like .js, .html, .css) to /wwwroot? As opposed to copying all the project files (server-side source code and front-end bundles), which it does by default.
By default, Azure's deployment script copies all the project files from the D:\home\site\repository folder to the D:\home\site\wwwroot folder, and then the Node.js app is started from there. Is that a necessary step? Why not start the Node.js (or ASP.NET vNext) app directly from the D:\home\site\repository folder? And if the files should indeed be copied to a separate folder, why are the source files placed in wwwroot? Maybe it's better to copy them to another folder outside wwwroot?
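For reference, a fragment of a custom Kudu deployment script (deploy.sh) that builds in the repository folder and then syncs only /build into wwwroot might look roughly like this; the environment variables are the ones Kudu provides to custom deployment scripts, while the build commands and folder names are assumptions taken from the question:

    # Build the front-end in the repository folder
    cd "$DEPLOYMENT_SOURCE"
    npm install
    gulp build

    # Sync only the generated /build folder into wwwroot
    "$KUDU_SYNC_CMD" -v 50 \
      -f "$DEPLOYMENT_SOURCE/build" \
      -t "$DEPLOYMENT_TARGET" \
      -n "$NEXT_MANIFEST_PATH" -p "$PREVIOUS_MANIFEST_PATH" \
      -i ".git;.hg;.deployment;deploy.sh"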
I am not familiar with either Azure or Heroku, so I can't give any ideas about those specific deployment options.
In my setup (4 dedicated servers, with 2 of them solely serving static files), the option of building the bundled and minified JavaScript files (for the front end) and adding all those files to the main repository has several advantages:
You only need to run the build once (either on your dev machine or on a staging server, whichever way you want). This is particularly helpful when you run multiple static servers, since you don't have to run the build command on each of them. One might argue that you could use something like GlusterFS to synchronise files from one static server to all the others so the build process only needs to be run once, but that is a whole different story when it comes to this kind of setup.
It makes your deployment process simple: just pull the new code and restart the server(s) if necessary (assuming you have some mechanism to bump the static file version so that all your clients receive the latest version).
It avoids unnecessary dependencies on your production servers. This might sound weird to some people, but I just don't want to install any extra libraries on my production servers unless they are absolutely necessary. With the build process run locally on my dev machine, my production servers only have what they need to run the production code and nothing else.
However this approach also has some disadvantages:
When more than one developer in your team (accidentally) runs the build process and commits the code, you will get a crazy list of conflicts. However, this can be solved by simply running the build process again after you merge the changes from the other developers; it is more a matter of workflow.
Your repository will be bigger. I personally don't think this is a big issue, considering the few extra MB of my bundled and minified files. If your front-end JavaScript is big enough for this to be an issue, then that is another story.
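A sketch of that "build once, commit the output" flow; the folder name and build command are assumptions about the Gulp setup:

    # On the dev machine (or a staging server)
    git checkout master && git pull
    npm install              # build-time dependencies stay off the production servers
    gulp build               # writes the bundled/minified files to ./build

    git add build/
    git commit -m "Rebuild front-end bundles"
    git push origin master

    # Deployment on each production server then reduces to:
    #   git pull   (+ restart the app server if necessary)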

Team City with Visual Studio solution build steps

I have a Visual Studio 2012 solution containing a Windows service project and a web application project.
I want Team City (version 8.0.3) to create two zip files (one for the service and one for the web app) that I will deploy manually.
Should I create a build step to build the entire solution, followed by a build step to publish the Windows service and a build step to publish the web application (via publish profiles), and then use artifact paths in General Settings to zip up these two published folders?
Or should I have just one build step that builds the solution, and then use artifact paths to create the two zip files?
Or is there a better way than either of the above?
You have to ask yourself whether these two projects are linked and in what manner they should be built together.
My feeling is: if your projects are in the same solution, they are linked in some way and have to be built together.
In that case, you should build the solution (.sln) and not the individual projects (*.*proj).
Application organization
Generally, your build server should not redefine (too much) the way your applications are organized. You should always use your platform's application descriptor to build your applications.
In the case of .NET and Visual Studio, the application descriptor is your solution (.sln). It defines what is needed and how your application has to be built.
If your projects have to be built separately, they should be in different solutions, unless you prefer to create specific solution configurations (in addition to Release & Debug).
Anyway, the solution is still the build entry point.
TeamCity
Speaking of TeamCity, different, standalone applications should be in separate build configurations.
The pure build (code compilation) should be in one build step, and you should not use too many code compilation runners in one build configuration.
Your build configurations should reflect the logic of your application farm.
If you need to link them in some way, related to packaging for example, you can link your build configurations through snapshot or artifact dependencies.
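Whichever way you split the build configurations, the zipping part of the question is handled by TeamCity's artifact paths (General Settings): a "source => target.zip" rule packs the matched files into an archive. A sketch, where the left-hand output folders are assumptions about where your publish step writes its files:

    WindowsService/bin/Release/** => service.zip
    WebApp/publish/** => webapp.zip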

Deployment after CI builds

I'm pretty new to CI, so bear with me here. I have just set up an instance of TeamCity on a local machine, and I can clearly see the benefits.
The one thing we want to understand is how we can manage the deployment aspect of CI. What we really want to achieve is two builds:
1) We check in to our source repository, and the CI server notices the change, compiles the code, runs the tests, etc.
2) We manually trigger a build that compiles the code, copies it to a remote server and updates its IIS mappings.
Now the first build is pretty much wrapped up with TeamCity. But I assume that the deployment aspect of this is going to involve some scripting (NAnt, MSBuild, Rake, etc.). Is this correct?
If this is the case, I can see that transferring files from the build machine to a remote server will be OK, but will we be able to update IIS mappings without being on the same network? For that matter, where is THE correct place to deploy a CI server? Should it live on the same network as the apps we deploy?
Finally, we have been (rather unorthodoxly) using IronRuby to run Rake scripts as our build runner. This is simply because we like Rake, but if we were to look at NAnt/MSBuild, do they have any baked-in tasks that would simplify what we are trying to achieve?
Cheers, Chris.
We use MSBuild exclusively, but that's just a choice; I am sure NAnt and the others do things just as well. We only publish to a dev environment (for dev testing) and a stage environment (where QA actually tests). I would not suggest putting the production push on this, as the temptation to force builds might be too great for some people.
We use some of the MSBuild Community Tasks.
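As an illustration of the kind of baked-in task that helps here, a small MSBuild target using the Zip task from MSBuild Community Tasks; the output paths, web app name, and target share are assumptions for the sketch:

    <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

      <Target Name="Package">
        <ItemGroup>
          <SiteFiles Include="$(OutDir)\_PublishedWebsites\MyWebApp\**\*.*" />
        </ItemGroup>

        <!-- Zip the published site -->
        <Zip Files="@(SiteFiles)"
             WorkingDirectory="$(OutDir)\_PublishedWebsites\MyWebApp"
             ZipFileName="$(OutDir)\MyWebApp.zip" />

        <!-- Copy the package to the staging server's share -->
        <Copy SourceFiles="$(OutDir)\MyWebApp.zip"
              DestinationFolder="\\stage-server\deploy\MyWebApp" />
      </Target>
    </Project>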
