How to deploy software builds from Azure DevOps to internal servers?

We have our software hosted in Git on Azure DevOps and built using a build pipeline (which primarily uses a Cake script). We are now looking to deploy this software using the Azure DevOps release pipeline. However, all of our application servers are behind our firewall, inside our network, and have no ports open except 80 and 443 for the web applications. We have dev, staging, and production servers for our apps (including some for load balancing). All I really need is to copy the artifact, back up the current code to a separate folder on the server, deploy and unzip the artifact file in the root deployment folder, and restart IIS on those servers.
My company is rather large and bureaucratic so there are some hoops we have to jump through for due diligence before we even attempt this new process. In that spirit, I am trying to find the best solution. If you can offer your advice, and in particular, offer any other solution we did not think of, that would be helpful:
The obvious solution would be to stand up servers on Azure cloud and move completely to the cloud. I know this is a solution, and this may be where we go, but my request is for non-cloud solution options so I can present this properly and make a recommendation.
Use a Hyper VPN tunnel to securely transfer the files and restart IIS. Probably the easiest and simplest method with regard to our already-built build process on AzDO. Technically, this is the one I am least comfortable with.
Use build agents inside the network, connect to them from AzDO, have them build the software, and then have them (or other agents) deploy it. Lots of work to set up, but so far the least intrusive to our security. I'm also not a fan because I wanted AzDO to handle builds and deployments.
Open the SFTP and SSH ports for each server and transfer the files that way. Maybe the least secure way but very simple?
If you have a better or more common solution for this problem, let me know. If you think I should use one of the 4 solutions above, let me know. If you can expand on any of the options above, please do.

ADO agents only require outbound connectivity: they talk to ADO, not vice versa. So you only need 443 outbound to a couple of ADO URLs.
Reading: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops#communication
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=azure-devops#im-running-a-firewall-and-my-code-is-in-azure-repos-what-urls-does-the-agent-need-to-communicate-with

You could use Environments. Create an Environment for each VM (which includes installing an agent on that machine) and then use the environment parameter in the YAML pipeline's deployment job. The deployment job can then do whatever you need (deploy the web app, move files, back up, etc.) on your target machine, regardless of whether it's on a private network.
More reading - Azure DevOps Environments: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops
Using deployment job: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops
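To make that concrete, here is a minimal sketch of a deployment job against a VM environment that does the backup / unzip / IIS-restart sequence from the question. The environment name (Dev), artifact name (drop), zip name and file paths are placeholders for illustration, not anything taken from your actual pipeline:

```yaml
# Sketch only: assumes an earlier Build stage ran the Cake script and published
# the zipped site as a pipeline artifact named 'drop', and that an environment
# named 'Dev' exists with this VM registered as a resource.
stages:
- stage: DeployDev
  jobs:
  - deployment: DeployWeb
    displayName: Deploy web app to dev
    environment:
      name: Dev
      resourceType: VirtualMachine   # job runs on the agent installed on the VM
    strategy:
      runOnce:
        deploy:
          steps:
          - download: current        # pulls the 'drop' artifact onto the VM
            artifact: drop
          - powershell: |
              # Example paths - adjust to your own layout
              $deployRoot = 'C:\inetpub\wwwroot\MyApp'
              $stamp      = Get-Date -Format 'yyyyMMdd-HHmmss'
              $backupDir  = "C:\Backups\MyApp\$stamp"
              if (Test-Path $deployRoot) {
                Copy-Item $deployRoot $backupDir -Recurse   # back up current code
              }
              # Unzip the new build over the deployment root, then restart IIS
              Expand-Archive "$(Pipeline.Workspace)\drop\MyApp.zip" -DestinationPath $deployRoot -Force
              iisreset
            displayName: Backup, deploy and restart IIS
```

Since the agent on each VM only makes outbound HTTPS calls to Azure DevOps (as noted above), no extra inbound ports need to be opened on the servers.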

Related

Simplest setup for a staging server and a production server

What's the simplest way to manage a staging server vs production?
What's the point of having a staging server if you could just push changes to a different branch in production?
What's the best way to merge the staging server with production? Cron job?
Our current setup is a staging server which we don't use; we are just pushing straight to production, but we're trying to improve the process.
What's the simplest way to manage a staging server vs production?
The simplest and cheapest way is to get rid of your staging server. Staging servers don't inherently make deploys safer, but generally developers want at least a dev environment (functionally not necessarily distinct from the idea of a staging server) to host their code in a prod-like environment before they push it to prod.
What's the point of having a staging server if you could just push changes to a different branch in production?
If you have 2 branches running in production simultaneously, that's functionally equivalent to a staging server. Most shops prefer to have a staging environment, not just a staging server, so that their data tier, 3rd-party integrations, etc. are completely separate between staging and prod.
Simply deploying another copy of your application in prod is deceptively dangerous, because if you mess up the data tier or 3rd-party integrations you can easily affect prod.
trying to improve the process
Feature flags. If you can enable new features or even fixes for specific users, you can roll them out to your QA team (or the devs, whoever is going to test) and then, when you're happy with them, roll them out to the general user base. This isn't inherently safer than anything else, but it has the advantage that it front-loads the work of planning for multiple concurrent code paths and makes that planning more explicit.
Unfortunately there's no magic bullet for making testing environments (dev, staging, whatever you want to call them) increase reliability.
What's the best way to merge the staging server with production? Cron job?
For code, usually the preferred method is to "promote" the artifact you deployed to staging over to prod without rebuilding, guaranteeing the same thing is shipped.
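As a minimal sketch of what "promote, don't rebuild" can look like in a multi-stage YAML pipeline (the stage names, environment names and build/deploy scripts here are invented for illustration; the same idea applies in any CI system): build once, publish a single artifact, and have each deploy stage download that same artifact.

```yaml
stages:
- stage: Build
  jobs:
  - job: Build
    steps:
    - script: ./build.sh             # hypothetical build script producing dist/app.zip
    - publish: dist/app.zip          # the one and only build of this artifact
      artifact: app

- stage: Staging
  dependsOn: Build
  jobs:
  - deployment: DeployStaging
    environment: staging
    strategy:
      runOnce:
        deploy:
          steps:
          - download: current        # the exact artifact built above
            artifact: app
          - script: ./deploy.sh staging "$(Pipeline.Workspace)/app/app.zip"

- stage: Production
  dependsOn: Staging                 # prod only runs after staging succeeded
  jobs:
  - deployment: DeployProd
    environment: production          # an approval check can gate this environment
    strategy:
      runOnce:
        deploy:
          steps:
          - download: current        # same bits promoted to prod, no rebuild
            artifact: app
          - script: ./deploy.sh production "$(Pipeline.Workspace)/app/app.zip"
```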
For the runtime environment, containerization makes most of that part of the code artifact, and that's the simplest way. If you're running on container-centric hosting like ECS Fargate or Google's Docker-oriented service, there's nothing else on the app side to ship. This is what I recommend; it's straightforward and easy to reason about. Adding virtual servers into the mix just adds an OS layer to manage, and there's little benefit to that. If you can make your app serverless, so it's not sitting waiting for connections but is instead invoked when connections come in, the same thing applies: no OS to manage (AWS Lambda, for example, has serverless Docker image support).
Data is generally considered the tricky bit of having test environments by those who have experience with them. If your production data is not at all sensitive, you can copy it over, but that may or may not actually work depending on what's in the data and how distributed your data ends up being. Generally production data is sensitive enough that you don't want to expose it to dev environments, which makes it tricky to ensure the dev data is appropriate for testing features. One common methodology for overcoming that obstacle is automating end-to-end tests, via something like Selenium for web browsers and automated API tests for non-browser-centric endpoints. This allows you to write the tests along with the app to prove it's working.

Host multiple services that need the same ports open on GitLab CI

This is an issue I've been postponing for a while, but I need to get it fixed at some point.
Basically, I have two services which I have containerized and registered in my GitLab registry. The two services represent different versions of the same program. In my automated tests I have 2 test suites which test for backwards compatibility with these services. The issue is that when my automated tests run, only one service runs fine because, I assume, there is a conflict over the ports they use, so GitLab can't run all of the services at the same time.
Is there a way to get around this without making the ports specifiable in the code? This option would take the most amount of time and I'd rather leave it as a last resort.
It seems, after some digging, that making the ports configurable is my only option. As per the GitLab Runner Kubernetes executor documentation: "You cannot use several services using the same port (e.g., you cannot have two mysql services at the same time)." https://docs.gitlab.com/runner/executors/kubernetes.html
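If you do go the configurable-ports route, here is a sketch of what the .gitlab-ci.yml job could look like. The image names, aliases and the LISTEN_PORT variable are invented; it assumes the service images read their listen port from an environment variable, and that your GitLab version supports per-service variables:

```yaml
backwards-compat-tests:
  # On the Kubernetes executor all services share the pod's network namespace,
  # so each version of the service must listen on a different port.
  services:
    - name: registry.example.com/my-group/my-service:v1
      alias: service-v1
      variables:
        LISTEN_PORT: "8080"   # assumed: the image honours this variable
    - name: registry.example.com/my-group/my-service:v2
      alias: service-v2
      variables:
        LISTEN_PORT: "8081"
  variables:
    SERVICE_V1_URL: "http://service-v1:8080"
    SERVICE_V2_URL: "http://service-v2:8081"
  script:
    - ./run-compat-tests.sh "$SERVICE_V1_URL" "$SERVICE_V2_URL"   # hypothetical test runner
```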

Remote Execution platform

I’m looking for a framework/platform that would allow me to execute remote commands on a Windows machine and report back the results.
These machines would be out in public, outside our company network, probably behind firewalls, proxies, etc. We have complete control over them and can configure them in any way we want. Think ATMs with a 3G connection.
I guess what I'm looking for is something like SaltStack remote execution. But that enterprise plan has a high cost per minion, and I need to install it on thousands of machines.
Another possible solution would be something like Octopus Deploy, Azure DevOps or any CD tool for that matter but without the need for environments.
I've also looked at Ansible, but without an agent to overcome the targets being behind firewalls, routers, and proxies, I'm not sure how the reverse connection would work.
I would like to avoid Puppet or Chef for now. Ideally a cloud based solution would be wonderful, especially in azure.
Any recommendations or directions?
Octopus Deploy is currently working on "Ops Processes" which sounds like it might fit what you are looking for. It's on our roadmap if you are interested, and we are planning to have the first round of features from this ready to ship in the next 8 weeks or so.
Caveat: I work at Octopus so read into that what you will

How to secure Ant builds?

Our company uses Ant to automate build scripts.
Now somebody has raised the question of how to secure such build scripts against (accidental or intentional) threats.
Example 1: someone checks in a build script that deletes everything under Windows drive T:\ because that is where the Apache deployment directory is mounted for a particular development machine. Months later, someone else might run the build script and erase everything on T:\ which is a shared drive on this machine.
Example 2: an intruder modifies the default build target in a single project to scan the entire local hard disk. The Continuous Integration machine (e.g. Jenkins) is configured to execute the default build target and will therefore send its entire local directory structure to the intruder, even for projects that the intruder should not have access to.
Any suggestions how to prevent such scenarios (besides "development policies" or "do not mount shared drives")?
My only idea is to use chroot environments for builds?!
The issues you describe are the same for any code that you execute on the build machine - you could do the same thing using a unit test.
In this case the best solution may be to place your build scripts under source control and have a code review prior to check-in.
At my company, the build scripts (usually a build folder) are an svn:external to another subversion repository that is only controlled by build/release engineers. Developers can control variables such as servers it can deploy to, but not what those functions do. This same code is reused amongst multiple projects in flight, and only a few devops folks can alter it, not the entire development staff.
Addition: when accessing shared resources, we use a system account that has only read access to those resources. Further: Jenkins, development projects, and build/deploy code are written to handle complete loss of the Jenkins project workspace and deploy environments. This is basic build automation/deploy automation that leads to infrastructure automation.
Basic rule: Murphy's law is going to happen. You should write scripts that are robust and handle cold start scenarios and not worry about wild intruder theories.

Best solution to host a (command line) Windows application?

I have a Windows application that does some calculations and is called from command line. On my Windows machine, I have a PHP script running under Apache that executes the application and shows the output.
Is there any hosting solution that I can use to do the same? I can't figure out if EC2 or Azure are the right solutions. Basically, I need a web server + ability to execute my application.
Suggestions? Thanks.
You can host your application on AppHarbor, the .NET Platform-as-a-Service. You can either port your web frontend to .NET or try to get your PHP stuff working with Phalanger. AppHarbor is working on Background Tasks, which might be a good match for your workload.
I would just run the PHP script you already have under IIS in a Windows Azure web role.
If it is a Windows application and you have the source code, I would go with an Azure Worker Role. The advantage of using a PaaS (like Azure) instead of an IaaS (like Amazon) is that you won't have to bother with keeping the server up to date.
The real investment in time will be rewriting your application to make it work as a Worker Role. The time needed to do this depends on how your application works right now. If it uses a lot of disk access it might be difficult, and perhaps an Amazon server would be better. But if it only crunches numbers in memory, an Azure Worker Role is a very good candidate.
The real advantage of using an Amazon server is that you probably won't need to do any work at all, except maintaining the server.
As described in the question both Azure and EC2 will do the job very well. This is the kind of task both systems are designed for.
So the question becomes really: which is best? That depends on two things: what the application needs to do and your own experience and preference.
As it's a Windows application there should probably be a leaning towards Azure. While EC2 supports Windows, the tooling and support resources for Azure are probably deeper at this point.
If cost is a factor then a (somewhat outdated) resource is here: http://blog.mccrory.me/2010/10/30/public-cloud-hourly-cost-comparison/ -- the conclusion is that, by and large, Azure and Amazon are roughly similar for compute charges.
Steve Marx has a blog post that describes how to run another web server (i.e., not IIS) on Azure.
This potentially has everything you need - you can deploy Apache and your executable and run it in exactly the same way.
Alternatively, you can deploy your executable alongside a bit of code in a worker role that runs that application periodically, all depending on your exact requirements.
