Remote Execution platform - azure

I’m looking for a framework/platform that would allow me to execute remote commands on a Windows machine and report back the results.
These machines would be out in public, outside our company network, probably behind firewalls, proxies, etc. We have complete control over them and can configure them in any way we want. Think ATMs with a 3G connection.
I guess what I'm looking for is something like SaltStack remote execution, but the enterprise plan has a high cost per minion, and I would need to install it on thousands of machines.
Another possible solution would be something like Octopus Deploy, Azure DevOps or any CD tool for that matter but without the need for environments.
I've also looked at Ansible, but without an agent to get around the targets being behind firewalls, routers, and proxies, I'm not sure how the reverse connection would work.
I would like to avoid Puppet or Chef for now. Ideally a cloud based solution would be wonderful, especially in azure.
Any recommendations or directions?

Octopus Deploy is currently working on "Ops Processes" which sounds like it might fit what you are looking for. It's on our roadmap if you are interested, and we are planning to have the first round of features from this ready to ship in the next 8 weeks or so.
Caveat: I work at Octopus, so read into that what you will.

Related

Host multiple services that need the same ports open on GitLab CI

This is an issue I've been postponing for a while, but I need to get it fixed at some point.
Basically, I have two services which I have containerized and pushed to my GitLab registry. The two services represent different versions of the same program. In my automated tests I have two test suites which check backwards compatibility against these services. The issue is that when my automated tests run, only one service runs fine; I assume there is a conflict over the ports they use, so GitLab can't run all of the services at the same time.
Is there a way to get around this without making the ports configurable in the code? That option would take the most time, and I'd rather leave it as a last resort.
After some digging, it seems that making the ports configurable is my only option. As per the GitLab Runner Kubernetes executor documentation, "You cannot use several services using the same port (e.g., you cannot have two mysql services at the same time)." https://docs.gitlab.com/runner/executors/kubernetes.html
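In case it helps anyone else hitting the same wall: if the service image can read its listen port from an environment variable, the clash can be resolved in the pipeline itself. Below is a rough .gitlab-ci.yml sketch; the image names, aliases, the APP_PORT variable, and the test script are all assumptions on my part, and per-service variables need a reasonably recent GitLab version.

```yaml
# .gitlab-ci.yml sketch: image names, aliases, APP_PORT, and the test script
# are assumptions; the app images must read their listen port from APP_PORT.
compatibility-tests:
  services:
    - name: registry.gitlab.com/mygroup/myapp:v1
      alias: myapp-v1
      variables:
        APP_PORT: "8080"   # first version listens on 8080
    - name: registry.gitlab.com/mygroup/myapp:v2
      alias: myapp-v2
      variables:
        APP_PORT: "8081"   # second version listens on 8081, so no clash in the shared pod
  script:
    - ./run-backwards-compat-tests.sh http://myapp-v1:8080 http://myapp-v2:8081
```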

How to deploy software builds from Azure DevOps to internal servers?

We have our software hosted in Git on Azure DevOps and built using a build pipeline (which primarily uses a Cake script). We are now looking to deploy this software using the Azure DevOps release pipeline. However, all of our application servers are behind our firewall, inside our network, and don't have any ports open except 80 and 443 for the web applications. We have dev, staging, and production servers for our apps (including some for load balancing). All I really need is to copy the artifact, back up the current code to a separate folder on the server, deploy and unzip the artifact file in the root deployment folder, and restart IIS on those servers.
My company is rather large and bureaucratic so there are some hoops we have to jump through for due diligence before we even attempt this new process. In that spirit, I am trying to find the best solution. If you can offer your advice, and in particular, offer any other solution we did not think of, that would be helpful:
The obvious solution would be to stand up servers on Azure cloud and move completely to the cloud. I know this is a solution, and this may be where we go, but my request is for non-cloud solution options so I can present this properly and make a recommendation.
Use a Hyper VPN tunnel to securely transfer the files and restart IIS. Probably the easiest and simplest method with regard to our already-built build process on AzDO. Technically, though, this is the one I am least comfortable with.
Use build agents inside the network, connect to them from AzDO, have them build the software, and then have them (or other agents) deploy it. It's a lot of work to set up, but so far the least intrusive to our security. I'm also not a fan because I wanted AzDO to handle both builds and deployments.
Open the SFTP and SSH ports for each server and transfer the files that way. Maybe the least secure way, but very simple?
If you have a better solution for this problem or a more common one, let me know. If you think I should use one of the 4 solutions above, let me know. If you can expand on any of the options above, please do.
ADO agents only require outbound connectivity: they talk to ADO, not the other way around. So you only need 443 outbound to a couple of ADO URLs.
Reading: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops#communication
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=azure-devops#im-running-a-firewall-and-my-code-is-in-azure-repos-what-urls-does-the-agent-need-to-communicate-with
You could use Environments. Create an Environment for each VM (which includes registering an agent on the machine) and then use the environment parameter in a YAML pipeline deployment job. The deployment job can then do whatever you need (deploy the web app, move files, back up, etc.) on your target machine, regardless of whether it's on a private network; a rough sketch of such a job follows the links below.
More reading - Azure DevOps Environments: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops
Using deployment job: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops
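To give a feel for what that looks like in YAML (the environment name, artifact name, and paths below are placeholders, not anything from your pipeline), the deployment job runs on the agent registered to the environment's VM resource, so the server only needs outbound 443:

```yaml
# azure-pipelines.yml sketch: 'staging', 'drop', and the paths are placeholders.
stages:
- stage: Deploy
  jobs:
  - deployment: DeployWeb
    environment:
      name: staging                  # Environment with your servers registered as VM resources
      resourceType: VirtualMachine
    strategy:
      runOnce:
        deploy:
          steps:
          - download: current
            artifact: drop           # build artifact produced earlier in the pipeline
          - powershell: |
              # back up the current site, unzip the new build, restart IIS
              New-Item -ItemType Directory -Force 'C:\backups' | Out-Null
              Copy-Item 'C:\inetpub\myapp' "C:\backups\myapp-$(Build.BuildId)" -Recurse
              Expand-Archive "$(Pipeline.Workspace)\drop\site.zip" -DestinationPath 'C:\inetpub\myapp' -Force
              iisreset
```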

Hashicorp Vault production hardening. 'Run bare metal' vs 'disable remote access'

I'm currently playing with Hashicorp's Vault but this is probably a question with more general security implications.
I've been reading through the Production Hardening recommendations for Vault here for some pointers.
While the advice there makes perfect sense taken individually, there are a couple of elements which seem to contradict each other, and I'm not sure how I'd go about implementing it all in production.
The two points I'm struggling with are the suggestion to "Disable SSH / Remote Desktop" and that "running on bare metal should be preferred to a VM".
It seems to me that deploying an instance to a bare metal machine and subsequently disabling remote access (I'm assuming that if SSH is discouraged, then out-of-band management technologies like IPMI should also be off limits) effectively makes the host unmanageable without physical access. Is this even achievable for cloud-based deployments?
Is my analysis correct or am I missing something? How do I choose which I should implement if I can't do both?
Thanks!
You can't do both at the same time, so your struggle is justified.
In an ideal world, you should be building images with Vault baked in and then use those to run your cluster. Then you can disable SSH/remote access so that the server running Vault is in a very well-defined state (i.e. running exactly what is in the image template, and no one has logged into it and made manual changes); a minimal sketch of that lock-down step is at the end of this answer.
How do I choose which I should implement if I can't do both?
If you are in the cloud, forget bare metal and go with VMs. If you are not in the cloud and you can tailor the machine size, you could go with bare metal (for example, our current server size is 128 GB of RAM, which is a waste for just one Vault instance, and we need three of them for HA).
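For what it's worth, here is a minimal cloud-init sketch of that lock-down step. It assumes Vault is already baked into the image and managed by a systemd unit called vault.service (both assumptions on my part); with this in place, any change to the box means rolling a new image rather than logging in.

```yaml
#cloud-config
# Hardening sketch: assumes Vault is already baked into the image and managed
# by a systemd unit named vault.service; changes arrive via new images, not SSH.
runcmd:
  - systemctl enable --now vault
  # disable remote shell access once the box is up (unit name varies by distro)
  - systemctl disable --now ssh || systemctl disable --now sshd
```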

NodeJS Managed Hostings vs VPS [closed]

There are a bunch of managed cloud-based hosting services for Node.js out there, which seem relatively new and some of which are still in beta.
Yet another path to host a Node.js app is setting up a stack on a VPS like Linode.
I'm wondering what the basic difference is between these two kinds of deployment.
Which factors should one consider in choosing one over the other?
Which one is more suitable for production, considering how young these services are?
To be clear, I'm not asking about choosing a provider, but whether to host on a managed Node.js-specific hosting service or on an old-fashioned self-managed VPS.
Using one of the services is for the most part hands-off - you write your code and let them worry about managing the box, keeping your process up, creating the publishing channel, patching the OS, etc.
In contrast having your own VM gives you more control but with more up front and ongoing time investment.
Another consideration is that some hosters and cloud providers offer proprietary or distinct variations on technologies. They have reasons for them and they offer value, but it does mean that if you want to switch cloud providers, you might have to rewrite code, deployment scripts, etc. On the other hand, using VMs with a standard OS as the baseline is pretty generic. If you automate/script/document the configuration of your VMs and your code stays generic, then your options stay open. If you do take a dependency on a proprietary cloud technology, it would be good to abstract it away behind an interface so it's a decoupled component and not sprinkled throughout your code.
I've done both. I did the VM path recently mostly because I wanted the learning experience. I had to:
get the VM from the cloud provider
update and patch the OS
install and configure git as a publishing channel
write some scripts and use things like forever to keep it running
configure the reverse HTTP proxy to get it to run multiple sites
configure DNS with the cloud provider, open ports for git, etc.
The list goes on. In the end, it cost me more up-front time not spent coding, but I learned about a lot more things. If those are important to you, then give it a shot. If you want to focus on writing your code, then a Node hosting provider may be for you.
At the end of it, I also had more options - I wanted to add a second site, so I added an entry to my reverse proxy, appended my script to start up another app with forever, and voila, another site. More control. After that, I wanted to try out MongoDB - simple - I installed it.
Cost-wise they're about the same, but if you start hosting multiple sites with many other packages like databases, then the VM can start getting cheaper.
Nodejitsu open-sourced their tools, which also makes it easier if you do it yourself.
If you do it yourself, here's some links that may help you:
Keeping the server up:
https://github.com/nodejitsu/forever/
http://blog.nodejitsu.com/keep-a-nodejs-server-up-with-forever
https://github.com/bryanmacfarlane/svchost
Upstart and Monit
generic auto start and restart through monitoring
http://howtonode.org/deploying-node-upstart-monit
Cluster Node
Runs one process per core
http://nodejs.org/docs/latest/api/cluster.html
Reverse Proxy
https://github.com/nodejitsu/node-http-proxy
https://github.com/nodejitsu/node-http-proxy/issues/232
http://blog.nodejitsu.com/http-proxy-middlewares
https://github.com/nodejitsu/node-http-proxy/issues/168#issuecomment-3289492
http://blog.argteam.com/coding/hardening-node-js-for-production-part-2-using-nginx-to-avoid-node-js-load/
Script the install
https://github.com/bryanmacfarlane/svcinstall
Exit Shell Script Based on Process Exit Code
Publish Site
Using git to publish to a website
IMHO the biggest drawback of setting up your own stack is that you need to manage things like making Node.js run forever, starting it as a daemon, putting it behind a reverse proxy such as Nginx, and so on ... the great thing about Node.js - that firing up a web server is a one-liner - is one of its biggest drawbacks when it comes to production-ready systems.
Plus, you've got all the issues of managing and updating and securing your server yourself.
This is so much easier with the hosters: Usually it's a git push and that's it. Scaling? Easy. Replication? Easy. ...? Easy. All within a few clicks.
The drawback with the hosters is that you cannot adjust the environment. Okay, you can probably choose which version of Node.js and/or npm to run, but that's it. You have no control over what 3rd-party software is installed. You've got no control over the OS. You've got no control over where the servers are located. And so on ...
Of course, some hosters allow you access to some of these things, but there is rarely a hoster that supports all.
So basically, the question regarding Node.js is the same as with any other technology: it's a trade-off between control, pricing, scalability, reliability, knowledge, ...
I personally chose to go with a hoster: the time and effort I save easily outweigh the disadvantages. Mind you: for me, personally.
This question needs to be answered individually.
Using Docker is another way to simplify the setup on a single Linux VPS. With Docker, both development and production setups are faster, more robust, and more secure.
The setup is faster and more robust because you deploy a ready-made Node.js image in one go, without running any installation scripts. And it is more secure because internal dependencies, such as databases, can be hidden from the outside world completely and made accessible only from Docker's internal network. On top of that, Docker significantly simplifies the upgrade process for the underlying OS and Node.js runtime.
There are two ways to set up a Node.js Docker environment. The first is to follow the instructions published here on how to dockerize your application and deploy it with Docker, alongside databases when needed. The guide gives the instructions for the development setup; the production setup will be similar.
Another way would be to deploy the official Node.js Docker image and mount the application code as a volume or folder into the Node.js container. That allows you to update the Node.js image going forward without rebuilding and redeploying the application. Such an approach addresses the long-standing problem of security patching of Docker images.
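To illustrate that second approach, here is a docker-compose sketch (the service names, ports, file names, and the choice of MongoDB are just placeholders): the app code is mounted into the official Node.js image, and the database sits on an internal-only network so it is never exposed to the outside world.

```yaml
# docker-compose.yml sketch: service names, ports, server.js, and MongoDB are assumptions.
services:
  app:
    image: node:20              # official Node.js image; upgrade by bumping the tag
    working_dir: /usr/src/app
    volumes:
      - ./:/usr/src/app         # application code mounted in, no image rebuild needed
    command: ["node", "server.js"]
    ports:
      - "80:3000"               # only the app is published to the outside world
    networks:
      - frontend
      - backend
  db:
    image: mongo:7
    networks:
      - backend                 # reachable from the app only, never published on the host
networks:
  frontend:
  backend:
    internal: true
```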
To help with setting up Docker on a single machine, you can use Abberit Admin Panel. It will set up a Node.js environment for you with the click of a button, including databases if you need them. The tool is free, and you can turn it off after you have completed the initial setup. On the other hand, if you later decide to reduce the maintenance burden of production, you can migrate to a managed service without any changes to the app.
Disclaimer: I am one of the founders of Abberit.

Best solution to host a (command line) Windows application?

I have a Windows application that does some calculations and is called from command line. On my Windows machine, I have a PHP script running under Apache that executes the application and shows the output.
Is there any hosting solution that I can use to do the same? I can't figure out if EC2 or Azure are the right solutions. Basically, I need a web server + ability to execute my application.
Suggestions? Thanks.
You can host your application on AppHarbor, the .NET Platform-as-a-Service. You can either port your web frontend to .NET or try to get your PHP stuff working with Phalanger. AppHarbor is working on Background Tasks, which might be a good match for your workload.
I would just run the PHP script you already have under IIS in a Windows Azure web role.
If it is a Windows application and you have the source code, I would go with an Azure Worker Role. The advantage of using a PaaS (such as Azure) instead of an IaaS (such as Amazon) is that you won't have to bother with keeping the server up to date.
The real investment in time will be in rewriting your application to make it work as a Worker Role. The time needed for this depends on how your application works right now. If it uses a lot of disk access it might be difficult, and perhaps an Amazon server would be better. But if it only crunches numbers in memory, an Azure Worker Role is a very good candidate.
The real advantage of using an Amazon server is that you probably won't need to do any work at all, except maintaining the server.
As described in the question both Azure and EC2 will do the job very well. This is the kind of task both systems are designed for.
So the question becomes really: which is best? That depends on two things: what the application needs to do and your own experience and preference.
As it's a Windows application there should probably be a leaning towards Azure. While EC2 supports Windows, the tooling and support resources for Azure are probably deeper at this point.
If cost is a factor then a (somewhat outdated) resource is here: http://blog.mccrory.me/2010/10/30/public-cloud-hourly-cost-comparison/ -- the conclusion is that, by and large, Azure and Amazon are roughly similar for compute charges.
Steve Marx has a blog post that describes how to run another web server (i.e. not IIS) on Azure.
This potentially has everything you need - you can deploy Apache and your executable and run it in exactly the same way.
Alternatively, you can deploy your executable alongside a bit of code in a worker role that runs the application periodically, all depending on your exact requirements.
