I would like to connect Azure Functions to a repository so that, on each push to master, the functions are redeployed along with their artifacts (Project.json, function.json, etc.). Is there anything already in place? If not, is anything planned?
Sean,
Since Azure Functions sits on top of App Service/Web Apps, this is a fully supported scenario, and you can find detailed information on the deployment process here.
I also have deployment scripts you can use with your function to make sure your packages are properly restored on deployment here (you can find more information about how to use the script here).
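If it helps, here is a rough sketch of wiring a Function App to a Git repository with the Azure PowerShell cmdlets, so that pushes to master trigger a redeploy. The resource group, app name and repository URL are placeholders, and the API version may need adjusting for your environment:

    # Sketch only: point an existing Function App at a Git repo so pushes to master redeploy it.
    $props = @{
        repoUrl             = "https://github.com/contoso/my-functions"  # placeholder repository
        branch              = "master"
        isManualIntegration = $false   # set to $true for an external repo without a webhook
    }

    Set-AzureRmResource -PropertyObject $props `
        -ResourceGroupName "my-rg" `
        -ResourceType "Microsoft.Web/sites/sourcecontrols" `
        -ResourceName "my-function-app/web" `
        -ApiVersion 2015-08-01 -Force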
Hope this helps!
I have a deployment process that places everything needed within a repository which my Azure App Service is already configured to pull from.
This deployment process is fully automated and works well.
I would like to amend this deployment process to include one or more console applications which would then be configured to be run as WebJobs, either when triggered, or on a continuous basis.
However, the configuration for WebJobs appears to want me to upload the .exe during configuration, rather than point at a pre-existing .exe.
This seems less than optimal, because it suggests that I'll have to re-upload each time said console app changes.
It would be far more convenient to be able to point to a known location within the AppService which contained the full deployment of the WebJob console App.
Is there a way to achieve this?
As far as I know, the deployment process you want cannot be done. No matter which way a WebJob is deployed, the job is essentially copied to the file system via Kudu. And a WebJob is a feature that depends on the Web App, so the deployment cannot be processed as a whole. You can read the Kudu wiki for details.
From your description, I suggest using Azure Functions instead. You could use a TimerTrigger, BlobTrigger, HTTPTrigger, and so on. You can write just the code you need for the problem at hand, without worrying about a whole application or the infrastructure to run it.
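For example, a timer-triggered function is configured through its function.json; a minimal sketch (the five-minute CRON schedule and binding name below are just placeholders) looks roughly like this:

    {
      "disabled": false,
      "bindings": [
        {
          "name": "myTimer",
          "type": "timerTrigger",
          "direction": "in",
          "schedule": "0 */5 * * * *"
        }
      ]
    }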
If you still have questions, please let me know.
What I want to accomplish:
I want to deploy an Azure Cloud Service via Release Management. I managed to get this working by following the steps outlined in this post. In the post the Azure publishsettings file is added to the project and used in Release Management to deploy the Azure package to a Cloud Service. So far so good.
What is the issue:
The Azure publishsettings file will also contain information about the production environment. I don't want that information to be available to all the developers, and therefore I would like a more secure alternative.
What did I try:
I created a custom action which takes 3 arguments: subscription id, subscription name and certificate key. This way the Azure information stays in Release Management and can be passed to a script. This didn't work because the action is not shown in the Release Template Toolbox.
What is my question:
What is the best way to pass Azure credentials to a deployment script via Release Management in a secure manner?
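For reference, the kind of deployment script I want to hand those three values to would look roughly like the sketch below. The parameter names and the idea of passing the management certificate as a base64 string are my own assumptions, not a prescribed pattern; it uses the classic Set-AzureSubscription/Select-AzureSubscription cmdlets:

    param(
        [string]$SubscriptionId,
        [string]$SubscriptionName,
        [string]$CertificateBase64   # hypothetical: management certificate passed in as a base64 string
    )

    # Rehydrate the management certificate and register the subscription for this session only
    $certBytes = [Convert]::FromBase64String($CertificateBase64)
    $cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 `
                       -ArgumentList @(,$certBytes)

    Set-AzureSubscription -SubscriptionName $SubscriptionName `
                          -SubscriptionId $SubscriptionId `
                          -Certificate $cert
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # ...the actual publish step (e.g. New-AzureDeployment / Set-AzureDeployment) would go here...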
We have a solution for Build today that will work for RM in the future.
The publish settings file is a sensitive artifact: anybody who holds it gets access to certain operations, and however you pass it around, it can be misused if someone tries.
So, along with the publish settings file, you need to add a bit of process to the deployment, for example:
Deactivate or remove the management certificate, which in turn invalidates the issued publish settings; anyone should then request a new publish settings file before they actually start any release procedure.
Even though it adds friction to an otherwise smooth deployment flow, as this is a live or production system it is always better to tighten the process and make it foolproof.
I am investigating ways to automate deployment of a specific build of a product to a specific Azure Cloud Service or VM.
The following steps would be automated, with as little manual intervention as possible:
Create a Cloud Service or VM
Install a specific build of the product (as a standalone exe or Windows service, not IIS)
Tweak the configuration file(s)
Set up user account(s)
Run the exe/service
The code is currently in Visual Studio Online / TFS. We have CruiseControl.NET CI set up and we are looking at moving to TeamCity.
This will be used for the usual QA & Production type environments, but also for ad-hoc deployment e.g. if a trial feature has been added to the product and we want to deploy that to a new VM for a specific customer to play around with. Ideally we would be able to use the command line or a UI to pick the build, create the VM and specify any configuration changes.
One possible solution might be Octopus Deploy although I don't think this would be able to actually create an Azure VM. I will probably also look at the Azure API, and also TFS deploy.
Basically is this feasible, and are there any proven alternatives that I'm missing, in order to narrow down my research?
Thanks in advance!
While Octopus Deploy can do many things, in this particular scenario of yours, you're asking it to do three types of work - release management, automated provisioning and configuration management. It's a fine line between automation awesomeness and a really sticky situation.
Of the tasks you're asking about, almost all of them can be done within Octopus today. I'd argue it may even be possible to create a cloud service or VM: if there's some PowerShell cmdlet/library that allows you to spin up VMs with authentication, odds are you can do it in Octopus - but it may not be the right tool for that job today. Why?
In my opinion, it blurs the line between developers, DevOps and sysadmins. Whichever configuration management tool you use - Chef, Puppet, Salt, etc. - it needs a whole layer of users with the expertise to back it up, expertise that the very developers who want such flexibility often don't have. Secondly, right now this isn't a focus within Octopus (yet). I'd be hard pressed to say whether to use a tool such as Octopus based on what it can do versus what it should do.
It's really nice that Azure now has support for preinstalling the Octopus tentacle on VMs. But that requires additional info - the server thumbprint, port and other supplementary configuration - in order to automate VM provisioning. That configuration management - should it be under Octopus's control, or something like Chef or Puppet? I honestly don't have an answer to this, but my feeling as of now is not Octopus. Someday, perhaps, but until this is really ready and fully tested and vetted, I'd wait it out (a little), at least with Octopus.
If you're the adventurous type, then by all means try out Octopus. I may do a PoC (proof of concept) of this infrastructure automation later this year, but to rely on it today for business/production usage as the primary means of infrastructure automation will be risky and require a lot of work and experimentation. Again, I'm not saying it cannot be done, I'm questioning whether it should be done within Octopus as of this response today.
If anything, from the Octopus Deploy side of things, is this feasible? Yes - it just hasn't quite been worked out yet. Looking at what you want to do, I'd say it's a two-phase process: 1. spinning up the new VM and attaching the tentacle to the environment, and 2. running the deployment process on that new VM.
I'd also recommend checking out the Octopus blog. They're publicly talking about infrastructure automation. You can read about it here: http://octopusdeploy.com/blog/rfc-cloud-and-infrastructure-automation-support
I hope this response helps in some way.
A solution for automated deployment in Azure is to use ElasticBox.
I will skip the details of all the configuration options for Azure supported by ElasticBox, as they are detailed in the documentation section: http://elasticbox.com/documentation/deploying-and-managing-instances/using-azure/.
You only need to create a box (the abstraction unit ElasticBox uses to define the installation and configuration of a service or application deployment in any cloud) that takes care of the steps you need automated. You can then deploy the VM with almost no manual intervention - just one click, or a command with some parameters.
A box includes the variables necessary for your deployment and your scripts (in this case probably PowerShell, but they could be Bash, Python, Perl, Java, etc.).
When you deploy the box you create to deploy your application, ElasticBox will:
Create a Cloud Service or VM. (ElasticBox takes care of provisioning the VM in your Azure subscription, or in any other cloud provider you prefer.)
Install a specific build of the product (as a standalone exe or Windows service, not IIS) -> This should be your install event script.
Tweak the configuration file(s) -> This should be part of your configure event script.
Set up user account(s) -> This should be part of your configure event script.
Run the exe/service -> This should be part of your start event script.
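For illustration, here is a minimal sketch of what the install event script from step 2 might contain. The download URL variable, install path and service name are all placeholders of mine, not ElasticBox-defined names:

    # Hypothetical install event script: fetch a specific build and register it as a Windows service.
    $buildUrl  = $env:BUILD_PACKAGE_URL          # placeholder box variable holding the build's URL
    $installTo = "C:\apps\myproduct"

    New-Item -ItemType Directory -Path $installTo -Force | Out-Null
    Invoke-WebRequest -Uri $buildUrl -OutFile "$installTo\product.zip"
    Expand-Archive -Path "$installTo\product.zip" -DestinationPath $installTo -Force   # PowerShell 5+

    # Register the exe as a Windows service; starting it belongs in the start event script.
    New-Service -Name "MyProduct" -BinaryPathName "$installTo\MyProduct.exe" -StartupType Automatic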
ElasticBox has a command-line tool that lets you deploy your boxes to VMs and also manage the deployed VMs: https://pypi.python.org/pypi/ebcli
It also supports automatic termination of the VM after a custom time period.
This is quite a broad question, but the goal is certainly achievable via a number of methods. While a bit old, Tom Hollander's blog on automated deployments is a good starting place. I've seen a lot of Octopus Deploy used, as well as TeamCity, but they all ultimately rely on Azure's PowerShell cmdlets, the Management Libraries in custom code, or pure REST API calls.
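To give a flavour of what those PowerShell cmdlets look like, here is a rough sketch of creating a VM with the classic service management module. The subscription file path, service and VM names, image label and size are all placeholders:

    # Sketch only: create a cloud service VM from a gallery image using the classic cmdlets.
    Import-AzurePublishSettingsFile ".\mysubscription.publishsettings"   # placeholder path

    $image = Get-AzureVMImage |
             Where-Object { $_.Label -like "Windows Server 2012 R2*" } |
             Select-Object -First 1

    $adminPassword = "<placeholder-password>"

    New-AzureVMConfig -Name "qa-trial-vm" -InstanceSize Small -ImageName $image.ImageName |
        Add-AzureProvisioningConfig -Windows -AdminUsername "deployadmin" -Password $adminPassword |
        New-AzureVM -ServiceName "qa-trial-svc" -Location "West Europe"

The same script could be parameterised with the build number and configuration tweaks and run from CruiseControl.NET, TeamCity or Octopus.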
Just an FYI: one option is to do everything by using the Azure Management API. I also like to reference the Azure Client Libraries in a VS project and do everything in C# code.
Just getting used to VS2012 publishing of cloud services. At present I have a one-instance web role which contains an MVC3 application. I can publish it to Azure without issue, and it creates the Cloud Service > Web Role > VMs. Fine. Takes a little while.
However, when I make a small code change, how can I deploy just that change without replacing all the VMs that implement the web role, etc.?
It seems that code and infrastructure are inseparable, or have I misunderstood? Is there a way to update just the code?
Thanks.
When you roll out an update, you upload an entire package containing not only your code files, but also the configuration for the VM, such as # of instances, ports to open on the firewall, local resources to allocate, etc. These configuration settings are part of the code package - so there is more going on than just updating code files.
However, there are a couple of methods you can use to have more granular control over updates.
1. Use Web Deploy. One thing to keep in mind is that any automatic service update will restore your website to the last fully-deployed package, which may not be as up to date. You would only want to use this in staging, then do a full package update for the production rollout.
2. Use an Azure Web Site instead, which allows continuous integration with your source control provider and direct updates to the code.
3. Use an IaaS VM instead. These are basically the same as running your own custom server in the Azure cloud, and you have full control over the OS. However, you also have full responsibility for keeping the OS updated and secure.
4. You can also enable RDP to your Azure Web Role VMs. You will find all your code files and IIS there, but I wouldn't recommend updating your code this way, for the same reasons listed in #1.
The code and infrastructure, in a cloud service, are actually separate. All you upload is a deployment package containing just your code and supporting libraries / files. You don't upload a vhd. Azure provides that for you, spinning up a vhd, and then accessing your code on a file folder on that vhd. Same process happens each time you scale out to more instances.
When you make a code change, you build a new deployment package and deploy that. If you do it as an in-place update (vs. delete + redeploy), each role is updated on each instance (when you have multiple instances of a role, they're not all updated at the same time). You can even specify that you only want a single role within the deployment to be updated (helpful if, say, you have a worker role in addition to your web role and want to leave all the worker role instances running).
When the code update happens, the VMs aren't replaced, but they are recycled, and when they start back up they are running the updated code.
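As a rough illustration, the in-place update described above could be driven from PowerShell with something like the following (service name, package, configuration file and label are placeholders):

    # Sketch: in-place upgrade of an existing cloud service deployment.
    Set-AzureDeployment -Upgrade -ServiceName "myservice" -Slot "Production" `
        -Package ".\MyApp.cspkg" -Configuration ".\ServiceConfiguration.Cloud.cscfg" `
        -Label "v1.2.3" -Mode Auto

    # Or limit the upgrade to a single role, leaving the other roles untouched:
    # Set-AzureDeployment -Upgrade -ServiceName "myservice" -Slot "Production" `
    #     -Package ".\MyApp.cspkg" -Configuration ".\ServiceConfiguration.Cloud.cscfg" `
    #     -Label "v1.2.3" -Mode Auto -RoleName "WebRole1"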
You can use WebDeploy with Cloud Services in production across multiple servers using the AzureWebFarm project (disclaimer: I maintain it).
Alternatively, you can also use the excellent Octopus Deploy deployment technology in conjunction with the AzureWebFarm.OctopusDeploy project (disclaimer: I maintain this one too).
To be honest though, if you just have a simple web app then I wouldn't bother with cloud services - I'd just use Web Sites. Feel free to check out my blog post to see the situations which might force you to use cloud services though.
If you enable WebDeploy on the cloud service, you can use web deploy to publish the MVC application.
See http://msdn.microsoft.com/en-us/library/windowsazure/ff683672.aspx for details.
All of the above answers are correct, and if you are trying to change your code for a production service you definitely want to do an in-place upgrade as described. However, frequently during the dev/test phase or while troubleshooting I want to make one small change and test it out quickly. For that, check out http://blogs.msdn.com/b/kwill/archive/2013/09/05/how-to-modify-a-running-azure-service.aspx which describes how to modify the code via RDP to the Azure VM.
I have two Azure VMs running in a cloud service. They contain almost the same thing. Some TCP ports are also open between them.
Is it possible to create a deployment package from this existing setup so that I can redeploy it easily at a later time? I.e. I want to be able to do this:
1. Create a deployment package from the existing setup*
2. Delete the whole existing cloud service, including the VMs
3. Deploy the package from step 1 and have everything created again.
*I can save one of the VMs to my Azure storage and use it as a template for both of them if that is easier.
How to accomplish this if it is possible?
Yes, you can take what you have as a template and use it to stand up multiple silos. But in IaaS, there isn't a notion of a deployment package. There are a few things you'll need to do...
1) understand how to take an existing VM and turn it into an image
2) use PowerShell or another DevOps-style automation suite (Chef/Puppet/etc.) to define and deploy your silo.
You seem specifically interested in how to create an image, so I'd recommend the tutorial we have published on this: http://www.windowsazure.com/en-us/documentation/articles/virtual-machines-capture-image-windows-server/ This does of course presume you're running Windows Server, but a Linux version can be found at: http://www.windowsazure.com/en-us/documentation/articles/virtual-machines-linux-capture-image/
The automation of a deployment depends on a great many things, so I'd suggest at a starting point, familiarizing yourself with the management API: http://msdn.microsoft.com/en-us/library/windowsazure/ee460799.aspx
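As a rough sketch of the capture-and-redeploy flow (all names below are placeholders, and the VM should be sysprepped/generalized and stopped before capture):

    # 1) Capture the existing, generalized VM as a reusable image.
    Save-AzureVMImage -ServiceName "existing-svc" -Name "vm1" `
        -ImageName "my-silo-template" -OSState Generalized

    # 2) Later, stand the silo up again from that image.
    $adminPassword = "<placeholder-password>"
    New-AzureQuickVM -Windows -ServiceName "new-silo-svc" -Name "vm1" `
        -ImageName "my-silo-template" -InstanceSize Small `
        -AdminUsername "deployadmin" -Password $adminPassword -Location "West Europe"

Opening the TCP ports between the VMs would still need to be scripted separately (for example with Add-AzureEndpoint), which is where the automation suites mentioned above come in.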
With the introduction of Resource Manager, you can now easily use JSON templates to deploy and redeploy resources in Azure. There are also starter templates available - https://azure.microsoft.com/en-us/documentation/templates/
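For example, deploying such a template with the Azure PowerShell Resource Manager cmdlets looks roughly like this (resource group name, location and file paths are placeholders):

    # Sketch: deploy (or redeploy) an ARM template into a resource group.
    New-AzureRmResourceGroup -Name "my-silo-rg" -Location "West Europe"

    New-AzureRmResourceGroupDeployment -ResourceGroupName "my-silo-rg" `
        -TemplateFile ".\azuredeploy.json" `
        -TemplateParameterFile ".\azuredeploy.parameters.json"

Because the template describes the whole setup (VMs, networking, open ports), deleting the resource group and running the deployment again recreates everything, which matches the redeploy-from-scratch workflow described in the question.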