Is there any way to save the complete state of my Azure configuration?
Basically, I just created a demo for a project I'm working on. The demo has a website with a WebJob, a Scheduler job, a storage queue, a storage blob, a Redis cache, and a DocumentDB instance. I have configured these components in terms of size, scale, and schedules, but now the demo is done.
I don't want to pay for these services and I don't need them online for now. However, I don't want to have to recreate and reconfigure them if I need to relaunch the demo in a month.
Is there a way to save my current Azure configuration to a file and then recreate all the services automatically (with a script or a small program)?
Thanks!
This is a very good question that sums up a historical problem we're in the process of making easier and more flexible. I'll answer it in two parts.
First and foremost, you now have tools like the Azure PowerShell cmdlets, with which you can script out the creation of an entire "world" in Azure and then re-run that script whenever you want, against a subscription, to scaffold out a whole architecture. You can also use the Microsoft Azure Management Libraries (MAML) for .NET to do this from a .NET application. When we embarked on the VS WebJobs tooling, for instance, I worked up a prototype for my developers on using MAML to create WebJobs and Scheduler job collections. You can see the demo code for that here: https://github.com/bradygaster/maml-demo-scheduled-webjob-creator
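As a rough illustration of that first approach, here is a minimal sketch using the classic Azure PowerShell cmdlets; the names and locations are hypothetical, and it assumes you have already run Add-AzureAccount and selected a subscription:

    # Recreate the storage account backing the demo's queue and blob
    New-AzureStorageAccount -StorageAccountName "mydemostorage" -Location "West Europe"

    # Recreate the website that hosts the site and its WebJob
    New-AzureWebsite -Name "my-demo-site" -Location "West Europe"

Running a script like this against a fresh subscription re-scaffolds the services; the remaining configuration (scale, schedules, and so on) can be scripted similarly with the corresponding Set-* cmdlets.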
We've also recently embarked on a new mission of re-creating a lot of the management APIs so that they support the notion of templates and resource groups, to marry up with the new portal experience. Here's a great MSDN article that discusses how the PowerShell cmdlets for the Gallery can be used to pull down a list of the various templates, which can then be pushed back up as fully-baked architectures running in Azure. You can also build these templates yourself and use the cmdlets to fan out and create whatever you describe in your own custom templates: http://msdn.microsoft.com/en-us/library/dn654596.aspx
Hope this helps!
For Azure Websites you can use the Backup and Restore option to store the site and restore it when you want to run the demo again. But really, all you have to do is stop the services, and you should be able to keep the demo around without incurring cost.
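A minimal sketch of the stop/start approach with the classic cmdlets (the site name is hypothetical):

    # Stop the site while the demo is idle...
    Stop-AzureWebsite -Name "my-demo-site"

    # ...and start it again when you need the demo back
    Start-AzureWebsite -Name "my-demo-site"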
Can someone please help me automate build and release pipelines in Azure DevOps for an Azure Database for PostgreSQL (Single Server), so that I can create a database and run different scripts in it for creating/altering tables, functions, indexes, etc.?
I googled and found nothing in the Microsoft documentation for this purpose, but I did find that it can be done using Zapier.
Per organizational policy, I cannot use Zapier or any other third-party tools/sites.
Is this doable using only Microsoft build and release tasks in Azure DevOps? Can someone please guide me through the steps?
Database DevOps is difficult because you have to manipulate existing objects, not simply replace them like you do for application deployments. To do this, you have to add a tool that manages your Data Definition Language (DDL) queries. Or you can build one. We did that a long time ago. I don't recommend it. Tons of work, lots of issues.
For PostgreSQL, I'd suggest you start testing Flyway. It works really well with Azure DevOps. I have a short video you can use to see it in action. Flyway is open source, so getting started with it is license-free. You can install the software, but it also runs through containers, which makes it really simple to implement through the Azure DevOps agents. The concept is pretty simple: it acts as a marshalling tool that runs your DDL in the correct order, like a manifest, and then marks the database so it knows which scripts it has already run. You go from there.
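As a rough sketch of what an agent job could run, assuming the official flyway/flyway container image and a sql folder of versioned migration scripts (the server, database, and credentials here are placeholders):

    # Run Flyway from its container against an Azure Database for PostgreSQL server.
    # The ./sql folder holds versioned migrations such as V1__create_tables.sql.
    docker run --rm -v "${PWD}/sql:/flyway/sql" flyway/flyway `
        -url="jdbc:postgresql://mydemoserver.postgres.database.azure.com:5432/mydb" `
        -user="myadmin@mydemoserver" -password="$env:PGPASSWORD" migrate

Because it is just a command-line call, it drops straight into a script step in an Azure DevOps build or release pipeline.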
I want to create an automation demo for customers, where I have a single-page web app with a couple of input text fields, and the inputs get used as parameters in the creation of an Azure Resource Group and the VNets/VMs/etc. within it.
I can do all of the above with Azure CLI (v2.0) on my laptop, and also from a bash script on a Linux server, but I wanted something web-based. I considered standing up a web page on the Linux server to call the bash script, but that seems a bit painful (especially with permissions, etc.). I also thought Azure Functions might provide a way to host the single-page app and call the Azure CLI commands, but I've never used Functions before, so I'm not sure if it can do this; the descriptions of Functions' capabilities aren't clear to me.
What is the best way to achieve what I'm after, quickly?
Note I'm not a developer, I'm a network engineer, so whilst I can hack around in a few languages from Notepad and vi, I'm not looking to build something in a full SDK, or have something with enterprise-level reliability, version control, etc. This is really all about proof of concept and web-based demo of something I already have in Azure CLI / bash script.
Thanks in advance :-)
For a quick and relatively dirty way, you could create an Azure Automation runbook (using the scripts generated by the Azure Portal) and invoke it using the Automation API. This could use the scripts you already have (or something close to them).
When you roll out a new service in Azure, you now get the option to download the automation script; you can then follow this article to deploy the generated script via a runbook.
To follow on from Jamie's idea.
You can code your Azure CLI script (or PowerShell) into an Azure Automation runbook, and you can define variables and other assets for it to use.
You can then attach a webhook to that runbook and call it with a standard HTTP POST request.
That means you could create an HTML form that passes whatever variables are required and builds whatever is needed; a sketch of the call follows below.
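A minimal sketch of that call from PowerShell (the webhook URL and parameter names are hypothetical; the token is generated when you create the webhook):

    # POST to the runbook's webhook; the body reaches the runbook via its $WebhookData parameter.
    $uri  = "https://s1events.azure-automation.net/webhooks?token=<your-token>"
    $body = @{ resourceGroupName = "demo-rg"; location = "westeurope" } | ConvertTo-Json
    Invoke-RestMethod -Method Post -Uri $uri -Body $body

The same POST can be issued from an HTML form's submit handler.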
The downside of this approach is that you will be creating everything on your own infrastructure.
Alternatively, you can have a solution that deploys to someone else's infrastructure with a Deploy to Azure button.
This lets you host it on GitHub etc.; it takes a bit more knowledge to make it work, but it saves your account dollars!
My company developed an Azure Resource Manager-based solution that deploys a set of resources (essentially a storage account, a SQL database, and a web app), and it is already implemented as our provisioning process for new customers.
However, we are now studying the best way to perform updates, and one of the approaches we are considering is having a specific template that updates the binaries of this application.
The idea is to have a separate template that contains only the web app, an app host, and an MSDeploy resource that gets the latest version of our package and re-uploads it to that web app.
The only problem I see with this solution is handling any configuration changes required by a newer version of the binaries. We do not want users to have to re-enter any parameters they supplied for the original deployment (done via a Deploy to Azure button), so any configuration will have to be performed within the application; the plan is for it to use the Microsoft.WindowsAzure.Management.WebSites library.
The major limitation of Microsoft.WindowsAzure.Management.WebSites is that you are restricted to authenticating with either a certificate or a service principal. Ideally, we would like the updates to require no authentication beyond what is provided when deploying the update.
Is there any recommendation of best practices to follow for this kind of scenario?
Thank you.
Link to the equivalent discussion on TechNet
It is possible to perform updates using ARM templates alone.
For example, connection strings can be added to the application settings automatically, even while the dependent resources themselves are being created (e.g., a storage account connection string).
Only the first-time creation of your web sites takes a bit more time, something like 30 seconds.
ARM will not destroy your web apps if they already exist; it will only update them.
If there are no changes, the deployment is very fast.
If your changes require a new app settings parameter, you can add it to the ARM template, check it in to your repository, and the next deployment will pick it up and update the web app.
So there is no need for anyone to log in and update anything by hand.
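For illustration, a redeployment along these lines (the resource group and template file are hypothetical) runs in incremental mode, so existing resources are updated in place rather than recreated:

    # Redeploy the template; Incremental mode leaves resources not in the template untouched.
    New-AzureRmResourceGroupDeployment -ResourceGroupName "customer-rg" `
        -TemplateFile ".\azuredeploy.json" -Mode Incremental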
Our final decision was to give up on using ARM exclusively. The service principal solution, through the SDK, would allow us to use a WebJob or a site extension to perform (automatic or prompted) updates that included configuration changes. However, it would require "too many" privileges: why would a customer trust an application that can, at will, create new resources or update existing ones to increase their Azure bill?
The decision was made to use PowerShell only for updates; if customers can see the scripts and authenticate themselves, this is not a concern. Sadly, this increases update complexity, but we found it to be a necessary evil.
I'm doing some research on migrating virtual machines (VMware) to the new Azure Resource Manager portal.
I have already succeeded in doing this with PowerShell, but I was wondering whether there are other methods that are faster and more efficient, with less downtime?
There is Azure Site Recovery:
https://azure.microsoft.com/en-us/blog/azure-site-recovery-ga-move-vmware-aws-hyper-v-and-physical-servers-to-azure/
"Customers can replicate on-premises workloads to Azure with Azure Site Recovery for 31 days at no charge, effectively making migration to Azure free."
I do enjoy the cunning of #alexandr's solution! However, the official word from Microsoft is that they are currently building the tools to automate this for you,
as described in the Transitioning to the Resource Manager model blog post. There is a basic set of scripts, and a timeline for the more complex migrations (no VNet support yet, etc.).
To directly answer your question: PowerShell seems to be the tool of choice for managing the migration. (I imagine that if you leave it long enough there will be a 'migrate' button in the portal, as they seem very eager to move everyone away from ASM.)
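For reference, the platform-supported migration from that blog post is driven by PowerShell cmdlets along these lines (the service and deployment names are hypothetical, and the exact parameters depend on your module version):

    # Validate, prepare, and commit the move of a classic cloud service to ARM.
    Move-AzureService -Validate -ServiceName "my-service" -DeploymentName "my-deployment" -CreateNewVirtualNetwork
    Move-AzureService -Prepare -ServiceName "my-service" -DeploymentName "my-deployment" -CreateNewVirtualNetwork
    Move-AzureService -Commit -ServiceName "my-service" -DeploymentName "my-deployment"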
I created a custom Azure worker role, and the code is ready. What I'm trying to do is create instances of this worker role in a specific Azure data center at the requested time. For example, I want to send a command to Azure to create 10 instances of my custom worker in the West Europe data center, right now.
It's important to be able to pass this command a parameter that will be the input problem to be solved by my workers.
I'm pretty sure this automation task can be covered by Azure Automation. Is that true? I'm looking for more information/directions.
Thank you!
You can use the Azure Management Libraries to create and deploy your cloud services from C# code. Just create an application (e.g., ASP.NET MVC) to manage your cloud services by sending commands, and deploy it to Azure as well, or even keep it locally.
See this article for more details: http://www.bradygaster.com/post/getting-started-with-the-windows-azure-management-libraries
You'll want to leverage the Service Management API to spin up and tear down roles. It can be accessed any number of ways, including directly via REST.
Regarding providing a parameter to the worker role, one option is leveraging the cloud service configuration file (.cscfg) that you provide with the .cspkg; define the role's specifics there.
Depending on the complexity or simplicity of your scenario, you may also get away with simply having a table in storage that you personally poke the desired configuration values into and that the worker reads from.
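For the service-management route, a hedged sketch with the classic PowerShell cmdlets (service, package, and role names are hypothetical) would create the service in the target region, deploy the packaged role with its configuration, and scale it out:

    # Create the cloud service in the target data center and deploy the worker role.
    New-AzureService -ServiceName "my-workers" -Location "West Europe"
    New-AzureDeployment -ServiceName "my-workers" -Slot Production `
        -Package ".\MyWorker.cspkg" -Configuration ".\ServiceConfiguration.Cloud.cscfg"

    # Scale out to the desired number of instances.
    Set-AzureRole -ServiceName "my-workers" -Slot Production -RoleName "MyWorkerRole" -Count 10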
The Azure Automation service should definitely be able to automate this task for you. Anything you can script via the Azure PowerShell module, can be imported as a runbook and called manually, via a third-party system, or on a schedule in Azure Automation.
Whether there is an existing runbook for the specific task you are looking to automate, I do not know. But Azure Automation has a gallery of community-contributed content for many common processes, so this may be available there.
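If nothing in the gallery fits, a minimal custom runbook might look like this sketch (the credential asset and role names are hypothetical; it assumes an Automation credential asset with access to the subscription):

    workflow Scale-DemoWorkers
    {
        param([string]$ServiceName, [int]$InstanceCount)

        # Authenticate with a credential stored as an Automation asset.
        $cred = Get-AutomationPSCredential -Name "AzureCredential"
        Add-AzureAccount -Credential $cred

        # Set the worker role to the requested instance count.
        Set-AzureRole -ServiceName $ServiceName -Slot Production `
            -RoleName "MyWorkerRole" -Count $InstanceCount
    }

The runbook can then be started manually, on a schedule, or via a webhook, passing the instance count as a parameter.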