We are testing a set of PS scripts to create several Azure artifacts:
- Storage Accounts
- SQL Servers
- Service Bus
- App Services
- AppInsights
- IoT Hub
Performance is highly variable. For example, creating the storage accounts (we create 2) sometimes takes a second or three, and sometimes it takes 20+ minutes. All our resources are in US East; we are using the New-AzureRmxxx PowerShell cmdlets.
Is this typical? How do others investigate issues like this, beyond confirming there are no outages via the Azure Service Health dashboards?
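For reference, this is roughly the pattern we use to time the individual calls (a minimal sketch; the resource group, account name, location, and SKU below are placeholders):

```powershell
# Minimal timing sketch around a single storage account creation.
# Resource group, account name, location and SKU are placeholders;
# assumes an existing Login-AzureRmAccount session.
Import-Module AzureRM.Storage

$elapsed = Measure-Command {
    New-AzureRmStorageAccount -ResourceGroupName "MyRg" `
                              -Name "mystorageacct001" `
                              -Location "eastus" `
                              -SkuName "Standard_LRS"
}
Write-Output ("Storage account creation took {0:N1} seconds" -f $elapsed.TotalSeconds)
```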
First of all, Azure has had problems with storage account creation lately, but they appear to be gone (at least at the time of writing).
The other problem you might be facing is creating storage accounts (or any resources, for that matter) in place of ones you've just deleted. Say you have a storage account named mystorageacct: you delete it in the script and right after that you start creating a new account with the same name. That can take a while because of how Azure works.
And generally you don't troubleshoot behavior like this in the cloud; the cloud is expected to throw timeouts every now and then. If you created uniquely named storage accounts every time, this wouldn't be an issue. And if Azure itself is having problems, those are temporary and there is nothing you can do about them (but knowing about them helps). The Azure status RSS feed is useful for that.
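A minimal sketch of the unique-name idea with the AzureRM cmdlets from the question (the resource group, base name, location, and SKU are placeholders): generate a fresh name each run and check availability before creating.

```powershell
# Generate a unique storage account name each run so we never collide with a
# recently deleted one, and check availability before creating.
# All names below are placeholders; assumes an existing Login-AzureRmAccount session.
$baseName   = "mystorageacct"
$uniqueName = ($baseName + (Get-Random -Maximum 99999)).ToLower()

$check = Get-AzureRmStorageAccountNameAvailability -Name $uniqueName
if ($check.NameAvailable) {
    New-AzureRmStorageAccount -ResourceGroupName "MyRg" -Name $uniqueName `
                              -Location "eastus" -SkuName "Standard_LRS"
}
else {
    Write-Warning ("Name '{0}' is not available: {1}" -f $uniqueName, $check.Message)
}
```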
I have observed this behavior when using Azure DevOps to recreate a resource group.
If I purge (remove) the resources and then attempt to recreate them using the same names, it causes slowness.
I found that if I delete the resource group in its entirety, then recreate it and the accompanying resources, the slowness does not bite as often.
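For illustration, the delete-and-recreate pattern looks roughly like this (the resource group name, location, and template file are placeholders):

```powershell
# Tear down the whole resource group rather than removing resources one by one,
# then recreate it and redeploy. Names and template path are placeholders.
Remove-AzureRmResourceGroup -Name "MyRg" -Force

New-AzureRmResourceGroup -Name "MyRg" -Location "eastus"

# Redeploy the accompanying resources into the fresh group, e.g. from an ARM template:
New-AzureRmResourceGroupDeployment -ResourceGroupName "MyRg" -TemplateFile ".\resources.json"
```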
As an MSP, we manage multiple customer subscriptions through Azure Lighthouse.
Historically we've used a single Automation Account per subscription to contain solutions such as runbooks related to the Start/Stop v1 solution, Automation-based Update Management, Inventory, and Change Tracking. This Automation Account is also linked to a single Log Analytics workspace per subscription.
We've since deployed Start/Stop v2, which uses Logic Apps and Azure Functions. We now have a requirement to, as part of stopping and starting some VMs, stop and start some services on the machines themselves. I plan on doing this through (PowerShell) Azure Automation runbooks, which would only stop a VM once the runbook has successfully stopped a service on it.
My question relates to whether a single monolithic Automation Account is the way to go, or whether there are any considerations to be taken if we were to implement multiple Automation Accounts.
(I've noticed Best practice to deploy Azure Automation Account Runbooks, but that's over a year old. Things might have changed in the meantime.)
The best-practice question you mentioned still holds good, i.e., the two major attributes to consider are pricing and logical resource allocation. One other attribute to keep in mind while deciding whether to go with a single or multiple Automation Accounts is the service limits, i.e., if you go with a single Automation Account, does the traffic in your environment, or the activities that the account performs, reach the limits mentioned here? If yes, then go for the multiple Automation Accounts approach.
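For example, a rough way to gauge how busy a single Automation Account already is would be to count recent and currently running jobs and compare them with the documented limits (a sketch only; the resource group and account names below are hypothetical):

```powershell
# Count jobs from the last 24 hours and currently running jobs for one Automation
# Account, to compare against the documented service limits.
# Resource group and account names are hypothetical.
$rg      = "Customer-Automation-RG"
$account = "CustomerAutomationAccount"

$recentJobs  = Get-AzureRmAutomationJob -ResourceGroupName $rg -AutomationAccountName $account `
                                        -StartTime (Get-Date).AddDays(-1)
$runningJobs = Get-AzureRmAutomationJob -ResourceGroupName $rg -AutomationAccountName $account `
                                        -Status "Running"

Write-Output ("Jobs in the last 24 h: {0}; currently running: {1}" -f `
    @($recentJobs).Count, @($runningJobs).Count)
```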
I have the following requirements, for which I am considering using Azure Logic Apps:
Files placed in Azure Blob Storage must be migrated to a custom destination (which can differ from case to case)
The number of files is around 1,000,000
When the process is over, we should have a report saying how many records (files) failed
If the process stops somewhere in the middle, the next run must pick up only the files that have not been migrated yet
The process must be as fast as it can be, and files must be migrated within N hours
What worries me is that I cannot find any examples or articles (including the official Azure documentation) where the same thing is achieved with an Azure Logic App.
I have some ideas about my requirements and Azure Logic App:
I think I must use pagination to deal with this number of files, because a Logic App will not be able to read millions of file names in one go - https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-exceed-default-page-size-with-pagination
I can add a record into Azure Table Storage to track failed migrations (something like creating a record to say that the process started and updating it when the file is moved to the destination)
I have no idea how I can restart the Logic App without using a custom tracking mechanism (for instance, it could be the same Azure Table Storage instance)
And the question of how to split the work across several units is still open
Do you think Azure Logic Apps are the right choice for my needs, or should I consider something else? If a Logic App can work for me, could you please share your thoughts and ideas on how I can achieve the given requirements?
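For context, outside of a Logic App this is the general continuation-token pattern I would use to enumerate that many blobs (for example from a script or a runbook); the storage account, key, and container names below are placeholders:

```powershell
# Page through a very large container 5,000 blobs at a time using continuation tokens.
# Storage account name, key and container are placeholders.
$ctx       = New-AzureStorageContext -StorageAccountName "mysourceaccount" -StorageAccountKey "<key>"
$container = "incoming-files"

$total = 0
$token = $null
do {
    $page = Get-AzureStorageBlob -Container $container -Context $ctx -MaxCount 5000 -ContinuationToken $token
    if (@($page).Count -le 0) { break }

    # ...migrate this page of blobs and record per-blob success/failure here...
    $total += @($page).Count
    $token  = $page[$page.Count - 1].ContinuationToken
} while ($token -ne $null)

Write-Output ("Enumerated {0} blobs" -f $total)
```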
I don't think a Logic App is a good solution for this requirement, because the number of files is around 1,000,000, and that's too many for it. For this requirement, I suggest you use Azure Data Factory.
To migrate data from Azure Blob Storage with Data Factory, you can refer to this document
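As a rough sketch of what the orchestration side could look like once a copy pipeline has been authored in Data Factory (the resource group, factory, and pipeline names below are hypothetical):

```powershell
# Trigger an existing Data Factory V2 copy pipeline and wait for the run to finish.
# Resource group, factory and pipeline names are hypothetical.
$rg       = "MigrationRg"
$factory  = "blob-migration-adf"
$pipeline = "CopyBlobsPipeline"

$runId = Invoke-AzureRmDataFactoryV2Pipeline -ResourceGroupName $rg `
                                             -DataFactoryName $factory `
                                             -PipelineName $pipeline

do {
    Start-Sleep -Seconds 30
    $run = Get-AzureRmDataFactoryV2PipelineRun -ResourceGroupName $rg `
                                               -DataFactoryName $factory `
                                               -PipelineRunId $runId
} while ($run.Status -eq "InProgress")

Write-Output ("Pipeline run {0} finished with status: {1}" -f $runId, $run.Status)
```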
We are using an Azure Storage account to store some files that are downloaded by our app on the user's demand.
Even though there should be no write operations (at least none I can think of), we are exceeding the included write operations just a few days into the billing period (see image).
Price-wise it's still within limits, but I'd still like to know whether this is normal and how I can analyze the matter. Besides the storage account, we are using
Functions and
App Service (mobile app)
but none of them should cause that many write operations. I've checked the logs of our Functions, and none of those that access the queues or the blobs have been active lately. There are some functions that run every now and then, but only once every few minutes, and those do not access the storage at all.
I don't know if this is related, but there is a kind of periodic ingress on our blob storage (see the image below). The period is roughly 1 h, but there is a baseline of 100 kB per 5 min.
Analyzing the metrics of the storage account further, I found that there is a constant stream of 1.90k transactions per hour for blobs and 1.3k transactions per hour for queues, which seems quite exceptional to me. (Please note that the resolution of this graph is 1 h, while the former has a resolution of 5 minutes.)
Is there anything else I can do to analyze where the write operations come from? It kind of bothers me, since it does not seem as if it's supposed to be like that.
I've had the exact same problem; after enabling Storage Analytics and inspecting the $logs container, I found many log entries indicating that upon every request towards my Azure Functions, these write operations occur against the following container object:
https://[function-name].blob.core.windows.net:443/azure-webjobs-hosts/locks/linkfunctions/host?comp=lease
In my Azure Functions code I do not explicitly write to any container or file, but I have the following two application settings configured:
AzureWebJobsDashboard
AzureWebJobsStorage
So I filed a support ticket with Azure, asking the following questions:
Are the write operations triggered by these application settings? I believe so, but could you please confirm.
Will the write operations stop if I delete these application settings?
Could you please describe, at a high level, in what context these operations occur (e.g. logging, resource locking, other)?
and I got the following answers from the Azure support team, respectively:
Yes, you are right. According to the logs information, we can see “https://[function-name].blob.core.windows.net:443/azure-webjobs-hosts/locks/linkfunctions/host?comp=lease”.
This azure-webjobs-hosts folder is associated with the function app and is created by default when the function app is created. When the function app is running, it records these logs in the storage account configured in AzureWebJobsStorage.
You can't stop the write operations, because these operations record logs that the Azure Functions runtime needs in the storage account. Please do not remove the application setting AzureWebJobsStorage. The Azure Functions runtime uses this storage account connection string for all functions except HTTP-triggered functions; removing this application setting will leave your function app unable to start. By the way, you can remove AzureWebJobsDashboard, but that only stops the monitoring dashboard, not the operations above.
These operations record the runtime logs of the function app. They occur when our backend allocates an instance for running the function app.
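If you want to see the lock blob those lease operations are taken against, a minimal sketch with the AzureRM-era storage cmdlets (the storage account name and key are placeholders):

```powershell
# List the host lock blobs that the Functions runtime takes leases on.
# Storage account name and key are placeholders.
$ctx = New-AzureStorageContext -StorageAccountName "myfunctionstorage" -StorageAccountKey "<key>"

Get-AzureStorageBlob -Container "azure-webjobs-hosts" -Prefix "locks/" -Context $ctx |
    Select-Object Name, BlobType, LastModified
```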
The best place to find information about storage usage is Storage Analytics, especially Storage Analytics Logging.
There's a special blob container called $logs in the same storage account which will have detailed information about every operation performed against that storage account. You can view the blobs in that blob container and find the information.
If you don't see this blob container in your storage account, you will need to enable Storage Analytics on your storage account. However, considering you can see the metrics data, my guess is that it is already enabled.
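If it is not enabled yet, a minimal sketch for turning logging on and peeking at the newest entries (the storage account name and key are placeholders; the cmdlets are from the AzureRM-era Azure.Storage module):

```powershell
# Enable Storage Analytics logging for the blob and queue services, then list the
# newest log blobs in the $logs container. Account name and key are placeholders.
$ctx = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<key>"

Set-AzureStorageServiceLoggingProperty -ServiceType Blob  -LoggingOperations All -RetentionDays 7 -Context $ctx
Set-AzureStorageServiceLoggingProperty -ServiceType Queue -LoggingOperations All -RetentionDays 7 -Context $ctx

# Logs can take a little while to appear after enabling.
Get-AzureStorageBlob -Container '$logs' -Context $ctx |
    Sort-Object LastModified -Descending |
    Select-Object -First 10 Name, LastModified
```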
Regarding the source of these write operations: have you enabled diagnostics for your Functions and App Service? Those write diagnostic logs to blob storage. Also, Storage Analytics itself writes to the same account, and that will also cause write operations.
In my case, I had Azure Application Insights generating 10K transactions per minute on the storage account for my Functions and App Services, even though there were only a few HTTPS requests among them. I'm not sure what triggered them, but once I removed Application Insights, everything went back to normal.
I am planning to migrate my Azure resources from an EA subscription to a CSP subscription. I found a tool called Migrate Azure (sorry, I cannot write the short form, as the site does not allow me to post it). I am aware of how it works; however, I am worried about one thing.
Can the migration cause downtime for any of the running resources, like Azure Virtual Machines, Storage, etc.? The point is that I will be working directly on the production resources to migrate them to the CSP subscription.
Can anyone give me an idea?
As indicated by #HariHaran, you should follow the documentation mentioned here.
Switching subscriptions should not affect your workloads, since it's a subscription migration and not workload related (assuming that all of your resources already reside in Azure). The safe approach during your migration is to do it incrementally: if everything functions normally, increase the migration rate. Doing it all at once may not be the best idea, as redeployment issues can occur at times.
We currently have a Windows service which sends notification emails to users after doing some processing on a database (SQL database). It runs once a day.
We want to move this to Azure. One alternative is to put it on an Azure VM as-is, but I am looking for the best possible solution.
I have read about recurring and on-demand WebJobs, but I am not sure whether this is the best solution.
Also, is there any possibility to update the service configuration (App.config) without re-deploying the service code to the cloud? I mean, can we manage the configuration from the Azure portal?
Thanks in advance.
Update 11/4/2016
Since this was written, there are 2 additional features available in Azure that are both excellent choices depending on what functionality you need:
Azure Functions (which was based on the WebJobs described below): serverless code that can be triggered/invoked in various ways, and has scaling support.
Azure Service Fabric: Microservice platform, with support for actor model, stateful and stateless services.
You've got 3 basic options:
Windows service running on VM
WebJob
Cloud service
There's a lot of information out there on the tradeoffs between these choices, but here's a brief summary.
VM - Advantages: you can move your service basically as it is without having to change much (or any) of your code. VMs also have the easiest connectivity with other resources in Azure (blob storage, virtual networks, etc.). The disadvantage is that you're giving up all of the PaaS advantages and are still stuck managing your own VM infrastructure.
WebJob - Advantages: multiple invocation options (queues, blobs, manual, queue receive loops, continuous while-loop style, etc.), plus scheduled runs (which would cover your case). Easy to deploy (can go with the website, as a console app, automatically through Kudu), has some built-in logging in the Azure portal - and yes, to answer your question, you can alter the configuration in the portal itself for connection strings and app settings (a PowerShell sketch of this follows after the option summaries).
Disadvantages: you'll need to update code, you don't have access to the underlying resources (if you need that), and - more something to keep in mind than a disadvantage - it uses the same resources as the web app it's deployed with.
WebJobs are the newest of these options, but at the same time appear to have active development going on to increase their functionality and usefulness.
Cloud Service - like a managed VM, has some deployment options, access to underlying VM if needed. Would require some code changes from your existing service.
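To illustrate the point above about changing WebJob configuration without redeploying, here is a rough sketch using the AzureRM cmdlets (the resource group, app name, and the NotificationBatchSize setting are hypothetical; note that Set-AzureRmWebApp replaces the whole app-settings collection, so the existing values are copied first):

```powershell
# Change one app setting on the web app that hosts the WebJob, without redeploying.
# Resource group, app name and the setting itself are hypothetical.
$rg  = "MyRg"
$app = "my-webjob-host-app"

$webApp = Get-AzureRmWebApp -ResourceGroupName $rg -Name $app

# Set-AzureRmWebApp -AppSettings replaces the entire collection, so copy the
# existing settings into a hashtable before adding or overriding anything.
$settings = @{}
foreach ($pair in $webApp.SiteConfig.AppSettings) {
    $settings[$pair.Name] = $pair.Value
}
$settings["NotificationBatchSize"] = "500"

Set-AzureRmWebApp -ResourceGroupName $rg -Name $app -AppSettings $settings
```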
There's nothing you've mentioned in your use case that makes me think a WebJob shouldn't be the first thing you try.
(Edit: Troy Hunt has a great and relatively recent blog post illustrating most of the points I've mentioned about Web Jobs above: http://www.troyhunt.com/2015/01/azure-webjobs-are-awesome-and-you.html)