Architecture decisions relating to multiple Automation Accounts per subscription - Azure

As an MSP, we manage multiple customer subscriptions through Azure Lighthouse.
Historically we've used a single Automation Account per subscription to contain solutions such as runbooks related to the Start/Stop v1 solution, Automation-based Update Management, Inventory, and Change Tracking. This Automation Account is also linked to a single Log Analytics workspace per subscription.
We've since deployed Start/Stop v2, which uses Logic Apps and Azure Functions. We now have a requirement, as part of stopping and starting some VMs, to stop and start some services on the machines themselves. I plan on doing this through (PowerShell) Azure Automation runbooks, where a runbook would only stop a VM if it has successfully stopped the service on it.
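Roughly, the pattern I have in mind looks like the sketch below (all names are placeholders; -ScriptString needs a recent Az.Compute module, older versions use -ScriptPath instead):

# Stop the service inside the guest first; only deallocate the VM if that succeeded.
$result = Invoke-AzVMRunCommand -ResourceGroupName 'customer-rg' -VMName 'app-vm-01' `
    -CommandId 'RunPowerShellScript' `
    -ScriptString 'Stop-Service -Name "MyService" -ErrorAction Stop; "stopped"'

if ($result.Value[0].Message -match 'stopped') {
    Stop-AzVM -ResourceGroupName 'customer-rg' -Name 'app-vm-01' -Force
}
else {
    Write-Warning 'Service did not stop cleanly; leaving the VM running.'
}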
My question relates to whether a single monolithic Automation Account is the way to go, or whether there are any considerations to be taken if we were to implement multiple Automation Accounts.
(I've noticed Best practice to deploy Azure Automation Account Runbooks, but that's from over a year ago; things might have changed in the meantime.)

The best-practice question you mentioned still holds good: the two major attributes to consider are pricing and logical resource allocation. One other attribute to keep in mind while deciding between a single Automation Account and multiple accounts is the service limits, i.e., if you go with a single Automation Account, does the traffic in your environment, or the volume of activities that account performs, approach the documented Azure Automation limits? If yes, then go for the multiple-Automation-Accounts approach.
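As a rough way to gauge that, here is a minimal sketch (assuming the Az.Automation module and an authenticated Az context; the one-hour window is arbitrary) that counts recently submitted jobs per Automation Account, which you can compare against the per-account job limits:

# Count jobs submitted in the last hour for every Automation Account in the current subscription.
$since = (Get-Date).AddHours(-1)
Get-AzAutomationAccount | ForEach-Object {
    $jobs = Get-AzAutomationJob -ResourceGroupName $_.ResourceGroupName `
                                -AutomationAccountName $_.AutomationAccountName `
                                -StartTime $since
    [pscustomobject]@{
        AutomationAccount = $_.AutomationAccountName
        JobsLastHour      = ($jobs | Measure-Object).Count
    }
}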

Related

Downgrading Azure Resources during weekends using PowerShell Automation

In order to save expenses in Azure DevOps, I'm trying to scale resources that can scale depending on requirements. Team leads will update the resource requirements in SharePoint, and the runbook needs to be executed with the SharePoint data. If such resources are not required on weekends but must be operational on weekdays, they should be stopped or reduced in size. I need to use automation to do this for all of the VMs and App Services at a subscription level every Friday. Is there a method to automate this procedure using PowerShell?
I'm glad to receive input. Thanks in advance.
I'm looking for feedback on stopping/starting VMs and scaling Azure App Services; the same may apply to other relevant resources on weekends. How can we accomplish this with Azure PowerShell?
 
The best way to do this is by using an Azure Automation runbook scheduled to run on the specified day(s) and time. To target the VMs, Azure tags will be very helpful.
Your script should check the following (a minimal sketch follows the checklist):
The VM has the specific tag (e.g., StopVM: Friday 11:00PM)
Maintenance mode is enabled in your monitoring solution
Whether the VM is already stopped
Whether a backup is required
The VM ends up deallocated (not merely stopped)
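A minimal runbook sketch of those checks, assuming an Az-enabled Automation Account with a system-assigned managed identity and a hypothetical StopVM tag on the target VMs (the maintenance-mode and backup checks are left as placeholders):

Connect-AzAccount -Identity | Out-Null

# Find VMs carrying the StopVM tag and inspect their current power state.
$vms = Get-AzVM -Status | Where-Object { $_.Tags -and $_.Tags.ContainsKey('StopVM') }

foreach ($vm in $vms) {
    if ($vm.PowerState -eq 'VM deallocated') {
        Write-Output "$($vm.Name) is already deallocated, skipping."
        continue
    }
    # TODO: check maintenance mode in your monitoring solution and whether a backup is required.
    # Stop-AzVM (without -StayProvisioned) deallocates the VM, so compute billing stops.
    Write-Output "Deallocating $($vm.Name)..."
    Stop-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force | Out-Null
}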
The Auto-Shutdown option is also available for this, because sometimes all you need is a quick and dirty way to save money.
And if you want to build something yourself, there's an API to shut down and start VMs.

Azure resource Created By & Timestamp

We have an Azure environment with 3 different subscriptions, and around 5 projects' resources are deployed in this environment.
Each project team has rights to create resources under a specific Resource Group (RG) within Azure.
Now, from an Azure admin perspective, I would like to know who created a resource and when.
This is a basic requirement for any organization to track its cost and resource information, yet when I looked in Azure, this information is not available directly at the resource level.
A few posts mention using tagging for this, or using logs (from 2 years back, really?). Is that so? I am surprised.
Can I use Application Insights for this, or is it only available for App Service-type services?
Please help me get this information in an efficient way.
Your only option is to implement some sort of logging (like polling Azure subscription events) and save it somewhere. You can use Azure Monitor to achieve that rather easily, but by itself Azure doesn't offer anything like that out of the box.
You can use tagging, but with obvious challenges. Logs only go 3 months back.
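If you do go the logging route, here is a hedged sketch (Az.Monitor module; property names can differ slightly between module versions) that pulls recent Activity Log entries to answer "who changed which resource, and when". The Activity Log itself only retains roughly 90 days, so export it to a Log Analytics workspace or a storage account if you need longer history:

# List write operations from the last 30 days with caller, timestamp, and resource ID.
$since = (Get-Date).AddDays(-30)
Get-AzActivityLog -StartTime $since |
    Where-Object { $_.Authorization.Action -like '*/write' } |
    Select-Object EventTimestamp, Caller, ResourceId |
    Sort-Object EventTimestamp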

Will the access keys used by services be affected by moving resources to another subscription?

Issue: I am planning to move my Azure resources to another subscription, but as part of the impact analysis I want to know whether access keys to resources such as Storage and Batch services will be affected.
I have more than 30 services currently in use. I am aiming for the least possible downtime, so I need to analyze which services will be impacted.
I am aware of the resources which are possible to migrate from the Microsoft page: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-move-resources.
Here are my questions; I would like to know the answers for all possible cases, such as migration to a different directory/Azure Active Directory or within a common Azure Active Directory.
Will the access keys to a resource such as Storage, Batch services etc. be affected after I migrate the resource to another subscription?
Do I need to reconfigure the Service Endpoints?
Thanks.
They won't change.
No, storage account endpoints are not tied to the resource ID.
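A hedged sketch of verifying that for a storage account: capture a key, move the resource to the target subscription (this only applies to resource types that support cross-subscription moves, per the page linked above), then read the key again. All names and IDs are placeholders:

$rg   = 'source-rg'
$name = 'mystorageaccount'
$before = (Get-AzStorageAccountKey -ResourceGroupName $rg -Name $name)[0].Value

# Move the storage account to the target subscription.
$resource = Get-AzResource -ResourceGroupName $rg -Name $name
Move-AzResource -DestinationSubscriptionId '<target-subscription-id>' `
                -DestinationResourceGroupName 'target-rg' `
                -ResourceId $resource.ResourceId -Force

# Switch context to the target subscription and compare the key.
Set-AzContext -SubscriptionId '<target-subscription-id>' | Out-Null
$after = (Get-AzStorageAccountKey -ResourceGroupName 'target-rg' -Name $name)[0].Value
"Keys unchanged: $($before -eq $after)"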

How to turn on/off Azure web apps during office hours [duplicate]

I thought one of the advantages of Azure was that I could turn services on and off depending on when I want them to be available.
However I can't see how to pause my App Service Plan.
Is it possible?
I want to use the S1 tier so that I can play with what it offers. However I want to be able to pause the cost accumulation when I am not using it.
I see from the app service pricing help that an app will still be billed for even though it is in the stopped state.
Yet the link also clearly states that I only pay for what I use. So how does that work?
If you put your hosting plan onto the free tier, you will stop being charged for it. However, if you have things like deployment slots and certificates, these will be deleted.
The ability to turn services on and off is more about being able to scale services, so if you need 50 servers for an hour you can easily do that.
What you can do to make your solution temporary is to create a deployment script using PowerShell or Resource Manager templates; then you can deploy your solution for exactly as long as you need it and delete it again when you don't. In this sense you can turn your services on and off at a whim.
Azure provides building blocks for you to create the solution you need, it is up to you to figure out how to best use those building blocks to create the solution you seek.
Edited to answer extended question.
If you want to use the S1 pricing plan, and not have it charge when you are not using it, the only way of achieving that is by using automation. Fortunately, this is reasonably trivial to achieve.
If you look at this template, it is pretty much all configured to deploy a website from GitHub to Azure on demand. If you edit it to fit your needs, you can have a new Azure website online within 2 minutes of running the script.
Then you would have another script that deleted it once you had finished.
Doing it this way you would lose no functionality, and probably learn quite a bit about what is possible with Azure along the way.
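A hedged sketch of that deploy-on-demand, delete-when-done cycle, assuming an ARM template (azuredeploy.json is a placeholder name) that defines the App Service plan and web app:

$rg = 'temp-website-rg'   # placeholder

# Bring the site up.
New-AzResourceGroup -Name $rg -Location 'westeurope' -Force
New-AzResourceGroupDeployment -ResourceGroupName $rg -TemplateFile './azuredeploy.json'

# ...use it, then tear everything down so nothing keeps accruing charges.
Remove-AzResourceGroup -Name $rg -Force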
App Service Plan
An App Service plan is the hardware that a web app runs on. In the free and shared tiers your web apps share an instance with other web apps. In the other tiers you have a dedicated virtual machine. It is this virtual machine that you pay for; it is irrelevant whether you have web apps running on your App Service plan or not, you still have a virtual machine running and you will be charged for that.
To change the App Service Plan via PowerShell, you can run the following command
Set-AzureRmAppServicePlan -ResourceGroupName $rg -Name $AppServicePlan -Tier Free
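If you are on the current Az module (the AzureRm cmdlets are deprecated), the equivalent should be the following, with $rg and $AppServicePlan being your resource group and plan name as above:

Set-AzAppServicePlan -ResourceGroupName $rg -Name $AppServicePlan -Tier Free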
I was able to accomplish this using the dashboard by selecting the App Service Plan, clicking Scale up (App Service Plan), and then from there if you click Dev/Test you can select the Free tier.
As others have mentioned, you need to script this. Fortunately, I created a repository with one-click deployment to your Azure resources.
https://github.com/jraps20/jrap-AzureVerticalScaling
The steps are intended to be as simple and generic as possible:
Execute the one-click deployment from the repo readme
Select the subscription, resource group etc.
Deploy resource to Azure
Set up your schedule to scale up and scale down as-needed
The scripting relies on runbooks and variables to maintain the previous state of each App Service plan and the App Services within those plans. Some App Services cannot be scaled due to specific settings being used (AlwaysOn, use32BitWorkerProcess, clientCertEnabled, etc.). In those cases, the previous values are stored as variables prior to scaling down, and the original values are reapplied when the services are scaled back up.
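A minimal sketch of that store-the-previous-state idea (not the actual code from the repository), assuming it runs as an Automation runbook with a managed identity and that the state variable does not exist yet:

Connect-AzAccount -Identity | Out-Null

$rg       = 'app-rg'                  # placeholders
$plan     = 'my-app-service-plan'
$autoRg   = 'automation-rg'
$autoAcct = 'my-automation-account'

# Remember the current tier before scaling down (use Set-AzAutomationVariable on later runs).
$current = Get-AzAppServicePlan -ResourceGroupName $rg -Name $plan
New-AzAutomationVariable -ResourceGroupName $autoRg -AutomationAccountName $autoAcct `
    -Name "$plan-previousTier" -Value $current.Sku.Tier -Encrypted $false

# Scale down for the off-hours window; the scale-up runbook reads the variable and reapplies the tier.
Set-AzAppServicePlan -ResourceGroupName $rg -Name $plan -Tier Free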
For more clarity, I have written a blog post that goes into detail. The post is pertaining to Sitecore, but applies to any App Service setup- Drastically Reduce Azure PaaS Hosting Costs in Non-Prod Environments With Scheduled Vertical Scaling. It also includes a brief video tutorial to show its use case.
Myself and others have been using this repository/approach for well over a year and it works great. I mostly use it for POCs to reduce costs when I'm not actively working on something, but its main intention was targeting non-prod environments to save costs during non-work hours.
An Azure App Service plan is just a logical concept of a set of features and capacity that you can share across multiple apps. I don't think you can "pause" a plan; instead you can stop your service, and depending on the billing model of each service, you might or might not get charged.
Pausing = Delete or lower tier.
Scripting is the key.
Design Diagram
Use scripts to create (also consider shared resources)
Delete using scripts
Use scripts to recreate.
e.g., if we use resource groups properly, one per environment, then
Export-AzureRmResourceGroup will create a template for us (everything in the resource group is pulled out as a script), so we can delete it and recreate it anytime.
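A hedged sketch of that cycle with the current Az cmdlet names ('dev-rg' is a placeholder; exported templates often need minor cleanup before redeploying):

# 1. Capture everything in the resource group as an ARM template.
Export-AzResourceGroup -ResourceGroupName 'dev-rg' -Path './dev-rg.json' -Force

# 2. Delete the group (and everything in it) when it isn't needed.
Remove-AzResourceGroup -Name 'dev-rg' -Force

# 3. Recreate it later from the exported template.
New-AzResourceGroup -Name 'dev-rg' -Location 'westeurope' -Force
New-AzResourceGroupDeployment -ResourceGroupName 'dev-rg' -TemplateFile './dev-rg.json'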
To pause a VM and stop billing you need to shut it down and deallocate it. Just shutting it down still keeps the capacity reserved as if it were running.
Storage can't be shut down; it can be moved to lower-cost tiers.
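For completeness, a one-line sketch of each ('my-rg', 'my-vm' and 'mystorage' are placeholders):

Stop-AzVM -ResourceGroupName 'my-rg' -Name 'my-vm' -Force                            # deallocates by default, so compute billing stops
Set-AzStorageAccount -ResourceGroupName 'my-rg' -Name 'mystorage' -AccessTier Cool   # move blob data to the cheaper Cool access tier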

Web and worker roles in Azure

I am relatively new to cloud computing and Azure. I was wondering whether you can have more than one web and worker role in an Azure application. If so, what advantages do I get from using multiple roles, and where do they apply?
Yes, you can have more than one web or worker role in an Azure Cloud Service. I believe you can have up to 25 different roles per deployment, in any mix of web and worker roles. See the Azure Subscription and Service Limits, Quotas and Constraints link for more information.
The advantage of having the roles within the same cloud service is simply that within that cloud service they can see all the other roles and instances easily (unless you configure them otherwise). They will all be relatively close to each other within a data center because a cloud service is assigned to a stamp of machines and controlled by a Fabric Controller assigned to that stamp. You can watch this video by Mark Russinovich which sheds more light on the inner workings of Azure and talks a bit about stamps I think. A cloud service is a security boundary as well, so you get some benefits from that encapsulation if you need to do a lot of inter machine communication that ISN'T going across a queue for some reason.
The disadvantage of batching a whole bunch of roles together is that they are tied pretty closely together at that point. You can certainly scale them separately, and you can do updates that target only a single role at a time. However, if you want to deploy changes to multiple roles you may end up having to do a full deployment to all roles (even those that haven't changed) or do updates to single roles one at a time until all the ones you need updated are, which can take some time. Of course, it could be argued that having them in separate cloud services would still have you doing updates concurrently depending on your architecture and/or dependencies.
My suggestion is to group only roles that REALLY belong together in the same solution, i.e., roles whose workloads are interrelated. Even then, there's nothing stopping you from separating these into separate deployments as well (though you may benefit from the security boundary that being within the same cloud service provides). Think about how each role will be updated, and whether they would generally be updated together or not. There are many factors in deciding how to package roles together.
