ARM template 800 resource limitation [closed] - azure

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
From the ARM template best-practices documentation - https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/best-practices
The ARM template limits are 4 MB in size and 800 resources per template.
I'm developing a service that handles ARM template deployments for customers. However, I'm finding that the templates are getting bigger and bigger and are now going past both the 800-resource limit and the 4 MB size limit.
What is the recommended path forward that will ensure idempotency and, in the event of a disaster, allow recovery in a timely manner?
I would not want to write my own service that basically reimplements what ARM is already doing, as I feel that would be a waste.
I have heard about linked templates but wanted to know whether this is the recommended approach and what other limitations I should be aware of.
EDIT: I am focusing on a specific problem. I would like to understand how to circumvent the 800-resource limit of an ARM template, and whether linked templates come with limitations of their own. Thanks Rimaz and Jeremy for the explanation!

Definitely go with linked templates (see: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/linked-templates?tabs=azure-powershell#linked-template). With 800+ resources you need to make sure your ARM templates are modular and easily understandable to you and your devs, so create a master template that in turn deploys the other templates linked to it.
You can use Azure Template Specs to easily manage and reference your linked templates when running the template deployment in your pipeline (instead of hosting them on a storage account or in a public repo): https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/quickstart-create-template-specs?tabs=azure-powershell
Also check this helpful video from John Savill, which shows how template specs make it easy to deploy linked templates from your pipelines: https://www.youtube.com/watch?v=8MmWTjxT68o
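To make that concrete, here is a rough sketch of the template spec workflow with the Az PowerShell module; the resource group, spec name, region and file paths are placeholders for your own values:

```powershell
# Publish each modular (linked) template as a versioned template spec,
# so the pipeline doesn't need a storage account or public repo to host it.
New-AzTemplateSpec `
    -ResourceGroupName 'rg-template-specs' `
    -Name 'network-stack' `
    -Version '1.0.0' `
    -Location 'westeurope' `
    -TemplateFile './linked/network.json'

# The master template references each spec by its resource ID in a
# Microsoft.Resources/deployments resource, and the whole deployment is
# started the usual way.
New-AzResourceGroupDeployment `
    -ResourceGroupName 'rg-customer-prod' `
    -TemplateFile './master.json' `
    -TemplateParameterFile './master.parameters.json'
```

Each linked template is a separate deployment with its own 800-resource budget, and it only counts as a single resource in the parent template, which is what gets you past the single-template ceiling.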

Related

Azure Resource Group Organization [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I realize there probably isn't a single answer to this, but I'm curious whether there are any accepted best practices or consensus on how resource groups and subscriptions should be organized.
Let's say you have a bunch of environments like dev, test, staging, and production, and your product is composed of some number of services, databases, and so on. Two thoughts come to mind:
Subscription per environment: use a different subscription for every environment and create resource groups for the different subsystems within the environment. The challenge I have with this is that it's not always obvious how to organize things. Say you have two subsystems that communicate through a service bus. Which resource group does the service bus itself belong to? The increased granularity is a nice option, but in practice I rarely use it.
Resource group per environment: share the same subscription across all environments and use resource groups to group everything together. So you have a dev resource group, test resource group, and so on. This wouldn't give a ton of granularity but as I said that added granularity presents its own problems in my view.
Anyway, I'm just curious if there's any consensus or just thoughts on this. Cheers!
There's no right or wrong answer for this. I personally organize by resource group at the application level:
rg-dev-app-a
rg-dev-app-b
rg-qa-app-a
rg-qa-app-b
and so on. You can also work with tags, which helps when dealing with shared resources between environments (dev / qa) or apps.
You can also find useful information in here: https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging
PS: I don't work with different subscriptions because there's no easy way (without PowerShell) to move resources between subscriptions if needed.
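If it helps, here is a tiny PowerShell sketch of that layout, creating one resource group per environment/app pair and tagging it (the names, region and tag keys are just examples):

```powershell
# One resource group per environment/application pair, tagged so shared
# resources can still be found across environments or apps.
$environments = 'dev', 'qa'
$applications = 'app-a', 'app-b'

foreach ($environment in $environments) {
    foreach ($application in $applications) {
        New-AzResourceGroup `
            -Name "rg-$environment-$application" `
            -Location 'eastus' `
            -Tag @{ environment = $environment; application = $application }
    }
}
```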

Single or multiple instances of Application insights resource? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
We have a microservice project with multiple applications consisting of frontends (Angular, AngularJS), backend apps (ASP.NET Core, PHP), gateways, etc.
I was wondering whether the correct approach is to have an Application Insights resource per project, or whether there should be just one per environment for all the applications. It seems that if I create multiple Application Insights resources and assign them to separate projects, Azure can somehow figure out they are all linked (routes are visible on the application map). I'm not sure which is the correct approach.
There are a few things to take into account here, like the number of events you're tracking and whether that 'fits' into one instance of Application Insights, or whether you're OK with using sampling.
As per the FAQ: use one instance:
Should I use single or multiple Application Insights resources?
Use a single resource for all the components or roles in a single business system. Use separate resources for development, test, and release versions, and for independent applications.
See the discussion here: Should I use single or multiple Application Insights resources?
I would have one Application Insights resource per service. The reason is that Application Insights doesn't cost anything until you hit the threshold, so if you use one resource to log everything, it's likely that you will hit that threshold pretty quickly.
Also, it is good practice to separate out the logs for each service, as the data they hold can differ with regard to personal information.
You can still track a request across all services via the application map, or by writing a query that combines the logs across multiple Application Insights resources.
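As a rough sketch of the per-service layout (the cmdlet is from the Az.ApplicationInsights module; the resource group, names and region are placeholders), you would simply stand up one resource per service and repeat per environment:

```powershell
# One Application Insights resource per service in the dev environment;
# repeat (or parameterize) for test/prod so environments stay separated too.
$services = 'frontend', 'orders-api', 'gateway'

foreach ($service in $services) {
    New-AzApplicationInsights `
        -ResourceGroupName 'rg-dev-monitoring' `
        -Name "ai-dev-$service" `
        -Location 'westeurope' `
        -Kind 'web'
}
```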

How to QA test Azure Data Factory? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 2 years ago.
I am from the QA team. My dev team has created pipelines in Azure Data Factory and they want me to QA test them. I need to write manual test cases, and later I will also need to automate this. Please guide me on how and what to test with manual test cases. Also, please suggest an automation tool that I should use later to create automated test cases. Selenium?
You can take a look at this blog post, it really helped me when I started with testing in ADF: https://blogs.msdn.microsoft.com/karang/2018/11/18/azure-data-factory-v2-pipeline-functional-testing/
You won't be able to test everything in Data Factory; at most you can check that connection strings are correct, queries don't break, objects are present (in the database, blob storage, or whatever your data source is), etc. Testing whether the end result of a pipeline is what you intended is highly dependent on the use case, and most of the time it's not worth it.
I'm not an expert, but as far as I know, Selenium is used to automate browser-related testing. Here you won't need a complex framework; you can get away with a PowerShell script as described in the blog post, but you also have other options like Python, .NET, or the REST API.
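For example, a bare-bones PowerShell check using the Az.DataFactory cmdlets (the resource group, factory and pipeline names are made up) that triggers a run and asserts on its final status could look roughly like this:

```powershell
# Trigger a pipeline run and wait for it to finish, failing the test if it doesn't succeed.
$runId = Invoke-AzDataFactoryV2Pipeline `
    -ResourceGroupName 'rg-data' `
    -DataFactoryName 'adf-dev' `
    -PipelineName 'CopySalesData'

do {
    Start-Sleep -Seconds 30
    $run = Get-AzDataFactoryV2PipelineRun `
        -ResourceGroupName 'rg-data' `
        -DataFactoryName 'adf-dev' `
        -PipelineRunId $runId
} while ($run.Status -in 'InProgress', 'Queued')

if ($run.Status -ne 'Succeeded') {
    throw "Pipeline run $runId finished with status '$($run.Status)'"
}
```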
Hope this helped!!
Our QA team just changes settings to see how the pipeline behaves, pushes abnormal data through the pipeline, tries different time zones and timestamps, and so on. But the majority of the tests are about the final pipeline results.
I have used a SpecFlow project (https://specflow.org/) and supporting .NET code to set up the tests and execute the pipeline on test files held in the project. You can automate this in your build or release pipelines.

Industry Standard Methods of Automating Azure Resource Deployment [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 5 years ago.
I have recently begun automating the deployment of all of the Azure resources and other modifications that need to be made to build the dev environments at my company. I started working with PowerShell on Linux using the .NET Core release of the AzureRM module, only to find out that half of the cmdlets for interacting with Azure are in another module, Azure, which doesn't have a .NET Core release yet. See my other recent post for additional details on that issue.
I tried running my script today on Windows and it bombed horribly, probably due to some weird syntactical differences between the platforms or something. I haven't begun troubleshooting yet, but this led me to think about whether PowerShell is even the best solution. Can anyone recommend an alternative method?
Preferably something less proprietary with better cross-platform support. I recognize there are similar questions on Stack Overflow, but they address entire applications and CI/CD pipelines. I'm mostly referring to the underlying resource groups, security rules, etc. However, I will likely also leverage this script to deploy k8s, Couchbase, etc., so perhaps an entire design change is in order.
I'm looking forward to your insight, friends.
I'm using PowerShell on Linux/Windows to deploy resources to Azure without much hassle, but for resource provisioning I'd go with ARM templates to automate deployments, as they can be deployed with almost anything, are kinda easy to understand when you scan through them, and are just a bunch of JSON.
ARM templates can be plugged into Ansible (which I'm using currently) and some other tools (like VSTS, Terraform, etc.).
Also, Invoke-AzureRmResourceAction and New/Get/Remove-AzureRmResource are available on Linux and Windows and can be used to do pretty much anything in Azure (but they are a lot trickier than the native cmdlets).
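As a sketch of what that looks like in practice (the resource group and file names are placeholders; in the current Az module the same cmdlets exist with an Az prefix instead of AzureRm):

```powershell
# Idempotent provisioning: create (or reuse) the resource group, then deploy the ARM template.
New-AzureRmResourceGroup -Name 'rg-dev-core' -Location 'westus2' -Force

New-AzureRmResourceGroupDeployment `
    -ResourceGroupName 'rg-dev-core' `
    -TemplateFile './environment.json' `
    -TemplateParameterFile './environment.dev.parameters.json' `
    -Mode Incremental
```

Re-running the same script converges on the same state, which is a large part of why ARM templates are preferable to hand-rolled imperative scripts here.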

How can I load test my website on Azure? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I need to measure how many concurrent users my current Azure subscription will accept, to determine my cost per user. How can I do this?
This is quite a big area within capacity planning of a product/solution, but effectively you need to script up a user scenario, say with a tool like JMeter (VS2012 Ultimate has a similar feature), then fire off lots of requests at your site and monitor the results.
Visual Studio can deploy your Azure project using a profiling mode, which is great for detecting the bottlenecks in your code for optimisation. But if you just want to see how many requests per role it can take before it breaks, something like JMeter should work.
There are also lots of products out there on offer, like http://loader.io/, which is great for not worrying about bandwidth issues, scripting, etc., and it should just work.
If you do roll your own manual load-testing scripts, please be careful to avoid false negatives or false positives. By this I mean that if your internet connection is slow and you send out millions of requests, your internet bandwidth may cause your site to appear VERY slow, when in fact it's not your site at all...
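If you do go the DIY route for a first rough number, here is a minimal PowerShell 7+ sketch (the URL and concurrency are placeholders, and it deliberately measures from a single machine, so the bandwidth caveat above applies in full):

```powershell
# Fire a batch of concurrent requests and summarize status codes and response times.
$url = 'https://your-app.azurewebsites.net/'
$concurrency = 50

$results = 1..$concurrency | ForEach-Object -Parallel {
    $timer = [System.Diagnostics.Stopwatch]::StartNew()
    try {
        $status = (Invoke-WebRequest -Uri $using:url -TimeoutSec 30).StatusCode
    } catch {
        $status = 'Error'
    }
    $timer.Stop()
    [pscustomobject]@{ Status = $status; Milliseconds = $timer.ElapsedMilliseconds }
} -ThrottleLimit $concurrency

$results | Group-Object Status | Select-Object Name, Count
$results | Measure-Object Milliseconds -Average -Maximum
```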
This has been answered numerous times. I suggest searching [Azure] 'load testing' and start reading. You'll need to decide between installing a tool to a virtual machine or Cloud Service (Visual Studio Test, JMeter, etc.) and subscribing to a service (LoadStorm)... For the latter, if you're focused on maximum app load, you'll probably want to use a service that runs within Azure, and make sure they have load generators in the same data center as your system-under-test.
Announced at TechEd 2013, the Team Foundation Test Service will be available in Preview on June 26 (coincident with the //build conference). This will certainly give you load testing from Azure-based load generators. Read this post for more details.
