Azure Resource Group Organization [closed] - azure

I realize there probably isn't a single answer to this, but I'm curious whether there are any accepted best practices or consensus on how resource groups and subscriptions should be organized.
Let's say you have a bunch of environments like dev, test, staging, and production, and your product is composed of some number of services, databases, and so on. Two thoughts come to mind:
Subscription per environment: use a different subscription for every environment and create resource groups for the different subsystems within each environment. The challenge I have with this is that it's not always obvious how to organize things. Say you have two subsystems that communicate through a service bus: which resource group does the service bus itself belong to? The increased granularity is a nice option, but in practice I rarely use it.
Resource group per environment: share the same subscription across all environments and use resource groups to group everything together, so you have a dev resource group, a test resource group, and so on. This doesn't give a ton of granularity, but as I said, that added granularity presents its own problems in my view.
Anyway, I'm just curious if there's any consensus or just thoughts on this. Cheers!

There's no right or wrong here. I personally organize by resource group at the application level:
rg-dev-app-a
rg-dev-app-b
rg-qa-app-a
rg-qa-app-b
and so on. You can also use tags, which help when dealing with resources shared between environments (dev/qa) or apps.
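As a concrete sketch of that convention, here is what creating those groups with environment and application tags might look like with the azure-mgmt-resource Python SDK; the group names, location, and tag values are hypothetical examples, not a prescription.

```python
# Minimal sketch using the azure-mgmt-resource SDK
# (pip install azure-identity azure-mgmt-resource).
# Group names, location, and tag values are hypothetical examples
# of the rg-<env>-<app> convention above.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

for env in ("dev", "qa"):
    for app in ("app-a", "app-b"):
        client.resource_groups.create_or_update(
            f"rg-{env}-{app}",
            {
                "location": "eastus",
                # Tags let you slice cost and inventory reports across
                # environments and apps, including shared resources.
                "tags": {"environment": env, "application": app},
            },
        )
```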
You can also find useful information here: https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging
PS: I don't work with separate subscriptions because there's no easy way (without PowerShell) to move resources between subscriptions if needed.

ARM template 800 resource limitation [closed]

From the ARM template best-practices page (https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/best-practices):
The limit on ARM template size is 4 MB, and the limit on the number of resources is 800.
I'm developing a service where I handle ARM template deployments for customers; however, the templates are getting bigger and bigger and are going past the 800-resource limit and the 4 MB size limit.
What is the recommended path forward that will ensure idempotency and, in the event of a disaster, recovery in a timely manner?
I would not want to write my own service that basically reimplements what ARM is doing, as I feel that would be a waste.
I have heard about linked templates but wanted to know whether this is the recommended approach and what other limitations I should be aware of.
EDIT: I am focusing on a specific problem: how to circumvent the 800-resource limitation of ARM templates, and whether linked templates have associated limitations of their own. Thanks Rimaz and Jeremy for the explanation!
Definitely go with linked templates (see https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/linked-templates?tabs=azure-powershell#linked-template). With 800+ resources you need to make sure your ARM templates are modular and easily understandable to you and your devs, so create a master template that in turn deploys the other templates linked to it, as sketched below.
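To make the master-template pattern concrete, here is a hedged sketch: each linked template is pulled in through a Microsoft.Resources/deployments resource, which counts as a single resource in the parent while the linked template gets its own 800-resource / 4 MB budget. The template URIs, names, and resource group are placeholders.

```python
# Sketch of a master ("orchestrator") template that fans out to linked
# templates. Each Microsoft.Resources/deployments resource counts as a
# single resource in this template, while the linked template it points
# to gets its own 800-resource / 4 MB budget. URIs, names, and the
# resource group are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

def linked_deployment(name: str, uri: str) -> dict:
    return {
        "type": "Microsoft.Resources/deployments",
        "apiVersion": "2021-04-01",
        "name": name,
        "properties": {
            "mode": "Incremental",
            "templateLink": {"uri": uri, "contentVersion": "1.0.0.0"},
        },
    }

master_template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        linked_deployment("network", "https://example.com/templates/network.json"),
        linked_deployment("compute", "https://example.com/templates/compute.json"),
    ],
}

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")
client.deployments.begin_create_or_update(
    "<resource-group>",
    "master-deployment",
    {"properties": {"mode": "Incremental", "template": master_template}},
).result()
```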
You can use Azure Template Specs to easily manage and reference your linked templates when running the deployment in your pipeline (instead of hosting them in a storage account or a public repo): https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/quickstart-create-template-specs?tabs=azure-powershell
Also check out this helpful video from John Savill showing how template specs make it easy to deploy linked templates in your pipelines: https://www.youtube.com/watch?v=8MmWTjxT68o

Single or multiple instances of Application insights resource? [closed]

We have a microservices project with multiple applications: frontends (Angular, AngularJS), backends (ASP.NET Core, PHP), gateways, etc.
I was wondering whether the correct approach is to have an Application Insights resource per project, or just one per environment for all the applications. It seems that if I create multiple Application Insights resources and assign them to separate projects, Azure can somehow figure out they are linked (the routes are visible on the application map). I'm not sure which approach is correct.
There are a few things to take into account here, like the number of events you're tracking and whether that fits into one instance of Application Insights, or whether you're OK with using sampling.
As per the FAQ, use one instance:
Should I use single or multiple Application Insights resources?
Use a single resource for all the components or roles in a single business system. Use separate resources for development, test, and release versions, and for independent applications.
See the discussion here: Should I use single or multiple Application Insights resources?
I would have one Application Insights resource per service. The reason is that Application Insights doesn't cost anything until you hit the data volume threshold, so if you use one resource to log everything, you're likely to hit that threshold pretty quickly.
Also, it is good practice to separate the logs for each service, as the data they hold can differ with regard to personal information.
You can still track a request across all services via the application map, or by writing a query that combines the logs from multiple Application Insights resources.
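As a sketch of the one-resource-per-service setup, each service can point at its own Application Insights resource through its own connection string. The snippet below uses the azure-monitor-opentelemetry Python package; the environment variable name is the conventional one, not a requirement.

```python
# Sketch: each microservice ships telemetry to its own Application
# Insights resource via its own connection string
# (pip install azure-monitor-opentelemetry).
import os
from azure.monitor.opentelemetry import configure_azure_monitor

# Each service injects its own resource's connection string through
# configuration rather than hard-coding it.
configure_azure_monitor(
    connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"],
)
# Setting OTEL_SERVICE_NAME per service gives each one a distinct
# cloud role name, which is what the application map uses to draw
# services as separate nodes and stitch their calls together.
```

Cross-resource queries in Log Analytics (for example, a union over app('svc-a').requests and app('svc-b').requests) are what let you combine the per-service logs when you need a single view.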

Aim of using Puppet, Chef or Ansible [closed]

I have read many articles about configuration management, but I don't really understand what the configuration is applied to.
Is it the software itself, like changing hosts in a conf file, etc.?
Or the app's host? In that case, what is the aim of using this kind of software, given that we generally use ready-to-use Docker containers?
You spent hours setting up that server, configuring every variable, installing every package, updating config files. You love that server so much that you named it 'Lucy'.
Tomorrow you get run over by a bus. Will your coworkers know every single tiny change you made to that server? Unlikely. They will have to spend hours digging into that server trying to figure out what you've done and why you've done it.
Now let's multiply this by hundreds or even thousands of servers. Doing this manually is unfeasible.
That's where config management systems come in.
They give you documentation of your system's configuration by their very nature: playbooks/manifests/recipes/whatever term a given tool uses become the authoritative description of your servers. Unlike a readme.txt, which might not always match the real world, these systems ensure that what you see there is what you have on your servers. A minimal example is sketched below.
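For instance, here is a minimal, hypothetical Ansible playbook; the host group, package, and file paths are illustrative placeholders, and Puppet manifests or Chef recipes express the same idea in their own syntax.

```yaml
# Hypothetical playbook: a declarative, version-controllable description
# of what every server in the "webservers" group should look like.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Render the site config from a template
      template:
        src: templates/site.conf.j2
        dest: /etc/nginx/conf.d/site.conf
      notify: Restart nginx
  handlers:
    - name: Restart nginx
      service:
        name: nginx
        state: restarted
```

Because this lives in version control, "what changed on Lucy last month" is a diff, not an archaeology project.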
It becomes relatively simple to duplicate this server configuration over and over, to potentially limitless scale (Google, Facebook, Microsoft, and every other large company work this way).
You might think of a "golden image" approach instead, where you configure everything, then take a snapshot and keep replicating it. The problem is that it's difficult to compare two such images; you just have binary blobs. Whereas with most config management systems you can use a traditional VCS and easily diff versions.
The same principle applies to containers.
Don't treat your servers as pets, treat them as cattle.

How do companies like Facebook release features slowly to portions of their user base? [closed]

I like how Facebook releases features incrementally and not all at once to their entire user base. I get that this can be replicated with a bunch of if statements smattered throughout your code, but there has to be a better way to do this. Perhaps that really is all they are doing, but it seems rather inelegant. Does anyone know if there is an industry-standard architecture that can incrementally release features to portions of a user base?
On that same note, I have a feeling that all of their employees see an entirely different, completely beta view of the site. So it seems they are able to deem certain portions of their website beta and others production, with some sort of access control list guiding what people see? That seems like it would be slow.
Thanks!
Facebook has a lot of servers, so they can enable new features on only some of them. They also have servers where they test new features before committing them to production.
A more elegant solution than scattered if statements is feature flags, using a system like Gargoyle (in Python).
Using a system like this you could do something like:
if feature_flag.is_active(MY_FEATURE_NAME, request, user, other_key_objects):
    # do some stuff
In a web interface you can describe which users, requests, or any other key objects your system has should get the feature, and deliver it to them. In fact, via requests you could do things like direct X% of traffic to the new feature, and thus run A/B tests and gather analytics.
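To illustrate the percentage-rollout idea without assuming Gargoyle's actual API, here is a library-agnostic sketch; the function name and rollout table are hypothetical.

```python
# Library-agnostic sketch of a deterministic percentage rollout: hash a
# stable key (the user id) into 100 buckets so the same user always
# gets the same answer. Function name and rollout table are
# hypothetical, not Gargoyle's API.
import hashlib

ROLLOUT_PERCENT = {"new_checkout": 10}  # feature -> % of users enabled

def is_enabled(feature: str, user_id: str) -> bool:
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < ROLLOUT_PERCENT.get(feature, 0)

if is_enabled("new_checkout", "user-42"):
    ...  # serve the new experience
```

Ramping up is then just editing the table (10 -> 25 -> 50 -> 100), and users already in the enabled buckets stay enabled as the percentage grows.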
An approach to this is a tiered architecture where the authentication tier hands off to the product tier.
A user enters the product URL, which resolves to a cluster of authentication servers. These servers handle authentication and then hand the session off to a cluster of product servers.
Using this approach you can:
Separate your product servers into "zones" that run different versions of your application
Run logic on your authentication servers that decides which zone to route the session to
As an example, you could have Zone A running the latest production code and Zone B running beta code. At the point of login, the authentication server sends every user whose username starts with a-m to Zone A and n-z to Zone B. That way roughly half the users are on the beta product.
Depending on the information you have available at the point of login you could even do something more sophisticated than this. For example you could target a particular user demographic (e.g. age ranges, gender, location, etc).
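As a toy sketch of that login-time decision (the zone URLs and the a-m/n-z split are illustrative placeholders):

```python
# Toy sketch of the login-time zone decision described above. Zone URLs
# and the a-m / n-z split are illustrative placeholders.
ZONES = {
    "A": "https://prod.example.com",  # latest production code
    "B": "https://beta.example.com",  # beta code
}

def zone_for(username: str) -> str:
    # A first-letter split sends roughly half the users to the beta
    # zone; real systems would key on cohorts, demographics, or a
    # hashed percentage instead.
    return ZONES["A"] if username[:1].lower() <= "m" else ZONES["B"]
```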

Are there any examples of group data-sharing using a replicated database, such as CouchDB? [closed]

Background: I am working on a proposal for a PHP/web-based P2P replication layer for PDO databases. My vision is that someone with a need to crowd-source data sets up this software on a web server, hooks it up to their preferred db platform, and then writes a web app around it to add/edit/delete data locally. Other parties, if they wish, may set up a similar thing - with their own web apps written around it - and set up data-sharing agreements with one or more peers. In the general case, changes made to one database are written to another on a versioned basis, such that they eventually flow around the whole network.
Someone has asked me why I'm not using CouchDB, since it has bi-directional replication and record versioning offered as standard. I wasn't aware of these capabilities, so this turns out to be an excellent question! It occurs to me, if this facility is already available, are there any existing examples of server-to-server replication between separate groups? I've done a great deal of hunting and not found anything.
(I suppose what I am looking for is examples of "group-sourcing": give groups a means to access a shared dataset locally, plus the benefits of critical mass they would be unable to build individually, whilst avoiding the political ownership/control problems associated with the traditional centralised model.)
You might want to check out http://refuge.io/
It is built around CouchDB, specifically to form peer groups.
Also, here is a Couchbase-sponsored case study of replication between various groups:
http://site.couchio.couchone.com/case-study-assay-depot
This can be achieved on standard CouchDB installs, as sketched below.
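For example, here is a hedged sketch of continuous, bi-directional replication between two peers' databases using CouchDB's standard _replicate endpoint; the hosts, credentials, and database names are placeholders.

```python
# Sketch: continuous, bi-directional replication between two peers'
# databases using CouchDB's standard /_replicate endpoint. Hosts,
# credentials, and database names are placeholders.
import requests

GROUP_A = "https://admin:secret@couch.group-a.example"
GROUP_B = "https://admin:secret@couch.group-b.example"

def replicate(server: str, source: str, target: str) -> None:
    # Any CouchDB node can drive a replication between two URLs.
    resp = requests.post(
        f"{server}/_replicate",
        json={"source": source, "target": target, "continuous": True},
    )
    resp.raise_for_status()

# One call per direction: changes eventually flow both ways, with
# per-document revision trees providing the versioning.
replicate(GROUP_A, f"{GROUP_A}/shared_data", f"{GROUP_B}/shared_data")
replicate(GROUP_A, f"{GROUP_B}/shared_data", f"{GROUP_A}/shared_data")
```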
Hope that gives you a start.
