We have a microservice project with multiple applications: frontends (Angular, AngularJS), backend apps (ASP.NET Core, PHP), gateways, etc.
I was wondering whether it's the correct approach to have an Application Insights resource per project, or whether there should be just one per environment for all the applications. It seems that if I create multiple Application Insights resources and assign them to separate projects, Azure can somehow figure out they are all linked (routes are visible on the application map). I'm not sure which approach is correct.
There are a few things to take into account here, such as the number of events you're tracking and whether that 'fits' into one instance of Application Insights, or whether you're OK with using sampling.
As per the FAQ: use one instance:
Should I use single or multiple Application Insights resources?
Use a single resource for all the components or roles in a single business system. Use separate resources for development, test, and release versions, and for independent applications.
See the discussion here: Should I use single or multiple Application Insights resources?
I would have one Application Insights resource per service. The reason is that Application Insights doesn't cost anything until you hit the threshold, so if you use a single resource to log everything, you will likely hit that threshold pretty quickly.
Also, it is good practice to separate out the logs for each service, as the data they hold can differ with regard to personal information.
You can, however, track a request across all services via the application map, or by writing a query that combines the logs across multiple Application Insights resources.
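A minimal sketch of what the per-service setup might look like in an ASP.NET Core service, assuming each service points at its own Application Insights resource via a connection string in configuration; the service name "orders-api" is just a placeholder so the application map shows the service as a distinct node:

```csharp
// Program.cs of one microservice (e.g. a hypothetical "orders-api").
// Assumes the Microsoft.ApplicationInsights.AspNetCore package and an
// ApplicationInsights:ConnectionString setting pointing at this service's
// own Application Insights resource.
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

var builder = WebApplication.CreateBuilder(args);

// Each service registers telemetry against its own resource.
builder.Services.AddApplicationInsightsTelemetry();

// A stable cloud role name lets the application map show this service as a
// separate node and stitch calls between services together.
builder.Services.AddSingleton<ITelemetryInitializer, CloudRoleNameInitializer>();

var app = builder.Build();
app.MapGet("/", () => "ok");
app.Run();

public class CloudRoleNameInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
        => telemetry.Context.Cloud.RoleName = "orders-api"; // placeholder name
}
```

On the query side, Application Insights also supports cross-resource queries, so a single Log Analytics query can union, for example, the requests tables of several Application Insights resources.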
I realize there probably isn't a single answer to this, but I'm curious whether there are any accepted best practices or consensus on how resource groups and subscriptions should be organized.
Let's say you have a bunch of environments like dev, test, staging, and production. And your product is composed of N number of services, databases, and so on. Two thoughts come to mind:
Subscription per environment: use a different subscription for every environment and create resource groups for the different subsystems within each environment. The challenge I have with this is that it's not always obvious how to organize things. Say you have two subsystems that communicate through a service bus: which resource group does the service bus itself belong to? The increased granularity is a nice option, but in practice I rarely use it.
Resource group per environment: share the same subscription across all environments and use resource groups to group everything together. So you have a dev resource group, test resource group, and so on. This wouldn't give a ton of granularity but as I said that added granularity presents its own problems in my view.
Anyway, I'm just curious if there's any consensus or just thoughts on this. Cheers!
There's no right or wrong here. I personally organize resource groups per application and environment:
rg-dev-app-a
rg-dev-app-b
rg-qa-app-a
rg-qa-app-b
and so on. You can also work with tags, which helps when dealing with shared resources between environments (dev / qa) or apps.
You can also find useful information in here: https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging
PS: I don't work with separate subscriptions because there's no easy way (without PowerShell) to move resources between subscriptions if needed.
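If you script this convention out, a rough sketch using the Azure.ResourceManager SDK might look like the following; the group names, location, and tag values are just examples matching the naming above, not a prescribed layout:

```csharp
// Sketch only: creating per-environment/per-app resource groups with tags via
// the Azure.ResourceManager and Azure.Identity packages. Names, location and
// tag values are example placeholders.
using Azure;
using Azure.Core;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Resources;

var client = new ArmClient(new DefaultAzureCredential());
SubscriptionResource subscription = await client.GetDefaultSubscriptionAsync();

foreach (var (name, env, app) in new[]
{
    ("rg-dev-app-a", "dev", "app-a"),
    ("rg-qa-app-a",  "qa",  "app-a"),
})
{
    var data = new ResourceGroupData(AzureLocation.WestEurope);
    data.Tags.Add("environment", env);   // tags help with shared resources
    data.Tags.Add("application", app);

    await subscription.GetResourceGroups()
        .CreateOrUpdateAsync(WaitUntil.Completed, name, data);
}
```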
I don't see the benefit of an application service. It seems like you can get the same result with controllers and domain services. Can someone describe a scenario where it would make sense to use an application service over a controller or domain service?
Host independence - if your domain model is only exercised through HTTP calls, controllers may be fine and keep complexity low, but being able to invoke the same use cases from multiple hosts (think a console application for an event queue, serverless functions, tests) can be beneficial. I am a fan of adding complexity as it is needed, but unfortunately a lot of developers will just copy the pattern that came before, especially if the initial plans have been lost to attrition.
Tests - mentioned above, and really just one facet of the previous point, but having the application service as a seam to write tests against is often quite useful if you don't want to tie your tests to your host. Having said that, tools for testing ASP.NET (and, I am sure, other technologies) in-process have come a long way over the years.
Reveal intent - controllers, unfortunately, suffer from functionality gravitation: all functionality tends to be pulled into them. Are they accepting HTTP requests? Deserializing those requests? Converting to a command? To a model? Orchestrating the domain model calls for creating the model, domain services, repositories? What really is their responsibility? The term "service" is so overloaded that teams I have worked with call application services "use cases" and name them for exactly what they are asking the domain to do.
Although you can of course use controllers as this entry point, you lose some flexibility but gain some initial simplicity. This is the same balancing act you play when delaying DDD adoption in the first place in favour of a standard MVC app with no strategy for managing complexity. Maybe if you are not seeing a benefit, the application has not reached the complexity needed to justify DDD in the first place? It does come with a complexity cost.
With regard to domain services, they are really part of your domain and do the work, whereas the application service is the entry point that orchestrates the whole use case. Be careful of overusing domain services, though; personally, I often view them as a failure on my part to find a decent model (but maybe that's just me).
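To make the distinction concrete, here is a small illustrative sketch (all names are invented): the application service is the host-independent use case that orchestrates the domain, and both an MVC controller and a queue/console host can call it.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Application service / use case: orchestrates the whole operation and can
// be invoked from any host (HTTP controller, console worker, test).
public sealed class ShipOrderService
{
    private readonly IOrderRepository _orders;
    public ShipOrderService(IOrderRepository orders) => _orders = orders;

    public async Task Handle(Guid orderId)
    {
        var order = await _orders.Get(orderId); // load the aggregate
        order.Ship();                           // domain model does the work
        await _orders.Save(order);              // persist the new state
    }
}

// HTTP host: the controller only translates the request, no orchestration.
[ApiController]
[Route("orders")]
public sealed class OrdersController : ControllerBase
{
    private readonly ShipOrderService _shipOrder;
    public OrdersController(ShipOrderService shipOrder) => _shipOrder = shipOrder;

    [HttpPost("{id}/ship")]
    public async Task<IActionResult> Ship(Guid id)
    {
        await _shipOrder.Handle(id);
        return NoContent();
    }
}

// A queue or console host can reuse the exact same use case:
//   await shipOrderService.Handle(message.OrderId);

public interface IOrderRepository
{
    Task<Order> Get(Guid id);
    Task Save(Order order);
}

public sealed class Order
{
    public Guid Id { get; init; }
    public bool Shipped { get; private set; }
    public void Ship() => Shipped = true; // deliberately simplistic behaviour
}
```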
Currently I am trying to learn the various services of Amazon Web Services and Microsoft Azure, like Amazon SNS, Amazon storage, and Amazon search.
I have been wondering why cloud platforms are so much more popular nowadays than the old traditional approach. Previously we stored our files (images, .txt, .doc, etc.) inside the web application project itself, but nowadays some web applications store their files on Amazon or Azure storage.
What are the benefits of storing files and folders on a cloud platform?
Next, why are Amazon or Azure search services preferred? Searching was done before they were available, and they are not freely available.
Now, if we talk about push notifications, why use Amazon or Azure push notifications if we can easily send notifications using code that is freely available on the internet?
In general, I just want to know why web applications nowadays use cloud platforms (Azure or Amazon) so much, even though they are costly.
Can anybody explain this to me in some detail?
Among the many reasons, the most important and common ones I can think of are:
High availability - When you manage your own services, there is always the operational challenge of ensuring that they do not go down, i.e. crash. This may cause downtime for your application or even data loss, depending on the situation. The cloud services you have mentioned offer reliable solutions that guarantee maximum uptime and data safety (by backup, for example). They often replicate your data across multiple servers, so that even if one of their servers is down, you do not lose any data.
Ease of use - Cloud services make it very easy to use a specific service by providing detailed documentation and client libraries. The dashboard or console of many cloud services is user-friendly and does not require an extensive technical background. You could deploy a Hadoop cluster in Google Compute Engine in less than a minute, for instance. They offer many pre-built solutions which you can take advantage of (see the sketch after this list).
Auto-scale - Many cloud services nowadays are built to scale automatically with increasing traffic, so you do not have to worry about traffic or application load.
Security - Cloud services offer sophisticated security solutions with which you can protect your service from being misused.
Cost - Hosting your own services requires extensive resources like high-end servers, dedicated system administrators, good network connectivity, etc. Cloud services are quite cheap these days.
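As one small example of the "ease of use" point, uploading a file to Azure Blob Storage with the Azure.Storage.Blobs client library is only a few lines; the connection-string variable, container name, and file name below are placeholders:

```csharp
// Sketch: store a local file in Azure Blob Storage instead of inside the web
// application project. Assumes the Azure.Storage.Blobs package and a storage
// connection string in an environment variable (placeholder name).
using System;
using Azure.Storage.Blobs;

var connectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");
var container = new BlobContainerClient(connectionString, "user-uploads");
await container.CreateIfNotExistsAsync();

// Replication, availability and scaling of the stored file are handled by the
// platform; the application just uploads it.
await container.GetBlobClient("report.doc").UploadAsync("report.doc", overwrite: true);
```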
Of course you could solve these problems yourself, but smaller organizations often prefer not to because of the operational overhead. It would take more time and resources to reach a stage where your own solution is both reliable and functional. People often prefer to work on the actual problem their application is trying to solve and abstract away most of the operational problems that cloud services readily handle.
P.S. These are some opinions from an early-stage startup perspective.
I need to measure how many concurrent users my current Azure subscription will accept, to determine my cost per user. How can I do this?
This is quite a big area within capacity planning of a product/solution, but effectively you need to script up a user scenario, say using a tool like JMeter (VS2012 Ultimate has a similar feature), then fire off lots of requests to your site and monitor the results.
Visual Studio can deploy your Azure project in a profiling mode, which is great for detecting the bottlenecks in your code for optimisation. But if you just want to see how many requests per role it can handle before something breaks, a tool like JMeter should work.
There are also lots of products on offer, like http://loader.io/, which is great for not having to worry about bandwidth issues, scripting, and so on; it should just work.
If you do roll your own manual load-testing scripts, please be careful to avoid false negatives or false positives. By this I mean that if your internet connection is slow and you send out millions of requests, your own bandwidth may cause the site to appear very slow, when in fact it's not your site at all...
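If you do go the manual route, a rough sketch of the idea is below; the target URL, user count, and request count are placeholders you would tune to your scenario, and as noted above it should run from a machine (ideally inside Azure) with enough bandwidth to avoid skewing the results:

```csharp
// Rough manual load-test sketch: N concurrent "users" each issuing a burst of
// GET requests, then reporting throughput and average latency.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

var target = new Uri("https://your-app.example.com/"); // placeholder URL
const int users = 50;
const int requestsPerUser = 100;

using var http = new HttpClient();
var total = Stopwatch.StartNew();

var workers = Enumerable.Range(0, users).Select(async _ =>
{
    var latencies = new List<double>();
    for (var i = 0; i < requestsPerUser; i++)
    {
        var sw = Stopwatch.StartNew();
        using var response = await http.GetAsync(target);
        sw.Stop();
        latencies.Add(sw.Elapsed.TotalMilliseconds);
    }
    return latencies;
});

var results = await Task.WhenAll(workers);
total.Stop();

var all = results.SelectMany(x => x).ToList();
Console.WriteLine($"{all.Count} requests in {total.Elapsed.TotalSeconds:F1}s " +
                  $"({all.Count / total.Elapsed.TotalSeconds:F0} req/s), " +
                  $"avg latency {all.Average():F0} ms");
```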
This has been answered numerous times. I suggest searching [azure] 'load testing' and starting to read. You'll need to decide between installing a tool on a virtual machine or Cloud Service (Visual Studio Test, JMeter, etc.) and subscribing to a service (LoadStorm)... For the latter, if you're focused on maximum app load, you'll probably want to use a service that runs within Azure, and make sure it has load generators in the same data center as your system under test.
Announced at TechEd 2013, the Team Foundation Test Service will be available in Preview on June 26 (coincident with the //build conference). This will certainly give you load testing from Azure-based load generators. Read this post for more details.
Background: I am working on a proposal for a PHP/web-based P2P replication layer for PDO databases. My vision is that someone with a need to crowd-source data sets up this software on a web server, hooks it up to their preferred db platform, and then writes a web app around it to add/edit/delete data locally. Other parties, if they wish, may set up a similar thing - with their own web apps written around it - and set up data-sharing agreements with one or more peers. In the general case, changes made to one database are written to another on a versioned basis, such that they eventually flow around the whole network.
Someone has asked me why I'm not using CouchDB, since it has bi-directional replication and record versioning offered as standard. I wasn't aware of these capabilities, so this turns out to be an excellent question! It occurs to me, if this facility is already available, are there any existing examples of server-to-server replication between separate groups? I've done a great deal of hunting and not found anything.
(I suppose what I am looking for is examples of "group-sourcing": give groups a means to access a shared dataset locally, plus the benefits of critical mass they would be unable to build individually, whilst avoiding the political ownership/control problems associated with the traditional centralised model.)
You might want to check out http://refuge.io/
It is built around CouchDB, but more specifically to form peer groups.
Also, here is a Couchbase-sponsored case study of replication between various groups:
http://site.couchio.couchone.com/case-study-assay-depot
This can be achieved on standard CouchDB installs.
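For reference, a minimal sketch of kicking off continuous replication in both directions between two standard CouchDB installs via the _replicate endpoint; the host names, database name, and endpoint address are placeholders, and the HTTP calls are shown in C# here only for illustration (any HTTP client, including PHP's, works the same way):

```csharp
// Sketch: start continuous replication in both directions between two
// standard CouchDB servers using the /_replicate endpoint. Host names,
// database names and the mediating CouchDB address are placeholders.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

using var http = new HttpClient();

async Task ReplicateAsync(string source, string target)
{
    var body = $"{{\"source\":\"{source}\",\"target\":\"{target}\",\"continuous\":true}}";
    var response = await http.PostAsync(
        "http://localhost:5984/_replicate", // any CouchDB instance can mediate
        new StringContent(body, Encoding.UTF8, "application/json"));
    response.EnsureSuccessStatusCode();
}

// Bi-directional: changes and their revisions flow both ways, so versioned
// documents eventually propagate around the whole peer network.
await ReplicateAsync("http://peer-a.example:5984/shared_data",
                     "http://peer-b.example:5984/shared_data");
await ReplicateAsync("http://peer-b.example:5984/shared_data",
                     "http://peer-a.example:5984/shared_data");
```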
Hope that gives you a start.