I currently have a microservice architecture consisting of ~20 services, and each service has its own dedicated Application Insights instance per environment (dev/test/prod).
What I would like to do, if possible, is aggregate all of the different application maps into one global application map so that I can easily see everything through a single pane of glass (per environment) rather than having to drill into each individual service's application map.
The only way to do this, from what I've found, is to have every service point to the same App Insights resource. However, I would imagine that this approach would make it difficult to track metrics for an individual service, since the metrics would be based on the entire environment's architecture rather than each service. Is there some way to build a workbook that combines all of the application maps?
Any ideas on how to approach this? Thanks in advance.
If your microservices are instrumented with the Application Insights SDKs and rely on auto-instrumentation, then it should work out of the box. Application Insights will discover which components a particular app talks to, and you should be able to get the full map by clicking "Update map components".
One app view:
Whole connected microservice universe view:
If "Update map components" is greyed out then something wrong with distributed tracing instrumentation.
I have a solution in Azure that has multiple networking components, and I am trying to trace requests through from component to component. I have enabled a Log Analytics workspace (LAW) that these components output to.
Application Gateway w/ WAFv2
API Management Instance
Application Gateway w/o WAF
Container Instance
AppGW-WAF-->APIM-->AppGW-->Container
Is there some common attribute/header value/query string addition, etc. that I can use in the LAW to trace a request from point to point in the sequence above?
Any advice is appreciated!
It sounds like you want to do tracing at the application/HTTP layer to get something like this?!
Then you want to look at Application Insights and Correlation, probably using distributed tracing.
This also nicely integrates out of the box with APIM.
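As a rough illustration of what that looks like on the backend (the endpoint and route are made up for the example), enabling the Application Insights SDK in an ASP.NET Core app is enough for it to pick up the traceparent/Request-Id headers that APIM forwards, so every instrumented hop shares one operation/trace id you can join on in Log Analytics:

```csharp
using System.Diagnostics;

// Requires the Microsoft.ApplicationInsights.AspNetCore NuGet package.
var builder = WebApplication.CreateBuilder(args);

// Turns on automatic request tracking and W3C trace-context correlation;
// the connection string comes from configuration/environment.
builder.Services.AddApplicationInsightsTelemetry();

var app = builder.Build();

app.MapGet("/listings", () =>
{
    // Activity.Current carries the trace id propagated from upstream hops
    // (e.g. APIM), which is what lets you correlate this request with the
    // gateway-side logs.
    return Results.Ok(new { traceId = Activity.Current?.TraceId.ToString() });
});

app.Run();
```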
I managed to connect my bot's telemetry with Azure Application Insights. I am now trying to make it so Application Insights can show certain values from the bot (for example: a user's input). I assume this would be related to custom events, but after looking at the documentation, I am still really confused and do not know how to set it up to log the values.
The Bot Framework itself has a way to write telemetry to an Application Insights instance. I believe this is what you've configured and have working so far. For writing custom events/metrics, you would want to simply utilize the AI TelemetryClient yourself, like you would in any other .NET Core application.
Once registered, you would change your IBot class to take TelemetryClient as a dependency in its constructor, which will then be injected for you, and then you just start recording events/metrics as you normally would.
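For example, a minimal sketch (the event name and bot class are made up; it assumes TelemetryClient was registered by the standard App Insights setup):

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class MyBot : ActivityHandler
{
    private readonly TelemetryClient _telemetryClient;

    // TelemetryClient is injected by the DI container.
    public MyBot(TelemetryClient telemetryClient) => _telemetryClient = telemetryClient;

    protected override async Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
    {
        // Shows up under customEvents in Application Insights, with the
        // user's text as a queryable custom dimension.
        _telemetryClient.TrackEvent("UserInput",
            new Dictionary<string, string> { ["text"] = turnContext.Activity.Text });

        await turnContext.SendActivityAsync(
            MessageFactory.Text($"You said: {turnContext.Activity.Text}"), cancellationToken);
    }
}
```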
The real question I always like to ask is: do you really want to tightly couple yourself directly to the Application Insights APIs? Do you perhaps just want a certain level of logging that you do through the logging abstraction (e.g. ILogger<T>)? Or, if you need events, perhaps you want to use an EventSource instead. Both of these abstractions can be captured by Application Insights by configuring the appropriate telemetry modules, but they do not tie your code directly to Application Insights itself. I believe the only thing that doesn't have a good existing abstraction is gathering metrics. You could of course still build your own abstraction for that, and then a custom module that funnels the details into AI.
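For contrast, a sketch of the decoupled version, where the bot only depends on ILogger<T> and the Application Insights logger provider (wired up by the SDK) forwards the entries as trace telemetry (class and method names are hypothetical):

```csharp
using Microsoft.Extensions.Logging;

public class InputLogger // stands in for your IBot implementation
{
    private readonly ILogger<InputLogger> _logger;

    public InputLogger(ILogger<InputLogger> logger) => _logger = logger;

    public void OnUserInput(string text)
    {
        // Structured logging: {Text} becomes a searchable custom dimension
        // once the entry lands in Application Insights, but this class has
        // no compile-time dependency on AI at all.
        _logger.LogInformation("User input received: {Text}", text);
    }
}
```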
Background
We are looking at porting a 'monolithic' 3-tier web app to a microservices architecture. The web app displays listings to a consumer (think Craigslist).
The backend consists of a REST API that calls into a SQL DB and returns JSON for a SPA app to build a UI (there's also a mobile app). Data is written to the SQL DB via background services (FTP + worker roles). There are also some pages that allow writes by the user.
Information required:
I'm trying to figure out how (if at all) Azure Service Fabric would be a good fit for a microservices architecture in my scenario. I know the pros/cons of microservices vs. monolith, but I'm trying to figure out how the various microservice programming models apply to our current architecture.
Questions
Is Azure Service Fabric a good fit for this? If not, any other recommendations? Currently I'm leaning towards a bunch of OWIN-based .NET web sites, split up by area/service, each hosted on their own machine and tied together by an API gateway.
Which Service Fabric programming model would I go for? Stateless services with their own backing DB? I can't see how the Stateful or Actor model would help here.
If I went with Stateful services/Actors, how would I go about updating data as part of a maintenance or ad-hoc admin request? Traditionally we would simply log in to the DB and update the data, and the API would return the new data. But if it's persisted in memory and across nodes in a cluster, how would we update it? Would I have to expose this all via methods on the service? Similarly, how would I import my existing SQL data into a stateful service?
For the Stateful services/Actor model, how can I 'see' the data visually, with an object explorer/UI? Our data is our gold, and I'm concerned about the lack of control over and visibility into it in the Reliable Services model.
Basically, is there some documentation on the decision path towards which programming model to go for? Sure, I could model a "listing" as an Actor and have millions of those, but I could also have a Stateful service that stores the listings locally, or a Stateless service that fetches them from the DB. How does one decide which is the best approach for a given use case?
Thanks.
What is it about your current setup that isn't meeting your requirements? What do you hope to gain from a more complex architecture?
Microservices aren't a magic bullet. You mainly get four benefits:
You can scale and distribute pieces of your overall system independently. Service Fabric has very sophisticated tools and advanced capabilities for this.
You can deploy and upgrade pieces of your overall system independently. Service Fabric again has advanced capabilities for this.
You can have a polyglot system - each service can be written in a different language/platform.
You can use conflicting dependencies - each service can have its own set of dependencies, like different framework versions.
All of this comes at a cost, and it introduces complexity and new ways your system can fail. For example: your fast, compile-time-checked in-proc method calls now become slow (by comparison to an in-proc function call), failure-prone network calls. And these costs are not specific to Service Fabric, by the way; this is just what happens when you go from in-proc method calls to cross-machine I/O, no matter what platform you use. The decision path here is a pro/con list specific to your application and your requirements.
To answer your Service Fabric questions specifically:
Which programming model do you go for? Start with stateless services with ASP.NET Core. It's going to be the simplest translation of your current architecture that doesn't require mucking around with your data layer.
Stateful services have a lot of great uses, but they're not necessarily a replacement for your RDBMS. A good place to start is hot data that can be stored in simple key-value pairs, is accessed frequently and needs to be low-latency (you get local reads!), and doesn't need to be data-mined. Some examples include user session state, cache data, and a "snapshot" of the most recent items in a data stream (like the most recent stock quote in a stream of stock quotes).
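To make that concrete, here's a rough sketch of the hot-data pattern with a reliable dictionary inside a stateful service (types and names are illustrative; your RDBMS stays the system of record):

```csharp
using System.Fabric;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;

public class QuoteCacheService : StatefulService
{
    public QuoteCacheService(StatefulServiceContext context) : base(context) { }

    public async Task SetLatestQuoteAsync(string symbol, decimal price)
    {
        var quotes = await StateManager
            .GetOrAddAsync<IReliableDictionary<string, decimal>>("latestQuotes");

        using (var tx = StateManager.CreateTransaction())
        {
            await quotes.SetAsync(tx, symbol, price);
            await tx.CommitAsync(); // replicated to secondary nodes before returning
        }
    }

    public async Task<decimal?> GetLatestQuoteAsync(string symbol)
    {
        var quotes = await StateManager
            .GetOrAddAsync<IReliableDictionary<string, decimal>>("latestQuotes");

        using (var tx = StateManager.CreateTransaction())
        {
            // Local read: no network hop to an external store.
            var result = await quotes.TryGetValueAsync(tx, symbol);
            return result.HasValue ? result.Value : (decimal?)null;
        }
    }
}
```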
Currently, the only way to see or query your data is programmatically, directly against the Reliable Collections APIs. There is no viewer or "management studio" tool; you have to write (and secure) an API in each service that can display and query its data.
Finally, the Actor model is a very niche model. It serves specific purposes, but if you just treat it as a data store it will not work for you. In your example, a listing per actor probably wouldn't work, because you can't query across that list, or even have multiple users reading the same listing simultaneously.
I believe that the mvc mini profiler is a bit of a 'God-send'.
I have incorporated it in a new MVC project which is targeting the Azure platform.
My question is: how do I handle profiling across server (role instance) boundaries?
Is this is even possible?
I don't understand why you would need to profile these apps any differently. You want to profile how your app behaves on the production server - go ahead and do it.
A single request will still be executed on a single instance, and you'll get the data from that same instance. If you want to profile services located on a different physical tier as well, that would require a different approach, involving communication through internal endpoints, which I'm sure the mini profiler doesn't support out of the box. The modification shouldn't be that complicated, though.
That said, if you did want to profile physically separated tiers, I would go about it in a different way: profile each tier independently, because that's how I would go about optimizing it. If you wrap the call to your other tier in a profiler statement, you can see where the problem lies and still be able to solve it.
By default the mvc-mini-profiler stores and delivers its results using HttpRuntime.Cache. This is going to cause some problems in a multi-instance environment.
If you are using multiple instances, then some ways you might be able to make this work are:
to change the HttpRuntime.Cache storage to an AppFabric Cache implementation (or some memcached implementation)
to use an alternative storage strategy for your profiler results (the code includes SqlServerStorage as an example; see the sketch below)
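For that second option, the older mvc-mini-profiler API lets you swap the storage at application startup; something along these lines (the connection string name is a placeholder, and you'd need to create the profiler tables in that database first):

```csharp
using System.Configuration;
using StackExchange.Profiling;
using StackExchange.Profiling.Storage;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Swap the default HttpRuntime.Cache-backed storage for shared SQL
        // storage so every role instance reads and writes profiling results
        // in one place. "ProfilerDb" is a placeholder connection string name.
        var connectionString = ConfigurationManager
            .ConnectionStrings["ProfilerDb"].ConnectionString;

        MiniProfiler.Settings.Storage = new SqlServerStorage(connectionString);
    }
}
```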
Obviously, whichever strategy you choose will require more time/resources than just the single instance implementation.
I intend to build software for SMEs on the Azure platform that can be provisioned for different clients. What I mean is: once a client signs up, a new instance is automatically created for them on the Azure platform.
Does anyone have any experience with building such solutions, or are there any commercial packages like that available?
Thanks.
It sounds like you're planning to have a single-tenant system, where each instance is slightly different from the others and is customized for each client. If this is the case, Azure in general will not be a great platform for you: it thrives on providing a dynamic quantity of exactly-alike instances. Furthermore, having one instance per client is a bad idea, as instances are slightly volatile. MS may choose to bring one down for an upgrade, or an instance may simply crash, and the SLA is only enforced when 2+ instances are running.
I'd like to suggest that you consider a multi-tenant environment, where your system shards itself virtually via database/architecture/etc. Do not tie the number of instances to the number of clients, but to actual load.
Now, if you want to spin up exactly-alike instances when new clients sign up, check out a dynamic scaling service for Azure called AzureWatch at http://www.paraleap.com. Its main premise is to scale your instances to load, but with a few simple queue/table inserts it can programmatically scale you up or down. Contact me there if you think this will work for you, and I'll be glad to explain how this can be done.