Request tracing through multiple Azure networking components?

I have a solution in Azure that has multiple networking components, and I am trying to trace requests through from component to component. I have enabled a Log Analytics workspace (LAW) that these components output to.
Application Gateway w/ WAFv2,
API Management Instance,
Application Gateway w/o WAF,
Container Instance,
AppGW-WAF-->APIM-->AppGW-->Container
Is there some common attribute/header value/query string addition, etc. that I can use in the LAW to trace a request from point to point in the sequence above?
Any advice is appreciated!

It sounds like you want to do tracing at the application/HTTP layer, so you can follow each request end to end?
Then you want to look at Application Insights and correlation, probably using distributed tracing.
This also integrates nicely with APIM out of the box.
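If the container app is ASP.NET Core, a minimal sketch of surfacing the correlation header in its own logs could look like the following. It assumes the components in front of the container forward the W3C traceparent header (the Application Insights SDKs also understand the older Request-Id header), which is what ties the hops together:

```csharp
// Program.cs of the ASP.NET Core app running in the Container Instance.
// Sketch only: header names and log text are assumptions about a default setup.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Use(async (context, next) =>
{
    // Surface the incoming correlation identifiers so they land in the
    // container's logs and can be matched against the gateway/APIM entries
    // in the Log Analytics workspace.
    var traceparent = context.Request.Headers["traceparent"].ToString();
    var requestId = context.Request.Headers["Request-Id"].ToString();
    app.Logger.LogInformation(
        "Handling {Path}: traceparent={Traceparent}, Request-Id={RequestId}",
        context.Request.Path, traceparent, requestId);
    await next();
});

app.MapGet("/", () => "Hello from the container");
app.Run();
```

On the Application Insights side, the trace id embedded in traceparent is what surfaces as operation_Id, so once every hop propagates it you can filter each component's telemetry on the same value.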

Related

Is it possible to combine multiple Application Insights application maps?

I currently have a microservice architecture consisting of ~20 services, and each service has its own dedicated Application Insights instance per environment (dev/test/prod).
What I would like to do, if possible, is aggregate all of the different application maps into one global application map so that I can easily see everything through a single pane of glass (per environment) rather than having to drill into each individual service's application map.
The only way to do this, from what I've found, is to have every service point to the same Application Insights resource. However, I would imagine that this approach would make it difficult to track metrics for an individual service, since the metrics would be based on the entire environment's architecture rather than each service. Is there some way to build a workbook that combines all of the application maps?
Any ideas on how to approach this? Thanks in advance.
If your microservices are instrumented with Application Insights SDKs and rely on auto-instrumentation, then it should work out of the box. Application Insights will discover which components a particular app talks to, and you should be able to get the full map by clicking "Update map components".
[Screenshots: a single-app view, and the whole connected microservice map.]
If "Update map components" is greyed out then something wrong with distributed tracing instrumentation.

Logging data in Azure Application Insights for a bot

I managed to connect my bot's telemetry with Azure Application Insights. I am now trying to make it so that Application Insights can show certain values from the bot (for example, a user's input). I assume this would be related to custom events, but after looking at the documentation, I am still really confused and do not know how to set it up to log the values.
The Bot Framework itself has a way to write telemetry to an Application Insights instance. I believe this is what you've configured and have working so far. For writing custom events/metrics, you would simply use the Application Insights TelemetryClient yourself, like you would in any other .NET Core application.
Once the TelemetryClient is registered with the DI container, you would change your IBot class to take a TelemetryClient as a constructor dependency; it will be injected for you, and then you just start recording events/metrics as you normally would.
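For example, a minimal sketch of a bot that records each incoming message as a custom event; the event name and property names are made up, and it assumes the TelemetryClient is available from the service container:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

public class MyBot : ActivityHandler
{
    private readonly TelemetryClient _telemetry;

    public MyBot(TelemetryClient telemetry) => _telemetry = telemetry;

    protected override async Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
    {
        // Record the user's input as a custom event ("UserInputReceived" is a
        // hypothetical name); it shows up under customEvents in Application Insights.
        _telemetry.TrackEvent("UserInputReceived", new Dictionary<string, string>
        {
            ["text"] = turnContext.Activity.Text,
            ["channelId"] = turnContext.Activity.ChannelId
        });

        await turnContext.SendActivityAsync(
            MessageFactory.Text($"You said: {turnContext.Activity.Text}"),
            cancellationToken);
    }
}
```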
The real question I always like to ask is: do you really want to tightly couple yourself directly to the Application Insights APIs? Do you perhaps just want a certain level of logging that you do through the logging abstraction (e.g. ILogger<T>)? Or, if you need events, perhaps you want to use an EventSource instead. Both of these abstractions can then be captured by Application Insights by configuring the appropriate telemetry modules, but they do not tie your code directly to Application Insights itself. I believe the only thing that doesn't have a good existing abstraction is gathering metrics. You could of course still build your own abstraction for that and then a custom module that funnels the details into Application Insights.
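For comparison, here is the same idea using only the logging abstraction; if the Application Insights logger provider is wired up, these entries still flow into Application Insights as traces, but the bot itself never references the AI SDK:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;
using Microsoft.Extensions.Logging;

public class EchoBot : ActivityHandler
{
    private readonly ILogger<EchoBot> _logger;

    public EchoBot(ILogger<EchoBot> logger) => _logger = logger;

    protected override Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
    {
        // Structured log entry instead of a hard dependency on TelemetryClient.
        _logger.LogInformation("User said {Text}", turnContext.Activity.Text);
        return Task.CompletedTask;
    }
}
```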

Spring Cloud: ZUUL + Eureka + NodeJS

I am new to Spring Boot / Spring Cloud and am going to work on a new project.
Our project architect has designed this new application like this:
One front-end Spring Boot application (which is also a microservice) with Angular 2.
One Eureka server, to which other microservices will connect.
A Zuul proxy server, which will connect to the front end and the microservices.
Now, the following are the things I am confused about, and I can't ask him as he is too senior to me:
Do I need to have a separate Zuul proxy server? I mean, what are the pros and cons of using the same front-end application as the Zuul server?
How will MicroService-1 communicate with Node's MicroService-1? Some blogs suggest the Sidecar. But again, why? I can directly invoke the REST API of NodeJS-1 from MicroService-1.
(I know this is very tough to guess, but still asking.) The NodeJS services (which are not legacy services) are supposed to call some third-party APIs or retrieve data from a DB.
Now, what I am not getting is why we need NodeJS code at all. Why can't we do the same in the microservices written in Java?
Can anyone who has worked with a similar kind of scenario shed some light on my doubts?
I do not have the full context of the problem that you are trying to solve, therefore the answers below are quite general, but they may still be useful:
Do I need to have a separate Zuul proxy server? I mean, what are the pros and cons of using the same front-end application as the Zuul server?
Yes, you are going to need a separate API Gateway service, which may be Zuul (or another gateway, e.g. tyk.io).
The main idea here is that you can have hundreds or even thousands of microservices (like Amazon, Netflix, etc.) and they can be scattered across different machines or data centres. It would be really silly to force your API consumers (in your case Angular 2) to memorise all the possible locations of each microservice. Better to have one API Gateway that knows about all the services under it, so your clients can call your gateway and access the underlying services through one single place. Having a gateway in your system also decouples your clients from your services, so it's possible to evolve them independently.
Another benefit is that you can have access control, logging, security, etc. in one single place. And, by the way, I think you are missing one more thing in your architecture: an Authorization Server. A common approach to building security for microservices is OAuth 2.0.
How will MicroService-1 communicate with Node's MicroService-1? Some blogs suggest the Sidecar. But again, why? I can directly invoke the REST API of NodeJS-1 from MicroService-1.
I think you could use Sidecar, but I have never used it. I suppose that the question 'why' is related to the Discovery Service (Eureka in your architecture).
You can't call microservice NodeJS-1 directly because there may be several instances of NodeJS-1, and which one should you call? Furthermore, you can't know whether a service is down or alive at any given point in time. That's why we use Discovery Services like Eureka: they handle all of this. When any given service starts, it must register itself in Eureka. So if you have started several instances of NodeJS-1, they will all be registered in Eureka, and whenever Microservice-1 wants to call NodeJS-1 it asks Eureka for the locations of the live instances of NodeJS-1 and then chooses which one to call.
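In Spring you normally do that lookup through DiscoveryClient or a @LoadBalanced RestTemplate, and Eureka also exposes the registry over a plain REST API, which is how non-JVM services (like the NodeJS ones here) can take part. As a rough sketch of the lookup step only, assuming a default standalone Eureka server on port 8761 and a hypothetical app name:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Any HTTP-capable client (Java, NodeJS, ...) can ask the registry for the
// live instances of a service; this is the step a DiscoveryClient does for you.
class EurekaLookupSketch
{
    static async Task Main()
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));

        // Default Spring Cloud Eureka location and a hypothetical app name.
        // The response lists each registered NODEJS-1 instance with its host,
        // port and status; a real client would pick an instance with status
        // "UP" and send the actual request there (and retry/load-balance).
        var registrations = await http.GetStringAsync(
            "http://localhost:8761/eureka/apps/NODEJS-1");
        Console.WriteLine(registrations);
    }
}
```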
(I know this is very tough to guess, but still asking.) The NodeJS services (which are not legacy services) are supposed to call some third-party APIs or retrieve data from a DB.
Now, what I am not getting is why we need NodeJS code at all. Why can't we do the same in the microservices written in Java?
I can only assume that NodeJS has been chosen because it has outstanding performance for IO operations, including the HTTP requests that come in handy when calling third-party services. I do not have any other rational explanation for this.
In general, a microservice architecture gives you the possibility of writing each microservice in a different language, and that is indeed useful, since each language solves some problems better than others. On the other hand, this decision should be made with caution and should answer the question: "do we really need a new language in our stack to solve problem X?".

Calling Web API from Web Job or use a class?

I am working on creating a WebJob on Azure, and the purpose is to handle some workload and perform background tasks for my website.
My website has several Web API methods that the site itself uses, but I also want the WebJob to perform the same tasks as the Web API methods after they are finished.
My question is: should I get the WebJob to call this Web API (if possible), or should I move the Web API code into a class and have both the Web API and the WebJob call that class?
I just wondered what was normal practice here.
Thanks
I would suggest you put the common logic in a DLL and have them both share that library, instead of trying to get the WebJob to call the Web API.
I think that will be the simplest way to get what you want (plus it will help you keep things separated so they can be shared, instead of putting too much logic in your Web API).
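A rough sketch of that shape, with entirely made-up names: the logic lives in a shared class library, and the Web API controller and the WebJob function are both thin callers into it (shown here as ASP.NET Core plus the WebJobs SDK):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;

// --- Shared class library (e.g. MyApp.Core) ---
public interface ICleanupService
{
    Task<int> PurgeExpiredItemsAsync();
}

public class CleanupService : ICleanupService
{
    public Task<int> PurgeExpiredItemsAsync()
    {
        // ...the actual work, shared by both hosts...
        return Task.FromResult(0);
    }
}

// --- Web API project ---
[ApiController]
[Route("api/cleanup")]
public class CleanupController : ControllerBase
{
    private readonly ICleanupService _cleanup;

    public CleanupController(ICleanupService cleanup) => _cleanup = cleanup;

    [HttpPost]
    public async Task<IActionResult> Post() => Ok(await _cleanup.PurgeExpiredItemsAsync());
}

// --- WebJob project ---
public class Functions
{
    private readonly ICleanupService _cleanup;

    public Functions(ICleanupService cleanup) => _cleanup = cleanup;

    // Runs whenever a message lands on a hypothetical "cleanup-requests" queue.
    public Task ProcessQueueMessage([QueueTrigger("cleanup-requests")] string message) =>
        _cleanup.PurgeExpiredItemsAsync();
}
```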
I think it's player's choice here. Both will run on the same instance(s) in Azure if you choose to deploy them that way. You can either reuse by dogfooding your API, or reuse by sharing a class via a DLL. We started off mixed but eventually went with dogfooding the API as the number of WebJobs we use grew bigger/more complex. Here are a couple of reasons:
No coupling to the libraries/code used by the API
Easier to move the Web Job into its own solution(s) only dependent on the API and any other libraries we pick for it
Almost free API testing (Dogfooding via our own Client to the API)
We already have logging and other concerns wired up in the API (more reuse)
Both work, though in reality it really comes down to managing the dependencies and the size of the app/solution you are building.
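The dogfooding route is simply the WebJob calling the API over HTTP like any other client; the base address and route here are placeholders:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// WebJob-side sketch of reusing the Web API instead of a shared DLL.
public class CleanupApiClient
{
    private static readonly HttpClient Http = new HttpClient
    {
        BaseAddress = new Uri("https://myapp.azurewebsites.net/") // placeholder
    };

    public async Task TriggerCleanupAsync()
    {
        var response = await Http.PostAsync("api/cleanup", content: null);
        response.EnsureSuccessStatusCode();
    }
}
```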

Fetching data from cerner's EMR / EHR

I don't have much experience in the medical domain.
We are evaluating a requirement from our client, who is using the Cerner EMR system.
As per the requirement, we need to expose the Cerner EMR or fetch some EMR/EHR data and display it in a SharePoint 2013 portal.
To meet this requirement, what kind of integration options does Cerner propose? Are there any APIs or web services exposed that can be used to build custom solutions for this?
As far as I know, Cerner does expose EMR/EHR information in HL7 format, but I don't have any idea how to access that.
I have also asked Cerner about this and am awaiting a reply from them.
If anybody who has been associated with a similar kind of job can shed some light and provide me with some insights, that would be appreciated.
You will need to request an interface between your organization and the facility with the EMR. An interface in the healthcare IT world is not the same as a GUI; it is the mechanism (program/tool) that transfers HL7 data between one entity and the other. There will probably be a cost to have an interface set up. However, that is the traditional way Cerner communicates with third parties. HIPAA laws will require that this connection be very secure.
You might also see if the facility with the EMR has an existing interface that produces the info you are after. You may be able to share that data or have a flat file generated from that interface that you could get access to. Because of HIPAA regulations, your client may be reluctant to share information in that manner.
I would suggest you start with your client's interface/integration team. They would be the ones that manage the information into and out of Cerner. They could also shed some light on how they prefer to see things done.
Good Luck
There are two ways of achieving this that I know of. One is direct connectivity to Cerner's Oracle database. This seems less likely to be possible, as Cerner doesn't allow other vendors to have direct access to their database.
The other way is to use Cerner's mPage web services. We have achieved this using mPage web services. The client needs to host the web services on IBM WAS or some other container. We used WAS as it was readily available to us. Once the hosting is done, you will get a URL, and using that you can execute any CCL program, which will return the data to you in JSON/XML format. The mPage web service uses basic HTTP authentication.
Now, the CCL has to be written in a way that returns the data you require.
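What the consuming side tends to look like is an authenticated HTTP call to that URL; a rough sketch with a placeholder endpoint, program name and credentials:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class MPageClientSketch
{
    static async Task Main()
    {
        using var http = new HttpClient();

        // Placeholder URL for a CCL program exposed through the hosted mPage
        // web service; the real endpoint, parameters and credentials come from
        // your WAS deployment.
        var credentials = Convert.ToBase64String(
            Encoding.ASCII.GetBytes("serviceUser:servicePassword"));
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", credentials);
        http.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));

        var payload = await http.GetStringAsync(
            "https://emr.example.org/mpage/ccl/MY_CCL_PROGRAM?patientId=123");

        // Hand the JSON/XML payload to the SharePoint display layer from here.
        Console.WriteLine(payload);
    }
}
```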
We have a successful setup and have been working on this since 2014. For more details you can also try uCERN portal.
Thanks,
Navin
