I have a dashboard service that relies on other services to retrieve data. So, to reuse the existing services, I'm calling ResolveService for each service I'm reusing.
My question is whether it would be better to extract the logic from these services rather than resolving the services. In particular, are there performance impacts if I keep calling ResolveService?
It can be cleaner to extract the logic into shared dependencies, which allows for finer-grained usage (i.e. you call just what you need instead of the entire Service). But if you need the entire Service response, then calling ResolveService&lt;Service&gt; is fine.
The performance impact is no different, since it's essentially just resolving a Service class from the IoC and executing it.
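For illustration, here is a minimal sketch of that pattern in ServiceStack; the DashboardService, OrdersService, and the request/response DTOs are hypothetical:

```csharp
// Hypothetical dashboard service reusing an existing service via ResolveService.
public class DashboardService : Service
{
    public object Any(GetDashboard request)
    {
        // Resolves an autowired OrdersService from the IoC; disposing releases it.
        using (var orders = base.ResolveService<OrdersService>())
        {
            var recentOrders = orders.Any(new GetRecentOrders { Days = 7 });
            return new GetDashboardResponse { RecentOrders = recentOrders };
        }
    }
}
```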
1st approach
Implement the user profile check in every microservice.
2nd approach: user profile service
Implement the user profile check in a single microservice.
What other factors should I consider when making a decision? What would you do?
You haven't mentioned yet another approach, which I can recommend considering:
Introduce a gateway: a special service that takes care of authorization/authentication between the "outer world" and your backend services:
Client ---> Gateway -----> Service 1
                    |----> Service 2
                    |----> ...
It will be impossible to access Service 1, Service 2, etc. from the "outer world" directly; only the gateway will be exposed, and it will also take care of routing.
On the other hand, all requests coming to the backend can be considered already authorized (they might carry additional headers with the "verified" roles list, or use a "standard" technology like JWT).
Besides the separation of concerns (backend services "think" only about the business logic implementation), this approach has the following benefits (see the sketch after this list):
All the logic is in one place, so it is easy to fix, upgrade, etc. Your first approach, by contrast, suffers in a more heterogeneous ecosystem: if services are written in different languages, using different frameworks, you'll have to re-implement the authZ in each technology stack.
The user is not "aware" of the variety of services (only the gateway is an entry point; routing is done inside the gateway).
There are no "redundant" calls (read: CPU/memory/IO) by backend services for authZ. Compare this with the second presented approach, where you'd have to call an external service on each request.
You can scale the authZ service (gateway) and the backend services separately.
For example, if you introduce a new service, you don't have to think about how much overhead it adds to your authZ component (Redis, database, etc.), so you can scale it out based purely on business requirements.
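To make the idea concrete, here is a minimal sketch of such a gateway in ASP.NET Core, assuming JWT bearer authentication; the identity provider URL, internal service address, route, and the X-Verified-Roles header are all illustrative:

```csharp
// Minimal gateway sketch: validate the JWT once at the edge, then forward the
// request to an internal service with a header carrying the verified roles.
using System.Security.Claims;
using Microsoft.AspNetCore.Authentication.JwtBearer;

var builder = WebApplication.CreateBuilder(args);

// Token validation happens here, not in each backend service.
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options => options.Authority = "https://your-identity-provider");
builder.Services.AddAuthorization();
builder.Services.AddHttpClient();

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();

// Route /service1/* to the internal Service 1; the backend trusts the forwarded header.
app.Map("/service1/{**path}", async (string path, HttpContext ctx, IHttpClientFactory factory) =>
{
    var roles = string.Join(",", ctx.User.FindAll(ClaimTypes.Role).Select(c => c.Value));
    var request = new HttpRequestMessage(HttpMethod.Get, $"http://service1.internal/{path}");
    request.Headers.Add("X-Verified-Roles", roles); // roles already verified by the gateway
    var response = await factory.CreateClient().SendAsync(request);
    return Results.Stream(await response.Content.ReadAsStreamAsync(),
        response.Content.Headers.ContentType?.ToString());
}).RequireAuthorization();

app.Run();
```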
I am trying to build an Azure Durable Function to orchestrate the execution sequence of multiple Azure Functions. Sometimes a few of those functions need to be executed in parallel, sometimes in sequence; it's all driven by JSON configuration files.
But I expect my durable function to be called by more than 1,000 consumers, so every minute there could be around 1,000 hits on the durable function's endpoint. Since Durable Functions internally uses queues and tables, the individual calls will be organized, but what solutions are available in Azure to manage such a large number of hits on this durable function's API endpoint?
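For context, an orchestrator of the kind described might look like the following minimal sketch (Durable Functions, .NET in-process model); the stage/activity layout derived from the JSON configuration is illustrative:

```csharp
// Sketch: stages run in sequence; activities within a stage fan out in parallel.
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class StepOrchestrator
{
    [FunctionName("StepOrchestrator")]
    public static async Task Run([OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Each inner list is one stage of activity names, mirroring the JSON config idea.
        var stages = context.GetInput<List<List<string>>>();

        foreach (var stage in stages)
        {
            var tasks = new List<Task>();
            foreach (var activityName in stage)
                tasks.Add(context.CallActivityAsync(activityName, null));

            await Task.WhenAll(tasks); // fan-out / fan-in before the next stage
        }
    }
}
```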
You can go through the load balancing solutions documentation in Azure to decide on the best load-balancing solution for your case.
Given that this is an HTTP-based application, and depending on the service you need, you can take a look at:
Azure Front Door: a modern cloud CDN solution that provides fast, reliable, and secure access between your users and your applications' static and dynamic web content across the globe. You can enable caching on Front Door to reduce the calls made to your backend, and you can also secure your application with a WAF.
Azure Application Gateway: a regional load balancer for web applications, where you can take advantage of features like WAF, auto-scaling, URL-based routing, etc.
Based on your requirements you can also use Application Gateway and Front Door together; more information can be found here.
I've been working on developing an API to serve a machine learning model using Azure Machine Learning (AML) webservice deployments on a Kubernetes target as outlined here: https://learn.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#prepare-to-deploy
My particular use case requires more than simple scoring of data through the model. The API needs multiple endpoints that perform different actions related to the model: for example, an endpoint to upload data, an endpoint to delete data, an endpoint to list existing data, an endpoint to score previously uploaded data, an endpoint to change preprocessing parameters, etc.
I can build all of this logic, but I am struggling with the fact that AML web services only provide one endpoint (the service URI ending in "/score"). Is there a way to add more endpoints to an AML service? For example, I would like users to be able to POST, GET, DELETE, PUT "/data", GET "/predictions", and POST, GET, DELETE, PUT "/parameters", etc.
Is there a way to do this in AML or is this not the right tool for what I am trying to accomplish? Is there a better solution within Azure that is more suited for my needs?
Thank you!
Azure ML allows controlled rollout/traffic splitting, but doesn't directly support your API design.
I might need to know more about your use case to make a recommendation. Are you looking at implementing incremental learning? What is the motivation for separate endpoints?
-Andon
Your proposal sounds like a stateful web server, which is more than a REST API service. For example, you need logic to maintain the "ids" of the data: if there are two POST /data calls with different data, a later DELETE /data needs to operate on the proper one. This is much more than a single performance-optimized machine learning service.
I would recommend creating a separate server with all these logic pieces that only reaches the Azure Machine Learning service when it needs to. You could also build a cache into your service so it only calls the Azure ML service when new data comes in or the local cache expires. That will also save you money on Azure :-)
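To illustrate, here is a minimal sketch of such a front-end service (ASP.NET Core minimal API); the in-memory store, routes, and AML scoring URI are all hypothetical:

```csharp
// Sketch: the front-end service owns /data and /predictions; only the
// predictions handler ever calls the single AML "/score" endpoint.
using System.Collections.Concurrent;
using System.Text;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient();
var app = builder.Build();

// Naive in-memory store keyed by id; swap for real storage (and caching) in practice.
var store = new ConcurrentDictionary<string, string>();

// Upload / delete / list data without touching AML at all.
app.MapPost("/data/{id}", async (string id, HttpRequest req) =>
{
    using var reader = new StreamReader(req.Body);
    store[id] = await reader.ReadToEndAsync();
    return Results.Ok();
});
app.MapDelete("/data/{id}", (string id) =>
    store.TryRemove(id, out _) ? Results.Ok() : Results.NotFound());
app.MapGet("/data", () => store.Keys);

// Only this endpoint calls the AML scoring endpoint.
app.MapGet("/predictions/{id}", async (string id, IHttpClientFactory f) =>
{
    if (!store.TryGetValue(id, out var payload)) return Results.NotFound();
    var response = await f.CreateClient().PostAsync(
        "https://<your-aml-service>/score", // hypothetical AML service URI
        new StringContent(payload, Encoding.UTF8, "application/json"));
    return Results.Content(await response.Content.ReadAsStringAsync(), "application/json");
});

app.Run();
```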
Can you guys explain: a Service Fabric application can be packaged with multiple services to be shipped, but how do you reuse some of those services in another application?
Is there a way a Reliable Dictionary or Reliable Queue may be shared among services deployed on the same cluster?
I tried searching on Google but couldn't find a clear explanation. Your help will be really appreciated.
... how do you reuse some of those services in another application?
What do you mean by reuse? Sharing the code? Instead of duplicating the same service in Application A, you could have a service in Application A talk to a service in Application B.
Is there a way a Reliable Dictionary or Reliable Queue may be shared among services deployed on the same cluster?
No, there is not. A Reliable Dictionary or Reliable Queue provides data locality to a service, removing the need for additional network calls. As soon as you need the same data in multiple services, you should consider other storage solutions like Cosmos DB, Blob storage, or another database.
If you are looking for some kind of distributed cache, you can take a look at Azure Cache for Redis.
It is, however, entirely possible to expose the data of a Reliable Dictionary or Reliable Queue through a service. That service then acts as a data provider/repository: you can expose methods like Add() or Delete() that result in an update of the Reliable Dictionary or Reliable Queue, as in the sketch below.
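For illustration, a minimal sketch of a stateful service wrapping a Reliable Dictionary this way; the IDataStore interface and names are hypothetical, and you would expose the methods via service remoting or an HTTP listener:

```csharp
// Sketch: a stateful service acting as the single owner of a Reliable Dictionary.
// Other services call AddAsync/DeleteAsync instead of touching the dictionary.
using System.Fabric;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;

public interface IDataStore
{
    Task AddAsync(string key, string value);
    Task DeleteAsync(string key);
}

public class DataStoreService : StatefulService, IDataStore
{
    public DataStoreService(StatefulServiceContext context) : base(context) { }

    public async Task AddAsync(string key, string value)
    {
        var dict = await StateManager.GetOrAddAsync<IReliableDictionary<string, string>>("items");
        using (var tx = StateManager.CreateTransaction())
        {
            await dict.SetAsync(tx, key, value); // the data stays local to this service
            await tx.CommitAsync();
        }
    }

    public async Task DeleteAsync(string key)
    {
        var dict = await StateManager.GetOrAddAsync<IReliableDictionary<string, string>>("items");
        using (var tx = StateManager.CreateTransaction())
        {
            await dict.TryRemoveAsync(tx, key);
            await tx.CommitAsync();
        }
    }
}
```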
I currently have a couple of WebApi projects that use a few class libraries such as address lookup, bank validation, image storage etc.
Currently they are all in a shared solution, but I'm planning to split them up. I thought about moving the libraries into NuGet packages so that they are separate from the API projects and properly shared.
However, if I make a change to one of these components I will still need to rebuild and redeploy the API service, even though it's a separate component that changed.
I thought about putting these components into a separate service, but that seems like a lot of overhead for what it is.
I've been looking at Azure WebJobs and think I may be able to move these components into this instead. I have two questions related to this:
Are WebJobs suitable for calling on demand (not via a queue)? The request will be triggered by a user on a web site, which calls my API service, which in turn calls the WebJob, so it needs to be quick.
Can a WebJob return data? I've seen examples where it does some processing and updates a database, but I need a response (ideally JSON) back to my API service.
Thanks
Based on your requirements, I'd suggest leveraging Azure Functions: create a function with an HTTP trigger, which can be invoked by calling the function URL with parameters and can return the response you expect. You can follow this tutorial to get started with Azure Functions.
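For example, a minimal sketch of an HTTP-triggered function returning JSON (.NET in-process model); the AddressLookup name and the lookup logic are placeholders:

```csharp
// Sketch: an HTTP-triggered function the API service can call on demand,
// returning a JSON payload directly in the response.
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class AddressLookup
{
    [FunctionName("AddressLookup")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "address")] HttpRequest req)
    {
        // Read a query parameter, run the (hypothetical) lookup, return JSON.
        string postcode = req.Query["postcode"];
        var result = new { Postcode = postcode, Street = "Example Street" }; // placeholder result
        return new OkObjectResult(result);
    }
}
```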