I'm currently looking into Azure Logic Apps, and I'm having a bit of trouble getting a feel from the documentation for the kind of performance I can expect. I'm also having trouble finding articles or blog posts detailing real-world examples of Logic Apps and the performance people are seeing.
The scenario I'm trying to solve has the following flow:
An HTTP request triggers the Logic App
The body of the request is saved to storage ... either Cosmos or Table Storage
Depending on some values in the request body, call an external API without caring about the response (fire and forget)
Respond to the original HTTP request with an appropriate response (e.g. 200 OK if steps 2 and 3 succeeded)
... and all of this needs to happen in under 1 second. I'm not really sure what to expect in terms of the number of requests per second coming into my flow, but I'm going to assume 100 requests per second.
I'm wondering if anyone has real-world experience with Logic Apps doing something similar to what I'm looking at, and what performance they are seeing. Is what I've outlined above feasible? Is there room for a higher number of requests per second?
I've considered Azure Functions (possibly Durable Functions), but I'm concerned not only about the performance but also about the cold-start scenario, because I need my solution to work in real time. At the moment I'm just building a .NET Core API to encapsulate this flow (plus more), but I'm thinking an Azure Logic App could streamline what I'm doing.
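For context, the core of the .NET Core API I'm building today looks roughly like this (a minimal sketch; the store and notifier abstractions are placeholders for the Cosmos/Table Storage and external API pieces):

    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;

    // Hypothetical abstractions over Cosmos/Table Storage and the external API.
    public class RequestBody { public bool RequiresExternalCall { get; set; } }
    public interface IRequestStore { Task SaveAsync(RequestBody body); }
    public interface IDownstreamNotifier { Task NotifyAsync(RequestBody body); }

    [ApiController]
    [Route("api/ingest")]
    public class IngestController : ControllerBase
    {
        private readonly IRequestStore _store;
        private readonly IDownstreamNotifier _notifier;

        public IngestController(IRequestStore store, IDownstreamNotifier notifier)
        {
            _store = store;
            _notifier = notifier;
        }

        [HttpPost]
        public async Task<IActionResult> Post([FromBody] RequestBody body)
        {
            // Save the request body to storage.
            await _store.SaveAsync(body);

            // Fire and forget: deliberately not awaited.
            if (body.RequiresExternalCall)
                _ = _notifier.NotifyAsync(body);

            // Respond to the original HTTP request.
            return Ok();
        }
    }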
Overall, Logic Apps is a little bit slower, which in your case may be a reason not to use it. To avoid cold starts in Functions you can use a Premium plan with pre-warmed instances.
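For example, an Elastic Premium plan with at least one always-ready instance can be created via the Azure CLI, roughly like this (names and region are placeholders):

    az functionapp plan create \
        --name my-premium-plan \
        --resource-group my-rg \
        --location westeurope \
        --sku EP1 \
        --min-instances 1 \
        --max-burst 10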
Also, read these threads:
Azure Logic App and Function App performance difference
Is Logic Apps performance slower compared to a direct .NET REST Call?
Related
I am a complete newbie to Azure and Azure Functions, but my team plans to move to Azure soon. Now I'm researching how I could use Azure Functions to do what I would normally do in a .NET console application.
My question is, can Azure Functions handle quite a bit of code processing?
Our team uses several console apps that effectively pick up a pipe-delimited file, do some business logic, update a database with the data, and log everything along the way. From what I've been reading so far, Azure Functions are typically used for little pieces of code. How little do they mean? Is it best practice to replace a console app with a bunch of Azure Functions (e.g. one function that reads the file and creates a list of objects, another that loops through those items and applies the business logic, and another that writes the data to a database), or can I use one Azure Function to do all of that?
The direct answer is yes: you can run bigger pieces of code as an Azure Function; this is not a problem as long as you stay within their limitations. You can even have dependency injection. For chained scenarios, you can use Durable Functions. However, Microsoft does not recommend long-running functions, because of unexpected timeouts. See the best practices for Azure Functions.
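To illustrate the chained scenario, here is a minimal sketch of a Durable Functions "function chaining" orchestrator in C# (in-process model); the activity names and the record type are made up for this example:

    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.DurableTask;

    // Hypothetical record type for illustration.
    public class ProcessedRecord { }

    public static class FileImportOrchestration
    {
        // Each step is a separate activity function; the orchestrator
        // sequences them reliably and survives restarts between steps.
        [FunctionName("FileImportOrchestration")]
        public static async Task RunOrchestrator(
            [OrchestrationTrigger] IDurableOrchestrationContext context)
        {
            string filePath = context.GetInput<string>();

            string[] lines = await context.CallActivityAsync<string[]>("ReadFile", filePath);
            ProcessedRecord[] records = await context.CallActivityAsync<ProcessedRecord[]>("ApplyBusinessLogic", lines);
            await context.CallActivityAsync("WriteToDatabase", records);
        }
    }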
Because of those timeout concerns, I would consider alternatives:
If all you need is to run a console app in Azure, you can use WebJobs. Here is an example of how to deploy a console app directly to Azure via Visual Studio.
For more complex logic you can use a .NET Core Worker Service, which behaves like a Windows Service and can be deployed to Azure as an App Service (see the sketch after this list).
If you need long-running jobs with scheduled runs only, I've had a really great experience with Hangfire, which can be hosted in Azure as well.
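A minimal sketch of the Worker Service option, assuming you port the console app's pipeline into a BackgroundService (the worker name and polling interval are made up):

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    // Hypothetical worker that ports the console app's pipeline:
    // pick up a pipe-delimited file, apply business logic, update the
    // database, and log along the way.
    public class FileImportWorker : BackgroundService
    {
        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                await ProcessPendingFilesAsync(stoppingToken);

                // Poll for new files once a minute.
                await Task.Delay(TimeSpan.FromMinutes(1), stoppingToken);
            }
        }

        private Task ProcessPendingFilesAsync(CancellationToken token)
        {
            // Placeholder for the real file/business-logic/database work.
            return Task.CompletedTask;
        }
    }

    public static class Program
    {
        public static void Main(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureServices(services => services.AddHostedService<FileImportWorker>())
                .Build()
                .Run();
    }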
This is really hard to answer because we don't know what kind of console app you have over there. I usually try to apply the same SOLID principles I use in any class to my functions too. And whenever you need to coordinate actions or run things in parallel, you can use the Durable Functions framework.
The only concern is related to execution time. Your functions can get pretty expensive if you're running on the Consumption plan and don't pay attention to it. I recommend reading the following great article:
https://dev.to/azure/is-serverless-really-as-cheap-as-everyone-claims-4i9n
You can do all of that in one function.
If you need on-the-fly data processing, you can safely use Azure Functions even if it involves reading files or communicating with a database.
What you need to be careful with and configure, though, is the timeout. Their scalability is an interesting topic as well.
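The timeout is configured via functionTimeout in host.json; for example, 10 minutes (the maximum on the Consumption plan):

    {
      "version": "2.0",
      "functionTimeout": "00:10:00"
    }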
If you need to host an application, you need a machine (or a share of one) in Azure to do that.
I have a couple of questions around microservice architecture. For example, take the following services:
orders,
account,
communication &
management
Question 1: From what I read, I understand that each service is supposed to have ownership of the data pertaining to that service, so orders would have an orders database. How important is that data ownership? Would microservices make sense if they all called into one traditional database, such that all data pertaining to the services lived in one database? If so, are there any implications of structuring the services this way?
Question 2: Services should be able to communicate with one another. How would that be any different from simply curling an existing API and basing the logic on that response? Is calling a service more efficient than simply curling the API?
Question 3: Is it worth it? I understand this is a massive generality and is fundamentally predicated on the needs of the business. But when that discussion has been had, was the rebuild worth it? And what challenges can you expect to face?
I will try to answer all the questions.
With respect to all services using the same database: if you do so, you have two main problems. First, the database becomes a bottleneck, because all requests go to the same point. Second, you have coupled all your services together, so if the database goes down or needs an update, all your services are affected (the database becomes a single point of failure).
The communication between services can be whatever your services need (synchronous, asynchronous, via message passing through a message broker, etc.); it all depends on the use cases you have to support. The recommended way to achieve temporal decoupling is to use a message broker like Kafka: your services don't have to know about each other, and if some of them go down, the others keep working. When they come back up, they can continue processing the messages that are pending. However, if your services need to respond synchronously, you can define synchronous communication between services and use a circuit breaker to behave properly in case the callee service is down.
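As an illustration of the circuit-breaker idea in .NET using the Polly library (the service URL and thresholds are placeholders):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Polly;
    using Polly.CircuitBreaker;

    public class AccountClient
    {
        private static readonly HttpClient Http = new HttpClient();

        // Open the circuit after 3 consecutive failures; stay open for 30s,
        // during which calls fail fast instead of piling onto a dead service.
        private static readonly AsyncCircuitBreakerPolicy Breaker =
            Policy.Handle<HttpRequestException>()
                  .CircuitBreakerAsync(3, TimeSpan.FromSeconds(30));

        public async Task<string> GetAccountAsync(string id)
        {
            return await Breaker.ExecuteAsync(async () =>
            {
                var response = await Http.GetAsync($"https://account-service/api/accounts/{id}");
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            });
        }
    }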
A microservices architecture is far more complicated to make work, to monitor, and to debug than a traditional monolith, so it is only worth it if you have very large scalability and availability requirements, and/or if the system is so large that it requires several teams working on different parts and it is advisable to avoid dependencies among them, so that each team can work at their own pace deploying their own services.
I am kind of confused about when an API is needed. I recently created a mobile app with Flutter and Cloud Firestore as the database, where I simply queried and wrote to the database as needed. Now I am learning full-stack web development, and I recently watched a tutorial where the author built an Express API with GET, POST, and DELETE functionality for a simple item in the database.
Coming from a background where I just directly accessed the database, I am not sure why an API is necessary in this case. Is it so I wouldn't have to rewrite the queries every time? This is a very simple project, so he's definitely not making a third-party API for other developers to use. Am I misunderstanding what an API does exactly?
It was really simple: there was one collection in a MongoDB database, and he was using Postman to read from and write to the database to check that it worked.
An API is the standard way for your front end (web/mobile) to store and retrieve information for your application. Your front end can't, and should never, access the database directly. The purpose of the front end is just to display the interface and do minimal processing; all the application logic should live in your backend (API server), which is exposed to your front end via API calls (GET, POST, etc.). So to store an item in your database, you write the storing logic in your backend and expose an API endpoint which, when called, performs the storing operation. Your front end uses that API call to trigger the storing process. This way, your storing/database logic (or any other logic) is not exposed; only the API URL is. The front end is meant to be exposed, whereas the backend/database should never be exposed or used directly from the front end.
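To make that concrete, here is a minimal sketch of such an endpoint. (The tutorial you watched used Express; the same shape in ASP.NET Core, with a hypothetical Item model and repository, looks like this.)

    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;

    // Hypothetical model and repository; all database access stays server-side.
    public class Item { public string Name { get; set; } }
    public interface IItemRepository { Task SaveAsync(Item item); }

    [ApiController]
    [Route("api/items")]
    public class ItemsController : ControllerBase
    {
        private readonly IItemRepository _repo;

        public ItemsController(IItemRepository repo) => _repo = repo;

        // POST api/items -- the only surface the front end ever sees.
        [HttpPost]
        public async Task<IActionResult> Create([FromBody] Item item)
        {
            await _repo.SaveAsync(item); // storing logic hidden behind the endpoint
            return Ok(item);
        }
    }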
Maybe for you an API is not necessary, but APIs have a lot of use cases.
For example:
You don't have to write the business logic for every platform (iOS, Android, web, whatever).
Your app will be lightweight, since some computation is offloaded to the server.
Without one, your app can be reverse engineered to extract secret information (or your secret algorithm, maybe?); with an API, those stay on the server.
What if you need to store something in a filesystem that you want to share with others?
Also a good read: Why we should use REST?
In your case, you are using a pre-written SDK that knows how to connect to Firestore, does caching, updates application data when needed, and provides a standard method of reading, writing, and deleting data in Firestore (with associated documentation and example data from Google).
Therefore, using an API (as described for the MongoDB case) is not required, and is undesirable.
There are some cases where you might want to allow no direct read or write access to a Firestore collection or document. In those cases, you can write a cloud function which your app calls with parameters; it receives the data that you want to write and does some sort of checking or manipulation beyond the capabilities of Cloud Firestore rules (although those can get pretty sophisticated). See https://firebase.google.com/docs/firestore/security/get-started
Todd (in the video contained in this link) does a few good videos on this subject.
However, this does not really work in the same way as the API you mentioned in your question.
So in the case of using Firestore, you should use the SDK and not re-invent the wheel by creating your own API.
If you want to share photos, for example, you can also store them in Firebase Storage and then provide a URL for other devices to access them without your app being installed.
If you want to write something to Firestore which is then sent to all other users, you can use listeners in each app, and the data will be sent to the apps after it arrives at Firestore.
https://firebase.google.com/docs/firestore/query-data/listen gives an overview of this.
One thing to always look at with Firebase is the cost of doing anything. Cloud functions cost more than a read of a Firestore document.
The page below gives an overview of pricing for the different capabilities within the Firebase platform.
https://firebase.google.com/pricing
Another very important factor is coupling. To add to @Dijkstra's answer, an API provides a way to decouple the logic from the client, allowing for more application reliability, maintainability, fault tolerance, and, if required, scalability.
So there is no right or wrong here; the comparison of an API versus a direct DB call is in itself not quite justified, since fetching the data from the database is the ultimate aim whether you use a REST API or query the database directly.
The means of achieving it can differ based on specific requirements. Take fetching water from a well as an example.
You can always climb down the well and fetch a bucket of water if you need one bucket per day and you are the only user.
But if there are many users, you would want to install a pulley and wheel that people use to fetch water and pour it into their buckets. Yet again, this depends on whether there are up to 100 users per day or more than that, as this will not work for more than 100 users.
If an entire community of, say, 1,000 users is going to need the water, you would go with a more complex solution: installing a motorized water pump to pump out the water and supply it to users' homes via a pipeline. This solution has many benefits, like fast supply, ease of use, filtered water, scheduling, etc., but the cost and effort of achieving it are higher as well.
All in all, it comes down to the cost-vs-benefit ratio of the different solutions to your particular problem, which you and only you can chart out, as you are the best judge of scale and future user flow.
While doing that, you can ask the following questions about a solution to help you decide:
Is the solution satisfying the primary requirement of the problem?
How much time is it going to take to build it?
For the time we spend building a solution, is it going to work at 75% or more of its capacity?
If not, is there a simpler solution I can use to satisfy the problem and scale as the requirements increase?
HTH.
I use ASP.NET 4.7 MVC 5 on an Azure App Service.
I currently get JSON response data by calling REST APIs directly from my .NET code and then deserialising this JSON using
var order = JsonConvert.DeserializeObject<Order>(json.ToString());
This works fine and is pretty good regarding speed. However, I am now looking into Azure Logic Apps to see if they could be used to call third-party APIs and then transform each API's native schema into my standard schema.
Would the use of Logic Apps slow down the retrieval of data from the API endpoints compared to my current native .NET approach? I have a feeling it will, as it may be much more asynchronous/fire-and-forget. I am hoping that I can just call into the Logic App and get the same response as if I had done it natively, but with greater flexibility and scalability.
Thanks.
EDIT: My question is about the use of a Logic App versus calling natively, so assume one job each. I confused the matter by talking about transformation as well. Apologies.
I think the easy answer is yes. The API call in your native code is fired immediately and returns the payload to your application directly, whereas a Logic App step carries all the plumbing required for activity orchestration, plus the infrastructure overhead. You could probably measure a difference, but it may not impact the quality of your application, depending on what it is trying to do.
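For comparison, invoking an HTTP-triggered Logic App from .NET looks almost identical to the native call; only the URL changes. A minimal sketch (the trigger URL, the Order type, and the request shape are placeholders):

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;
    using Newtonsoft.Json;

    public class Order { /* properties omitted */ }

    public class OrderClient
    {
        private static readonly HttpClient Http = new HttpClient();

        // Placeholder: the Logic App's HTTP trigger callback URL
        // (the real one carries a SAS signature in the query string).
        private const string LogicAppUrl = "https://<your-logic-app-trigger-url>";

        public async Task<Order> GetOrderAsync(string orderId)
        {
            var content = new StringContent(
                JsonConvert.SerializeObject(new { orderId }),
                Encoding.UTF8, "application/json");

            // Same request/deserialise pattern as the native call; the
            // Logic App does the 3rd-party call and mapping before replying.
            var response = await Http.PostAsync(LogicAppUrl, content);
            var json = await response.Content.ReadAsStringAsync();
            return JsonConvert.DeserializeObject<Order>(json);
        }
    }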
The question I would ask is “does it matter”? What are you giving up by using Logic Apps over C# code? Is it worth the trade-off to enjoy the benefits of serverless computing (scale-out, no infrastructure to maintain, focus on the what instead of the how, etc.)?
We're planning on implementing a server-side notification mechanism that pushes out to iOS and Android via ANH. We will have no code footprint on our mobile clients, short of a call to our server API for "registration". In this way our approach is looking similar to this MSDN discussion.
I also see the alternate, more bare-bones, approach noted on MSDN.
Is it fair to conclude that the two approaches will have similar performance on the 'send' side?
It appears the main difference is this:
The former approach has already done the work of integrating with the Task and async mechanisms, presenting a callable C# API that takes on more of the RESTful layer,
The DirectBatch/Send API is just that -- the raw RESTful API for you to use as you see fit.
For operations that are available as both a REST API and an SDK, you shouldn't see any significant difference in performance on the client side, because the SDK is just a wrapper around the REST APIs. There are SDKs for both iOS and Android, and it's recommended to use those so that you don't have to re-write the wrapper.
Direct send is only available in the .NET SDK at the moment, and for other platforms as a REST API, so you'd have to implement your own wrapper if you're using something other than .NET for the operation. You can use the sample to help you in the process.
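A minimal sketch of a direct send with the .NET SDK (Microsoft.Azure.NotificationHubs), assuming a newer SDK version that uses FcmNotification (older versions use GcmNotification); the connection string, hub name, and device handle are placeholders:

    using System.Threading.Tasks;
    using Microsoft.Azure.NotificationHubs;

    public class PushSender
    {
        public static async Task SendDirectAsync(string deviceHandle)
        {
            // Connection string and hub name are placeholders.
            var hub = NotificationHubClient.CreateClientFromConnectionString(
                "<full-access-connection-string>", "<hub-name>");

            // Direct send bypasses registrations: you address the device
            // handle (e.g. an FCM token) yourself.
            var notification = new FcmNotification("{\"data\":{\"message\":\"Hello\"}}");
            await hub.SendDirectNotificationAsync(notification, deviceHandle);
        }
    }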
In terms of performance it depends on what you mean by that.
Direct send will most likely deliver to customers a bit faster, because the ANH service doesn't have to resolve any registrations in the process; it just delivers notifications with your parameters. But it has its limitations in terms of the number of handles you can provide, and you also need to manage the handles yourself.
If you only mean performance on the client side, then there should be no difference, as all calls are asynchronous. And if you take advantage of tags, you can do really tricky sends in one server call and let ANH figure out the details behind it.
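For example, reusing the hub client from the sketch above, one server call with a (hypothetical) tag expression fans out to every matching registration:

    // ANH resolves the matching registrations behind the scenes.
    await hub.SendFcmNativeNotificationAsync(
        "{\"data\":{\"message\":\"Game tonight!\"}}",
        "follows_sports && location_WA");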
But without knowing your scenario and requirements there's no way to give a proper recommendation.