How can an Azure Logic App run for days? - azure

I have an external questionnaire application that users access from their cellphones, tablets, and the web. The questionnaire carries quite a heavy load, about 128K requests/day.
To capture the data coming from those devices, I put a Logic App in place. It looks good and works well, but a Logic App has a duration limit of 120 seconds. I thought a Logic App could solve my design, but this limitation restricts my initial idea. So:
How do I overcome this limitation?
Do I need to add an additional service as a listener that feeds data from the questionnaire application to the Logic App?
Could the limit be extended to 30 days?

Related

Azure Logic Apps performance

I'm currently looking into Azure Logic Apps, and I'm having a bit of trouble understanding the documentation well enough to get a feel for the kind of performance I can expect. I'm also having trouble finding articles or blog posts detailing real-world examples of Logic Apps and the performance people are seeing.
The scenario I'm trying to solve has the following flow:
Http Request triggers the logic app
The body of the request is saved to storage ... either Cosmos or Table Storage
Depending on some values in the request body, call an external API without caring about the response (fire and forget)
Respond to the original Http Request with an appropriate response (e.g. 200 OK if steps 2 and 3 succeeded)
... and all of this needs to happen in under 1 second. I'm not really sure what to expect in terms of the number of requests per second coming into my flow, but I'm going to assume 100 requests per second.
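A minimal sketch of the four steps above, with the storage write and the external call stubbed out as hypothetical placeholders (`STORE`, `CALLED`, and `handle_request` are illustration names, not part of any SDK):

```python
import threading

STORE = []   # stands in for the Cosmos/Table Storage write target
CALLED = []  # records fire-and-forget calls, for illustration only

def save_to_storage(body):
    # placeholder for the storage write in step 2
    STORE.append(body)

def call_external_api(body):
    # placeholder for the downstream API call in step 3
    CALLED.append(body["kind"])

def handle_request(body):
    """Steps 2-4 of the flow: persist, conditionally fire-and-forget, respond."""
    save_to_storage(body)
    if body.get("kind") == "notify":
        # fire and forget: don't block the response on the external call
        threading.Thread(target=call_external_api, args=(body,), daemon=True).start()
    return 200  # step 4: respond once the write has succeeded

status = handle_request({"kind": "notify", "payload": "x"})
```

The point of the sketch is that only the storage write sits on the response path; the external call happens off-thread, which is what keeps the sub-second budget realistic.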
I'm wondering if anyone has some real-world experience with logic apps, that might be doing something similar to what I'm looking at, and the performance they are experiencing? Is what I've outlined above feasible? Is there room for a higher number of requests per second?
I've considered Azure Functions (possibly Durable functions) but I'm not only concerned about the performance, but also the cold-start scenario, because I need my solution to work in real-time. At the moment I'm just building a .NET Core API to encapsulate this flow (plus more), but I'm thinking an Azure Logic App could streamline what I'm doing.
Overall, Logic Apps is a little slower, which may be a reason not to use it in your case. To avoid cold starts in Functions, you can use a Premium plan with pre-warmed instances.
Also, read these threads:
Azure Logic App and Function App performance difference
Is Logic Apps performance slower compared to a direct .NET REST Call?

Google App Engine with Python 3: Mix Standard and Flexible for Websockets

I've started to port a web app backend to Google App Engine for scaling. But I'm completely new to GAE and just reading into the concepts. Steep learning curve.
I'm 95% certain that at some point many millions, or at the very least hundreds of thousands, of users will start using the web app through a GUI app that I'm writing. And they will be global users, so at some point in the future I'm expecting a relatively stable flow of connection requests.
The GAE Standard Environment comes to mind for scaling.
But I also want the GUI app to react when user related data changes in the backend. Which suggests web sockets, which aren't supported in the Standard Environment, but in the Flexible Environment.
Here's my idea: The main backend happens in a Standard app, but the GUI listens to update notifications from a Flexible app through web sockets. The Standard app calls the Flexible app after noteworthy data changes have occurred, and the Flexible app notifies the GUI.
But is that even possible? Because sibling Flexible instances aren't aware of each other (or are they?), how can I trigger the persistent connections held by the Flexible instance with an incoming call from the Standard app to send out a notification?
(The same question goes for the case where I have only one Flexible app and no Standard app, because the situation is kind of the same.)
I'm assuming that the Flexible app can access the same datastore that the Standard app can. Didn't look this one up.
Please also comment on whether the Standard app is even a good idea at all in this case, or whether I should just go with Flexible. These are really new concepts to me.
Also: Are there limits to number of persistent connections held by a Flexible app? Or will it simply start another instance if a limit is reached?
Which of the two environments end up cheaper in the long run?
Many thanks.
You can only have one App Engine application per project; however, you can have multiple flexible or standard services inside that application.
Whether standard is a good idea depends on your architecture. I'm pretty sure you've looked at the comparison chart; from experience, if your app can work within all the restrictions (limited code runtimes, no ability to do background processing, no SSH debugging, among others), I would definitely go for standard, since it performs very well under traffic spikes and deploys new instances in just seconds. Keep in mind that automatic scaling is needed for the best performance results.
There are multiple ways to connect flex or standard services to each other. One would be to just send an HTTP request from one service to another, but there are other options using GCP services like Pub/Sub.
In the standard environment, you can also pass requests between services and from services to external endpoints using the URL Fetch API.
Additionally, services in the standard environment that reside within the same GCP project can also use one of the App Engine APIs for the following tasks:
Share a single memcache instance.
Collaborate by assigning work between services through Task Queues.
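For the notification path described in the question (standard app tells the flex app that data changed), the standard service could publish a small message that the flex service subscribes to. A sketch of building that payload, where the message shape (`kind`, `id`, `changed`) is entirely a hypothetical convention:

```python
import json

def build_change_notification(entity_kind, entity_id, changed_fields):
    # Pub/Sub payloads are just bytes; this JSON shape is an assumption,
    # not anything prescribed by the Pub/Sub API.
    return json.dumps({
        "kind": entity_kind,
        "id": entity_id,
        "changed": sorted(changed_fields),
    }).encode("utf-8")

# With the google-cloud-pubsub client, the standard service would then
# publish it along these lines:
#   publisher = pubsub_v1.PublisherClient()
#   topic_path = publisher.topic_path("my-project", "entity-changes")
#   publisher.publish(topic_path, data=build_change_notification(...))
msg = build_change_notification("User", "42", {"email", "name"})
```

The flex service, which holds the websocket connections, would subscribe to the topic and forward the decoded message to the affected clients.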
Regarding Datastore: yes, you can access the same datastore from different services. Here is a quickstart for flex and a quickstart for standard.
Which of the two environments end up cheaper in the long run?
Standard pricing is based on instance hours
Flexible pricing is based on usage of vCPU, memory, and persistent disks
If your service runs high-performance processes over short periods of time, standard will probably be cheaper; however, if you run low-performance processes over long periods of time, flex will be cheaper. But again, it depends on each use case.

Azure Mobile SDK vs Custom Code - Scalability

We have written two mobile apps and a web back end. Mobile apps are written in Xamarin, back end in C# in Azure.
There is shared data between all three apps. Some of it is simple keyword tables, but some data tables will change, e.g. a mobile user moving around makes updates to a table, and those updates need to go back to the web app and then possibly out to the other apps.
We currently use SQLite on the mobile apps and follow an offline-first approach, i.e. when a user changes a table, we write to SQLite on the mobile device and then sync to the server. If the user has no connectivity, a background process eventually syncs the data up to the server when possible.
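The offline-first write path described above can be sketched as follows, with an in-memory SQLite database and a plain list standing in for the remote server (table names and the `insert:<id>` op format are made up for illustration):

```python
import sqlite3

def make_db():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
    db.execute("CREATE TABLE pending (id INTEGER PRIMARY KEY, op TEXT)")
    return db

def local_write(db, body):
    # offline-first: commit locally first, then queue the change for sync
    cur = db.execute("INSERT INTO notes (body) VALUES (?)", (body,))
    db.execute("INSERT INTO pending (op) VALUES (?)", ("insert:%d" % cur.lastrowid,))
    db.commit()

def sync(db, server, online):
    # background job: push queued ops only when connectivity is available
    if not online:
        return 0
    ops = [op for _, op in db.execute("SELECT id, op FROM pending ORDER BY id")]
    server.extend(ops)  # stands in for the HTTP push to the backend
    db.execute("DELETE FROM pending")
    db.commit()
    return len(ops)

db, server = make_db(), []
local_write(db, "hello")
skipped = sync(db, server, online=False)  # no connectivity: queue is untouched
pushed = sync(db, server, online=True)
```

Keeping the pending-operations queue in the same SQLite transaction as the data write is what makes the local state and the sync queue stay consistent even if the app is killed mid-change.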
All this is custom code now, and I am a little hesitant to continue on this path. We are in testing with 4 users or so, but expectation is to grow to thousands or tens of thousands of users in 6 to 18 months.
I think that our approach might not scale. I would prefer to switch to an offline-first framework instead of continuing to roll our own.
Given our environment I think using Azure Mobile SDK would be the obvious path to follow.
In general would you choose an offline first framework if your app will grow? In particular, any experience with using Azure Mobile SDK?
Note that your question will likely be closed because you're asking for an opinion/recommendation but anyways...
From the Azure Mobile Apps Github repo:
Please note that the product team is not currently investing in any
new feature work for Azure Mobile Apps.
Also to my knowledge, Microsoft has not announced any new SDK or upgrade path.
With that in mind, one option is to keep your custom code and strengthen it with code that you'd extract from the SDK, or vice versa.
Assuming that your mobile app calls a web service, which then performs any necessary writes, you could load test a copy of your production environment to see if things fall over and at what point. I'm not a huge fan of premature optimization.
Assuming things do fall over, you could introduce a shock absorber between your web service endpoint and the database using a Service Bus Queue.
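The shock-absorber idea can be illustrated with Python's stdlib `queue` standing in for a Service Bus queue (with the real thing you'd use the `azure-servicebus` SDK instead): the endpoint enqueues and returns immediately, and a worker drains the queue at whatever rate the database can sustain.

```python
import queue
import threading

WRITES = []  # stands in for the database

def db_writer(q):
    """Consumer: drains the queue at the database's pace."""
    while True:
        item = q.get()
        if item is None:      # sentinel used to stop the worker
            break
        WRITES.append(item)   # the actual INSERT would go here
        q.task_done()

buffer = queue.Queue()        # stands in for the Service Bus queue
worker = threading.Thread(target=db_writer, args=(buffer,))
worker.start()

# The web service endpoint just enqueues and returns, so a burst of
# requests never hits the database directly.
for i in range(5):
    buffer.put({"record": i})

buffer.put(None)
worker.join()
```

The queue absorbs the spike; the database only ever sees the consumer's steady write rate.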

Change Azure Database Plan via API / Programmatically

I want to use the database plan "Web tier (Retired)" when my web application is being used (ie: for executing queries), but return to the plan "Standard tier" when the web application is idle.
Can I programmatically do this change?
I want to do that change in my web application's Application_Start
protected void Application_Start()
{
...
}
I would always use the "Standard tier" if it weren't so bad with large queries. It is really slow, and there are several Stack Overflow posts about that. The retired Web plan, on the other hand, handles large queries really well but is very expensive.
You can use the Update Database REST API to programmatically change the plan.
However, this is not a change that will occur in a matter of a few seconds. This could take several minutes (that's been my experience). So, putting this in your application startup code is not something I would recommend.
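As a sketch of what that call looks like, here is a hypothetical helper that assembles the URL and body for an Update Database request. The `2014-04-01` api-version and the `edition`/`requestedServiceObjectiveName` properties match the REST API of that era, but you should verify both against the current reference before relying on them; the request would then be sent as a PUT with a bearer token.

```python
def build_update_database_request(subscription, group, server, database,
                                  edition, objective):
    # Hypothetical helper: builds the ARM-style URL and JSON body for an
    # Update Database call. Verify api-version and payload shape against
    # the current REST reference.
    url = ("https://management.azure.com"
           "/subscriptions/%s/resourceGroups/%s"
           "/providers/Microsoft.Sql/servers/%s/databases/%s"
           "?api-version=2014-04-01" % (subscription, group, server, database))
    body = {"properties": {"edition": edition,
                           "requestedServiceObjectiveName": objective}}
    return url, body

url, body = build_update_database_request(
    "sub-id", "my-group", "my-server", "my-db", "Standard", "S1")
```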
I would encourage you to look at trying to optimize your queries as a long-term solution. After all, the Web edition is only going to be around until September 2015. You might want to look at this for some hints on things you could do differently.
Also, since you mentioned that Standard tier is not delivering the performance you are needing, you may want to give Premium a try to see if it works better for your application.

SaaS architecture and design suggestion: any existing products that simplify the design?

I have the following setup
Customer access -> Web application -> Database
A server application (console-based) for each customer, running continuously on the server, that downloads data from various locations and updates the database
So if I have 100 customers, I need to run 100 console applications on the server.
If there is any problem or crash with one server application (because of the specific kind of data I am downloading), I can fix it by restarting or patching.
I took this approach because I initially thought it would be easy to maintain, but I don't feel that way anymore. I am sure there are better tools available to manage this kind of scenario; if you know any, please let me know. I should be able to start/restart/patch the server applications, monitor server usage, and check for crashes through some nice GUI.
Or maybe there is a way to write one multi-threaded application to serve all customers instead of one for each, with a way to shut down or restart any customer's thread.
Thanks
The right way is to use a threaded application that can set the tenant context for the work to be done on each thread.
This way, we have one app for all customers and can make use of application events and mailers to notify in case of any error.
An audit table tracking the various data-processing statuses can back a GUI for monitoring progress on a per-tenant basis.
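A minimal sketch of that shape, where `TenantWorker`, the `audit` dict, and the `job` callable are all made-up illustration names (in a real system the audit would be a database table and the job would be the per-tenant download routine):

```python
import threading

class TenantWorker:
    """One thread per tenant, so one bad feed can't take down the others."""

    def __init__(self, tenant_id, job, audit):
        self.tenant_id = tenant_id
        self.job = job        # the per-tenant download/update routine
        self.audit = audit    # shared status table a monitoring GUI could read
        self._thread = None

    def start(self):
        # calling start again after a crash is the "restart one customer" operation
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        self.audit[self.tenant_id] = "running"
        try:
            self.job(self.tenant_id)
            self.audit[self.tenant_id] = "ok"
        except Exception as exc:
            # the crash is contained and recorded instead of killing the process
            self.audit[self.tenant_id] = "error: %s" % exc

    def join(self):
        if self._thread:
            self._thread.join()

audit = {}

def job(tenant_id):
    # hypothetical per-tenant work; tenant "t2" simulates a crashing feed
    if tenant_id == "t2":
        raise ValueError("bad feed")

workers = [TenantWorker(t, job, audit) for t in ("t1", "t2")]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

The audit dict ends up with one status per tenant, which is exactly what a start/restart/monitor GUI would query.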
HTH
