I have an Azure Logic App with a publicly accessible HTTPS URL that triggers it. The Logic App does a number of data manipulations and so on, then sends a response back to the requester. This is working well, but I am concerned about escalating costs if someone intentionally calls this HTTPS URL (i.e., a DDoS attack or something along those lines).
What is the best way to stop this? I know Azure has a limit of 100,000 runs per 5 minutes, but that still adds up quickly. Is there a better way to put a threshold in place to stop a situation like this? Some thoughts I had:
Create a metric alert that calls another Logic App to disable the open Logic App being called repeatedly. But I cannot seem to get this to work.
Use Traffic Manager or something similar to enforce limits or thresholds, but I am not sure how this would work either.
Any advice or thoughts would be very helpful. The throughput on the Logic App would be relatively low, and anything over, say, 5,000 calls in 10 minutes would be out of the norm. For this reason the costs are relatively low, and things like Azure DDoS Protection are just too expensive, so the approach definitely needs to be cost-efficient.
The best solution is to use some form of API management to securely expose your APIs.
Here's a good blog post on how to secure your APIs with Azure API Management:
using-azure-api-management-to-prevent-denial-of-wallet-attacks
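To make that concrete: API Management can sit in front of the Logic App's URL and throttle callers before a Logic App run is ever billed. Here is a minimal sketch of an inbound policy, sized to the ~5,000-calls-per-10-minutes norm from the question (the limits and counter key are illustrative, not prescriptive):

```xml
<inbound>
    <base />
    <!-- Reject anything beyond 5000 calls per 600 seconds (10 minutes),
         counted separately for each client IP address. -->
    <rate-limit-by-key calls="5000"
                       renewal-period="600"
                       counter-key="@(context.Request.IpAddress)" />
</inbound>
```

Callers over the limit get a 429 back from API Management, so the Logic App never runs (and never bills) for those requests. You would also typically restrict the Logic App's inbound IP range so only API Management can call it directly.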
I recently set up a Logic App in Azure to control automated tasks for a website; its job is to send a single HTTP request to a task-management web service every 12 hours to initiate a few batched "cleanup" jobs. This app was on the Consumption plan, which says it only charges you for the resources you use. However, I noticed that the Logic App was, by itself, accruing about $200/month in charges ($5/day). This is for sending only two HTTP requests every day, even though the pricing model says HTTP requests cost something like $0.003 per request.
So I reached out to Microsoft tech support and got a prompt refund, plus links to the pricing model, but it still makes no sense to me. By that pricing, two requests a day should cost on the order of 2 × $0.003 = $0.006/day, yet I was billed $5/day, roughly 800 times more. They couldn't seem to explain why a Logic App that sends only one HTTP request to an endpoint every 12 hours would cost this much.
The pricing model is listed here: https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-pricing, but it doesn't seem to have examples applicable to my situation. Can anyone explain why such a simple "pinger" app would cost so much?
I have an Azure Logic App that monitors an SFTP site for new files; if it finds one, it sends a message to an Azure Queue for subsequent processing, then deletes the file. My application has grown in scale, and a single Logic App seems to grab only 5-10 files a minute.
Is it possible to set up a second (third, fourth, etc.) Logic App that monitors the same SFTP site without the apps conflicting/colliding with each other? I also see that there is a "High Throughput" setting that seems interesting, but I'm not sure it is what I need. My ultimate goal is to process more files faster, and I am considering replacing the Logic App with a scheduled WebJob that monitors the SFTP site. Since I am live and files are pouring in, I am a little reluctant to change anything until I know it's safe.
Any insight would be appreciated.
Thanks!!
Logic Apps fall under serverless architecture. If you select the pricing model based on 'number of executions', it can impact performance, since Microsoft allocates shared resources for that kind of pricing model and runs the work on whichever server has capacity free. I would recommend attaching an App Service plan to it and selecting the 'per minute' pricing model instead.
One more point: if you need long-running operations, a Logic App is not the appropriate choice; but since you are doing enterprise integration, a Logic App is a good fit here. I would recommend dividing this functionality between a Logic App and an Azure Function, or Microsoft Flow; a sketch of the Function side is below.
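To make the suggested split concrete, here is a minimal sketch of the Function side, assuming the in-process C# model and a hypothetical queue named sftp-files that the Logic App drops one message per file into:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessSftpFile
{
    // The Logic App stays thin: it only enqueues one message per new file.
    // This function does the actual work, and the Functions runtime scales
    // out consumers automatically as the queue backs up.
    [FunctionName("ProcessSftpFile")]
    public static void Run(
        [QueueTrigger("sftp-files")] string fileName, // hypothetical queue name
        ILogger log)
    {
        log.LogInformation("Processing file {fileName}", fileName);
        // ...download, transform, and persist the file here...
    }
}
```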
I understand that microservices are about independent, loosely coupled services. I have read https://en.wikipedia.org/wiki/Microservices.
When it comes to Azure, I understand there are many components like Azure Service Fabric and AKS, and there is also the option of deploying containers within Azure VMs using Docker or other containerization tools. However, since microservices are about developing atomic, individually scalable services, can this also be achieved by deploying each service as an Azure Web API app within an App Service plan and configuring auto-scale based on performance metrics? (Though each API app may not be individually scalable, they can still be individually manageable in terms of deployment, configuration, etc.)
Can someone please tell me whether this thought process is correct?
Microservices aren't a platform or a technology, so if you can build small, independently deployable services, then they are microservices. Sure, some tech helps, but it depends on your situation.
If you only need a few services, you probably don't need anything complex. Make sure services are well modeled, own their own data, and ideally have good monitoring and a deployment pipeline set up. Design for service failure where possible.
Do you need to scale each part independently? Ideally you should be able to, but do your services really have very different requirements? You could have many small App Service plans, but that comes at the cost of unused resources, so split only when you need to.
This question, and of course the answers, are going to be opinion-based, but generally, when thinking in terms of microservices, don't think in terms of things like loads of APIs and VMs. Instead think in terms of: when I upload an image, it needs to be resized and the table updated with a URL for the thumbnail; or when record XXX is updated in the database, run XXX to create a report or update Azure Search. Each service just knows how to do a single thing only, e.g. resize an image.
Now one could say: I have a system, a repo library, and some function libraries. When an image is posted, I upload it, then call this, then that, etc.
With microservices, you would instead just add the image to a queue and create an Azure Function with a queue trigger. That function would resize the image and save both the large version and the thumbnail to storage. It would then either update the database itself or, in true microservice fashion, add a message to another queue; a second function would watch that queue and insert the record into the database.
You can use the DB queue from anything. You can use the blob queue from anything. Your main API does not care how images are handled. You could change your functions one day to save to Dropbox instead of Azure Blob storage, all really easily and with no rebuild of the API, because the API does not care.
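As a sketch of how little the API needs to know (connection string, queue name, and types are hypothetical):

```csharp
using System.Threading.Tasks;
using Azure.Storage.Queues;

public static class ImageIntake
{
    // The only thing the API knows how to do with an image: drop a message
    // on the queue. The consumer (resize function, Dropbox uploader,
    // whatever it becomes) can change without the API ever being rebuilt.
    public static async Task EnqueueForProcessingAsync(string connectionString, string blobUrl)
    {
        var queue = new QueueClient(connectionString, "image-uploads"); // hypothetical queue
        await queue.CreateIfNotExistsAsync();
        await queue.SendMessageAsync(blobUrl);
    }
}
```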
A good example I use this for is email and SMS. My systems don't know how to send an email or an SMS; they only know how to add to a queue. My microservices, SendEmail and SendSMS, do know how to do it, and I can change how, and with which provider, I send that content really easily. I can switch from Twilio to SendGrid tomorrow without ever telling the API that I've done it.
For something more complex: I have approval. At the moment, approval sends an email or SMS to either the user or an admin, and that can change over time. So I have an SMS service, an email service, and an approval service. When approval happens, it just adds a config message to the queue; the rest is done by a Logic App that knows to send an email to XXX, send an SMS to XXX, and then update the database. My API is just a POST that creates a queue message.
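For illustration, the message the approval step drops on the queue can be nothing more than a small config payload; a hypothetical shape:

```json
{
  "event": "approval",
  "notify": [
    { "channel": "email", "to": "admin@example.com" },
    { "channel": "sms",   "to": "+15550100" }
  ]
}
```

The Logic App reads this and fans out to the SendEmail and SendSMS services; the API never learns any of it.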
Basically, what I am saying is: to get started, maybe when porting an existing app, start with the workflow stuff, like sending an email, resizing an image, creating a report, creating a PDF, emailing 50 subscribers, etc. Take all that code out and put it into its own microservice that just knows how to do one thing. Then, as you grow in confidence, create a workflow from all of these services with Logic Apps and let Azure take care of the rest; that's what they want to do.
Currently our DB runs on the customer's local network, and we have a C# client app that consumes the data. Due to business needs, we have been ordered to start moving everything to Azure. The DB will move to Azure SQL.
We had a discussion about how to access the DB. There are two positions:
One colleague said we have to add one more layer between our app (which will run outside Azure, on end-user PCs) and SQL Azure. In other words, he suggested adding an API service that translates all requests to the DB, i.e., app (on-premises) -> API service (on Azure) -> SQL Azure. This approach looks more reliable and secure, since we hide SQL Azure behind the facade of the API service and the app talks only to our API service; it works somewhat like a reverse proxy. And behind this API we could obviously build a more sophisticated structure of DBs later.
Another colleague suggested connecting directly to the DB, i.e., app (on-premises) -> SQL Azure. So far we have no plans to change the structure of the DB or even increase the number of DBs. He claims this is simpler, and that we can secure the connection just as well. Having an additional service that merely relays our queries to the DB and back looks like a waste of time; if needed, we could add this API in the future.
What would you select and recommend, and why?
A few notes:
We are going to use Azure AD to authenticate users.
Our application will move to Azure too, but later (in 1-2 years); we plan to create a REST API and move to a thin client instead of the fat client we have right now.
Good performance is our goal, and we don't want to add extra layers that could degrade it, but security is just as important.
Certainly an intermediate layer is one way to go. There isn't enough detail here to be sure, but I wonder why you don't try the second option. Some redevelopment is usually to be expected, but if you can get away without it, and you get sufficient performance, then that's even better.
I hope this helps.
Thank you.
Guy
If your application is not just a prototype (and it sounds like it is not), then I advise you to build the intermediate API. The primary reasons are:
Flexibility
Rolling out a new version of an API is simple: you either have only one deployment, or you have something like Octopus Deploy that deploys to a few instances at the same time for you. Deploying client applications is usually much more involved: creating installers, distributing them, making sure users actually install them, etc.
If you build the API, you will be able to make changes to the DB and hide them from the client applications by modifying only the API implementation while keeping the API contract the same. Moving forward, this will simplify your team's work considerably.
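As a minimal sketch of such a facade, using ASP.NET Core with invented names (your stack and contract will differ):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public record CustomerDto(int Id, string Name);   // hypothetical wire contract

public interface ICustomerRepository              // hypothetical abstraction over SQL Azure
{
    Task<CustomerDto?> FindAsync(int id);
}

[ApiController]
[Route("api/customers")]
public class CustomersController : ControllerBase
{
    private readonly ICustomerRepository _repo;
    public CustomersController(ICustomerRepository repo) => _repo = repo;

    // Clients depend only on this contract; the tables behind _repo can be
    // reshaped at will without any client noticing.
    [HttpGet("{id}")]
    public async Task<ActionResult<CustomerDto>> Get(int id)
    {
        var customer = await _repo.FindAsync(id);
        return customer is null ? NotFound() : Ok(customer);
    }
}
```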
Security
As soon as you have different roles/permissions in your system, you will need to implement them with DB security features if you connect to the DB directly. This may work for simple cases, but even then it is a pain to manage.
With an API, you can implement authorization in the API itself, in C#. That way you can build whatever you need, and you're not restricted by the security features the DB offers.
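Continuing the hypothetical controller above, a role check then lives in C# code rather than in DB permissions, and can be backed by the Azure AD authentication already planned:

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

// Assumes ICustomerRepository also exposes a DeleteAsync(int) method.
[Authorize(Roles = "Admin")]   // enforced by the API, not the DB
[HttpDelete("{id}")]
public async Task<IActionResult> Delete(int id)
{
    await _repo.DeleteAsync(id);
    return NoContent();
}
```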
Also, if you don't take extra care here, you may end up shipping the DB credentials inside the client app, which would be a major security flaw.
Conclusion
Build the intermediate API, unless you have strong reasons not to. As always with architecture considerations, I'm sure there are cases where the points above don't apply; just make sure you understand all the implications if you decide to go the direct route.
I am trying to build an application in Windows Azure that requires notifications, so I am using HTTP polling. But I have a problem: I need each poll to reach the very same instance of my web role so I can maintain the polling state. I found a solution using web farm request rewriting (adding an extra proxy role), but I don't like that solution.
So my question:
Is there a way to do stateless polling? If someone has implemented this, can you give me a hint or a link?
You just need the poll request to thunk through to some sort of shared 'stateful' layer. Probably the best candidate would be the Windows Azure Caching service: http://www.windowsazure.com/en-us/home/tour/caching/. You could also just use storage; choosing between the two will largely be determined by how busy your app is, i.e., whether it is cheaper to pay a fixed cost for some cache space with no per-request charge, or to pay almost nothing for your storage space but pay a per-request charge.
By taking this approach, it does not matter if subsequent polling requests end up routed to a different instance. To me this is best practice regardless; I've seen some ugly bugs pop up when people assume session affinity when making AJAX calls back into an Azure web role.
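As a sketch of the storage flavor, using today's Azure Tables SDK purely for illustration (names are hypothetical; the same idea applies with the caching service):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Data.Tables;

public static class PollState
{
    // Any web role instance can answer a poll by reading shared state from
    // table storage, so no instance affinity is needed.
    public static async Task<bool> HasNewNotificationsAsync(
        string connectionString, string userId, DateTimeOffset lastSeen)
    {
        var table = new TableClient(connectionString, "PollState"); // hypothetical table
        // Assumes the row exists; handle the 404 case in real code.
        var entity = await table.GetEntityAsync<TableEntity>("notifications", userId);
        return entity.Value.GetDateTimeOffset("LastUpdated") > lastSeen;
    }
}
```

Whichever service produces notifications writes the "LastUpdated" marker to that same table, and every instance sees it.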