I'm trying to compare AWS and Azure for a custom web app that's essentially like any canned content management system. It requires web hosting, a database, email, storage, security, and some way to process ASP.NET, with high availability and load balancing.
The PaaS/IaaS distinction can sometimes be grey (in part because companies tend to use marketing jargon that portrays IaaS-type services as maintenance-free). From a small business perspective it's quite clear, though: if a service requires the SMB to spend time maintaining rather than developing, it's in the IaaS camp. Since I'm a single developer with limited time, a PaaS model for all services would be preferable. The ideal would be for all services (web hosting, database, email, etc.) to be offered as zero-maintenance, scalable services rather than having to spin up and manage individual instances.
I find AWS can do everything, but a drawback is that one still needs to manage instances (i.e. I would need to keep the software on instances updated, track instances, manage networking, security, etc.). S3 doesn't process scripts. AWS Elastic Beanstalk and OpsWorks are still essentially helper apps for standing up an IaaS-type environment (whereas, say, DynamoDB would count as a PaaS-type service). Recently Microsoft has dropped prices on Azure, which makes it an attractive alternative.
In short, I am looking for a list of services offered by Azure that are actually no-maintenance services which don't require me to patch software or spin up instances to handle traffic spikes (e.g. web hosting, script processing, database, email, etc.).
"web hosting, database, email, storage, security, some way to process ASP.NET but with high availability and load-balanced"
All of the above are standard features which any mature cloud provider will have in its toolkit. Regarding MSFT Azure:
For web hosting - you have PaaS solutions such as the App Service plan
and the App Service Environment. The upkeep of the platform (as the name suggests) rests with Azure, but note that any components you deploy as part of the package remain the responsibility of your own dev and test teams.
For database and storage - for a complete PaaS solution you have Azure SQL Database and Azure SQL Managed Instance, but as I said earlier you will still have to own any custom configuration (security policies, VNet injection, and IAM) yourself.
If I deploy a bunch of Azure Functions into a function app and set up the function app to use a Consumption plan, then each calendar month the compute cost for the first 1 million calls is basically free.
How come I can't do this with a web API using something like MVC or OData?
The only difference I can see is the framework parts used; presumably there's some infrastructural reason for this?
Which leads to ...
I'm tempted to make all API implementations a set of Azure Functions to make the most of cloud costs, but it feels like I'm letting infrastructure costs dictate my technical decisions a little too much here, or that I'm missing something.
As a sort of secondary question, if there are any MSFT peeps out there: would Microsoft consider making it so that all Azure App Services can be consumption-planned?
The big difference between an App Service and a Function App is the fact that, for your App Service, there's an App Service plan dedicated to running the App Service, which reserves a set of resources like CPU and memory.
An App Service plan defines a set of compute resources for a web app to run. These compute resources are analogous to the server farm in conventional web hosting. One or more apps can be configured to run on the same computing resources (or in the same App Service plan).
When you create an App Service plan in a certain region (for example, West Europe), a set of compute resources is created for that plan in that region. Whatever apps you put into this App Service plan run on these compute resources as defined by your App Service plan.
Source: Azure App Service plan overview
A Function App on a Consumption plan runs on a specialized version of an App Service plan. In that case you have a lot less say in how that plan is configured or what resources it gets: that's all abstracted away for you as a user.
When you're using the Consumption plan, instances of the Azure Functions host are dynamically added and removed based on the number of incoming events. The Consumption plan is the fully serverless hosting option for Azure Functions.
Source: https://learn.microsoft.com/en-us/azure/azure-functions/consumption-plan
And while the Azure Functions host is the application that constantly handles requests, checking whether your Function App is being called and passing requests on to your code, in a more traditional application like an MVC app it is your application itself that handles the request from beginning to end.
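To make the contrast concrete, here's a minimal sketch using the in-process C# model; all names (function, route, controller) are illustrative, not from the question:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloFunction
{
    // The Functions host owns the HTTP listener; it only calls this
    // method when the trigger fires.
    [FunctionName("Hello")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("HTTP trigger fired.");
        return new OkObjectResult("Hello from a Function");
    }
}

// In an MVC/Web API app, by contrast, your application hosts the whole
// pipeline (web server, routing, middleware) and handles the request
// from beginning to end.
[ApiController]
[Route("api/hello")]
public class HelloController : ControllerBase
{
    [HttpGet]
    public IActionResult Get() => Ok("Hello from a controller");
}
```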
EDIT
why can't I put an MVC controller up on the same contract terms in the cloud as I can with an azure function
Because the current implementation of an App Service is "analogous to the server farm in conventional web hosting", meaning it expects an entire web application. An Azure Function expects a piece of code that can handle the request (better: trigger). A controller is more than just that, and has some (a lot...?) of fluff around it to be able to work.
And, somewhat simplified: because it hasn't been made available by Azure. Presumably because it would make Azure Functions far too opinionated about how the (.NET, HTTP-triggered) function should be implemented.
Abstraction: a Function is a piece of code that can handle a trigger. This trigger can be a lot of things, one of which is an HTTP request. From the Functions runtime's point of view, all triggers just need to be mapped to a handler. Currently, that handler could be considered framework-agnostic; Azure Functions only prescribes that it adhere to a certain signature.
Allowing developers to host an MVC controller as the handler for an Azure Function would also mean either having all the fluff around the controller in place (what, how, ...?) or slimming the controller down to 'just' a handler for an HTTP trigger... which would make it a regular Azure Function.
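That "handler mapped to a trigger" idea is easiest to see by swapping the trigger: the handler shape stays the same. A minimal sketch (the queue name is made up):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class Handlers
{
    // Same shape of handler, different trigger: the runtime only cares
    // that the signature matches the binding, not which framework the
    // handler came from.
    [FunctionName("OnQueueMessage")]
    public static void Run(
        [QueueTrigger("orders")] string message,
        ILogger log)
    {
        log.LogInformation($"Handled: {message}");
    }
}
```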
Please consider a system (composed of many microservices and BFFs) where:
Each Platform (many microservices) and Customer Journey (BFF) has its own AWS Account (as part of an organization - Control Tower). We might have 20 - 30 AWS Accounts.
AWS Services used are: Lambda, SNS, SQS, Step Functions, EventBridge, Cognito, S3, CloudFront, CloudWatch, DynamoDB, Aurora Serverless (V2) + RDS Proxy, API GW (REST)
External Services: Lumigo for Monitoring, GitLab CI/CD (SaaS), Salesforce, Stripe, Twilio, Some Banks (API based)
Multi-region deployment (for DR only), so DynamoDB and Aurora Serverless (v2) are synced to another region, and the application is always deployed in both regions (queues and other temporary state/data are not synced).
Knowing that it's now 2022 (Lambda will turn 10 in a couple of years), would we need a VPC (or VPCs) in this solution for maximum security (regarding infrastructure alone)? It has always looked to me as though good governance, automatic rotation of IAM credentials, a strong CI/CD pipeline, and continuous and external security checks would be enough for a serverless architecture, so that developers or DevOps wouldn't need to invest a lot of energy setting up and maintaining networks and VPCs.
Any help would be appreciated.
Cheers
So it is not a must; you can keep your service secure without a VPC, too. However, it may be more cost-effective to use a VPC. For example, if you move data from S3 to Lambda you pay a fee for network traffic; if both have endpoints in the same VPC, there are no such fees.
Furthermore, the two-accounts-per-microservice approach seems a bit complex. I would rather have one CDK construct / Terraform / CloudFormation template per microservice, and then two instances of each for test and prod. The default quota for AWS Organizations is 10 accounts, so it would limit you to 5 microservices.
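For illustration, a minimal sketch of wiring an S3 gateway endpoint into a VPC, assuming the AWS CDK v2 for .NET (stack and construct names are made up, not from the question):

```csharp
using Amazon.CDK;
using Amazon.CDK.AWS.EC2;
using Constructs;

public class NetworkStack : Stack
{
    public NetworkStack(Construct scope, string id) : base(scope, id)
    {
        // One VPC for the Lambda functions that need private S3 access.
        var vpc = new Vpc(this, "ServiceVpc", new VpcProps { MaxAzs = 2 });

        // An S3 gateway endpoint is free and keeps S3 traffic inside
        // the AWS network (no NAT gateway data-processing charges).
        vpc.AddGatewayEndpoint("S3Endpoint", new GatewayVpcEndpointOptions
        {
            Service = GatewayVpcEndpointAwsService.S3
        });
    }
}
```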
We have the below technical stack in our application:
Angular 7
Asp.net core 2.2
Sql server
Images
To move to a serverless architecture on Azure, we have mapped it as follows:
Angular 7 - Blob (as it is static)
Asp.net core 2.2 - Azure functions
SQL server - SQL as a service
Images - Blob
Now, how do we handle an "Azure Functions@Edge" scenario? Does Azure have an equivalent of AWS's Lambda@Edge?
As far as I know, there's no equivalent Azure service right now. In fact, back in October 2018, the comparison between Lambda@Edge and Azure IoT Edge was removed from the Services Comparison page.
The equivalent right now would be to use CloudFlare Workers combined with Azure Functions. Troy Hunt explains how he did just that to scale Have I Been Pwned in Serverless to the Max: Doing Big Things for Small Dollars with Cloudflare Workers and Azure Functions. The site has a lot of traffic and Troy Hunt pays for it out of his own pocket. Workers on the edge means that Have I Been Pwned doesn't have to hit Blob storage in most cases.
Right now this may be a very good choice. Cloudflare Workers are faster than Lambda@Edge at this point, and Cloudflare offers very good caching, proxying and DDoS protection services. You'll have to consider startup time too: JavaScript functions can start faster than Java or .NET Core functions, which means they handle cold starts and request bursts better.
All of this will certainly change in the future: functions on the edge are a lucrative market. Lambda@Edge will definitely get faster, and Azure may add its own service or cooperate with Cloudflare.
Two questions:
1. Why not use a CDN on Azure to serve your static files? Blob storage is not for static content but usually for user-related binaries.
2. Why not use Azure App Service to host your .NET Core API (if it is an API)?
You can find documentation on how to use Functions with .NET Core here.
I am considering three ways to build a Service Bus topic listener:
Azure functions: https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus
Service fabric: https://iamrufio.com/2017/04/21/azure-service-bus-listener-with-azure-service-fabric/
Web job: https://code.msdn.microsoft.com/Processing-Service-Bus-84db27b4
I'm not sure which way to go. I'm leaning towards Azure Functions since they have direct, out-of-the-box Service Bus integration. However, since they're fairly new, I'm not sure whether they're a safe option.
Service Fabric, from what I've read, offers the most resiliency and support.
And a WebJob would be the safest pick since everything is easily configurable, but I'm afraid I'd be reinventing the wheel, as no out-of-the-box support is provided.
What direction would be best?
It's a very open-ended question. You should look at the requirements you have and at other constraints, such as budget. For example, running a production-grade Service Fabric cluster requires at least 5 nodes; versus running a WebJob, which requires a hosting plan with some scale-out (for HA); versus running Azure Functions on a Consumption plan, where you pay per execution only after the free grant of 1 million requests and 400,000 GB-s of resource consumption per month is used up (for a sense of scale, a million executions at 128 MB for 100 ms each amount to only 12,500 GB-s, well inside that grant).
I would suggest starting simple, with Azure Functions. Create your prototype and see if that's what you need, and whether you run into issues or not. With Functions, utilization of Azure Service Bus features can be somewhat limited. For example, you can't dead-letter a message: you either return successfully to complete it or throw an exception to retry. You can't defer a message either; instead you'd need to send another message. Nor can you use the transactional send-via feature of Azure Service Bus. A minimal trigger is sketched below.
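A sketch of such a trigger using the in-process C# model (topic, subscription and connection setting names are hypothetical):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TopicListener
{
    // Returning normally completes the message; throwing abandons it
    // for retry. The binding exposes no handle to dead-letter or defer.
    [FunctionName("TopicListener")]
    public static void Run(
        [ServiceBusTrigger("mytopic", "mysubscription", Connection = "ServiceBusConnection")]
        string message,
        ILogger log)
    {
        log.LogInformation($"Received: {message}");
    }
}
```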
If you find yourself requiring those features, a WebJob would be my next candidate. You will have to look at how you'd utilize it: most likely you'll need to create your own receiving pump (see the sketch below) and handle things Functions offers for free, but you'll have the flexibility required to create multiple connections, configure clients the way you need, and so on.
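A minimal sketch of such a pump, assuming the Azure.Messaging.ServiceBus SDK (a newer SDK than was current when this question was asked; the connection string and entity names are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

class Program
{
    static async Task Main()
    {
        var client = new ServiceBusClient("<connection-string>");
        var processor = client.CreateProcessor("mytopic", "mysubscription",
            new ServiceBusProcessorOptions { AutoCompleteMessages = false });

        processor.ProcessMessageAsync += async args =>
        {
            try
            {
                // ... handle args.Message.Body here ...
                await args.CompleteMessageAsync(args.Message);
            }
            catch (Exception)
            {
                // Full control over the message lifecycle: dead-letter
                // (or defer) explicitly, which the Functions binding
                // does not let you do.
                await args.DeadLetterMessageAsync(args.Message, "ProcessingFailed");
            }
        };
        processor.ProcessErrorAsync += args =>
        {
            Console.Error.WriteLine(args.Exception);
            return Task.CompletedTask;
        };

        await processor.StartProcessingAsync();
        Console.ReadLine();
        await processor.StopProcessingAsync();
    }
}
```

The point is the explicit Complete/DeadLetter calls: the pump owns the message lifecycle, which is exactly the flexibility the Functions binding hides.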
And only after that, if you see that aside from Service Bus you have requirements like data partitioning, HA, DR, or deploying and scaling out multiple services, would I get more serious about Service Fabric.
Each of these 3 technologies has its place and use cases.
We have many REST services within our infrastructure, built using different technologies (Java, Go, Ruby, NodeJS), but all of them have certain common requirements like authentication, authorization, rate limiting, analytics, etc., so we are thinking of putting an API gateway in front of these APIs so that all communication happens through it.
I came to know about some open source products on the market like Strongloop/Loopback, WSO2, TYK, APIAXLE and 3scale, but most of these don't look time-tested and ready for production usage. A few things that come to my mind now:
What is user feedback like for these solutions?
A lot of people must need this kind of feature, so how are they doing it? Am I looking in the right direction?
Is there a better way to solve my problem without using an API gateway?
If I speak about WSO2 API Manager:
As far as I know, a lot of people use it in production and give good feedback about it.
Yes, you can use API Manager for rate limiting: it has a feature called throttling tiers, which serves exactly that purpose. For other features like authentication and authorization, you have to use API Manager together with WSO2 Identity Server; for analytics, you have to use API Manager with WSO2 Business Activity Monitor. By integrating all these products, you can achieve the features you have mentioned.
I can answer for 3scale since I work there.
3scale is a complete API management platform that implements authorization, rate limiting and analytics for your API. We offer different integration options, the most popular of which is our API gateway, which can be hosted by us or deployed on-premises.
This is an Nginx-based gateway that is deployed in front of your API servers and authorizes incoming calls by reaching out to the 3scale API. The gateway extracts the API key of the incoming call and the endpoint being called, and checks whether this particular request should be authorized (i.e. valid key, usage within limits, valid endpoint, etc.).
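Purely as an illustration of that flow (this is not 3scale's gateway, which is Nginx-based, and IsAuthorizedAsync is a hypothetical stand-in for the call to the management API), the same idea expressed as ASP.NET Core middleware:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using System.Threading.Tasks;

var app = WebApplication.CreateBuilder(args).Build();

app.Use(async (context, next) =>
{
    // Extract the API key and the endpoint being called...
    var apiKey = context.Request.Headers["X-Api-Key"].ToString();
    var endpoint = context.Request.Path.ToString();

    // ...and ask the (hypothetical) authorizer whether the call is
    // allowed: valid key, valid endpoint, usage within rate limits.
    if (string.IsNullOrEmpty(apiKey) || !await IsAuthorizedAsync(apiKey, endpoint))
    {
        context.Response.StatusCode = StatusCodes.Status403Forbidden;
        return;
    }
    await next();
});

app.Run();

static Task<bool> IsAuthorizedAsync(string key, string endpoint)
    => Task.FromResult(key == "demo-key"); // placeholder check only
```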
One key part of our API gateway is that the authorization is performed asynchronously so that it has no impact on the latency perceived by the API user.
Regarding your particular questions:
We have 600 customers using 3scale in production, including APIs with very large traffic volumes, some of which you can see and read about here.
I'd say the main choice is between using an API management platform or implementing these features yourself. The advantage of using something like 3scale is that we specialize in exactly this problem, and we provide other very useful features besides the basic authorization and rate limiting: a developer portal hosted by us where your API users can register and manage their keys, a billing system that you can use to offer paid plans for your API, support for advanced auth patterns like OAuth2, and others that you can read about on our website.
You could also integrate 3scale into your API with one of our software libraries. However, since you have multiple APIs written in different languages, I'd recommend the API gateway, since you will only have a single integration point (and therefore easier maintenance).
As always, the best thing is to test it yourself. We have a free plan with no time limits, so you can start there.