difference between elastic and scalable expenditure model - azure

I am going through exam questions, wherein I find some are quite tricky. One of them raised a question in my mind: what is the difference between the elastic and scalable expenditure models?
The question arose from the two practice questions below.
Your company is planning to migrate all their virtual machines to an Azure pay-as-you-go subscription. The virtual machines are currently hosted on the Hyper-V hosts in a data center.
You are required to make sure that the intended Azure solution uses the correct expenditure model.
Solution: You should recommend the use of the elastic expenditure model.
Does the solution meet the goal?
A. Yes  B. No — Answer: B (No)
B is the correct answer. The correct expenditure model is "operational".
Your company is planning to migrate all their virtual machines to an Azure pay-as-you-go subscription. The virtual machines are currently hosted on the Hyper-V hosts in a data center.
You are required to make sure that the intended Azure solution uses the correct expenditure model.
Solution: You should recommend the use of the scalable expenditure model.
Does the solution meet the goal?
A. Yes  B. No — Answer: B (No)
So how do you differentiate scalable, elastic, and operational?

Scalable: environments only care about increasing capacity to accommodate an increasing workload.
Elastic: environments care about being able to meet current demands without under/over provisioning, in an autonomic fashion.
Scalable systems don't necessarily mean they will scale back down - it's only about being able to reach peak loads.
Elastic workloads, however, will recognize dynamic demands and adapt to them, even if that means reducing capacity.
Operating expenditures are ongoing costs of doing business. Consuming cloud services in a pay-as-you-go model could qualify as an operating expenditure.
https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/strategy/business-outcomes/fiscal-outcomes
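The distinction above can be sketched as two capacity policies. This is a minimal illustration (not any Azure API): a scalable policy only grows to meet peaks, while an elastic policy also shrinks when demand falls, which is why elasticity maps naturally to pay-as-you-go spending.

```python
# Sketch: contrast scalable vs. elastic capacity decisions over time.

def scalable_policy(capacity: int, demand: int) -> int:
    """Grow to meet demand, but never scale back down."""
    return max(capacity, demand)

def elastic_policy(capacity: int, demand: int, headroom: int = 1) -> int:
    """Track demand in both directions, keeping a little headroom."""
    return demand + headroom

demand_over_time = [2, 5, 9, 4, 1]  # e.g. VM instances needed each hour

cap_scalable, cap_elastic = 0, 0
for demand in demand_over_time:
    cap_scalable = scalable_policy(cap_scalable, demand)
    cap_elastic = elastic_policy(cap_elastic, demand)

print(cap_scalable)  # 9 -- stays at the peak, paying for idle capacity
print(cap_elastic)   # 2 -- follows demand back down
```

The elastic run ends provisioned just above current demand; the scalable run stays at its historical peak, which is exactly the cost difference pay-as-you-go billing exposes.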

This appears to be a "decoy" question that may be phrased as a 4-way multiple choice question in the real exam. The correct answer is 'operational expenditure' since you pay for what you use. The other possibility is 'capital expenditure' (which is wrong since you are not investing in assets). 'scalable' and 'elastic' expenditure are nonsense terms put in to distract you if you are just guessing.

Scalability is simply an increase in size or number, and elasticity is a form of scaling that also adapts back down automatically. There are two types of scaling:
VERTICAL SCALING (scale up): increase the memory, CPU, storage, etc. of an existing VM as the workload grows.
HORIZONTAL SCALING (scale out): add more virtual machines of the same size and configuration.
Note: in vertical scaling the number of VMs stays the same and each VM's resources increase, while in horizontal scaling the number of VMs increases. Either way, scaling describes capacity; how you pay for it (capital vs. operational) is the expenditure model.
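As a small sketch of the two scaling directions (in standard cloud usage: vertical resizes a VM, horizontal adds VMs), using a hypothetical `Deployment` type:

```python
# Sketch: vertical scaling resizes each VM; horizontal scaling adds VMs.
from dataclasses import dataclass

@dataclass
class Deployment:
    vm_count: int
    vcpus_per_vm: int

    def scale_up(self, extra_vcpus: int) -> None:
        """Vertical: same number of VMs, each one bigger."""
        self.vcpus_per_vm += extra_vcpus

    def scale_out(self, extra_vms: int) -> None:
        """Horizontal: same VM size, more of them."""
        self.vm_count += extra_vms

    @property
    def total_vcpus(self) -> int:
        return self.vm_count * self.vcpus_per_vm

d = Deployment(vm_count=2, vcpus_per_vm=2)
d.scale_up(2)    # each VM now has 4 vCPUs (still 2 VMs)
d.scale_out(2)   # now 4 VMs (still 4 vCPUs each)
print(d.total_vcpus)  # 16
```

Both calls end up increasing total capacity; they differ only in whether the fleet grows or each machine does.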

Related

Is Amazon EC2 free tier server appropriate for my little web application?

I'm building a little software activation web service in Java, so I need a cloud-based server that will run Apache, Tomcat, and MySQL.
It will get very little usage as I don't expect to sell very much product at first. I'll be very lucky if the server handles one quick activation a day ... if it got 20 in a day that would be an amazing success.
I'm looking at Amazon EC2 pricing here ...
https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&all-free-tier.sort-order=asc
I see that there is a "Free Tier" which provides "750 hours per month of Linux t2.micro or t3.micro instance". And it's free for a year.
STUPID QUESTION #1: 24h/day x 31 days/month is 744 hours ... so, does that mean I'm getting a free linux server running 24/7 for a year or is there a catch that I'm missing?
STUPID QUESTION #2: t2.micro/t3.micro has 1 vCPU and 1GB memory ... is that enough power to run a simple Apache + Tomcat + MySQL web service reliably?
STUPID QUESTION #3: Any reason why I should skip the free tier and invest in a powerful pay $$$ option?
Yes. No catch. It's just not a very strong server.
That really depends on what that service does. Performance wise you need to pay attention to t2 instances being optimized for burst operations. That means they run full speed for a little while and then get throttled. But if you're talking about reliability, it's a whole other story. Just one machine is usually not enough for that. You want multiple machines in multiple data centers. What if one machine goes down? What if the whole data center goes down? It really depends on just how reliable you want it.
That really depends on what you're looking for. If you don't know yet, stick to free until you figure it out. I would even go for something simpler like Heroku at first. At least you won't have to take care of the reliability aspect as much.
You describe your service as: "Accept an encrypted license key, decrypt it, verify it, return and encrypted boolean response".
This sounds like an excellent candidate for a serverless solution:
AWS API Gateway providing an HTTPS endpoint that the application can call
It then triggers an AWS Lambda function that performs the logic and exits
However, you also mention a MySQL database. This could be provided by Amazon RDS. Or, you could go serverless and use DynamoDB (a NoSQL database).
The benefit of a serverless architecture is that it can scale to handle high loads and doesn't cost you anything (except potentially for the database) when not being used.
There is a free tier available for AWS API Gateway, AWS Lambda, Amazon DynamoDB and Amazon RDS.
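The serverless flow described above can be sketched as a Lambda-style handler. This is a minimal illustration of the request/response shape only; `decrypt_key` and `verify_key` are hypothetical stand-ins for the real cryptography and record lookup, not part of any AWS API.

```python
# Sketch of a Lambda-style handler for the activation flow:
# API Gateway passes an event with a JSON body; the handler returns
# a proxy-integration response with an encrypted-boolean-style verdict.
import json

def decrypt_key(blob: str) -> str:
    # Placeholder: real code would decrypt with your private key.
    return blob[::-1]

def verify_key(license_key: str) -> bool:
    # Placeholder: real code would check the key against your records.
    return license_key.startswith("LIC-")

def handler(event: dict, context=None) -> dict:
    """API Gateway proxy-style handler: take a key, return a verdict."""
    body = json.loads(event.get("body", "{}"))
    license_key = decrypt_key(body.get("key", ""))
    return {
        "statusCode": 200,
        "body": json.dumps({"valid": verify_key(license_key)}),
    }

resp = handler({"body": json.dumps({"key": "1000-CIL"})})
print(resp["body"])  # {"valid": true}
```

Because nothing runs between requests, a service like this at one activation per day would sit comfortably inside the Lambda and API Gateway free tiers.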
There might be a limitation on network traffic for EC2 instances; you should look into that before deciding to host a web service on one. There is even a possibility it could charge you for using too much network bandwidth, so scalability might be an issue. I suggest you try Heroku instead, and then switch to other app hosting services if and when you need to scale.
Yes, I have developed a low-to-medium-traffic web application with a MySQL backend. But please be sure about the number of users, as performance and scalability depend on it.
If you are looking at very little usage, EC2 is the best match in the free tier provided by AWS.
Keep to EC2 micro instances to stay within the AWS Free Tier, which covers 750 hours of t2.micro instances per month. The servers are available as Linux as well as Windows.
As for your second question, it depends on your application type. For the light workload you describe, the t2.micro's 1GB of memory should be enough to run Apache and MySQL.
But when it comes to reliability, it's a different story. In most cases, one machine is insufficient; you'd want multiple machines in different data centers, and in that case it is better to move to another service.
As for your third question, it also depends on your application. If it has a high number of users and many concurrent processes, and you need to improve reliability, it is worth moving to a paid subscription.

Choosing the ideal multi-tenancy architecture for an ASP.NET Core application

I am currently working on an application that will be hosted on Azure. As it does not make sense to have an instance of it running for each customer (you'll see why), it's going to be a multi-tenancy solution.
To be honest: I'm only starting to gather experience with web applications, so I apologize if the answer to my question is obvious.
Question: Which multi-tenancy concept will be most beneficial for my application, considering the following assumptions:
Many tenants (ideally hundreds or even more, we'll see...)
consisting of few user accounts per tenant (<5-10 in most cases, up to 200 for a handful of tenants)
dealing with mostly small amounts of data (<100 entries in <20 tables)
changes in data occur a few times a day (approx. <50 changes per user per day)
The application needs to stay responsive (of course)
My thoughts:
Database-per-Tenant: Does not make sense, as each DB won't be utilized much, and is therefore not cost-effective at all
Table-per-Tenant: Could be a good solution; I guess this should scale pretty well?
Tenant-column within the entities: Could be a problem with scaling, right? Could it be better when sharding on the tenant id?
I would really appreciate your help and some "shared experience" in order to choose the not-so-painful path.
A good summary of the different models can be found here:
https://www.linkedin.com/pulse/database-design-multi-tenant-applications-dharmendar-kumar/
Based on my experience on Azure I would recommend CosmosDB with the following options:
partitioned collections: if tenants are evenly distributed and have similar requirements
collection per tenant: if some tenants have scale or special requirements
a mix of the preceding two.
Cosmos DB has a lot of benefits, e.g. sharding, global distribution, performance, freedom of consistency models, as well as good SQL support.
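The "partitioned collection" option can be sketched without any SDK. This in-memory stand-in (a hypothetical `FakeContainer`, not the azure-cosmos API) shows the idea: use the tenant id as the partition key, so each tenant's documents land in their own logical partition and a query scoped to one tenant never touches the others.

```python
# Sketch (in-memory, no Cosmos SDK): one container partitioned by tenant id.
from collections import defaultdict

class FakeContainer:
    def __init__(self):
        self._partitions = defaultdict(list)  # partition key -> documents

    def upsert(self, doc: dict) -> None:
        # In Cosmos, the partition key value routes the write.
        self._partitions[doc["tenantId"]].append(doc)

    def query(self, tenant_id: str):
        # A real Cosmos query with a partition key reads one partition only.
        return list(self._partitions[tenant_id])

c = FakeContainer()
c.upsert({"tenantId": "contoso", "id": "1", "name": "Alice"})
c.upsert({"tenantId": "fabrikam", "id": "1", "name": "Bob"})
print([doc["name"] for doc in c.query("contoso")])  # ['Alice']
```

The "collection per tenant" option trades this shared throughput for per-tenant isolation and independent scaling, which is why a mix of the two often fits best.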

Microservices With CosmosDB on Azure

I've read a bit about microservices and the favored approach appears to be a separate database for each microservice. With regards to Azure's CosmosDB, would that mean a separate Table for each service? What's the best way to architect this?
There are a huge variety of factors to consider here which ultimately means there is no right answer to this question and it will be very specific to the nature of the application you're trying to build. As such, broad statements attempting to offer "general" advice and patterns should be taken with a huge grain of salt. With Cosmos a few of the many high level things to consider when making your decisions are as follows:
Partitioning: Cosmos collections support almost infinite scale based on the selection of an appropriate partition key. So, for example, you could have a single collection and separate your services such that they each write to a distinct partition key. This would provide you with a form of service multi-tenancy which might be perfectly appropriate for your particular application. However, throughput is also scaled at the collection level, so if certain services have much higher read and/or write requirements this may not work for you and could be an indication that that particular service should use its own collection, which can be scaled independently.
Cost: You're billed per collection with a minimum throughput requirement. Depending on the number and nature of your microservices, this could result in much higher costs for little gain.
Isolation: Again, depending on the nature of your application you might have a hard business requirement that data from different services be physically separate from each other which would force you to use separate collections.
The point that I'm trying to make here is that there is absolutely no right answer to this question. You need to weigh the pros/cons very carefully in the context of the solution you are trying to build and select the approach that is right for you.

How to optimize deployment to regions for minimum perceived latency and maximum cost savings?

I will be using Azure Cosmos DB with Azure Functions deployed in the same regions, with a gateway (cloudflare or an Azure option) which will route to the azure function in the closest region, which is deployed along side a Cosmos DB replication.
The benefits in perceived latency should be logarithmic, right?
Like, having 2 regions is ~3x better,
3 regions ~5x better perceived latency, etc.
According to MS, Cosmos DB is available in all regions.
Considering our customers aren't clustered around a specific region and are spread all over the world,
which are the optimal regions to deploy to?
for replication in
1 region
2 regions
3 regions
4 regions
You can use http://www.azurespeed.com/ to see the closest DC from the client and pick the optimal location.
As an extreme/unrealistic case you can imagine each customer/client having a copy of the DB running next to them. This should cause the least latency for the customer. Right?
The answer is that it depends. If you talk about local read/write latency then that would be true. However, the more you replicate your database the more time write operations will take to synchronise across all nodes (and in turn affect what is available when you read). See consistency models here. Although you have customers spread across the globe, it would be better if you start from regions with the most load/requests and then spread out from there.
Deciding this is also when the proverbial "rubber meets the road" as you would soon realise that business might be willing to relax some latency needs around edges given the cost increase to achieve 100% coverage.
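A toy model (my assumption, not real Azure latency data) suggests the improvement is closer to linear-in-1/N than "logarithmic is 3x better": if clients are uniformly spread around a ring and N regions are evenly spaced on it, the average distance to the nearest region falls roughly like 1/N, so each added region buys less than the last.

```python
# Toy model: clients uniform on a ring of circumference 1, N regions
# evenly spaced. Average ring-distance to the nearest region ~ 1/(4N).
def avg_nearest_distance(n_regions: int, n_clients: int = 3600) -> float:
    regions = [i / n_regions for i in range(n_regions)]
    total = 0.0
    for c in range(n_clients):
        pos = c / n_clients
        # ring distance to the nearest region
        total += min(min(abs(pos - r), 1 - abs(pos - r)) for r in regions)
    return total / n_clients

d1, d2, d4 = (avg_nearest_distance(n) for n in (1, 2, 4))
print(round(d1 / d2, 2), round(d1 / d4, 2))  # 2.0 4.0
```

So 2 well-placed regions roughly halve average network distance versus 1, and 4 regions roughly quarter it: diminishing absolute returns, which is why starting from the highest-load regions and spreading out is usually the right order.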

Mixing DS with D1_V2 VM's in Windows Azure

I'm designing an Azure solution for a web application that requires 2 VMs: a web tier and a database tier.
The web tier contains the web app, which does a relatively large amount of calculation work. The database tier is a normal SQL Server instance (~100 databases, 500GB total).
Azure offers the DS series and the D1_V2 series; the DS series supports SSD drives, while the D1_V2 doesn't but has roughly 35% faster CPUs than the DS series.
Is my reasoning solid in thinking that I will be better off combining the two series: using the DS for the database tier (SSD will provide higher IOPS for the database), while the D1_V2 will offer faster processing for my calculation-heavy web app?
Any thoughts? Thanks!
Yes, you can combine those, because these are two completely different/standalone VMs.
It also makes sense to use the V2 for your web server (due to its calculations) and a DS-series VM for your database server.
You should also use the local SSD of your SQL Server machine to boost performance, e.g. by moving tempdb to it and/or setting up Buffer Pool Extensions targeting the local SSD.
Your reasoning is solid and that may very well be the best way to deploy your solution.
However, your methodology is perhaps rather flawed. What you say makes perfect sense, but you will never know whether it has any basis in reality until you test your application and see where the bottlenecks are.
Considering how easy it is to scale a VM up or down, it doesn't take much to deploy and monitor and adjust accordingly.
In principle, what you suggest is fine, but reality crushes many a good principle.

Resources