The ASP.NET MVC application runs on EC2 and interacts with RDS (SQL Server). The application sends bulk GET requests (API calls) to RDS via NHibernate to fetch items. Application performance is very slow because it sometimes makes around 500 separate GET API calls to fetch 500 items from the DB (note: fetching items from the DB has its own stored procedure/logic).
I was reading these to understand scaling RDS: https://aws.amazon.com/blogs/database/scaling-your-amazon-rds-instance-vertically-and-horizontally/ and https://aws.amazon.com/premiumsupport/knowledge-center/requests-rds-read-replicas/
However, they didn't give me much of a clue that supports my business scenario.
My questions are (considering the above scenario):
Is there any way to distribute my GET requests to RDS (SQL Server) so that it can return the 500 items from SQL Server quickly?
Is it possible to achieve this without any code or existing architecture changes (on both the .NET and SQL ends)?
What are the different ways I should try out to make this perform better?
What are the pricing details for read replicas?
Note: the application does both reads and writes, and I'm most concerned about these particular GET API calls.
Thanks.
Is there any way to distribute my GET requests to RDS (SQL Server) so that it can return the 500 items from SQL Server quickly?
You will need a router in your application that routes these requests to the read replicas (there can be many).
You can provision a read replica with a different instance type that has enhanced capacity for that use case.
You can also try an in-memory cache; it can reduce response time and offload read work from the database.
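On the cache suggestion: even a simple in-process cache in front of the existing fetch logic can remove most of those 500 round trips. A minimal sketch (the cache key and the 5-minute TTL are placeholder choices):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public class CachedReader
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Wraps the existing NHibernate/stored-procedure call with a short-lived
    // in-memory cache so repeated bulk GETs skip the round trip to RDS.
    public IList<T> GetCached<T>(string cacheKey, Func<IList<T>> loadFromDb)
    {
        if (Cache.Get(cacheKey) is IList<T> cached)
            return cached;

        var items = loadFromDb(); // the existing DB call
        Cache.Set(cacheKey, items, DateTimeOffset.UtcNow.AddMinutes(5));
        return items;
    }
}
```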
Is it possible to achieve this without any code or existing architecture changes (on both the .NET and SQL ends)?
Based on the documentation, "applications can connect to a read replica just as they would to any DB instance," which means your application requires additional modification to route reads to the replica.
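As a sketch of what that modification could look like with NHibernate, you could build a second session factory pointed at the replica endpoint and open sessions on it for the GET paths (the endpoints and connection strings below are placeholders):

```csharp
using NHibernate;
using NHibernate.Cfg;

public static class SessionFactories
{
    // One factory per endpoint; writes stay on the primary.
    public static readonly ISessionFactory Writer =
        Build("Server=primary.xxxxx.rds.amazonaws.com;Database=Items;...");
    public static readonly ISessionFactory Reader =
        Build("Server=replica.xxxxx.rds.amazonaws.com;Database=Items;...");

    private static ISessionFactory Build(string connectionString)
    {
        var cfg = new Configuration().Configure(); // reads hibernate.cfg.xml
        cfg.SetProperty(NHibernate.Cfg.Environment.ConnectionString, connectionString);
        return cfg.BuildSessionFactory();
    }
}

// GET code paths then open sessions against the replica:
// using (var session = SessionFactories.Reader.OpenSession()) { ... }
```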
What are the different ways I should try out to make this perform better?
An in-memory cache and an instance type with enhanced read capacity (the same suggestions as above).
What are the pricing details for read replicas?
It depends on the instance type you provision.
I am redesigning a project I built a year ago, when I was just starting to learn how to code. I used the MEAN stack back then and want to convert it to a PERN stack now. My AWS knowledge has also grown a bit and I'd like to expand on these new skills.
The application receives real-time data from an API, which I clean up and write to a database, as well as broadcast to connected clients.
To better conceptualize this question I will refer to the following items:
api-m1: receives the incoming data and passes it through my schema; I then send it to my socket-server.
socket-server: handles the WSS connection to the application's front-end clients. It also writes the data it gets from the scraper and api-m1 to a Postgres database. I would like to turn this into clusters eventually, since I am using Node.js, and will incorporate Redis. Then I will run it behind an ALB using sticky sessions etc. for multiple EC2 instances.
RDS: the Postgres table which socket-server writes incoming scraper and api-m1 data to. RDS is used to fetch the most recent data stored, along with user profile config data. NOTE: the main RDS data table will have at most 120-150 UID records with 6-7 columns.
To help visualize this, see the image below.
From a database perspective, what would be the quickest way to write my data to RDS, assuming that during peak times we have 20-40 records/s from api-m1 plus another 20-40 records/s from the scraper? Each day I tear down the database using a Lambda function and start again (the data is only temporary and does not need to be kept for any prolonged period).
1. Should I INSERT each record with a SERIAL id, then from the frontend fetch the most recent rows based on the UID?
2.a. Should I UPDATE each UID, so I'd have a fixed N rows of data which I just search and update? (I can see this bottlenecking with my Postgres client.)
2.b. Still use UPDATE, but do BATCHED updates? (What issues will I run into if I make multiple clusters, i.e. will I run into concurrency problems where table record XYZ has an older value overwrite a more recent value because I'm using BATCH UPDATE with Node clusters?)
My concern is that UPDATEs are slower than INSERTs, and I want to make this as fast as possible. This section of the application isn't CPU heavy, and the real-time data isn't that intensive.
To make my comments an answer:
You don't seem to need SQL semantics for anything here, so I'd just toss RDS and use e.g. Redis (or DynamoDB, I guess) for that data store.
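For illustration, the pattern that suggestion implies looks like this (a sketch in C# with StackExchange.Redis; the project's Node stack would use a Redis client the same way, and the endpoint and key names are made up):

```csharp
using System;
using StackExchange.Redis;

class LatestValueStore
{
    static void Main()
    {
        var redis = ConnectionMultiplexer.Connect("localhost:6379"); // placeholder
        var db = redis.GetDatabase();

        // One hash per UID: each incoming record overwrites the previous one,
        // so the store holds at most ~150 keys regardless of write rate.
        db.HashSet("uid:XYZ", new[]
        {
            new HashEntry("value", 42.5),
            new HashEntry("updatedAt", DateTimeOffset.UtcNow.ToUnixTimeMilliseconds())
        });

        // Readers just fetch the latest snapshot; there's no INSERT-vs-UPDATE
        // question, and nothing to tear down daily beyond expiring the keys.
        foreach (var field in db.HashGetAll("uid:XYZ"))
            Console.WriteLine($"{field.Name} = {field.Value}");
    }
}
```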
I am using the Azure Cosmos DB SDK v3. As you know, the SDK supports CreateContainerIfNotExistsAsync, which creates a container if no existing container matches the provided container id. This is convenient.
But it pings Cosmos DB to find out whether the container exists, whereas GetContainer doesn't, since GetContainer assumes the container exists. So CreateContainerIfNotExistsAsync needs one extra round trip to Cosmos DB for most operations, if my understanding is correct.
So my question is: would it be better to avoid CreateContainerIfNotExistsAsync as much as possible, from an API perspective? The API could then have better latency and save bandwidth.
The difference is explained in the IntelliSense. GetContainer just returns a proxy object, one that simply gives you the ability to execute operations within that container; it performs no network requests. If, for example, you try to read an item (ReadItemAsync) through that proxy and the container does not exist (which also makes the item non-existent), you will get a 404 response.
CreateContainerIfNotExistsAsync is also not recommended for hot-path operations, as it involves a metadata or management-plane operation:
Retrieve the names of your databases and containers from configuration or cache them on start. Calls like ReadDatabaseAsync or ReadDocumentCollectionAsync and CreateDatabaseQuery or CreateDocumentCollectionQuery will result in metadata calls to the service, which consume from the system-reserved RU limit. CreateIfNotExist should also only be used once for setting up the database. Overall, these operations should be performed infrequently.
See https://learn.microsoft.com/azure/cosmos-db/sql/best-practice-dotnet for more details
Bottom line: unless you expect the container to be deleted through some logical pathway in your application, GetContainer is the right way. It gives you a proxy object that you can use to execute item operations without any network requests.
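A sketch of that split, with placeholder names throughout (create once at startup, GetContainer everywhere else):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Stand-in for your own item type.
public record Order(string id, string customerId);

public static class CosmosSetup
{
    public static async Task Main()
    {
        var client = new CosmosClient("<connection-string>"); // placeholder

        // Startup, once: management-plane calls live here and nowhere else.
        Database database = await client.CreateDatabaseIfNotExistsAsync("mydb");
        await database.CreateContainerIfNotExistsAsync("orders", "/customerId");

        // Hot path: GetContainer is just a proxy; no network request happens
        // until an item operation actually executes.
        Container orders = client.GetContainer("mydb", "orders");
        ItemResponse<Order> response = await orders.ReadItemAsync<Order>(
            "order-1", new PartitionKey("customer-1"));
        Console.WriteLine(response.Resource.id);
    }
}
```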
I'm talking to Cosmos DB via the (SQL) REST API, so existing questions that refer to various SDKs are of limited use.
When I run a simple query on a partitioned container, like
select value count(1) from foo
I run into an HTTP 400 error:
The provided cross partition query can not be directly served by the gateway. This is a first chance (internal) exception that all newer clients will know how to handle gracefully. This exception is traced, but unless you see it bubble up as an exception (which only
happens on older SDK clients), then you can safely ignore this message.
How can I get rid of this error? Is it a matter of running separate queries by partition key? If so, would I have to keep track of what the existing key values are?
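For reference, a minimal sketch of issuing such a query over the REST API (assuming master-key auth; the account, key, and names are placeholders, and the cross-partition header is the documented opt-in for queries that span partitions):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;

class CosmosRestQuery
{
    static async Task Main()
    {
        var account = "myaccount";        // placeholder
        var key = "<base64-master-key>";  // placeholder
        var resourceLink = "dbs/mydb/colls/foo";

        var date = DateTime.UtcNow.ToString("r").ToLowerInvariant();
        var auth = BuildAuthToken("post", "docs", resourceLink, date, key);

        var request = new HttpRequestMessage(HttpMethod.Post,
            $"https://{account}.documents.azure.com/{resourceLink}/docs");
        request.Headers.TryAddWithoutValidation("authorization", auth);
        request.Headers.Add("x-ms-date", date);
        request.Headers.Add("x-ms-version", "2018-12-31");
        request.Headers.Add("x-ms-documentdb-isquery", "True");
        // Documented opt-in for queries that span partitions:
        request.Headers.Add("x-ms-documentdb-query-enablecrosspartition", "True");

        request.Content = new StringContent(
            "{\"query\": \"select value count(1) from foo\", \"parameters\": []}");
        request.Content.Headers.ContentType =
            new MediaTypeHeaderValue("application/query+json");

        using var http = new HttpClient();
        var response = await http.SendAsync(request);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }

    // Signature per the documented master-key access-control scheme.
    static string BuildAuthToken(string verb, string resourceType,
                                 string resourceLink, string date, string key)
    {
        var payload = $"{verb}\n{resourceType}\n{resourceLink}\n{date}\n\n";
        using var hmac = new HMACSHA256(Convert.FromBase64String(key));
        var sig = Convert.ToBase64String(
            hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
        return Uri.EscapeDataString($"type=master&ver=1.0&sig={sig}");
    }
}
```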
I'm working on a Node.js project using a pretty common AWS setup, it seems. My API Gateway receives a call and triggers Lambda A, then Lambda A triggers other Lambdas, say B or C, depending on params passed from API Gateway.
Lambda A needs to access MongoDB, and to avoid the hassle of running MongoDB myself I decided to use mLab. At the moment Lambda A accesses MongoDB using the Node.js driver.
Now, so as not to open a connection on every Lambda A execution, I use a connection pool: in Lambda A's code, outside of the handler, I keep a connection pool that lets me reuse connections when Lambda A is invoked multiple times.
This seems to work fine.
However, I'm not sure how to deal with connections when Lambda A is invoking Lambda B and Lambda B needs to access mLab's MongoDB database.
Is it possible to pass the connection pool somehow, or would Lambda B have to keep its own connection pool?
I was thinking of using mLab's Data API, which exposes most of the operations of the MongoDB driver, so I could use HTTP calls, e.g. GET and POST, to run commands against the database. It seems similar to RESTHeart.
I'm leaning towards option 2, but mLab's Data API documentation clearly states to avoid using the REST API unless you cannot connect using the MongoDB driver directly:
The first method—the one we strongly recommend whenever possible for added performance and functionality—is to connect using one of the available MongoDB drivers. You do not need to use our API if you use the driver. The second method, documented in this article, is to connect via mLab's RESTful Data API. Use this method only if you cannot connect using a MongoDB driver.
Given all this, what would be the best approach? 1 or 2, or is there another option I should consider?
Unfortunately you won't be able to 'share' a Mongo connection across Lambdas, because ultimately there's a 'physical' socket for the connection which is specific to that instance.
I think both of your solutions are good depending on usage.
If you tend to have steady average concurrency on both Lambda A and B across an hour period (a rule of thumb for how long AWS keeps a Lambda instance alive), then having each own its own static connection is a good solution. The chances are that a request will reach an already started and connected Lambda. I would also guess that Node drivers for 'vanilla' MongoDB are more mature than those for the RESTful Data API.
However, if you get spiky or uneven load, you might use the RESTful Data API instead. You'll be centralising the responsibility for managing the number of open connections at a single point, which under these conditions means you're less likely to open unneeded connections, or to use up all your current capacity and have to wait for a new connection to be established.
Ultimately it's a game of probabilistic load balancing: either you 'pool' all your connections in a central place (the Data API) and become less affected by the usage of a single function, at the expense of greater latency on individual operations, or you pool at the function level but are more exposed to cold starts opening connections under uneven concurrency.
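To illustrate option 1, each function keeps its own static client built outside the handler, so warm invocations reuse its pool. The shape is the same in any runtime; here's a sketch in C# with placeholder database and collection names:

```csharp
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using MongoDB.Bson;
using MongoDB.Driver;

public class FunctionB
{
    // Built once per container instance; warm invocations reuse its pool.
    private static readonly MongoClient Client =
        new MongoClient(System.Environment.GetEnvironmentVariable("MONGODB_URI"));

    public async Task<long> Handler(string input, ILambdaContext context)
    {
        var items = Client.GetDatabase("mydb").GetCollection<BsonDocument>("items");
        return await items.CountDocumentsAsync(FilterDefinition<BsonDocument>.Empty);
    }
}
```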
We are using Azure SQL Database (Web edition) for an MVC3 ASP.NET/EF5 application.
Is there a limit to the number of sessions this SQL Database setup supports? I am wondering whether the delays we are seeing are due to some form of queuing or pooling. Currently we have about 5 concurrent users.
Thanks.
The SQL Azure Web edition database should support a high number of concurrent users - we've had applications running that issue thousands of queries per minute against Web databases.
Throttling
SQL Azure does implement database throttling to maintain performance for all users of the platform. If throttling has been applied to the current operation you'll receive error 40501. The link I've provided also shows you how to determine why throttling is being applied. If you receive this error you can treat it as a transient error and wait before retrying.
It doesn't sound like your connections are being throttled, because you mention only 5 concurrent users and talk about delays, whereas the throttling error would occur pretty quickly.
Transient error handling
If you're getting connection timeouts etc., you need to handle them as transient errors. Transient errors are timeouts or dropped connections, as well as error codes 10054, 10053, 40501 (throttling, as described above) and 40197 (usually because an upgrade or failover operation is in progress).
You should ensure you implement retry logic to handle transient errors.
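A minimal sketch of such retry logic (the error numbers are the ones listed above; the back-off timing is an arbitrary choice):

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

public static class Transient
{
    private static readonly int[] TransientErrors = { 10053, 10054, 40197, 40501 };

    // Retries an operation when it fails with one of the transient error
    // codes above, backing off exponentially between attempts.
    public static T Execute<T>(Func<T> operation, int maxAttempts = 5)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (SqlException ex) when (attempt < maxAttempts &&
                (ex.Number == -2 /* timeout */ ||
                 Array.IndexOf(TransientErrors, ex.Number) >= 0))
            {
                // 40501 (throttling) in particular expects the client to wait.
                Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
            }
        }
    }
}
```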
Query performance
If you're executing long running queries you can check which ones are slow by logging into the database management URL:
https://<database-id>.database.windows.net/#$database=<database-name>
Log in and click "Query Performance" - take a look at the longest running queries at the top.