I need to know how many connections can be made with t2.small RDS instances. I searched, but I am getting different answers everywhere. For t2.micro instances we have 26 connections; I just need to know about t2.small too.
Thanks in advance
t2.small can have a maximum of 150 connections.
See this link for more info:
max_connections at AWS RDS MySQL Instance Sizes
Current values for PostgreSQL t3 instances (default.postgres10 parameter group):
db.t3.micro - 56 max_connections
db.t3.small - 112 max_connections
db.t3.medium - 225 max_connections
db.t3.large - 450 max_connections
db.t3.xlarge - 901 max_connections
db.t3.2xlarge - 1802 max_connections
It's similar for default.postgres9 and default.postgres11.
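These values are not arbitrary: they follow from a memory-based formula in the default parameter group. A sketch of that calculation (the divisor 9531392 is the documented default for RDS PostgreSQL; treat the exact amount of overhead AWS subtracts from the nominal RAM as an assumption):

```javascript
// Sketch: how RDS derives max_connections for PostgreSQL.
// The default parameter group uses LEAST({DBInstanceClassMemory/9531392}, 5000),
// where DBInstanceClassMemory is the memory available to the DB engine
// (less than the instance's nominal RAM, since AWS reserves some for the
// OS and management processes).
function maxConnections(dbInstanceClassMemoryBytes) {
  return Math.min(Math.floor(dbInstanceClassMemoryBytes / 9531392), 5000);
}

// The 112 listed for db.t3.small implies roughly 1 GiB of
// DBInstanceClassMemory on a 2 GiB instance:
console.log(maxConnections(112 * 9531392)); // 112
console.log(maxConnections(64 * 1024 ** 3)); // capped at 5000
```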
I have RDS Aurora PostgreSQL cluster with two instances:
cluster
├── instance_1 [writer] [no multiAZ]
└── instance_2 [reader] [no multiAZ]
When I change the instance type for instance_1, the failover operation works correctly, but I have about 1~2 minutes of downtime. I checked the downtime by running
watch -n 3 "psql -h db.cluster.url -p 5432 -d postgres -U postgres -c 'select ID from TABLE limit 1'"
After that, instance_1 becomes the reader.
Is there any way to change instance_1 to a reader manually, change its type, and revert it to writer without long downtime? (No downtime would be best, but 5~10 seconds is also acceptable.)
I know that I could use Multi-AZ instances, but that would cost twice as much.
Using RDS Proxy can greatly reduce downtime during failover:
With RDS Proxy, failover times for Aurora and RDS databases are reduced by up to 66%
A large part of the seemingly long failover is taken by
the client library recovering from a connection loss and
the DNS propagation of the reader/writer switch
RDS Proxy handles the reader/writer switch so that no DNS changes have to be propagated to the client; it always uses the same endpoint.
There is a good article, RDS Proxy, which shows the average failover recovery time dropping from 24 to 3 seconds when using RDS Proxy.
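Even with RDS Proxy keeping the endpoint stable, the client still has to re-issue queries that were in flight when the connection dropped. A minimal sketch of such a retry wrapper (`withRetry` is a hypothetical helper, not part of node-postgres; fixed delay instead of real backoff):

```javascript
// Sketch of the client-side half of failover recovery: retry a query
// after a connection loss instead of surfacing the error immediately.
async function withRetry(fn, attempts = 5, delayMs = 500) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      // Out of attempts: give up and propagate the last error.
      if (i === attempts - 1) throw err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage with a real pool would look like:
//   const result = await withRetry(() => pool.query('SELECT 1'));
```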
I am using Postgres 9.5 on AWS RDS as the database and Sequelize as the ORM with Node.js. max_connections on the DB is 1660, while the max connection pool size in Sequelize is 600. Even at higher loads (~600 queries per second), evidenced by Resource Request Timeout errors from Sequelize, the AWS RDS management console shows the count of DB connections to be 10.
I want to ask if DB connections in the RDS console mean the same thing as the connection for which limits are configured in max_connections in RDS and max connection pool size in Sequelize.
If they are the same, then why doesn't the RDS console show more connections being used during the above mentioned times of higher load?
I want to ask if DB connections in the RDS console mean the same thing as the connection for which limits are configured in max_connections in RDS and max connection pool size in Sequelize.
Yes, DB connections means the same type of connection on which max_connections is setting a limit. However, the RDS console value is laggy. If the spike in connections is only transient, they might not show up at all, and if they show up it will be after the fact. Even if I were using RDS for my production data, I'd still set up a local database for testing things like this, as it would be easier to monitor in real time and in greater depth than provided by RDS. I don't know enough about Sequelize to say if it is the same thing as what "max connection pool size" refers to.
If they are the same, then why doesn't the RDS console show more connections being used during the above mentioned times of higher load?
Either they are there but you can't see them in the laggy console, or Sequelize isn't actually spawning them. Are there entries in the database log files?
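One way to see the real connection count without relying on the laggy console is to ask the server itself via `pg_stat_activity`. A sketch (works with any node-postgres-style pool passed in; the helper name is made up):

```javascript
// Count live server-side connections to the current database.
// Accepts any pool-like object exposing query() (e.g. a pg Pool).
const connectionCountSQL =
  "SELECT count(*) AS n FROM pg_stat_activity WHERE datname = current_database()";

async function currentConnections(pool) {
  const res = await pool.query(connectionCountSQL);
  return Number(res.rows[0].n);
}

// Usage with node-postgres (requires a reachable database):
//   const { Pool } = require('pg');
//   const pool = new Pool({ /* connection settings */ });
//   console.log(await currentConnections(pool));
```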
Anyway, why do you want this? Your database doesn't have 600 CPUs. And probably doesn't have 600 independent IO channels, either. All you're going to do is goad your concurrent connections into fighting against each other for resources, and make your overall throughput lower due to contention on spinlocks or LWLocks.
I've been trying to work my way through this AWS Elastic Beanstalk tutorial. Following it to the letter, I'm getting a consistent error message at step #3.
Creating Auto Scaling group named: [xxx] failed. Reason: You have requested more instances (1) than your current instance limit of 0 allows for the specified instance type. Please visit http://aws.amazon.com/contact-us/ec2-request to request an adjustment to this limit. Launching EC2 instance failed.
The error message seems clear enough. I need to request an increase of my EC2 quota. However, I've done that, my quota is now at 10 EC2 instances and I've also been approved for 40 Auto Scaling Groups...
Any idea on what I'm missing? Full output attached.
I guess you still failed because your limit increase request was for a different instance type.
First, go to your AWS console > EC2 > Limits page; there you will see something like this:
Running On-Demand EC2 instances 10 Request limit increase
Running On-Demand c1.medium instances 0 Request limit increase
Running On-Demand c1.xlarge instances 0 Request limit increase
Running On-Demand m3.large instances 5 Request limit increase
You can see my overall limit is 10 instances, but the limits for the c1.medium and c1.xlarge instance types are 0; only the m3.large limit is 5. So you must ask AWS to increase the limit for exactly the instance type you want to use.
In brief, I am having trouble supporting more than 5000 read requests per minute from a data API leveraging PostgreSQL, Node.js, and node-postgres. The bottleneck appears to be between the API and the DB. Here are the implementation details.
I'm using an AWS PostgreSQL RDS database instance (m4.4xlarge - 64 GB mem, 16 vCPUs, 350 GB SSD, no provisioned IOPS) for a Node.js-powered data API. By default the RDS's max_connections=5000. The Node API is load-balanced across two clusters with 4 processes each (2 EC2s with 4 vCPUs running the API with PM2 in cluster mode). I use node-postgres to bind the API to the PostgreSQL RDS, and am attempting to use its connection pooling feature. Below is a sample of my connection pool code:
const { Pool } = require('pg');

var pool = new Pool({
    user: settings.database.username,
    password: settings.database.password,
    host: settings.database.readServer,
    database: settings.database.database,
    max: 25,
    idleTimeoutMillis: 1000
});

/* Example of pool usage */
pool.query('SELECT my_column FROM my_table', function(err, result){
    /* Callback code here */
});
Using this implementation and testing with a load tester, I can support about 5000 requests over the course of one minute, with an average response time of about 190ms (which is what I expect). As soon as I fire off more than 5000 requests per minute, my response time increases to over 1200ms in the best of cases and in the worst of cases the API begins to frequently timeout. Monitoring indicates that for the EC2s running the Node.js API, CPU utilization remains below 10%. Thus my focus is on the DB and the API's binding to the DB.
I have attempted to increase (and decrease, for that matter) the node-postgres "max" connections setting, but there was no change in the API response/timeout behavior. I've also tried provisioned IOPS on the RDS, but no improvement. Also, interestingly, I scaled the RDS up to m4.10xlarge (160 GB mem, 40 vCPUs), and while the RDS CPU utilization dropped greatly, the overall performance of the API worsened considerably (it couldn't even support the 5000 requests per minute that I was able to with the smaller RDS).
I'm in unfamiliar territory in many respects and am unsure how best to determine which of these moving parts is bottlenecking API performance above 5000 requests per minute. As noted, I have attempted a variety of adjustments based on review of the PostgreSQL configuration documentation and the node-postgres documentation, but to no avail.
If anyone has advice on how to diagnose or optimize I would greatly appreciate it.
UPDATE
After scaling up to m4.10xlarge, I performed a series of load tests, varying the number of requests/min and the max number of connections in each pool. Here are some screen captures of monitoring metrics:
In order to support more than 5k requests while maintaining the same response time, you'll need better hardware...
The simple math states that:
5000 requests × 190 ms avg = 950k ms of DB time per minute, divided across 16 cores ≈ 60k ms per core
which basically means your system was fully loaded.
(I'm guessing you had some spare CPU as some time was lost on networking)
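The arithmetic above, spelled out (60,000 ms is all the time one core has in a minute):

```javascript
// Total DB busy time demanded per minute, spread over the 16 cores.
const requestsPerMinute = 5000;
const avgMs = 190;
const cores = 16;

const totalMs = requestsPerMinute * avgMs;  // 950000 ms of work per minute
const perCoreMs = totalMs / cores;          // 59375 ms per core per minute
const utilization = perCoreMs / 60000;      // fraction of each core-minute used
console.log(totalMs, perCoreMs, utilization.toFixed(2));
```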
Now, the really interesting part in your question comes from the scale up attempt: m4.10xlarge (160 GB mem, 40 vCPUs).
The drop in CPU utilization indicates that the scale up freed DB time resources - So you need to push more requests!
2 suggestions:
Try increasing the connection pool to max: 70 and look at the network traffic (depending on the amount of data, you might be hogging the network).
Also, are your requests to the DB async from the application side? Make sure your app can actually push more requests.
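That "async from the application side" point can be sketched as follows: dispatch queries concurrently so the pool stays busy, instead of awaiting each one before issuing the next (`runQuery` is a stand-in for pool.query; the helper name is made up):

```javascript
// Fire all queries concurrently; results come back in input order.
// Sequentially awaiting each query would leave most pool connections idle.
async function dispatchAll(runQuery, statements) {
  return Promise.all(statements.map((sql) => runQuery(sql)));
}

// Usage with node-postgres would look like:
//   const results = await dispatchAll((sql) => pool.query(sql), queries);
```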
The best way is to use a separate Pool for each API call, based on the call's priority:
const { Pool } = require('pg');

const highPriority = new Pool({max: 20}); // for high-priority API calls
const lowPriority = new Pool({max: 5}); // for low-priority API calls
Then you just use the right pool for each of the API calls, for optimum service/connection availability.
Since you are interested in read performance, you can set up replication between two (or more) PostgreSQL instances and then use Pgpool-II to load balance between the instances.
Scaling horizontally means you won't start hitting the max instance sizes at AWS if you decide next week you need to go to 10,000 concurrent reads.
You also start to get some HA in your architecture.
--
Many times people will use PgBouncer as a connection pooler even if they already have one built into their application code. PgBouncer works really well and is typically easier to configure and manage than pgpool, but it doesn't do load balancing. I'm not sure it would help you very much in this scenario, though.
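For reference, a minimal PgBouncer configuration sketch (hostnames, credentials path, and pool sizes are placeholders; transaction pooling is the mode most setups use):

```ini
; pgbouncer.ini — minimal sketch, values are placeholders
[databases]
mydb = host=my-rds-endpoint.rds.amazonaws.com port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
```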
We are using Azure Redis Cache and we need to monitor its state. One thing we need is information about maximum memory. Currently we enter the information manually, but we want to avoid that in future. The standard command used for this purpose, config get maxmemory, is disabled in Azure. For completeness, we are using StackExchange.Redis as a client.
Any idea how to get the information? Also, why is the get version of the command disabled?
There is currently no way to get the maxmemory setting. The "config" command is blocked for a few reasons. One is that setting certain config settings could impact the stability of our service. Another is that any changes to config would be lost if the server instance was restarted. We are looking into ways to enable "config get" but keep "config set" blocked.
Here are the current values for maxmemory for each size cache offering:
Name Size maxmemory
C0 250 MB 285,000,000
C1 1 GB 1,100,000,000
C2 2.5 GB 2,600,000,000
C3 6 GB 6,100,000,000
C4 13 GB 13,100,000,000
C5 26 GB 26,200,000,000
C6 53 GB 53,300,000,000
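Until config get is allowed, the table above can only be hard-coded client-side. A sketch of such a lookup (values copied from the table; they may change, so treat this as a snapshot to keep in sync manually):

```javascript
// maxmemory per Azure Redis Cache tier, copied from the table above.
const MAXMEMORY_BYTES = {
  C0: 285000000,
  C1: 1100000000,
  C2: 2600000000,
  C3: 6100000000,
  C4: 13100000000,
  C5: 26200000000,
  C6: 53300000000,
};

function maxmemoryFor(tier) {
  const v = MAXMEMORY_BYTES[tier];
  if (v === undefined) throw new Error(`unknown cache tier: ${tier}`);
  return v;
}
```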