AWS-RDS: Invalid max storage size for engine name postgres and storage type gp2

I'm getting this error while modifying a DB instance: "Invalid max storage size for engine name postgres and storage type gp2: 198 for schedule autoscaling."
This never happened before; it only started yesterday. Below is the current storage configuration of the DB instance. Does anyone know where this error comes from?

Database storage can only be increased by 5 GB or 10%, whichever is greater. That would explain the 198 in the error if the instance currently has 180 GB allocated: the maximum storage threshold for autoscaling would have to be at least 180 GB + 10% = 198 GB.

Related

Kubernetes Persistent Volume does not show the real capacity

I have a persistent volume in my cluster (an Azure disk) with a capacity of 8Gi.
I resized it to 9Gi, then changed my PV yaml to 9Gi as well (since it is not updated automatically), and everything worked fine.
Then, as a test, I changed the yaml of my PV to 1000Gi (expecting to see an error) and received an error from the PVC that claims this PV: "NodeExpand failed to expand the volume : rpc error: code = Internal desc = resize requested for 10, but after resizing volume size was 9"
However, if I run kubectl get pv, it still looks as if this PV's capacity is 1000Gi (and of course in Azure it is still 9Gi, since I did not resize it).
Any advice?
As a general rule: you should not have to change anything on your PersistentVolumes.
When you request more space by editing a PersistentVolumeClaim, a controller (either a CSI driver or an in-tree driver/kube-controller) implements that change against your storage provider (Ceph, AWS, ...).
Once the backend volume has been expanded, that same controller updates the corresponding PV. At that point you may (or may not) have to restart the Pods attached to the volume so that its filesystem is grown.
While I'm not certain how to fix the error you saw, one way to avoid such errors is to refrain from editing PVs.

Cosmos Write Returning 429 Error With Bulk Execution

We have a solution built with a micro-service approach. One of our micro-services is responsible for pushing data to Cosmos. Our Cosmos database uses serverless provisioning with a 5,000 RU/s limit.
The data we are inserting into Cosmos looks like the below. There are 10 columns and we are pushing a batch containing 5,807 rows of this data.
Id | CompKey | Primary Id | Secondary Id | Type | DateTime | Item | Volume | Price | Fee
1 | Veg_Buy | csd2354csd | dfg564dsfg55 | Buy | 30/08/21 | Leek | 10 | 0.75 | 5.00
2 | Veg_Buy | sdf15s1dfd | sdf31sdf654v | Buy | 30/08/21 | Corn | 5 | 0.48 | 3.00
We retrieve data from multiple sources, normalize it, and send it to Cosmos as one bulk execution. The retrieval process happens every hour, so we understand that we are spiking the Cosmos database once per hour with the retrieved data and then sending nothing until the next retrieval cycle. If this hourly peak is the problem, what remedies exist for such a scenario?
Can anyone shed some light on what we should do to overcome this issue? Perhaps we are missing a setting when creating the Cosmos database, or possibly this has something to do with partitioning?
You can mostly determine these things by looking at the metrics published in the Azure Portal. This doc is a good place to start: Monitor and debug with insights in Azure Cosmos DB.
In particular, I would look at the section titled "Determine the throughput consumption by a partition key range".
If you are not dealing with a hot partition key, you may want to look at options to throttle your writes. This may include modifying your batch size and putting the write operations in a while loop with a one-second timer until the RU/s consumed reach 5,000. You could also look at queue-based load leveling: put the writes on a queue in front of Cosmos and stream them in.
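A rough sketch of that throttling idea with the v3 .NET SDK (Microsoft.Azure.Cosmos) might look like the following. The Trade type, the chunk size, the one-second pause, and the assumption that /CompKey is the container's partition key path are illustrative placeholders rather than details from the question; tune them against the RU metrics in the portal.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Shape of one row from the table above (placeholder type).
public record Trade(string id, string CompKey, string Type, string Item, int Volume, decimal Price, decimal Fee);

public static class ThrottledWriter
{
    public static async Task WriteInChunksAsync(Container container, IReadOnlyList<Trade> rows)
    {
        const int chunkSize = 500;                       // assumed batch size; shrink it if you still see 429s
        foreach (var chunk in rows.Chunk(chunkSize))     // Enumerable.Chunk requires .NET 6+
        {
            // If the CosmosClient was created with CosmosClientOptions { AllowBulkExecution = true },
            // the SDK groups these concurrent CreateItemAsync calls into bulk batches on its own.
            var tasks = chunk.Select(async row =>
            {
                try
                {
                    await container.CreateItemAsync(row, new PartitionKey(row.CompKey));
                }
                catch (CosmosException ex) when (ex.StatusCode == System.Net.HttpStatusCode.TooManyRequests)
                {
                    // 429: back off for the interval the service suggests, then retry once.
                    await Task.Delay(ex.RetryAfter ?? TimeSpan.FromSeconds(1));
                    await container.CreateItemAsync(row, new PartitionKey(row.CompKey));
                }
            });
            await Task.WhenAll(tasks);
            await Task.Delay(TimeSpan.FromSeconds(1));   // spread the hourly spike instead of sending it all at once
        }
    }
}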

Request Timeout in Azure Cosmos DB in SDK v3

I am inserting data into Azure Cosmos DB. After some time it throws an error (Request Timeout: 408). I have increased the request timeout to 10 minutes.
Also, I iterate over each item from the API and call the CreateItemAsync() method instead of using the bulk executor.
Data to insert = 430K items
Microsoft.Azure.Cosmos SDK used = v3
Container throughput = 400
Can anyone help me fix this issue?
Just increase your throughput. But it's going to cost you a lot of money if you leave it increased. 400 RU/s isn't going to cut it unless you batch your operations to the point where inserting 430K items will take a very long time.
If this is a one-time deal, increase your RU/s to 2000+, then start slowly inserting items. I would say, depending on the size of your documents, maybe do 50 at a time, then wait 250 milliseconds, then do 50 more until you are done. You will have to play with this though.
Once you are done, move your RU/s back down to 400.
Cosmos DB can be ridiculously expensive, so be careful.
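For what it's worth, a minimal sketch of that "50 at a time, then pause" pacing with the v3 SDK could look like this. MyItem, its Pk property, and the 50/250 ms values are assumptions to tune against your own documents and RU budget.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class PacedInserter
{
    // Inserts documents in small paced batches so a low-RU container is not overwhelmed.
    public static async Task InsertPacedAsync(Container container, IReadOnlyList<MyItem> items)
    {
        const int batchSize = 50;                              // per the suggestion above; adjust as needed
        for (int i = 0; i < items.Count; i += batchSize)
        {
            var batch = items.Skip(i).Take(batchSize);
            await Task.WhenAll(batch.Select(item =>
                container.CreateItemAsync(item, new PartitionKey(item.Pk))));
            await Task.Delay(TimeSpan.FromMilliseconds(250));  // pause between batches
        }
    }
}

public class MyItem
{
    public string id { get; set; }   // Cosmos requires a lowercase "id" property
    public string Pk { get; set; }   // assumed partition key property; match your container's path
}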
ETA:
This is from some documentation:
Increase throughput: The duration of your data migration depends on the amount of throughput you set up for an individual collection or a set of collections. Be sure to increase the throughput for larger data migrations. After you've completed the migration, decrease the throughput to save costs. For more information about increasing throughput in the Azure portal, see performance levels and pricing tiers in Azure Cosmos DB.
The documentation page for 408 timeouts lists a number of possible causes to investigate.
Aside from addressing the root cause with the SDK client app or increasing throughput, you might also consider leveraging Azure Data Factory to ingest the data as in this example. This assumes your data load is an initialization process and your data can be made available as a blob file.

Microsoft.WindowsAzure.Storage.StorageException: 'There is already a lease present.'

I am trying to create a WebJob project in .NET Core 3.1. I followed this guide: https://learn.microsoft.com/en-us/azure/app-service/webjobs-sdk-get-started, except that instead of connecting to a storage account, I use "UseDevelopmentStorage=true" as the connection string and have the Storage Emulator running.
Every couple of builds I get the exception : "Microsoft.WindowsAzure.Storage.StorageException: 'There is already a lease present.'". The exception is thrown at
using (host)
{
    await host.RunAsync();
}
This does not happen on every build, and apart from using TimerTrigger, I don't use any other storage functions.
Does anyone know what is causing this?
No matter what trigger you use, when you run a WebJob it needs to write its running logs into blob storage. Here you are using local storage with UseDevelopmentStorage=true.
The Lease Blob operation creates and manages a lock on a blob for write and delete operations. The lock duration can be 15 to 60 seconds, or can be infinite. In versions prior to 2012-02-12, the lock duration is 60 seconds.
So it is possible that the "There is already a lease present" issue is caused by concurrent use of the same blob storage.
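For context, the wiring from that getting-started guide typically looks roughly like the sketch below (assuming the Microsoft.Azure.WebJobs.Extensions and Microsoft.Azure.WebJobs.Extensions.Storage packages, with AzureWebJobsStorage set to "UseDevelopmentStorage=true"). The TimerTrigger's singleton schedule lock appears to be taken as a blob lease in that storage account, which is why two hosts running at once, or a stale lock left behind in the emulator, can surface "There is already a lease present."

using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

class Program
{
    static async Task Main()
    {
        var builder = new HostBuilder()
            .ConfigureWebJobs(b =>
            {
                b.AddAzureStorageCoreServices();   // uses the AzureWebJobsStorage connection string
                b.AddTimers();                     // enables TimerTrigger
            })
            .ConfigureLogging(b => b.AddConsole());

        using var host = builder.Build();
        await host.RunAsync();
    }
}

public class Functions
{
    // The timer's singleton lock is held as a blob lease in the configured storage account,
    // so a second host instance (or a stale lease in the emulator) can trigger
    // "There is already a lease present."
    public static void EveryMinute([TimerTrigger("0 */1 * * * *")] TimerInfo timer, ILogger logger)
    {
        logger.LogInformation("Timer fired at {Time}", DateTime.UtcNow);
    }
}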
Actually, I did solve it, but I wanted to test for a couple of days first. I had also experienced an exception when trying to add a new method to Functions.cs, which led me to believe that something else was going on.
What fixed it for me was removing the storage emulator and downloading a fresh version; since then I no longer get the exceptions and I can add new functions.

Can a partitioned CosmosDB / DocumentDB collection have fewer than 400 RU/s of throughput configured?

Update: This question is now invalid as the events I'd thought happened didn't happen quite as I'd thought (see below for details). I'm leaving the question as-is though as the answers and comments may be useful to others.
I've created a collection via the Azure Portal, configured initially with:
Storage Capacity: Unlimited
Initial Throughput Capacity (RU/s): 2500
Partition Key: /PartitionKey
Then, through the .NET SDK, I changed the Initial Throughput Capacity (RU/s) to 400.
According to the Scale & Settings tab for the collection in the Azure Portal, the value of Throughput (400 - 10,000 RU/s)* is 400.
Is this a supported configuration? I'm assuming this is a bug somewhere but perhaps it isn't? What would I be charged for this collection?
As an aside...
The Add Collection screen doesn't allow me to set the Throughput to 400 on initial creation but it seems I can change it afterwards.
Update: I think I've worked out what happened. I manually created a partitioned collection, then I forgot that my code (an importer/migration tool I'm working on) deletes the database and recreates the database and collection on startup. When it does this, it's created as a non-partitioned collection. Now that I've corrected this, I get the error "The offer should have valid throughput values between 2500 and 100000 inclusive in increments of 100." if I try to reproduce what I thought I'd managed to do before.
You're not seeing a bug. You're attempting to apply a single-partition RU value to a partitioned collection.
Single-partition collections (10GB) allow for 400-10000 RU.
What you're showing in your question is a partitioned collection, with scale starting at 2500 RU.
And you cannot configure a partitioned collection for 400 RU, whether through the portal or through API/SDK.
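(For anyone reading this later: the question predates SDK v3 and used the older offer-based API, but with the current Microsoft.Azure.Cosmos v3 SDK, reading and changing a container's provisioned throughput looks roughly like the sketch below. The endpoint, key, and names are placeholders, and the service still rejects values below the container's allowed minimum, which is how errors like the one quoted in the update surface.)

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ThroughputAdmin
{
    public static async Task ShowAndScaleAsync()
    {
        // Placeholder endpoint, key, and names; substitute your own.
        using var client = new CosmosClient("<account-endpoint>", "<account-key>");
        Container container = client.GetContainer("mydb", "mycollection");

        int? current = await container.ReadThroughputAsync();   // null when throughput is not set at the container level
        Console.WriteLine($"Current RU/s: {current}");

        // Throws a CosmosException if the requested value is below the container's allowed minimum.
        await container.ReplaceThroughputAsync(ThroughputProperties.CreateManualThroughput(1000));
    }
}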
