Can anyone explain how RDS storage auto-scaling works? AWS requires the maximum storage threshold to be greater than the allocated storage. I don't understand what happens once I exceed the allocated storage but have not yet reached the threshold.
I think you are misunderstanding the maximum threshold. It specifies the maximum storage that AWS will autoscale up to for you. For example, if you set the allocated storage to 20 GB and the maximum threshold to 100 GB, AWS will grow your storage as needed up to 100 GB and then stop. I agree the info on the RDS creation panel is quite confusing.
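As a hedged sketch of how those two numbers map onto the API, here is a boto3 call; the instance identifier, engine choice, and sizes are all made-up examples:

```python
import boto3

rds = boto3.client("rds")

# Hypothetical instance: starts at 20 GB; RDS may grow it automatically
# when free space runs low, up to (but never beyond) 100 GB.
rds.create_db_instance(
    DBInstanceIdentifier="example-db",   # made-up name
    DBInstanceClass="db.t3.micro",
    Engine="postgres",
    MasterUsername="dbadmin",
    MasterUserPassword="change-me",      # placeholder
    AllocatedStorage=20,                 # initial allocated storage (GB)
    MaxAllocatedStorage=100,             # autoscaling ceiling (GB)
)
```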
What is the maximum allowed Docker image size for Azure Kubernetes Service (AKS)?
For Azure Container Instances, the maximum allowable Docker image size is 15 GB.
But I could not find any documentation on an AKS limit covering the maximum allowable Docker image size. Any feedback or documentation in this regard would be appreciated.
There's no explicit limit on the container image size, but large images with large unique layers are likely to create containers with a large memory footprint, potentially exceeding resource limits or the overall available memory of worker nodes (7 GiB for a Standard_DS2_v2 VM, which is currently the default in an AKS node pool).
If a container image is excessively large (TBs), kubelet might not be able to pull it from the registry to a node due to lack of disk space.
I need to know the Azure Ultra Disk IOPS and disk throughput ranges based on disk size, using the Azure API. Is there any documentation available to help me understand this?
According to my research, we can use the following Azure REST API to get the key capabilities of Ultra Disks:
GET https://management.azure.com/subscriptions/<subscription ID>/providers/Microsoft.Compute/skus?$filter=location eq '<the location you want to check>'&api-version=2019-04-01
For instance, if you check Southeast Asia, the response is a list of SKUs that includes the Ultra Disk (UltraSSD_LRS) entry and its capabilities.
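A minimal sketch of calling that endpoint and filtering for the Ultra Disk SKU from Python, assuming you already have a bearer token (the subscription ID and token are placeholders):

```python
import requests

SUBSCRIPTION_ID = "<subscription ID>"   # placeholder
TOKEN = "<bearer token>"                # e.g. from `az account get-access-token`
LOCATION = "southeastasia"

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    "/providers/Microsoft.Compute/skus"
)
params = {
    "$filter": f"location eq '{LOCATION}'",
    "api-version": "2019-04-01",
}
resp = requests.get(url, params=params,
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

# Keep only the Ultra Disk SKU and print its capabilities
for sku in resp.json()["value"]:
    if sku.get("name") == "UltraSSD_LRS":
        print(sku.get("capabilities"))
```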
Then we can use these capabilities to work out the IOPS and disk throughput range for a given disk size. For example, the minimum IOPS per disk is 2 IOPS/GiB and the maximum is 300 IOPS/GiB; in addition, the IOPS per disk must be at least 100 and at most 160,000. So if our disk size is 4 GiB, the per-GiB bounds give 8–1,200 IOPS, and applying the 100 IOPS floor makes the allowed range 100–1,200. For more details, please refer to the document.
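A small sketch of that arithmetic (the per-GiB factors and absolute caps are taken from the answer above; treat them as illustrative, not authoritative):

```python
def ultra_disk_iops_range(size_gib: int) -> tuple[int, int]:
    """Return (min_iops, max_iops) for an Ultra Disk of the given size in GiB."""
    MIN_IOPS_PER_GIB = 2
    MAX_IOPS_PER_GIB = 300
    ABS_MIN_IOPS = 100       # absolute floor per disk
    ABS_MAX_IOPS = 160_000   # absolute ceiling per disk

    min_iops = max(MIN_IOPS_PER_GIB * size_gib, ABS_MIN_IOPS)
    max_iops = min(MAX_IOPS_PER_GIB * size_gib, ABS_MAX_IOPS)
    return min_iops, max_iops

print(ultra_disk_iops_range(4))   # (100, 1200)
```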
If I am creating an Azure Storage Account (v2), what is the maximum capacity (total size) of the files we can store in blob storage? I see some docs talking about 500 TB as the limit. Does that mean once the storage account reaches that 500 TB limit it will stop accepting uploads? Or is there a way to store more files by paying more?
It depends on the region. According to https://learn.microsoft.com/en-us/azure/azure-subscription-service-limits#storage-limits, US and Europe can have storage accounts of up to 2 PB; all other regions are limited to 500 TB. As mentioned by Alfred below, you can request an increase if you need to (see the new maximum sizes here: https://azure.microsoft.com/en-us/blog/announcing-larger-higher-scale-storage-accounts/).
I have yet to see a storage account hit the limit, but I would anticipate you would get an error trying to upload a file at max capacity. I would advise designing your application to make use of multiple storage accounts to avoid hitting this limit if you are expecting to exceed 500 TB; a sketch of that approach follows below.
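As a rough illustration of the multiple-account idea, here is a minimal Python sketch that shards blobs across accounts by hashing the blob name; the account URLs and container name are made up, and it assumes the azure-storage-blob and azure-identity packages:

```python
import hashlib
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Hypothetical pool of storage accounts to spread data across
ACCOUNT_URLS = [
    "https://myapp0.blob.core.windows.net",
    "https://myapp1.blob.core.windows.net",
    "https://myapp2.blob.core.windows.net",
]
credential = DefaultAzureCredential()
clients = [BlobServiceClient(url, credential=credential) for url in ACCOUNT_URLS]

def upload(blob_name: str, data: bytes) -> None:
    # A stable hash of the name picks the same account every time
    idx = int(hashlib.sha256(blob_name.encode()).hexdigest(), 16) % len(clients)
    container = clients[idx].get_container_client("data")  # container assumed to exist
    container.upload_blob(blob_name, data, overwrite=True)
```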
You can ask support to increase the limit:
https://azure.microsoft.com/en-us/blog/announcing-larger-higher-scale-storage-accounts/
On their managed disk pricing page, Microsoft Azure presents a billing method based on predefined disk sizes, but nowhere do they mention pricing for arbitrary disk sizes. I would assume they charge by the closest larger disk size (e.g., a 38 GiB disk would be charged as 64 GiB).
Yes, your understanding is correct: billing considers the provisioned size of the managed disk, rounded up to the nearest offered size. You can refer to this doc:
Billing for managed disks depends on the provisioned size of the disk. Azure maps the provisioned size (rounded up) to the nearest Managed Disks option as specified in the tables below. Each managed disk maps to one of the supported provisioned sizes and is billed accordingly. For example, if you create a standard managed disk and specify a provisioned size of 200 GB, you are billed as per the pricing of the S15 disk type.
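As a hedged sketch of that rounding rule, the helper below maps a provisioned size to a standard HDD tier; the size-to-tier table is abbreviated and from memory, so verify it against the pricing page:

```python
import bisect

# (max provisioned GiB, tier name) for a few standard HDD tiers;
# abbreviated and from memory -- check the pricing page for authoritative values.
TIERS = [(32, "S4"), (64, "S6"), (128, "S10"), (256, "S15"),
         (512, "S20"), (1024, "S30")]

def billed_tier(size_gib: int) -> str:
    """Round the provisioned size up to the nearest offered disk size."""
    idx = bisect.bisect_left([cap for cap, _ in TIERS], size_gib)
    return TIERS[idx][1]

print(billed_tier(38))   # S6  -> billed as 64 GiB
print(billed_tier(200))  # S15 -> billed as 256 GiB
```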
I'm trying to run different helm charts and I keep running into this error. It's much more cost effective for me to run 3-4 cheaper nodes than 1 or 2 very expensive nodes that can have more disks attached to them.
Is there a way to configure kubernetes or helm to have a disk attach limit or to set the affinity of one deployment to a particular node?
It's very frustrating that all the deployments try to attach to one node and then run out of disk attach quota.
Here is the error:
Service returned an error. Status=409 Code="OperationNotAllowed" Message="The maximum number of data disks allowed to be attached to a VM of this size is 4."
For now, ACS Kubernetes provisions PVCs backed by Azure managed disks or blob disks, so the limit is the number of data disks the VM size supports.
For now, Azure does not support changing the limit on the number of data disks per VM. The maximum data disk count for each VM size can be found in the VM sizes documentation.
For more information about the limits, please refer to this link.
By the way, the maximum capacity of a single data disk is 2 TB, so instead of attaching more disks we can extend each disk up to 2 TB.
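On the second part of the question, pinning a deployment's pods to a particular node: one way is a nodeSelector on the pod template. A minimal sketch with the official Kubernetes Python client follows; the deployment name, namespace, and node name are made-up placeholders:

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Hypothetical deployment/namespace; pin its pods to one node via a
# nodeSelector (kubernetes.io/hostname is a standard label holding the node name).
dep = apps.read_namespaced_deployment("my-release", "default")
dep.spec.template.spec.node_selector = {"kubernetes.io/hostname": "aks-nodepool1-0"}
apps.patch_namespaced_deployment("my-release", "default", dep)
```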