Will changing the AWS RDS storage type (from Magnetic to GP2) cause downtime or data loss?

I have an RDS DB instance (db.t3.micro) with the Magnetic storage type. I'd like to change it to GP2, but I'm not sure about the duration of the downtime (nor can I find it in the docs).
Will it continue working during the "migration"?
Furthermore, could I experience any data loss?
Thanks

According to https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.ModifyingExisting there will not be any outage or performance degradation:
You can modify the settings for a DB instance that uses Provisioned IOPS SSD storage by using the Amazon RDS console, AWS CLI, or Amazon RDS API. Specify the storage type, allocated storage, and the amount of Provisioned IOPS that you require. The range depends on your database engine and instance type.
Although you can reduce the amount of IOPS provisioned for your instance, you can't reduce the storage size.
In most cases, scaling storage doesn't require any outage and doesn't degrade performance of the server. After you modify the storage IOPS for a DB instance, the status of the DB instance is storage-optimization.
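For reference, the modification itself is a single API call; here is a minimal sketch using boto3, where the region name and DB instance identifier are placeholders:

```python
import boto3

# Placeholders: the region and instance identifier below are assumptions for this sketch.
rds = boto3.client("rds", region_name="eu-west-1")

rds.modify_db_instance(
    DBInstanceIdentifier="my-db-instance",
    StorageType="gp2",
    ApplyImmediately=True,  # otherwise the change is queued for the next maintenance window
)

# Per the documentation quoted above, the instance generally stays available while
# the change is applied; you can poll until its status returns to "available".
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="my-db-instance")
```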

Related

Why don't Cosmos DB multiple write regions guarantee write availability during a region outage?

Cosmos DB documentation (https://learn.microsoft.com/en-us/azure/cosmos-db/high-availability#what-to-expect-during-a-cosmos-db-region-outage) says that "Given the internal Azure Cosmos DB architecture, using multiple write regions doesn't guarantee write availability during a region outage. The best configuration to achieve high availability during a region outage is single write region with service-managed failover."
Does that mean multi-region writes are just for distributing workloads, with limited availability? What is the internal Azure Cosmos DB architecture, and where can I read about it?
In my understanding, if there is an outage in one region, all writes destined for that region should be redirected to the other write regions and seamlessly update the database. Is that correct?
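For context: multi-region writes is an account-level setting, and the client SDK only declares which regional endpoints it prefers; the SDK falls back down that list when a region is unavailable. A minimal sketch with the azure-cosmos Python SDK, where the account URL, key, region names, and container names are all placeholders:

```python
from azure.cosmos import CosmosClient

# Multi-region writes is enabled on the Cosmos account itself; the client only
# lists which regions it prefers, and the SDK fails over down this list when a
# region becomes unavailable. All names below are placeholders.
client = CosmosClient(
    "https://myaccount.documents.azure.com:443/",
    credential="<primary-key>",
    preferred_locations=["East US", "West US"],
)

container = client.get_database_client("mydb").get_container_client("items")
container.upsert_item({"id": "1", "pk": "demo"})  # "pk" assumed to match the container's partition key path
```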

Data Transfer between S3 and Blob Storage

We have a large amount of (live) data, about 1 PB, that we have to transfer periodically between S3 and Azure Blob Storage. What tools do you use for that? And what strategy do you use to minimize the cost of transfer and downtime?
We have evaluated a number of solutions, including AzCopy, but none of them satisfies all of our requirements. We are a small startup, so we want to avoid homegrown solutions.
Thank you
Azure Data Factory is probably your best bet.
Access the ever-expanding portfolio of more than 80 prebuilt connectors—including Azure data services, on-premises data sources, Amazon S3 and Redshift, and Google BigQuery—at no additional cost. Data Factory provides efficient and resilient data transfer by using the full capacity of underlying network bandwidth, delivering up to 1.5 GB/s throughput.
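If you ever do need to script a copy yourself (which, at 1 PB, you probably shouldn't), the per-object work looks roughly like this. A sketch assuming boto3 and azure-storage-blob, with the bucket, prefix, container, and connection string as placeholders:

```python
import boto3
from azure.storage.blob import BlobServiceClient

s3 = boto3.client("s3")
blob_service = BlobServiceClient.from_connection_string("<azure-storage-connection-string>")
container = blob_service.get_container_client("backup")

# Stream each S3 object straight into Blob Storage without buffering it on disk.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-bucket", Prefix="data/"):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket="my-bucket", Key=obj["Key"])["Body"]
        container.upload_blob(name=obj["Key"], data=body, overwrite=True)
```

A managed service such as Azure Data Factory does essentially this at scale, with parallelism, retries, and monitoring handled for you, which is why it is usually the better fit for this data volume.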

Azure Blob premium storage vs standard storage

I want to use Premium Storage for better performance.
I am using it for blobs, and I need the fastest blob access for reading.
I read and write the blobs only internally, within the data center.
I created a Premium Storage account and compared it against a Standard Storage account by reading a 10 MB blob 100 times at different locations using the seek method (reading 50 KB each time).
I read it from a VM running Windows Server 2012.
The results are the same - around 200 ms.
Do I need to do something else, like attach the storage? If so, how do I attach the storage?
Both the VM and the storage are in the same region.
You can use Premium Storage blobs directly via the REST API. Performance will be better than with Standard Storage blobs. The difference may not be obvious in some cases, for example if there is local caching in the application or when the blob is too small. Here, a 10 MB blob is tiny compared to the performance limits. Can you retry with a larger blob, say 10 GB? Also note that the Premium Storage model is not optimized for tiny blobs.
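If you want to reproduce the measurement from code rather than via seek on a file, a rough sketch of timing 50 KB ranged reads with the azure-storage-blob Python SDK looks like this (the connection string, container, and blob names are placeholders); point it at a blob in each account to compare:

```python
import time
from azure.storage.blob import BlobClient

# Placeholders: substitute your own connection string, container, and blob name.
blob = BlobClient.from_connection_string(
    "<storage-connection-string>", container_name="perf", blob_name="testblob-10mb"
)

latencies = []
for i in range(100):
    offset = (i * 50 * 1024) % (10 * 1024 * 1024)  # step through the 10 MB blob
    start = time.perf_counter()
    blob.download_blob(offset=offset, length=50 * 1024).readall()  # 50 KB range read
    latencies.append((time.perf_counter() - start) * 1000)

print(f"avg={sum(latencies) / len(latencies):.1f} ms  max={max(latencies):.1f} ms")
```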
Well, in the virtual machine case performance always depends on the underlying physical disk; using Premium Storage is a plus, but I think the network connection matters as well.
By default, there is temporary storage (SSD) provided with each VM. This temporary storage drive is present on the physical machine that hosts your VM and hence can have higher IOPS and lower latency compared to persistent storage such as a data disk.
As a test, we can create a VM with an HDD disk and attach an SSD data disk to it. Once that completes, we can install some tools to measure disk performance; in this way, we can see the difference between HDD and SSD.
"like attach the storage ? if so how do i attach the storage."
You can attach an SSD data disk to the VM via the new Azure portal.
For more information about attaching a disk to a VM, please refer to this link.

Questioning compute consistency of Azure VM

We have been using an Azure VM for hosting SQL Server, A4 size, i.e. 4 cores & 7 GB RAM.
We have noticed intermittent slow performance of database.
We are worried that, since an Azure VM is a multi-tenant instance, it might not always be delivering the full 4-core performance.
We are trying to understand: when we spin up a 4-core VM, does that mean we always have that much compute power, or will it be reduced depending on other users?
The first thing you should do is measure why your database is performing slowly. Are you hitting the memory limit of your VM? The CPU limit? Or is the performance of the data disks (IOPS) the issue?
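As a quick first pass, you can sample those three resources on the VM itself; a minimal sketch assuming psutil is installed (perfmon counters or SQL Server's DMVs are the more thorough route):

```python
import psutil  # assumed installed: pip install psutil

# Sample CPU, memory, and cumulative disk I/O once per second for ten seconds.
for _ in range(10):
    cpu = psutil.cpu_percent(interval=1)   # % across all cores over the last second
    mem = psutil.virtual_memory().percent  # % of RAM in use
    io = psutil.disk_io_counters()         # cumulative bytes since boot
    print(f"cpu={cpu:5.1f}%  mem={mem:5.1f}%  "
          f"disk_read={io.read_bytes >> 20} MiB  disk_write={io.write_bytes >> 20} MiB")
```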
On MSDN there's a checklist with things you need to consider when hosting SQL Server in Virtual Machines:
Use minimum Standard Tier A2 for SQL Server VMs.
Keep the storage account and SQL Server VM in the same region.
Disable Azure geo-replication on the storage account.
Avoid using operating system or temporary disks for database storage or logging.
Avoid using Azure data disk caching options (caching policy = None).
Stripe multiple Azure data disks to get increased IO throughput.
Format with documented allocation sizes.
Separate data and log file I/O paths to obtain dedicated IOPs for data and log.
Enable database page compression.
Enable instant file initialization for data files.
Limit or disable autogrow on the database.
Disable autoshrink on the database.
Move all databases to data disks, including system databases.
Move SQL Server error log and trace file directories to data disks.
Apply SQL Server performance fixes.
Setup default locations.
Enable locked pages.
Backup directly to blob storage.
Azure will not share cores and memory as long as you are not choosing the smallest VM sizes.
However, keep in mind that other tenants can still interfere with you, mostly through network traffic. I/O to and from persistent drives (any drive except D:) also goes over the network.

Azure cloudapp storage

I have a somewhat unique question. In Azure, when you look at the pricing calculator and you're deciding which size of VM to deploy for your cloud service, the pricing calculator at the following URL
http://www.windowsazure.com/en-us/pricing/calculator/?scenario=cloud
shows storage along with the size of the VM. For example, the extra small instance says
"Extra small VM (1GHz CPU, 768MB RAM, 20GB Storage)", while the large instance shows "Large VM (4 x 1.6GHz CPU, 7GB RAM, 1,000GB Storage)".
My question is this: if I link a storage account to this cloud service, do I get the listed storage included in my storage account with my payment for the cloud service? E.g. I have a Large instance with a linked storage account, and in the storage account I have 500 GB of data stored. Do I pay $251.06 for the cloud service and an additional $36.91 for the 500 GB, or is the storage free because it is under the 1,000 GB limit listed as included storage for the cloud service?
Your question is not unique, but rather common. The answer is: you pay for the VM once and for Cloud Storage a second time. The point is that with a Cloud Service (Web and Worker Roles), the storage that comes with the VM is NOT persistent storage. This means that the VM storage (the one that ranges from 20 GB to 2 TB depending on VM size) can go away at any point in time, while the cloud storage account (Blob / Tables / Queues) is absolutely durable, secure, persistent, and optionally even geo-replicated.
