Expected downtime to update Amazon RDS to MySQL 5.6

Recently I received a notification from Amazon saying:

    Updates available: You have OS upgrades pending for 1 instance(s). To
    opt in to these upgrades, select a DB instance, open the Instance
    Actions menu, and click Upgrade Now, Upgrade at Next Window. If you do
    nothing, optional upgrades will remain available and mandatory
    upgrades will be applied to your instances at a later date specified
    by AWS. You can review the type of the upgrade in the Maintenance
    column. Note: The instances will be taken offline during the OS
    upgrade.
I have an Amazon RDS instance with the configuration given below:
Class: db.m3.xlarge
Engine: MySQL 5.5.40
Storage Type: Magnetic
Multi-AZ: Yes
Storage: 250 GB (55% used)
I need to know the expected downtime to update.
Thanks in advance.

Your downtime will be minimal because your instance is set up as Multi-AZ. In this configuration, the standby instance is upgraded first, then a failover to the standby instance occurs, then the first instance (now in standby) is upgraded. Your only downtime will be during the failover process, which is usually 1-2 minutes in duration.
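If you would rather opt in to the pending OS upgrade from the AWS CLI instead of the console, something like the sketch below should work; the resource ARN is a placeholder, so take the real one from the describe-pending-maintenance-actions output:

    # List pending maintenance actions (OS upgrades show up as system-update):
    aws rds describe-pending-maintenance-actions

    # Opt in immediately (the ARN is a placeholder for your instance):
    aws rds apply-pending-maintenance-action \
        --resource-identifier arn:aws:rds:us-east-1:123456789012:db:mydbinstance \
        --apply-action system-update \
        --opt-in-type immediate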

Related

How to patch GKE Managed Instance Groups (Node Pools) for package security updates?

I have a GKE cluster running multiple nodes across two zones. My goal is to have a job scheduled to run once a week that runs sudo apt-get upgrade to update the system packages. Doing some research, I found that GCP provides a tool called "OS patch management" that does exactly that. I tried to use it, but the patch job execution failed with the error:

    Failure reason: Instance is part of a Managed Instance Group.

I also noticed that during the creation of a GKE node pool there is an option for enabling "Auto upgrade", but according to its description it will only upgrade the Kubernetes version.
According to the blog post Exploring container security: the shared responsibility model in GKE:
For GKE, at a high level, we are responsible for protecting:
The nodes’ operating system, such as Container-Optimized OS (COS) or Ubuntu. GKE promptly makes any patches to these images available. If you have auto-upgrade enabled, these are automatically deployed. This is the base layer of your container—it’s not the same as the operating system running in your containers.
Conversely, you are responsible for protecting:
The nodes that run your workloads. You are responsible for any extra software installed on the nodes, or configuration changes made to the default. You are also responsible for keeping your nodes updated. We provide hardened VM images and configurations by default, manage the containers that are necessary to run GKE, and provide patches for your OS—you’re just responsible for upgrading. If you use node auto-upgrade, it moves the responsibility of upgrading these nodes back to us.
The node auto-upgrade feature DOES patch the OS of your nodes; it does not just upgrade the Kubernetes version.
OS patch management only works for GCE VMs, not for GKE nodes.
You should refrain from doing OS-level upgrades in GKE yourself; that could cause unexpected behavior (maybe a package gets upgraded and changes something that messes up the GKE configuration).
You should let GKE auto-upgrade the OS and Kubernetes. Auto-upgrade will upgrade the OS as well, since GKE releases are intertwined with the OS releases.
One easy way to go is to sign your clusters up for release channels; this way they get upgraded as often as you want (depending on the channel) and your OS will be patched regularly (see the sketch below).
You can also follow the GKE hardening guide, which provides steps to make sure your GKE clusters are as secure as possible.
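A minimal sketch of both options with gcloud; the cluster, node pool, and zone names are placeholders:

    # Enroll the cluster in a release channel so the control plane and nodes
    # (including the node OS image) are upgraded automatically:
    gcloud container clusters update my-cluster \
        --zone us-central1-a \
        --release-channel regular

    # Or enable node auto-upgrade on a specific node pool:
    gcloud container node-pools update default-pool \
        --cluster my-cluster \
        --zone us-central1-a \
        --enable-autoupgrade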

Azure Kubernetes Service node pool upgrades & patches

I have some confusion about AKS node pool upgrades and patching. Could you please clarify the following?
I have one AKS node pool with 4 nodes. I want to upgrade the Kubernetes version on only two nodes of the node pool. Is that possible?
If it is possible to upgrade only two nodes, how can we upgrade the remaining two nodes? And how can we find out which two nodes have the old Kubernetes version instead of the latest one?
During the upgrade process, will it create two new nodes with the latest Kubernetes version and then delete the old nodes in the node pool?
Azure automatically applies patches on nodes, but will it create new nodes with the new patches and delete the old nodes?
1. According to the docs, you can upgrade a specific node pool, so the approach with an additional node pool mentioned by 4c74356b41 works; a per-pool upgrade command is sketched after the list below.
Additional info:
Node upgrades
There is an additional process in AKS that lets you upgrade a cluster. An upgrade is typically to move to a newer version of Kubernetes, not just apply node security updates.
An AKS upgrade performs the following actions:
A new node is deployed with the latest security updates and Kubernetes version applied.
An old node is cordoned and drained.
Pods are scheduled on the new node.
The old node is deleted.
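As referenced above, upgrading a single node pool might look like the following; the resource group, cluster, pool name, and version are placeholders:

    az aks nodepool upgrade \
        --resource-group myResourceGroup \
        --cluster-name myAKSCluster \
        --name mynodepool \
        --kubernetes-version 1.24.9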
2. By default, AKS uses one additional node to configure upgrades.
You can control this process by increasing the --max-surge parameter:
To speed up the node image upgrade process, you can upgrade your node images using a customizable node surge value.
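For example (names are placeholders; a surge value of 2 lets AKS replace two nodes at a time):

    az aks nodepool update \
        --resource-group myResourceGroup \
        --cluster-name myAKSCluster \
        --name mynodepool \
        --max-surge 2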
3. Security and kernel updates to Linux nodes:
In an AKS cluster, your Kubernetes nodes run as Azure virtual machines (VMs). These Linux-based VMs use an Ubuntu image, with the OS configured to automatically check for updates every night. If security or kernel updates are available, they are automatically downloaded and installed.
Some security updates, such as kernel updates, require a node reboot to finalize the process. A Linux node that requires a reboot creates a file named /var/run/reboot-required. This reboot process doesn't happen automatically.
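One way to check for that file from outside the node is an ephemeral debug pod; the node name below is a placeholder, and this assumes kubectl debug mounts the host filesystem under /host, as it does for node debugging:

    kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=busybox \
        -- ls /host/var/run/reboot-required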
This tutorial summarizes the process of Cluster Maintenance and Other Tasks.
No; create another pool with 2 nodes and test your application there, or create another cluster. You can find the node version with kubectl get nodes.
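For reference:

    # The VERSION column shows each node's kubelet (Kubernetes) version;
    # -o wide also shows the OS image and kernel version:
    kubectl get nodes -o wide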
It gradually updates nodes one by one (by default); you can change this. Spot instances cannot be upgraded.
Yes, the latest patch version image will be used.

Solr 7.5 Auto scaling replica types

I am using Solr version 7.5. I am trying to configure autoscaling in Solr, mainly by adding replicas. Whenever Solr adds new replicas during autoscaling, it only adds replicas of type NRT. What I need to achieve is that Solr should only add replicas of type TLOG during every autoscaling event. Is this possible? Any help is appreciated.
You'll have to upgrade to at least Solr 8.3 to get support for specifying the replica type when autoscaling. This is the same situation as in "Autoscaling solr - Add pull replicas, not NRT replicas" - the type wasn't configurable before 8.3.
I suggest upgrading to the newest version if you're going to do the upgrade anyway.
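On 8.3+, a nodeAdded trigger set through the autoscaling API might look like the sketch below. The replicaType property is my recollection of the 8.3 addition; verify the exact property name against the 8.3 reference guide before relying on it:

    # Hedged sketch for Solr 8.3+; "replicaType" is an assumption:
    curl -X POST -H 'Content-Type: application/json' \
        http://localhost:8983/solr/admin/autoscaling -d '{
      "set-trigger": {
        "name": "node_added_trigger",
        "event": "nodeAdded",
        "waitFor": "5s",
        "preferredOperation": "ADDREPLICA",
        "replicaType": "TLOG"
      }
    }'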

Kubernetes NodeLost/NotReady / High IO Disks

I am experiencing a very complicated issue with Kubernetes in my production environments: they lose all their agent nodes, which change from Ready to NotReady, and all the pods change from Running to the NodeLost state. I have discovered that Kubernetes is making intensive use of the disks.
My cluster is deployed using acs-engine 0.17.0 (I tested previous versions too, and the same thing happened).
On the other hand, we decided to deploy the Standard_DS2_VX VM series, which comes with Premium disks, and we increased the IOPS to 2000 (it was previously under 500 IOPS), and the same thing happened. I am going to try a higher number now.
Any help on this will be appreciated.
It was a microservice exhausting resources, after which Kubernetes simply halted the nodes. We have worked on establishing resource requests/limits so we can avoid disrupting the entire cluster.
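A minimal sketch of setting such requests/limits on an existing workload; the deployment name and values are placeholders:

    # Cap a single service so it cannot starve its node:
    kubectl set resources deployment my-service \
        --requests=cpu=250m,memory=256Mi \
        --limits=cpu=500m,memory=512Mi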

Are Amazon RDS instances upgradable?

Will I be able to switch (I mean upgrade or downgrade) an Amazon RDS instance on an as-needed basis, or do I have to create a new one afresh and go through a migration?
Yes, Amazon RDS instances are upgradeable via the modify-db-instance command. There is no need for data migration.
From the Amazon RDS Documentation:
"If you're unsure how much CPU you need, we recommend starting with the db.m1.small DB Instance class and monitoring CPU utilization with Amazon's CloudWatch service. If your DB Instance is CPU bound, you can easily upgrade to a larger DB Instance class using the rds-modify-db-instance command.
Amazon RDS will perform the upgrade during the next maintenance window. If you want the upgrade to be performed now, rather than waiting for the maintenance window, specify the --apply-immediately option. Warning: changing the DB Instance class requires a brief outage for your DB Instance."
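The rds-modify-db-instance tool quoted above is the legacy CLI; with the current AWS CLI the equivalent would be something like the following (the instance identifier and class are placeholders):

    aws rds modify-db-instance \
        --db-instance-identifier mydbinstance \
        --db-instance-class db.m3.xlarge \
        --apply-immediately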
RE: Outage Time: we have a SQL Server 2012 RDS instance (1 TB non-IOPS drive), and going from a db.m1.xlarge to a db.m3.xlarge (more CPU, less $$) incurred just over 4 minutes of downtime.
NOTE: We did the upgrade from the AWS console GUI and selected "Apply Immediately", but it was 10 minutes before the outage actually began. The RDS status indicated "Modifying" immediately after we initiated the update, and it stayed this way through the wait time and the outage time.
Hope this helps!
Greg
I just did an upgrade from a medium RDS instance to a large when we were hit with unexpected traffic (good, right? :) ). Since we have a multi-AZ instance, we were down for 2-3 minutes. In Amazon's documentation, they say that the downtime will be brief if you have a multi-AZ instance.
For anybody interested, we just modified an RDS instance (MySQL, 15 GB HD, rest of standard parameters) changing it from micro to small. The downtime period was 5 minutes.
RE: Outage Time: we have just upgraded PostgreSQL 9.3 by immediately requesting the following changes:
upgrading PostgreSQL 9.3.3 to 9.3.6
instance resize from m3.large to m3.2xlarge
changing storage type to Provisioned IOPS
extending storage from 200 GB to 500 GB (the most expensive operation in terms of time)
It took us almost 5 hours to complete this whole operation. The database contained around 100 GB of data at the moment of the upgrade. You can monitor the progress of your upgrade under the Events section in the RDS console. During the upgrade, RDS takes a couple of backup snapshots, whose progress can be monitored under the Snapshots section.
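The same events are also available from the CLI, for example (the identifier is a placeholder):

    aws rds describe-events \
        --source-identifier mydbinstance \
        --source-type db-instance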
We just did an upgrade from db.m3.large to db.m3.xlarge with 200GB of non-IOPS data running SQL Server 2012. The downtime was roughly 5 minutes.
Upgrading MySQL RDS from db.t2.small to db.t2.medium for 25 GB of data took 6 minutes.
On Multi-AZ, there will be a failover, but otherwise it will be smooth.
Here's the timeline data from my most recent DB instance type downgrade from r3.4xlarge to r3.2xlarge on a Multi-AZ configured Postgres 9.3 instance with 3 TB of disk (actual data is only ~800 GB):
Time (UTC-8)       Event
Mar 11 10:28 AM    Finished applying modification to DB instance class
Mar 11 10:09 AM    Multi-AZ instance failover completed
Mar 11 10:08 AM    DB instance restarted
Mar 11 10:08 AM    Multi-AZ instance failover started
We had an ALTER statement for a big table (around 53 million records), and it was not able to complete the operation.
The existing storage usage was 48 GB.
We decided to increase the allocated storage on the AWS RDS instance.
The whole operation took 2 hours to complete:
MySQL
db.r3.8xlarge
from 100 GB to 200 GB
The ALTER statement took around 40 minutes, but it worked.
Yes, they're upgradable. We upgraded an RDS instance from SQL Server 2008 to SQL Server 2012 with a data size of about 36 GB, class db.m1.small, storage 200 GB, and no IOPS or Multi-AZ. There was no downtime; the process barely took 10 minutes.
