Creating a development instance of a large production RDS instance - amazon-rds

We have a large (100 GB) RDS instance in AWS. We're writing a script that will clear out old data in the database at regular intervals. To test this fully, we'd like to set up a copy of the existing production RDS instance that we can run the script against (so we don't lose data from the production instance).
Is there a way to create a standalone duplicate RDS instance based on another instance? I had thought I could do this by using a snapshot, but it appears you can only restore an instance from a snapshot.

With a snapshot you can restore to a different/new instance name. E.g. create a snapshot of database-one, then restore that snapshot to a new instance named database-one-copy.
I use this method programmatically to create a development database on a nightly schedule (or as needed) with the Python AWS SDK, Boto3 - see https://github.com/airsciences/aws-rds-persist.
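A minimal Boto3 sketch of the snapshot-and-restore approach (the instance and snapshot identifiers, region and instance class below are placeholders; the restore options you actually need, such as subnet group and security groups, depend on your setup):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # assumed region

SOURCE_INSTANCE = "database-one"           # production instance
SNAPSHOT_ID = "database-one-dev-snapshot"  # placeholder snapshot name
DEV_INSTANCE = "database-one-copy"         # new standalone development instance

# 1. Snapshot the production instance.
rds.create_db_snapshot(
    DBSnapshotIdentifier=SNAPSHOT_ID,
    DBInstanceIdentifier=SOURCE_INSTANCE,
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=SNAPSHOT_ID)

# 2. Restore the snapshot into a brand-new instance you can safely test against.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier=DEV_INSTANCE,
    DBSnapshotIdentifier=SNAPSHOT_ID,
    DBInstanceClass="db.t3.medium",  # assumed: a cheaper class is usually fine for dev
)
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier=DEV_INSTANCE)
```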

Related

Creating a CloudSQL instance with terraform and custom lightweight machine

To create a Cloud SQL database instance you need to specify the tier in the form
db-custom-<CPUs>-<Memory_in_MB>
but it ends up setting up a standard machine type. How can I specify a lightweight type?
db-f1-micro, db-n1-standard-1, etc. are the legacy tier format. The current machine-type categories (lightweight, standard and high memory) are only guidelines for choosing the CPU and memory values in db-custom-<CPUs>-<Memory_in_MB>; for example, db-custom-2-8192 requests 2 vCPUs and 8192 MB of RAM. See below.
https://cloud.google.com/sql/docs/mysql/create-instance#machine-types

How to do "Launch more like this" ec2-instances using javascript

I want to create a copy of my instance programmatically using JavaScript, and I also want to mount my S3 bucket to the newly created instance.
Is there a way to do "Launch more like this" using JavaScript?
Things I tried:
Created an AMI.
Launched an instance from that AMI.
But it does not copy the contents of the original instance into the newly created instance, and it does not mount the S3 bucket.
Launch More Like This is an AWS Console UI feature that copies over all the settings of the current instance (AMI, storage, security groups, AZ, subnet, etc.) but still gives you an opportunity to make modifications before launching. You can easily reproduce it by copying the relevant fields from the output of the DescribeInstances API and passing them to the RunInstances API.
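For illustration, a minimal sketch of that copy-settings approach in Python with Boto3 (the same calls exist in the AWS SDK for JavaScript v3 as DescribeInstancesCommand and RunInstancesCommand); the instance ID and the particular fields copied here are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

SOURCE_INSTANCE_ID = "i-0123456789abcdef0"  # placeholder: instance to copy

# Read the source instance's settings.
resp = ec2.describe_instances(InstanceIds=[SOURCE_INSTANCE_ID])
inst = resp["Reservations"][0]["Instances"][0]

# Launch a new instance with (a subset of) the same settings.
params = {
    "ImageId": inst["ImageId"],  # same AMI; this does NOT copy the disk contents
    "InstanceType": inst["InstanceType"],
    "SubnetId": inst["SubnetId"],
    "SecurityGroupIds": [g["GroupId"] for g in inst["SecurityGroups"]],
    "MinCount": 1,
    "MaxCount": 1,
}
if "KeyName" in inst:
    params["KeyName"] = inst["KeyName"]

ec2.run_instances(**params)
```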
It does not copy over the contents/data of the existing machine. If you need to copy over the contents, create an AMI of the existing instance and then launch the new EC2 instance using the new AMI.
To attach an S3 bucket as a volume to your EC2 instance, you can use s3fs/FUSE. You may want to install it as part of your AMI so you don't need to install it each time you launch an instance, and you can run the mount commands from your init/user-data scripts, where you specify or configure the S3 bucket to be mounted.
Hope this helps.

AWS: How to launch multiple of the same instance from python?

I have an AWS Windows Server 2016 VM. This VM has a bunch of libraries/software installed (dependencies).
I'd like to, using python3, launch and deploy multiple clones of this instance. I want to do this so that I can use them almost like batch compute nodes in Azure.
I am not very familiar with AWS, but I did find this tutorial.
Unfortunately, it shows how to launch an instance from the store, not an existing configured one.
How would I do what I want to achieve? Should I create an AMI from my configured VM and then just launch that?
Any up-to-date links and/or advice would be appreciated.
Yes, you can create an AMI from the running instance, then launch N instances from that AMI. You can do both using the AWS console or you could call boto3 create_image() and run_instances(). Alternatively, look at Packer for creating AMIs.
You don't strictly need to create an AMI. You could simply bootstrap each instance as it launches via a user data script or some form of configuration management like Ansible.
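A minimal Boto3 sketch of the AMI route (the instance ID, AMI name, clone count and instance type below are placeholders; you'd typically also pass a key pair, security groups, etc. to run_instances):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

SOURCE_INSTANCE_ID = "i-0123456789abcdef0"  # your configured Windows VM
NUM_CLONES = 5                              # how many workers to launch

# 1. Bake the configured VM into an AMI.
# NOTE: by default this reboots the source instance; pass NoReboot=True to avoid that.
image = ec2.create_image(
    InstanceId=SOURCE_INSTANCE_ID,
    Name="batch-worker-base",  # placeholder AMI name
)
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# 2. Launch N identical instances from that AMI.
resp = ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="c5.xlarge",  # assumed instance type
    MinCount=NUM_CLONES,
    MaxCount=NUM_CLONES,
)
print([i["InstanceId"] for i in resp["Instances"]])
```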

AWS Data Pipeline between RDS Instances (MySQL)

Is it possible to build a data pipeline in AWS to transfer data between two different RDS MySQL instances? The transfer would be taking place once per day (although not necessarily at the same time every day).
I am interested in copying full tables from one instance to another, but the documentation for the Data Pipeline service doesn't seem to cover this use case.
Thanks in advance.
If one is a copy of the other, you can use AWS Database Migration Service (DMS), a different Amazon service.
If you choose "ongoing replication" then the service will update your target database throughout the day with changes from the source database.
I suspect if you start making changes to the target database that make it different to the source database then you will have problems.
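A minimal Boto3 sketch of an ongoing-replication task with DMS, assuming you have already created a DMS replication instance and source/target endpoints for the two MySQL databases (the ARNs below are placeholders):

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")  # assumed region

# Select every table in every schema; narrow this rule as needed.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="rds-to-rds-daily-copy",
    SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",    # placeholder
    TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",    # placeholder
    ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",  # placeholder
    MigrationType="full-load-and-cdc",  # full copy, then ongoing replication
    TableMappings=json.dumps(table_mappings),
)
```

(The task then has to be started separately, e.g. with start_replication_task.)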

Should I put database and CMS files on a separate EBS or S3?

Is it possible, or even advisable, to use an EBS volume that persists after instance termination to store database/website files, and reattach it to a new Amazon instance in case of failure? Or should I back up a volume bundle to S3? Also, I need an application to accelerate terminal window functions intelligently. Can you tell I'm a Linux noob?
We do this with our Nexus installation - the data is stored on a separate EBS volume that's regularly snapshotted, but the root disk isn't (since we can use Puppet to create a working Nexus instance from the latest base AMI, Java, Tomcat and Nexus versions). The one drawback of this approach (vs. your other approach of backing up to S3) is that you can't retrieve the data outside of AWS if needed - if that is an important use case, I'd recommend uploading either a volume bundle or a .tar.gz backup to S3.
However, in your case, if you have a single EBS-backed EC2 instance that is your CMS server, you could run it with a large root volume and keep that regularly backed up (either using EBS snapshots or backing up a .tar.gz to S3). If you're not particularly familiar with Linux, that'll likely be the easiest way to make sure all your data is backed up, and if you ever need to extract just the data you can do so by attaching that volume (or a volume created from one of its snapshots) to another machine - you'd also have access to all the config files, which may be of use.
Bear in mind that if you only want to run your server some of the time, you can always stop the instance rather than terminate it - the EBS volumes will remain. Once you take a snapshot your data is safe - if part of an EBS volume fails but it hasn't been modified since the last snapshot, AWS will transparently restore it from the EBS snapshot data.
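If you later want to automate those snapshots, a minimal Boto3 sketch (the volume ID and region are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

DATA_VOLUME_ID = "vol-0123456789abcdef0"  # placeholder: the EBS volume holding your data

# Snapshot the data volume; snapshots are stored durably by AWS, independent of the instance.
snap = ec2.create_snapshot(
    VolumeId=DATA_VOLUME_ID,
    Description="nightly CMS/database backup",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])
print("snapshot ready:", snap["SnapshotId"])
```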
