We have a plan to create a new application, which won't have the same architecture as the current one and will definitely have some differences in its table schemas. Our current application's database runs on an AWS Aurora Serverless v1 cluster with MySQL 5.7. We need to migrate data from the old application to the new one, making schema changes in some places.
One solution is to create dumps of the old tables in an easily readable format like CSV, then import the data into the new application, picking the data as needed (a minimal sketch of this follows below). This would require writing bespoke code on both applications: one to export the data, the other to import it.
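A minimal sketch of the export half of that approach, assuming a hypothetical "users" table and the PyMySQL driver (the endpoint and credentials are placeholders); the new application would mirror this with an importer that reshapes each CSV row into its own schema:

```python
import csv

import pymysql

# Connect to the old Aurora MySQL cluster (placeholder endpoint/credentials).
conn = pymysql.connect(
    host="old-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
    user="app", password="...", database="old_app",
)

with conn.cursor() as cur, open("users.csv", "w", newline="") as f:
    cur.execute("SELECT id, email, full_name FROM users")  # hypothetical table
    writer = csv.writer(f)
    writer.writerow(col[0] for col in cur.description)  # header row
    writer.writerows(cur.fetchall())
```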
Considering that we are on AWS and intend to keep using it, choosing Aurora as the database (we'll have to decide whether to use a provisioned cluster or go with Serverless v2), is there any service to export data from one Aurora Serverless v1 cluster to another Aurora cluster, making changes along the way?
I know that database snapshots exist, but that's not exactly our goal. We need to migrate data from one database to another, not create another copy of the first one. I've seen that there is a service called "AWS Database Migration Service" that's used to migrate data into RDS, but does this service suit our needs: getting data from one Aurora Serverless database, making some changes, and then putting it into another RDS database of our choice?
You could use the following options to accomplish your goal.
1.- AWS Database Migration Service (AWS DMS): this service provides the capability to replicate your data and also to perform schema transformations during the replication process. You can create selection and transformation rules, and the service will execute those rules and apply the changes to the target schema. You can review more details at the link below (a sketch follows it).
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Transformations.html
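For illustration, a hedged sketch of such rules driven through boto3; the ARNs, schema, and table names are placeholders, and the rename transformation is just one example of what the rules can express:

```python
import json

import boto3

dms = boto3.client("dms")

table_mappings = {
    "rules": [
        {
            # Select every table in the old schema.
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-old-schema",
            "object-locator": {"schema-name": "old_app", "table-name": "%"},
            "rule-action": "include",
        },
        {
            # Rename a table on its way into the new schema (hypothetical).
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "rename-users-table",
            "rule-target": "table",
            "object-locator": {"schema-name": "old_app", "table-name": "users"},
            "rule-action": "rename",
            "value": "accounts",
        },
    ]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="old-to-new-aurora",
    SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",    # placeholder
    TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",    # placeholder
    ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",  # placeholder
    MigrationType="full-load",
    TableMappings=json.dumps(table_mappings),
)
```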
2.- AWS Glue Studio: an ETL service that allows you to extract, transform, and transfer data between source and target databases (a sketch of a Glue job follows the links below).
What is AWS Glue Studio?
Automate ETL jobs between Amazon RDS for SQL Server and Azure Managed SQL using AWS Glue Studio
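For context, a hedged sketch of what such a Glue job might look like, assuming a Glue Data Catalog table for the source and a pre-created JDBC connection to the target cluster; all database, table, connection, and column names are placeholders:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the old table through the Glue Data Catalog (placeholder names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="old_app_catalog", table_name="old_app_users"
)

# Reshape the schema to match the new application: rename one column,
# drop another (both hypothetical).
reshaped = source.rename_field("full_name", "display_name") \
                 .drop_fields(["legacy_flag"])

# Write into the new database through a pre-created JDBC connection.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=reshaped,
    catalog_connection="new-db-connection",
    connection_options={"dbtable": "accounts", "database": "new_app"},
)

job.commit()
```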
I have a request to move a source Oracle DB into AWS Oracle RDS. I researched on the AWS pages to find a solution, but the AWS guide is very complex (upload a dump file to S3, download the dump file...). I don't want to do it this way because it takes a lot of time. Does anyone have a solution for moving a database to AWS Oracle RDS?
Update: the source Oracle DB does not use any AWS services. It is only installed on a physical server.
Please share any solution/tools that can be used for the migration.
You can use the AWS native Database Migration Service (AWS DMS). Below is the link to the AWS DMS workshop.
https://catalog.us-east-1.prod.workshops.aws/workshops/77bdff4f-2d9e-4d68-99ba-248ea95b3aca/en-US/oracle-oracle/data-migration
In the link, the source database is given as AWS RDS, but you can connect to any source database on premises or in other locations. All you need is connectivity (between the source Oracle and the target AWS RDS Oracle) along with the source database host IP, database port, and DB user credentials with the necessary permissions to pull data from the source. For database migrations, a Direct Connect link is usually recommended to avoid data transfer issues, but the migration can also be done over a VPN (which can be slow for large migrations).
Start with the prerequisites (permissions, grants, etc.) to prepare the source database for migration, as mentioned in the link above.
Another option to explore is the database-native tools: for Oracle that means Data Pump (export/import), for which we have to use S3 to stage the source dump and then import it into AWS RDS from S3 (a sketch follows below). This may be fine for a one-time activity, but for a large number of migrations, AWS DMS with stable connectivity is the way to go.
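For the Data Pump route, a hedged sketch of the S3-to-RDS step, assuming the instance has the S3 integration option enabled and the python-oracledb driver is available; the bucket name, DSN, and credentials are placeholders:

```python
import oracledb

# Connect to the target RDS Oracle instance (placeholder DSN/credentials).
conn = oracledb.connect(user="admin", password="...",
                        dsn="mydb.xxxx.rds.amazonaws.com:1521/ORCL")

with conn.cursor() as cur:
    # rdsadmin.rdsadmin_s3_tasks.download_from_s3 copies a dump file from
    # S3 into the instance's DATA_PUMP_DIR and returns a task ID.
    cur.execute("""
        SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
                   p_bucket_name    => 'my-migration-bucket',
                   p_directory_name => 'DATA_PUMP_DIR') AS task_id
        FROM dual""")
    print("S3 download task:", cur.fetchone()[0])

# Once the file has landed, run a DBMS_DATAPUMP import (or impdp over a
# database link) to load it into the RDS database.
```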
The third option I can think of is AWS Snowball, for when there is no reliable connectivity between the source and AWS for a large data transfer. An AWS Snowball Edge storage device can be shipped to your DB location and hooked up to the network. Dump the database export onto the Snowball and ship it back to AWS; they will copy the dump to an S3 bucket, and from there we can import it into RDS.
Hope that helps...
You can use the SQL Developer tool to copy a database from the source to AWS Oracle RDS (I am using SQL Developer version 19.2.1.247).
Before migrating, you need to prepare/configure the following on AWS Oracle RDS so that it matches the source Oracle DB:
Create the same user who is assigned to your schema/database.
If the source Oracle DB uses a tablespace, you must create the same tablespace.
Once prepared, follow these steps in SQL Developer:
Using an admin account, create a connection to the source Oracle DB server.
Using an admin account, create a connection to the AWS Oracle server.
Go to Tools -> Database Copy...
In the dialog, choose the source DB and the destination DB.
INFO: If you are using a tablespace on the source DB, you must choose [Tablespace Copy].
Click Next to continue and wait for the transfer to finish.
I need to move a schema and its objects from one AWS RDS DB to another AWS RDS DB.
I have used AWS schema conversion tool (SCT) in the past.
Is there any better way, or is what I am doing the best approach?
Thank you,
AWS Database Migration Service (AWS DMS) helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime for applications that rely on it. AWS DMS can migrate your data to and from the most widely used commercial and open-source databases.
Here is a reference link that will help guide you further with AWS DMS.
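A minimal boto3 sketch of the DMS endpoint setup, assuming a replication instance already exists; the identifiers, hosts, engine names, and credentials below are placeholders:

```python
import boto3

dms = boto3.client("dms")

# Source: the RDS database the schema currently lives in.
dms.create_endpoint(
    EndpointIdentifier="old-rds-source",
    EndpointType="source",
    EngineName="mysql",  # substitute your actual engine, e.g. "postgres"
    ServerName="old-db.xxxx.us-east-1.rds.amazonaws.com",
    Port=3306,
    Username="dms_user",
    Password="...",
)

# Target: the RDS database the schema should move to.
dms.create_endpoint(
    EndpointIdentifier="new-rds-target",
    EndpointType="target",
    EngineName="mysql",
    ServerName="new-db.xxxx.us-east-1.rds.amazonaws.com",
    Port=3306,
    Username="dms_user",
    Password="...",
)

# A replication task between the two endpoints then performs the actual
# move (see the table-mapping sketch earlier in this thread).
```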
Is it possible to connect ADF to an Oracle database on AWS as a source and migrate the data to an Azure SQL Server?
I've made some attempts and the result was always a timeout.
The goal was achieved using an Integration Runtime, but I didn't want to use it. I'd like a direct connection.
Hi, Vinicius. Based on the Oracle connector, there are no special properties you need to configure beyond the host, user, and password properties, etc. for an Oracle database on AWS. You could check the supported versions of the Oracle database.
If you still have the timeout issue, you could submit feedback to Azure Data Factory to get an official statement.
Since you want to avoid the IR, maybe you could refer to the solutions below:
1. Try exporting the data to AWS S3; ADF supports an AWS S3 connector as a source (a sketch follows after this list).
2. Try using the Data Integration - Kettle tool to transfer the data via a JDBC driver.
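A hedged sketch of option 1, assuming the python-oracledb and boto3 packages; the host, credentials, table, and bucket names are placeholders:

```python
import csv
import io

import boto3
import oracledb

# Pull rows out of the Oracle source (placeholder DSN/credentials).
conn = oracledb.connect(user="app", password="...",
                        dsn="oracle-host.example.com:1521/ORCLPDB1")

buf = io.StringIO()
writer = csv.writer(buf)
with conn.cursor() as cur:
    cur.execute("SELECT * FROM orders")  # hypothetical table
    writer.writerow(col[0] for col in cur.description)  # header row
    writer.writerows(cur)

# Land the CSV in S3, where the ADF Amazon S3 connector can pick it up.
boto3.client("s3").put_object(
    Bucket="adf-staging-bucket",
    Key="exports/orders.csv",
    Body=buf.getvalue().encode("utf-8"),
)
```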
I have my application's database running in AWS RDS (PostgreSQL). I need to migrate the data from AWS to Azure SQL Data Warehouse.
This is a kind of ETL process: I need to do some calculations/computations/aggregations on the data from PostgreSQL and put the results in a different schema in Azure SQL Data Warehouse for reporting purposes.
Also, I need to sync the data on a regular basis without duplication.
I am new to this data migration concept, so kindly let me know the best possible ways to achieve this task.
Thanks!!!
Azure Data Factory is the option for you. It is a cloud data integration service that composes data storage, movement, and processing services into automated data pipelines.
Please find the PostgreSQL connector below.
https://learn.microsoft.com/en-us/azure/data-factory/data-factory-onprem-postgresql-connector
For the transform part, you may have to put in some custom intermediate steps to do the data massaging (see the sketch below).
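A hedged sketch of one such intermediate step, assuming psycopg2 for the RDS PostgreSQL source and pyodbc for the Azure SQL Data Warehouse target; the connection details, tables, and columns are illustrative only:

```python
import psycopg2
import pyodbc

# Source: RDS PostgreSQL (placeholder host/credentials).
src = psycopg2.connect(host="mydb.xxxx.rds.amazonaws.com",
                       dbname="app", user="report", password="...")

# Target: Azure SQL Data Warehouse (placeholder server/credentials).
dst = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                     "SERVER=mydw.database.windows.net;DATABASE=dw;"
                     "UID=loader;PWD=...")

# Push the aggregation down to PostgreSQL (hypothetical "orders" table).
with src.cursor() as cur:
    cur.execute("""
        SELECT customer_id, date_trunc('day', created_at) AS day,
               SUM(amount) AS total
        FROM orders
        GROUP BY customer_id, date_trunc('day', created_at)""")
    rows = cur.fetchall()

# Load the aggregates into a reporting schema on the warehouse side.
cur = dst.cursor()
cur.fast_executemany = True
cur.executemany(
    "INSERT INTO reporting.daily_totals (customer_id, day, total) "
    "VALUES (?, ?, ?)", rows)
dst.commit()
```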
Have you tried the Azure Data Factory suggestion?
Did it solve your issue?
If not, you can try using Alooma. This solution can replicate a PostgreSQL database hosted on Amazon RDS to Azure SQL Data Warehouse in near real time. (https://www.alooma.com/integrations/postgresql/)
Follow these steps to migrate from RDS to Azure SQL:
Verify your host configuration
On the RDS dashboard under Parameter Groups, navigate to the group that's associated with your instance.
Verify that hot_standby and hot_standby_feedback are set to 1.
Verify that max_standby_archive_delay and max_standby_streaming_delay are greater than 0 (we recommend 30000).
If any of the parameter values need to be changed, click Edit Parameters (or script the change, as sketched below).
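The same verification can be scripted; a minimal boto3 sketch, assuming the parameter group is named "live-pg-params" (a placeholder):

```python
import boto3

rds = boto3.client("rds")

# Print the current values of the parameters listed above.
wanted = {"hot_standby", "hot_standby_feedback",
          "max_standby_archive_delay", "max_standby_streaming_delay"}
resp = rds.describe_db_parameters(DBParameterGroupName="live-pg-params")
for p in resp["Parameters"]:
    if p["ParameterName"] in wanted:
        print(p["ParameterName"], "=", p.get("ParameterValue"))

# Change a value if needed (takes effect after the next reboot).
rds.modify_db_parameter_group(
    DBParameterGroupName="live-pg-params",
    Parameters=[{"ParameterName": "max_standby_streaming_delay",
                 "ParameterValue": "30000",
                 "ApplyMethod": "pending-reboot"}],
)
```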
Connect to Alooma
You can connect via an SSH server (https://support.alooma.com/hc/en-us/articles/214021869-Connecting-to-an-input-via-SSH) or whitelist access to Alooma's IP addresses:
52.35.19.31/32
52.88.52.130/32
52.26.47.1/32
52.24.172.83/32
Add and name your PostgreSQL input from the Plumbing screen and enter the following details:
Hostname or IP address of the PostgreSQL server (default port is 5432)
User name and Password
Database name
Choose the replication method you'd like to use for PostgreSQL database replication
For full dump/load replication, provide:
A space- or comma-separated list of the names of the tables you want to replicate.
The frequency at which you'd like to replicate your tables. The more frequent, the fresher your data will be, but the more load it puts on your PostgreSQL database.
For incremental dump/load replication, provide:
A table/update-indicator-column pair for each table you want to replicate.
Don't have an update indicator column? Let us know! We can still make incremental load work for you.
Keep the mapping mode at the default of OneClick if you'd like Alooma to automatically map all PostgreSQL tables exactly to your target data warehouse. Otherwise, they'll have to be mapped manually from the Mapper screen.
We have a web application hosted on EC2 (Apache on Ubuntu) with a MySQL DB in RDS (Multi-AZ). We are planning another application instance, which will primarily be used by our support team to analyze certain LIVE issues. To do this, we would like to have a copy of the LIVE DB data in another instance, preferably another RDS instance. Here is our approach (a sketch follows the list):
Get the latest RDS snapshot
Create a new RDS instance, and copy the RDS snapshot into it
Set up the application configuration to point the DB to the new RDS instance created above
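A minimal boto3 sketch of those three steps (minus the application reconfiguration), assuming the LIVE instance is named "live-db"; all identifiers are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Step 1: find the most recent automated snapshot of the LIVE instance.
snaps = rds.describe_db_snapshots(
    DBInstanceIdentifier="live-db", SnapshotType="automated"
)["DBSnapshots"]
latest = max(snaps, key=lambda s: s["SnapshotCreateTime"])

# Step 2: restore the snapshot into a brand-new instance for the support team.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="support-db",
    DBSnapshotIdentifier=latest["DBSnapshotIdentifier"],
    DBInstanceClass="db.t3.medium",  # may differ from the LIVE class
)

# Step 3 (pointing the support application at "support-db") happens in the
# application's own configuration.
```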
Could you please share your comments on whether this approach is fine, or is there a better approach?
By the way, I checked the following Stack Overflow questions:
How to copy a database using RDS
Amazon RDS replica
In both these questions, mysqldump is suggested. But in my case the DB size will be huge, and mysqldump might slow down the LIVE performance.
Take a look at AWS read replicas. See http://aws.amazon.com/rds/mysql/#Read_Replica
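A read replica is a single API call; a minimal boto3 sketch, assuming the LIVE instance is named "live-db" (a placeholder):

```python
import boto3

rds = boto3.client("rds")

# Create an asynchronously replicated copy of the LIVE instance; the
# support application can read from it without adding dump-style load
# to the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="live-db-replica",
    SourceDBInstanceIdentifier="live-db",
)
```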