Started working with AWS. I have a huge amount of Excel data to be stored in AWS, and I need to access that data through AWS APIs. Please help me with where to start.
I am not clear about the use case. Can anyone explain the difference between Amazon Simple Storage Service (Amazon S3), Amazon Elastic Compute Cloud (Amazon EC2), and Amazon SimpleDB in simple words?
Please help me with how to start from scratch.
Thanks,
In simple terms:
S3: File Storage
EC2: Virtual Servers (Linux or Windows)
SimpleDB: A NoSQL database. Most people use the newer DynamoDB service these days.
You really aren't giving enough information for anyone to provide detailed help. Some options:
- Store the data as files in S3, and possibly query it using Athena.
- Store the files on a file system on an EC2 server and run any Windows or Linux program there to work with them.
- Store the data in a relational OLTP database using the RDS service.
- Store the data in an OLAP database using Redshift.
- Store the data in a NoSQL database using DynamoDB.
- If it is an extremely large amount of data, look into the Elastic MapReduce (EMR) service to process it.
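For the simplest starting point (files in S3), here is a minimal sketch using Python and boto3; the bucket and file names are hypothetical. One caveat: Athena queries CSV/Parquet rather than .xlsx, so you would typically convert the Excel files first.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key names -- replace with your own.
s3.upload_file("sales.xlsx", "my-data-bucket", "raw/sales.xlsx")

# Athena works on CSV/Parquet rather than .xlsx, so a converted copy
# is what you would actually point an Athena table at.
s3.upload_file("sales.csv", "my-data-bucket", "csv/sales.csv")
```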
Related
I have a request to move a source Oracle DB into AWS Oracle RDS. I researched the AWS documentation to find a solution, but the AWS guide is very complex (upload the dump file to S3, download the dump file, and so on). I don't want to do it this way because it takes a lot of time. Does anyone have a solution for moving the database to AWS Oracle RDS?
Update: the source Oracle DB does not use any AWS service. It is installed on a physical server only.
Please share any solutions/tools that can be used to migrate.
You can use the AWS native Database Migration Service (AWS DMS). Below is the link to the AWS DMS workshop.
https://catalog.us-east-1.prod.workshops.aws/workshops/77bdff4f-2d9e-4d68-99ba-248ea95b3aca/en-US/oracle-oracle/data-migration
In the link, the source database is mentioned as AWS RDS, but you can connect to any source database on premises or in other locations. All you need is connectivity (between the source Oracle and the AWS target RDS Oracle) along with the source database host IP, database port, and DB user credentials with the necessary permissions to pull data from the source. Usually for database migrations a Direct Connect link is recommended to avoid data-transfer issues, but it can also be done over VPN (which can be slow for large migrations).
Start with the prerequisites (permissions, grants, etc.) to prepare the source database for migration, as mentioned in the link above.
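As a rough illustration of what the DMS setup looks like programmatically, here is a hedged boto3 sketch of registering the source and target endpoints. All identifiers, hosts, and credentials are placeholders; the same can be done in the DMS console, as the workshop shows.

```python
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Hypothetical on-premises Oracle source (host must be reachable from AWS).
dms.create_endpoint(
    EndpointIdentifier="oracle-onprem-source",
    EndpointType="source",
    EngineName="oracle",
    ServerName="10.0.0.15",
    Port=1521,
    Username="dms_user",
    Password="***",
    DatabaseName="ORCL",
)

# Hypothetical RDS Oracle target.
dms.create_endpoint(
    EndpointIdentifier="oracle-rds-target",
    EndpointType="target",
    EngineName="oracle",
    ServerName="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",
    Port=1521,
    Username="admin",
    Password="***",
    DatabaseName="ORCL",
)
```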
Other tools to explore would be database-native tools; for Oracle that is Data Pump (export/import), for which you have to use S3 for dumping the source data and then import into AWS RDS from S3. This may be fine for a one-time activity, but for a large number of migrations AWS DMS with stable connectivity is the way to go.
A third option I can think of is AWS Snowball, if there is no reliable connectivity between the source and AWS for a large data transfer. A Snowball Edge storage device can be shipped to your DB location and hooked up to the network. Dump the database export onto the Snowball and ship it back to AWS. AWS will copy the DB dump to an S3 bucket, and from there you can import it into RDS.
Hope that helps...
You can use the SQL Developer tool to copy a database from the source to AWS Oracle RDS (I am using SQL Developer version 19.2.1.247).
Before migrating, you need to prepare/configure the following on AWS Oracle RDS so that it matches the source Oracle DB:
- Create the same user that is assigned to your schema/database.
- If the source Oracle DB uses tablespaces, you must create the same tablespaces.
Once prepared, perform the following steps in SQL Developer:
1. Using an admin account, create a connection to the source Oracle DB server.
2. Using an admin account, create a connection to the AWS Oracle RDS server.
3. Go to Tools -> Database Copy...
4. In the dialog, choose the source DB and the destination DB.
   INFO: If you are using tablespaces on the source DB, you must choose [Tablespace Copy].
5. Click Next to continue and wait for the transfer to complete.
I am looking to compare the Aurora DB and RDS DB in AWS. I see that Aurora is also available through RDS behind the scenes.
I have worked with Sybase and SQL Server in the past. For those, the difference is clear, as they are two different products with their own SQL dialects and admin consoles. I couldn't draw a similar picture for the AWS databases.
The main difference is the deployment, scaling, and management tooling that AWS's RDS (or GCP's database services) offers you. The engine (Aurora in your case) is the component that those services use to CRUD data in the databases they manage.
Amazon Relational Database Service (RDS)
Google Cloud databases
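To make the relationship concrete: Aurora is not a separate console or product; it is one of the engines you provision through the same RDS API. A hedged boto3 sketch (all identifiers and passwords are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Plain RDS MySQL: a single call creates a standalone instance
# with its own allocated storage.
rds.create_db_instance(
    DBInstanceIdentifier="plain-mysql",
    DBInstanceClass="db.t3.medium",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="***",
    AllocatedStorage=20,
)

# Aurora: create a cluster first (storage is managed at the cluster
# level), then add instances to it.
rds.create_db_cluster(
    DBClusterIdentifier="aurora-cluster",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="***",
)
rds.create_db_instance(
    DBInstanceIdentifier="aurora-writer",
    DBInstanceClass="db.t3.medium",
    Engine="aurora-mysql",
    DBClusterIdentifier="aurora-cluster",
)
```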
I need to move a schema and its objects from one AWS RDS database to another AWS RDS database.
I have used AWS schema conversion tool (SCT) in the past.
Is there any better way, or is what I am doing the best approach?
Thank you,
AWS Database Migration Service (AWS DMS) will help you to migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from the most widely used commercial and open-source databases.
Here is the reference link, which will help guide you further with AWS DMS
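If you prefer scripting over the console, a minimal boto3 sketch of kicking off an existing DMS task follows; the task ARN is a placeholder and assumes the replication instance, endpoints, and table mappings are already configured.

```python
import boto3

dms = boto3.client("dms")

# Hypothetical ARN -- the task must already exist (source/target
# endpoints, replication instance, and table mappings configured).
dms.start_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:EXAMPLE",
    StartReplicationTaskType="start-replication",
)
```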
I am creating a web app for some co-working. I have text with assets (most of them pictures in print quality, say 5 MB each, about 5 GB per month in total).
I am going to host this on Amazon's cloud using EC2 instances for a node.js server and a MongoDB with attached block storage.
The assets are private to an object, so not everyone will have access to them.
How should I handle the assets? Save them as binaries in the database, or upload them to S3 (or another Amazon service)?
Does somebody have experience with this? Or maybe some helpful links. Thanks in advance.
I would probably store the assets on S3 without public access; you can then grant access to authorized users by generating temporary signed URLs from your web servers when needed.
This way you reduce your servers' complexity by handing the storage dirty work over to S3, while your files remain accessible only to those who have access to them.
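A minimal sketch of the signed-URL part using Python and boto3 (bucket and key are hypothetical); the AWS SDK for node.js offers the same presigned-URL operation.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket/key; the URL expires after one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-assets", "Key": "objects/42/photo.jpg"},
    ExpiresIn=3600,
)
print(url)  # hand this URL only to users authorized for the object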
If you need to do access control, there are lots of things you could do, but the most obvious would be to serve the assets through your web server and have it implement the desired access-control logic. Your app could proxy the source object from S3 or MongoDB GridFS. Since you are already using MongoDB, in this particular case I would use GridFS, unless you want some of the cost-saving features of S3 such as Reduced Redundancy Storage.
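For the GridFS route, here is a minimal proxy sketch in Python (Flask + pymongo) just to show the shape of it; `user_may_access` and the database name are hypothetical stand-ins, and a node.js server would follow the same pattern.

```python
import gridfs
from bson import ObjectId
from flask import Flask, Response, abort
from pymongo import MongoClient

app = Flask(__name__)
fs = gridfs.GridFS(MongoClient()["myapp"])  # hypothetical database name


def user_may_access(asset_id):
    # Hypothetical placeholder -- plug in your real access-control logic.
    return True


@app.route("/assets/<asset_id>")
def serve_asset(asset_id):
    if not user_may_access(asset_id):
        abort(403)
    try:
        f = fs.get(ObjectId(asset_id))  # fetch the stored binary from GridFS
    except gridfs.errors.NoFile:
        abort(404)
    return Response(f.read(), mimetype="application/octet-stream")
```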
We have a web application which is hosted on EC2 (Apache on Ubuntu) with a MySQL DB in RDS (Multi-AZ). We are planning another application instance which will primarily be used by our support team to analyse certain LIVE issues. To do this, we would like to have a copy of the LIVE DB data in another instance, preferably another RDS instance. Here is our approach:
1. Get the latest RDS snapshot.
2. Create a new RDS instance by restoring the snapshot into it (see the sketch after this list).
3. Set up the application configuration to point the DB to the new RDS instance created above.
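For reference, a hedged boto3 sketch of steps 1-2; the instance identifiers and instance class are placeholders. Restoring a snapshot happens entirely on the RDS side, so it does not touch the LIVE instance.

```python
import boto3

rds = boto3.client("rds")

# Step 1: find the latest available automated snapshot of the LIVE
# instance ("live-db" is a hypothetical identifier).
snaps = rds.describe_db_snapshots(
    DBInstanceIdentifier="live-db", SnapshotType="automated"
)["DBSnapshots"]
available = [s for s in snaps if s["Status"] == "available"]
latest = max(available, key=lambda s: s["SnapshotCreateTime"])

# Step 2: restore the snapshot into a brand-new instance for the support team.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="live-db-support-copy",
    DBSnapshotIdentifier=latest["DBSnapshotIdentifier"],
    DBInstanceClass="db.t3.medium",
)
```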
Could you please share your comments on whether this approach is fine, or is there a better approach?
By the way, I checked following stackoverflow questions:
How to copy a database using RDS
Amazon RDS replica
In both these questions, mysqldump is suggested. But in my case the DB size will be huge, and mysqldump might degrade LIVE performance.
Take a look at AWS read replicas. See http://aws.amazon.com/rds/mysql/#Read_Replica
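A read replica avoids the dump entirely: RDS keeps the copy in sync via MySQL's native asynchronous replication, so the LIVE instance only pays the replication overhead. A minimal boto3 sketch with hypothetical identifiers:

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the LIVE instance; RDS keeps it in sync
# automatically, so no mysqldump of the source is needed.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="live-db-support-replica",
    SourceDBInstanceIdentifier="live-db",
    DBInstanceClass="db.t3.medium",
)
```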