Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I am new to AWS. All I know is that the PostgreSQL database is hosted in AWS RDS. I want to build an ML model using AWS SageMaker, but I am not sure how to get the data out of AWS RDS so that I can use it for building the model.
I will be thankful for any help.
There are multiple ways to achieve this. Below are a couple of options you can use:
Export Amazon RDS/Amazon Aurora snapshots to Amazon S3 as Apache Parquet, then build models on the exported data in SageMaker.
Connect to RDS directly from SageMaker and build your models by querying the data with SQL.
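A minimal sketch of the second option, assuming a SageMaker notebook where pandas, SQLAlchemy, and psycopg2 are available. The endpoint, database name, and credentials below are placeholders, not real values:

```python
from urllib.parse import quote_plus

def rds_connection_url(host, port, dbname, user, password):
    """Build a SQLAlchemy-style connection URL for a PostgreSQL RDS instance."""
    return f"postgresql+psycopg2://{user}:{quote_plus(password)}@{host}:{port}/{dbname}"

def load_training_frame(url, query):
    """Run a SQL query against RDS and return the result as a DataFrame."""
    # Imported lazily so the sketch can be read (and the URL helper used)
    # without a live database connection.
    import pandas as pd
    from sqlalchemy import create_engine
    engine = create_engine(url)
    return pd.read_sql(query, engine)

# Placeholder values -- substitute your own RDS endpoint and credentials.
url = rds_connection_url(
    "mydb.xxxxxxxx.us-east-1.rds.amazonaws.com", 5432,
    "mydb", "ml_user", "s3cret/pass",
)
# df = load_training_frame(url, "SELECT * FROM training_data")
```

From there the DataFrame can be written to CSV/Parquet on S3 or fed straight into a SageMaker training job.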
Closed 5 days ago.
Recently I made a Next.js web app with a Node.js backend, where I take user data and store it in MongoDB; user images and videos are stored in an AWS S3 bucket.
Now I want to deploy the entire website on a suitable platform, such as AWS instances.
I am thinking of hosting the Node.js and Next.js apps on separate servers, so I can manage both easily.
I am a newbie at cloud stuff.
Please guide me to the best way to deploy my website.
Thank you
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
What guidelines have been useful for folks looking to run a very large (100 TB to PB) cloud database with multiple reader/writer (IoT) sources?
We expect to have a Redis cache backed by either DynamoDB, Azure Cosmos DB, or another store (not yet decided).
But is it a problem to rely purely on Lambda and serverless to service the read/write requests? There are some guidelines from AWS about this:
https://aws.amazon.com/blogs/architecture/how-to-design-your-serverless-apps-for-massive-scale/
https://aws.amazon.com/blogs/compute/best-practices-for-organizing-larger-serverless-applications/
and one case study:
https://www.serverless.com/blog/how-droplr-scales-to-millions-serverless-framework
Your best bet for information like this is the Azure Architecture Center, which has articles on best practices and architectural guidance.
Regarding using DynamoDB or Cosmos DB to back Redis, I can't offer any guidance on the efficacy of doing such a thing. What I can say is that I do see customers opt out of using Redis altogether and use DynamoDB or Cosmos DB as a key/value cache layer, because the latency is good enough.
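A hedged sketch of that cache-layer pattern on DynamoDB: store each entry with a numeric TTL attribute so DynamoDB expires it itself. The table name, attribute names, and TTL value are assumptions for illustration:

```python
import time

CACHE_TTL_SECONDS = 300  # assumption: entries live for five minutes

def cache_item(key, value, now=None):
    """Shape a DynamoDB item for a key/value cache table whose TTL
    attribute ('expires_at') lets DynamoDB expire entries on its own."""
    now = int(time.time()) if now is None else int(now)
    return {
        "cache_key": {"S": key},
        "cache_value": {"S": value},
        "expires_at": {"N": str(now + CACHE_TTL_SECONDS)},
    }

def put_cached(table_name, key, value):
    """Write one cache entry; requires boto3 and AWS credentials."""
    import boto3  # lazy import: the sketch is readable without AWS set up
    boto3.client("dynamodb").put_item(
        TableName=table_name, Item=cache_item(key, value)
    )
```

With TTL enabled on `expires_at`, reads that find no item simply fall through to the backing store, which is the behavior those customers replace Redis with.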
Closed 3 years ago.
I want to create a dashboard dynamically using a dynamic MySQL query. Is that possible?
Your title asks if you can create a QuickSight dashboard programmatically. The answer is no. The only QuickSight APIs are for managing Users and Groups.
Your question then asks if you can use Dynamic MySQL queries. Using a SQL Query - Amazon QuickSight says:
When creating a new data set based on a direct query to a database, you can choose an existing SQL query or create a new SQL query.
That might be what you seek.
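If "dynamic" here means varying a filter value, one hedged pattern is to assemble the custom SQL for the data set from a whitelist, so only known values are ever interpolated into the query. The `sales` table, its columns, and the region list below are made up for illustration:

```python
# Placeholder schema: a 'sales' table with 'order_date', 'amount', 'region'.
ALLOWED_REGIONS = {"us", "eu", "apac"}  # whitelist keeps interpolation safe

def sales_dashboard_query(region):
    """Return the custom SQL for a QuickSight data set, restricted to one region."""
    if region not in ALLOWED_REGIONS:
        raise ValueError(f"unknown region: {region!r}")
    # Only whitelisted literals reach the query, so there is no injection risk.
    return (
        "SELECT order_date, SUM(amount) AS total "
        f"FROM sales WHERE region = '{region}' "
        "GROUP BY order_date"
    )
```

The generated string is what you would paste (or regenerate per data set) as the custom SQL when creating the QuickSight data set.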
Closed 5 days ago.
I have a GPU application (a C# Windows app) that runs locally on a desktop, and I would like to run it on Amazon EC2 P2 or G2 instances that spin up on the fly, dump some output, and shut down when done.
I have a few questions:
- Does this on-the-fly spin-up of instances work at all, for both Windows and Linux?
- Do I need to log in to each spun-up instance and execute the application manually?
- The app needs to read input data and dump an output file; how is this handled on Amazon?
Any good pointers are very much appreciated.
You can install/launch your code at EC2 startup via a custom script; for example, it can read data or code from S3 under the same account. The setting is under Configure Instance Details -> Advanced Details. I'm sure you can script it using the AWS CLI too.
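A hedged sketch of that approach with boto3: pass a user-data script that pulls input from S3, runs the job, uploads the output, then powers the instance off. The AMI ID, instance type, bucket, and application path are all placeholders:

```python
def gpu_job_user_data(bucket, input_key, output_key):
    """User-data script run at first boot: fetch input from S3, run the
    app, upload the output, then power off so billing stops."""
    return "\n".join([
        "#!/bin/bash",
        f"aws s3 cp s3://{bucket}/{input_key} /tmp/input.dat",
        "/opt/myapp/run --in /tmp/input.dat --out /tmp/output.dat",  # placeholder app
        f"aws s3 cp /tmp/output.dat s3://{bucket}/{output_key}",
        "shutdown -h now",
    ])

def launch_gpu_job(bucket, input_key, output_key):
    """Spin up one GPU instance that terminates itself when the script ends."""
    import boto3  # lazy import: sketch is readable without AWS credentials
    boto3.client("ec2").run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder GPU AMI
        InstanceType="g4dn.xlarge",        # placeholder instance type
        MinCount=1, MaxCount=1,
        InstanceInitiatedShutdownBehavior="terminate",
        UserData=gpu_job_user_data(bucket, input_key, output_key),
    )
```

Setting `InstanceInitiatedShutdownBehavior` to `terminate` makes the final `shutdown -h now` in the script terminate the instance rather than merely stop it, so nothing lingers after the job.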
Closed 6 years ago.
Is it possible to run Azure SQL DML statements (UPDATE or DELETE) from Azure Machine Learning Studio? Please help!
Currently no. There is no ODBC driver in the container.
If you provide some more details about what you want to delete, maybe I can offer a workaround.
But to update a DB... you could use Execute Python Script to send an event to an Event Hub (in a Service Bus namespace), then connect to that Event Hub via Stream Analytics. There you can set the Azure SQL DB as an output to update your row.
Let me know if you need more details about any step.
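A hedged sketch of the Execute Python step: build a Shared Access Signature and POST the event (the row to update) to the Event Hubs REST endpoint, from where Stream Analytics picks it up. The namespace, hub, and key names are placeholders:

```python
import base64
import hashlib
import hmac
import json
import time
from urllib.parse import quote_plus

def sas_token(uri, key_name, key, expiry=None):
    """Shared Access Signature accepted by the Event Hubs REST API."""
    expiry = int(time.time()) + 3600 if expiry is None else int(expiry)
    to_sign = f"{quote_plus(uri)}\n{expiry}".encode()
    sig = base64.b64encode(hmac.new(key.encode(), to_sign, hashlib.sha256).digest())
    return (
        f"SharedAccessSignature sr={quote_plus(uri)}"
        f"&sig={quote_plus(sig.decode())}&se={expiry}&skn={key_name}"
    )

def send_update_event(namespace, hub, key_name, key, row):
    """POST one event to the hub; Stream Analytics then writes it to
    the Azure SQL output you configured."""
    import urllib.request  # stdlib; kept here so the token helper stands alone
    uri = f"https://{namespace}.servicebus.windows.net/{hub}"
    req = urllib.request.Request(
        f"{uri}/messages",
        data=json.dumps(row).encode(),
        headers={
            "Authorization": sas_token(uri, key_name, key),
            "Content-Type": "application/json",
        },
        method="POST",
    )
    urllib.request.urlopen(req)
```

For example, `send_update_event("myns", "myhub", "send", "<key>", {"id": 7, "status": "done"})` would emit one event; the Stream Analytics job's query and Azure SQL output decide which row gets updated.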