Can Aurora Serverless v2 be used with AppSync RDS Resolvers?

We're upgrading our Serverless v1 MySQL 5.7 database to Serverless v2 MySQL 8.0 (Aurora 3) as documented here and here. Our current AppSync GraphQL API uses the RDS resolvers to access the database and call stored procedures through the Data API, and it works wonderfully. In our test upgrade we've found that the Aurora 3 Serverless v2 version of the database (although reachable via a standard connection from MySQL Workbench) does not support the Data API.
So any AppSync resolvers that follow the recommended RDS AppSync pattern in the Aurora Resolver tutorial or the RDS template mapping reference won't work and give the following error in CloudWatch:
"error": {
"message": "RDSHttp:{\"message\":\"httpendpoint not enabled."}",
"type": "400 Bad Request"
},
Using aws rds modify-db-cluster --db-cluster-identifier <clusterid> --enable-http-endpoint has no effect.
The release announcement for Aurora Serverless v2 gives RDS Proxy as one of the benefits, but I can't see a way to integrate that into AppSync without writing a lot of additional Lambda code. Can anyone tell us how to use v2 with AppSync RDS resolvers?

Please review: When will Aurora Serverless V2 have a Data API?
Basically, no: there are no set plans for the Data API in Serverless v2, only a lot of feature requests.
I can only recommend that you help flood AWS with feature requests for the Data API in Aurora Serverless v2!
Use your AWS Support channel, your AWS account rep., and the AWS re:Post forums!
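In the meantime, the usual workaround is to replace the RDS (Data API) data source with a Lambda data source that opens a normal MySQL connection to the v2 cluster. A minimal sketch, assuming the mysql2 package, a function deployed in the cluster's VPC, and a hypothetical callProcedure field:

// A sketch of an AppSync Lambda data source (TypeScript) that replaces the
// RDS resolver: it opens a normal MySQL connection to the Serverless v2 cluster.
import mysql from 'mysql2/promise';

// Reuse the pool across warm invocations.
let pool: mysql.Pool | undefined;

export const handler = async (event: { field: string; arguments: { id: string } }) => {
  pool ??= mysql.createPool({
    host: process.env.DB_HOST,          // cluster writer endpoint
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,  // better: fetch from Secrets Manager
    database: process.env.DB_NAME,
    connectionLimit: 5,
  });
  if (event.field === 'callProcedure') {
    // The stored-procedure call the RDS resolver used to make via the Data API.
    const [rows] = await pool.query('CALL my_procedure(?)', [event.arguments.id]);
    return rows;
  }
  throw new Error(`Unknown field: ${event.field}`);
};

The AppSync side then uses a Lambda resolver whose request mapping template simply forwards the field name and arguments.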

Related

What is the recommended way to use MongoDB with a NodeJS application deployed on Elastic Beanstalk?

I am currently building a web app which needs a database to store user info. The startup I'm working for wants to deploy it on Elastic Beanstalk. I am just getting started with all the cloud stuff and am completely at a loss.
Should I create a MongoDB Atlas cluster? Will it be able to connect to my app hosted on EB? Will I need to upgrade my plan on AWS to be able to connect to a database? Can it be integrated with DynamoDB? If yes, is DynamoDB significantly costlier?
I don't have answers to any of the above questions and am just honestly looking for a roadmap on what to do. I went through numerous articles and videos but still can't arrive at a solution. Any help will be much appreciated.
Should I create a MongoDB Atlas cluster?
That is one possible solution. You could also look at Amazon's DocumentDB service, which is MongoDB compatible.
Will it be able to connect to my app hosted on EB?
There is nothing preventing you from connecting to a MongoDB Atlas cluster from EB.
Will I need to upgrade my plan on AWS to be able to connect to a database?
No.
Can it be integrated with DynamoDB?
DynamoDB is a completely different database system that shares almost nothing with MongoDB other than the fact that neither of them uses SQL. If your application already uses MongoDB, then converting it to DynamoDB could be a large lift.
If yes, is DynamoDB significantly costlier?
In general, DynamoDB would be significantly cheaper than MongoDB, because DynamoDB is a "serverless" offering that charges based on your usage patterns, while MongoDB would include charges for a server running 24/7.

Google Cloud Platform to AWS migration [PostgreSQL 9.6]

I'm working on a Django (2.1) project that is hosted on Google Cloud Platform with a roughly 7 GB PostgreSQL (9.6) database.
The documentation doesn't cover this specific version of PostgreSQL, so I'm stuck on the DMS endpoint configuration needed to connect to the old database and perform the instance replication with AWS DMS (Database Migration Service).
I've followed this tutorial, but there are no details about the endpoint configuration. There's nothing in the documentation either (I've spent a lot of time searching it); it only covers a few other specific databases, like Oracle and MySQL.
I need to know how to configure the Source and Target endpoints of the instance on AWS DMS, so I can connect to my database on GCP and start the replication.
I've found my answer by trial and error.
Actually the configuration is pretty straightforward, once I found out that I needed to create the RDS instance first:
RDS - First you need to create the DB instance that will host your DB. After creating it you can see the endpoint and port of your database, e.g. endpoint your-database.xxxxxxxxxxxx.sa-east-1.rds.amazonaws.com, port 5432;
DMS - On the Database Migration Service panel, go to Replication Instance and create a new one. Set the VPC to the one that you've created, or the default one if it works for you;
Source Endpoint - Configure it with the Google Cloud Platform IP set in your Django project's settings.py. The source endpoint reaches your DB on GCP through that IP (see the CLI sketch after these steps);
Target Endpoint - Set this one to the address and port from step 1;
Test the connection.
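For reference, the same two endpoints can also be created from the CLI; a sketch with placeholder identifiers, hosts, and credentials:

# Source: the PostgreSQL 9.6 database on GCP, reached by its public IP
aws dms create-endpoint \
  --endpoint-identifier gcp-source \
  --endpoint-type source \
  --engine-name postgres \
  --server-name <gcp-instance-ip> \
  --port 5432 \
  --username postgres \
  --password <password> \
  --database-name mydb

# Target: the RDS instance created in the first step
aws dms create-endpoint \
  --endpoint-identifier rds-target \
  --endpoint-type target \
  --engine-name postgres \
  --server-name your-database.xxxxxxxxxxxx.sa-east-1.rds.amazonaws.com \
  --port 5432 \
  --username postgres \
  --password <password> \
  --database-name mydb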
After many trials I've completed my database migration successfully.
Hope this helps someone who's going through the same problems.

spring cloud stream kinesis configuration

I am trying to integrate Spring Cloud Stream Kinesis into my app, but I can't find all the configuration options in the manual. I have seen this link:
https://github.com/spring-cloud/spring-cloud-stream-binder-aws-kinesis/blob/master/spring-cloud-stream-binder-kinesis-docs/src/main/asciidoc/overview.adoc
There are a few properties mentioned, like:
spring.cloud.stream.instanceCount=
I would like to know how I can set some of the properties which I can't see in the documentation:
hostname
port number
access key
secret key
username
I am looking for something like:
spring.cloud.stream.binder.host=
spring.cloud.stream.binder.port=
spring.cloud.stream.binder.access_key=
There is no host or port for AWS services; you only make a connection to AWS via auto-configuration. The Spring Cloud Kinesis Binder is fully based on the auto-configuration provided by the Spring Cloud AWS project, so you need to follow its documentation on how to configure the accessKey and secretKey: https://cloud.spring.io/spring-cloud-static/spring-cloud-aws/2.1.2.RELEASE/single/spring-cloud-aws.html#_spring_boot_auto_configuration:
cloud.aws.credentials.accessKey
cloud.aws.credentials.secretKey
You may also consider using cloud.aws.region.static if you don't run your application in an EC2 environment.
There is no more magic than the standard AWS connection settings and the auto-configuration provided by Spring Cloud AWS.
Or you can rely on the standard AWS credentials file instead.
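A minimal application.properties sketch along those lines (the values are placeholders):

# Static credentials picked up by the Spring Cloud AWS auto-configuration
cloud.aws.credentials.accessKey=AKIAXXXXXXXXXXXXXXXX
cloud.aws.credentials.secretKey=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Needed when the application does not run on EC2
cloud.aws.region.static=us-east-1
# Skip CloudFormation stack auto-detection outside AWS
cloud.aws.stack.auto=false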

Realtime database on AWS

Hello, I'm using the Google Firebase Realtime Database.
It has been very good: in Node.js, React, React Native, etc.,
it was nice to be able to handle changes in database values as events.
But I wish to develop with Amazon AWS, because I want to know why so many people love AWS and, decisively, because our company has been given two years of support to use AWS.
I want to implement a realtime database on AWS, but I couldn't find information on a realtime database in the AWS Console.
Question
For a realtime database with AWS, I think I have to use several features (maybe Lambda and DynamoDB). Is that right?
In React, Node.js, etc., can I handle changes to a database's values
(like Google Firebase Cloud Functions and the Realtime Database)?
Let me answer your questions inline.
For a realtime database with AWS, I think I have to use several features (maybe Lambda and DynamoDB). Is that right?
You can use the newly introduced AWS AppSync with different storage options (e.g. DynamoDB or RDS Aurora), which creates a GraphQL schema and query layer on top of AWS databases for realtime communication with clients. However, it's still in preview, so you need to request access from AWS by filling in the form.
The other approach is to use AWS API Gateway, Lambda, and DynamoDB or Aurora with AWS IoT WebSockets.
In React, Node.js, etc., can I handle changes to a database's values? (like Google Firebase Cloud Functions and the Realtime Database)
Yes, both DynamoDB and Aurora provide triggers on change sets that can invoke Lambda code.
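For example, with DynamoDB the change sets arrive through DynamoDB Streams; a sketch of the receiving Lambda in TypeScript, assuming a stream configured with the NEW_AND_OLD_IMAGES view type:

import { DynamoDBStreamEvent } from 'aws-lambda';

// Invoked by DynamoDB Streams with a batch of item-level changes.
export const handler = async (event: DynamoDBStreamEvent) => {
  for (const record of event.Records) {
    console.log(record.eventName); // INSERT, MODIFY or REMOVE
    // State of the item after the change (undefined on REMOVE)
    console.log(JSON.stringify(record.dynamodb?.NewImage));
    // From here you could push the change out to clients, e.g. through
    // an AppSync mutation or an IoT WebSocket topic.
  }
};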
Here is a good comparison of the AWS realtime database options and the Google Firebase Realtime Database that might be helpful to you:
https://db-engines.com/en/system/Amazon+DynamoDB%3BFirebase+Realtime+Database%3BRealm

Does Azure support payload request/response mappings into documentDB like AWS velocity templates?

Context: We have a serverless architecture on AWS that we're trying to replicate in Azure. We originally used Lambda to read/write to DynamoDB but switched to Velocity templates because they're faster.
Goal: Read/write to Azure's DocumentDB with a serverless architecture.
Potential Solutions:
Read/Write with Azure Functions - I've already done this, but I suspect it will be too slow once we integrate with the front end
Map payload requests/responses with the Azure equivalent of AWS's Velocity Templates
Question: Does Azure support payload request/response mappings into documentDB like AWS velocity templates?
You can front DocumentDB (now Cosmos DB) APIs with Azure API Management (the API gateway PaaS service available in Azure) and use the endpoints from APIM in Azure Functions. You can use policies and the Liquid templating language to further customize/transform the message or read part of it.
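As a sketch of the idea, an APIM inbound policy can reshape the request body with a Liquid template before it reaches the backend, similar to a Velocity mapping template (the field names here are hypothetical):

<inbound>
    <base />
    <!-- Transform the client payload into the document shape the backend expects -->
    <set-body template="liquid">
    {
        "id": "{{body.userId}}",
        "name": "{{body.displayName}}"
    }
    </set-body>
</inbound>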
