Realtime database on AWS - node.js

Hello, I'm using the Google Firebase Realtime Database.
It has been very good: in Node.js, React, React Native, etc.,
it is convenient to handle changes in database values as events.
But I wish to develop with Amazon AWS,
because I want to know why so many people love AWS and, decisively, our company has been granted AWS support for two years.
I want to implement a realtime database on AWS, but I couldn't find any information on a realtime database in the AWS Console.
Question
For a realtime database on AWS, I think I have to use several services (maybe Lambda and DynamoDB). Is that right?
From React, Node.js, etc., can I handle a change in a database value?
(like Google Firebase Cloud Functions and the Realtime Database)

Let me answer your questions inline.
For a realtime database on AWS, I think I have to use several services (maybe Lambda and DynamoDB). Is that right?
You can use the newly introduced AWS AppSync with different storage options (e.g. DynamoDB or RDS Aurora). It creates a GraphQL schema and query layer on top of AWS databases for realtime communication with clients. However, it is still in preview, so you need to request access from AWS by filling in the form.
The other approach is to use AWS API Gateway, Lambda, and DynamoDB or Aurora together with AWS IoT WebSockets.
From React, Node.js, etc., can I handle a change in a database value? (like Google Firebase Cloud Functions and the Realtime Database)
Yes, both DynamoDB and Aurora provide triggers on change sets that can invoke Lambda code.

Here is a good comparison of an AWS realtime database and the Google Firebase Realtime Database that might be helpful to you:
https://db-engines.com/en/system/Amazon+DynamoDB%3BFirebase+Realtime+Database%3BRealm

Related

How to use MQL with node.js?

I have created a Monitoring Metrics Dashboard in my Google Cloud Console. The dashboard is working as expected, but since my app is highly dependent on those metrics, I was thinking about creating a schedule to read the metrics data and update the server accordingly.
After investigating the dashboards, I noticed that there is an MQL query behind them. Is there any way to execute this query in my Node.js function so I can fetch the data and update the server?
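One hedged option for this: the Cloud Monitoring API exposes an MQL endpoint (projects.timeSeries.query), and the @google-cloud/monitoring package has historically exposed it through a QueryServiceClient. Treat the client class, method, and query string below as assumptions to verify against the current client documentation, since the MQL surface has changed over time; the project ID and query are placeholders.

```javascript
// Pure helper that builds the request body for the MQL query endpoint.
// The "name" field is the project resource, and "query" is the MQL text
// copied from the dashboard.
function buildMqlRequest(projectId, mql) {
  return { name: `projects/${projectId}`, query: mql };
}

// Assumed usage (requires `npm install @google-cloud/monitoring` and
// application credentials; verify QueryServiceClient in your client version):
// const monitoring = require('@google-cloud/monitoring');
// const client = new monitoring.QueryServiceClient();
// const [series] = await client.queryTimeSeries(
//   buildMqlRequest('my-project', 'fetch gce_instance | within 5m'));

module.exports = { buildMqlRequest };
```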
You can try the MetaApi cloud service (https://metaapi.cloud), which provides REST API and WebSocket API access to both MetaTrader 4 and MetaTrader 5 accounts.
Official REST API documentation: https://metaapi.cloud/docs/client
SDKs: https://metaapi.cloud/sdks (JavaScript, Python and Java SDKs are provided as of April 2021)
It supports reading account information, positions, orders, trade history, receiving quotes, and accessing market data.
The service also provides a copy trading API (https://metaapi.cloud/docs/copyfactory) and an API to calculate forex trading metrics on a MetaTrader account (https://metaapi.cloud/docs/metastats).
There is a similar case to yours on Stack Overflow (answered by user3666197).
You can also easily connect your Node.js server to MySQL. MySQL is one of the most popular open-source databases in the world, and efficient as well.
Please follow a Node.js MySQL tutorial for more details about the process of connecting a Node.js server to MySQL.

Firebase Realtime Database Triggers on Cloud Run or Kubernetes, Not Cloud Functions

Based on the Firebase documentation, one can create a cloud function that triggers when documents are added/updated/deleted in the Firebase database through:
functions.database.ref('/messages/{pushId}/original').onUpdate();
However, using the Node.js admin SDK, when I call the following:
admin.database().ref("/messages/{pushId}/original").onUpdate();
It returns the error:
TypeError: admin.database(...).ref(...).onUpdate is not a function
What else should we do to get access to Firebase Realtime Database triggers on Cloud Run or Kubernetes?
firebaser here
The onUpdate method only exists in the firebase-functions SDK and in the Cloud Functions equivalent on GCP. It does not exist in the Firebase Admin SDK, which is why you get an error when you try to use it there.
While it may be possible to receive Realtime Database events on Cloud Run (I haven't tried this myself), you will have to follow the process outlined for Eventarc on Cloud Run and then wire up the Realtime Database events as documented there.
Update: as a teammate pointed out, the Realtime Database triggers you can receive through Eventarc today are for administrative events, like creation and deletion of database instances. Triggers for data-level events are in the works, but not available yet (nor are there any timelines for when they will be).
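One workaround for data-level changes on Cloud Run, sketched here as an assumption rather than an official pattern: the Admin SDK's Realtime Database reference supports the same realtime listeners as the client SDKs (`ref.on(...)`), so a long-lived service can subscribe directly instead of waiting for a trigger. Note that Cloud Run can scale a service to zero, so this only works on an always-on service (e.g. with minimum instances set). The path and callback below are placeholders; the listener is written as a function taking a ref so the wiring is visible.

```javascript
// Register a realtime listener on a database reference. "child_changed" is a
// standard Realtime Database event; the listener fires for as long as this
// process stays alive, so the hosting service must not scale to zero.
function watchOriginals(ref, onChange) {
  ref.on('child_changed', (snapshot) => {
    onChange(snapshot.key, snapshot.val());
  });
}

// Assumed usage with firebase-admin (requires initializeApp with a databaseURL):
// const admin = require('firebase-admin');
// admin.initializeApp({ databaseURL: 'https://<project>.firebaseio.com' });
// watchOriginals(admin.database().ref('/messages'),
//                (id, msg) => console.log(id, msg));

module.exports = { watchOriginals };
```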

What is the recommended way to use MongoDB with a NodeJS application deployed on Elastic Beanstalk?

I am currently building a web app which needs a database to store user info. The startup I'm working for wants to deploy it on Elastic Beanstalk. I am just getting started with all the cloud stuff and am completely at a loss.
Should I create a MongoDB Atlas cluster? Will it be able to connect to my app hosted on EB? Will I need to upgrade my plan on AWS to be able to connect to a database? Can it be integrated with DynamoDB? If yes, is DynamoDB significantly costlier?
I don't have answers to any of the above questions and am just honestly looking for a roadmap on what to do. I went through numerous articles and videos but still can't arrive at a solution. Any help will be much appreciated.
Should I create a MongoDB Atlas cluster?
That is one possible solution. You could also look at Amazon DocumentDB, which is MongoDB-compatible.
Will it be able to connect to my app hosted on EB?
There is nothing preventing you from connecting to a MongoDB Atlas cluster from EB.
Will I need to upgrade my plan on AWS to be able to connect to a database?
No.
Can it be integrated with DynamoDB?
DynamoDB is a completely different database system that shares almost nothing with MongoDB, other than the fact that neither of them uses SQL. If your application already uses MongoDB, then converting it to DynamoDB could be a large lift.
If yes, is DynamoDB significantly costlier?
In general, DynamoDB would be significantly cheaper than MongoDB, because DynamoDB is a "serverless" offering that charges based on your usage patterns, while MongoDB would include charges for a server running 24/7.
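On the connection question above, a common pattern is to keep the Atlas connection string out of the code and read it from an Elastic Beanstalk environment property (set under Configuration → Software in the EB console). A minimal sketch, assuming `MONGODB_URI` as the property name (that name is a convention of this example, not of EB):

```javascript
// Read and sanity-check the MongoDB connection string from the environment.
// On Elastic Beanstalk, environment properties are exposed to the Node.js
// process as ordinary environment variables.
function getMongoUri(env = process.env) {
  const uri = env.MONGODB_URI;
  if (!uri) {
    throw new Error('MONGODB_URI environment property is not set');
  }
  if (!/^mongodb(\+srv)?:\/\//.test(uri)) {
    throw new Error('MONGODB_URI does not look like a MongoDB connection string');
  }
  return uri;
}

// Assumed usage with the official driver (requires `npm install mongodb`),
// and an Atlas network access rule that allows your EB instances:
// const { MongoClient } = require('mongodb');
// const client = new MongoClient(getMongoUri());
// await client.connect();

module.exports = { getMongoUri };
```

For Atlas you also need to allow the EB instances' outbound IPs (or a VPC peering connection) in the Atlas network access list, otherwise the connection will time out.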

How to calculate how much data is being transferred over cloud each month in AWS

We are using AWS for our infra requirements, and for billing and costing purposes we need to know the exact amount of data transferred to our EC2 instances for a particular client. Is there any such utility available in AWS, or how should I approach this problem?
Our architecture is simple: we have an API server, which is a Node.js server on one EC2 instance; this talks to the DB server, which is MongoDB on another EC2 instance; apart from this, we also have a web application server that runs an Angular web application, again in Node.js.
Currently we don't use an ELB, and we identify the client by their login information, i.e. the organisation ID in the JWT token.
Given your current architecture, you will need to create some form of Node middleware that extracts the client ID and content-length from the request (and/or response) and writes them to persistent storage. Within the AWS ecosystem, you could write to DynamoDB, or Kinesis, or even SQS. Outside the AWS ecosystem you could write to a relational DB, or perhaps the console log with some form of log agent to move the information to persistent store.
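A rough sketch of that middleware idea, written as a plain Express-compatible function so it carries no AWS dependency. The JWT handling and the `writeRecord` sink are placeholders: here it assumes an upstream auth middleware has already decoded the token into `req.user`, and in practice `writeRecord` would put the record on SQS, Kinesis, or DynamoDB.

```javascript
// Express-style middleware that records per-request transfer metadata.
// writeRecord is injected so the storage backend stays pluggable.
function meterTraffic(writeRecord) {
  return (req, res, next) => {
    const started = Date.now();
    // "finish" fires once the response has been handed to the OS,
    // so the Content-Length header is settled by then.
    res.on('finish', () => {
      writeRecord({
        orgId: req.user && req.user.organisationId, // assumed JWT claim name
        path: req.url,
        requestBytes: Number(req.headers['content-length'] || 0),
        responseBytes: Number(res.getHeader('content-length') || 0),
        ms: Date.now() - started,
      });
    });
    next();
  };
}

module.exports = { meterTraffic };
```

Note that Content-Length only approximates bytes on the wire (it ignores headers, TLS overhead, and chunked responses), which is one more reason the ELB/Flow Logs approaches below may be preferable for billing-grade numbers.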
However, capturing the data here has a few issues:
Other than logging to the console, it adds time to each request.
If logging to the console, there will be a time delay between the actual request and the time that the log is shipped to persistent storage. If the machine crashes in that interval you've lost data.
When using AWS services you must be prepared for rate limiting (this is one area where SQS is better than Kinesis or DynamoDB).
Regardless of the approach you use, you will have to write additional code to process the logs.
A better approach, IMO, would be to add the client ID to the URL and an ELB for front-end load distribution. Then turn on request logging and do after-the-fact analysis of the logs using AWS Athena or some other tool.
If you run these EC2 instances in VPC, you can use VPC Flow Logs to get insight into how much data each of the instances transfers.

Amazon Web Services - Should I use AWS API Gateway or the AWS SDK?

I'm trying to call a Lambda function from Node.js. After research, I know two ways to do it:
Assign the Lambda function to AWS API Gateway and call that API.
Call the Lambda function through the AWS SDK.
What are the pros and cons of API Gateway and the AWS SDK? And when should each of the ways above be used?
It depends. API Gateway is mostly used to give temporary access to Lambda functions in environments that are not secure (i.e. browsers, desktop apps, NOT servers).
If your environment is secure, as in it runs on an EC2 instance with an IAM role, or another server with securely stored credentials, then feel free to use the SDK and call the Lambda function directly.
If you need to expose your Lambda function to the entire internet, or to authorised users on the web, or to any user that has the potential to grab the access key and secret during transit, then you will want to stick API Gateway in front.
With API Gateway you can secure your Lambda functions with API keys, or through other authorisers such as Amazon Cognito so that users need to sign in before they can use the API endpoint. This way they only gain temporary credentials, rather than permanent ones that shouldn't be available to anyone.
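For the trusted-server case, the SDK call is short. A sketch assuming the AWS SDK for JavaScript v2 (`Lambda.invoke`); the function name, region, and payload are placeholders, and the parameter-building is pulled into a pure helper so the request shape is visible:

```javascript
// Build the parameters for a synchronous Lambda invocation.
// InvocationType 'Event' would instead fire-and-forget asynchronously.
function buildInvokeParams(functionName, payload) {
  return {
    FunctionName: functionName,
    InvocationType: 'RequestResponse',
    Payload: JSON.stringify(payload),
  };
}

// Assumed usage with the AWS SDK v2 (requires `npm install aws-sdk` and an
// IAM role or credentials allowing lambda:InvokeFunction):
// const AWS = require('aws-sdk');
// const lambda = new AWS.Lambda({ region: 'us-east-1' });
// const res = await lambda
//   .invoke(buildInvokeParams('my-function', { id: 1 }))
//   .promise();
// const body = JSON.parse(res.Payload);

module.exports = { buildInvokeParams };
```

The IAM policy attached to the caller is what replaces the API key/authorizer layer here, which is exactly the trade-off the answers above are debating.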
I disagree with _DF about the security concern of invoking Lambda directly from the client. For over 4 years I have been implementing Client + AWS SDK in my serverless approach, hitting all the microservices we have, such as Lambda, DynamoDB, S3, SQS, etc., directly.
To work with this approach, we have to have a strong understanding of IAM role policies (including their statements concept), authentication tokens, AWS credentials, and the token-credential exchange.
For me, using the SDK is better for implementing serverless than API Gateway. Why do I prefer implementing the SDK instead of an API in my serverless infra?
API Gateway is costly.
One network hop less.
In fact, an SDK commonly contains an API to communicate with other applications; it is class-based, with simple calls such as dynamodb.put(params).promise(), lambda.invoke(params).promise(), s3.putObject(params).promise(), etc. Compare a plain API call like fetch(URL), which also returns a promise; the terms are not really different.
An API is more complex, and some cases can't or shouldn't be handled with one.
Is the SDK not scalable? No, I don't think so. Because it's class-based, it's very scalable.
It slims down the infra and the code: e.g. to work with S3 there is no need to deploy API + Lambda.
It speeds up the process: e.g. storing data in DynamoDB needs no business logic through API + Lambda.
Easy maintenance: we only maintain our client code.
Role policies are more scalable; etc.
