Introducing a Node.js layer between UI and AWS services

I am designing a solution on AWS that utilizes Cognito for user management.
I am using this Quick Start as a starting point:
SAAS QuickStart
With one significant change: I plan to make this serverless. So no ECS containers to host the services. I will host my UI on S3.
My one question lies with the 'auth-manager' used in the existing solution, and found on github:
Auth-Manager using Node.js
Basically, this layer is used by the UI to facilitate interaction with Cognito. However, I don't see an advantage to doing it this way vs. simply moving these Cognito calls into the front-end web application. I know such a Node layer may be advantageous as a caching layer, but I think I could just use ElastiCache (Redis) as a service if I needed that.
Am I missing something? If I simply moved this Node auth-manager piece into my S3-hosted static JavaScript application, would I be losing something?
Thanks in advance.

It looks like it's pulling some info from
https://github.com/aws-quickstart/saas-identity-cognito/blob/master/app/source/shared-modules/config-helper/config.js
//Configure Environment
const configModule = require('../shared-modules/config-helper/config.js');
var configuration = configModule.configure(process.env.NODE_ENV);
which exposes a lot of backend AWS account info that you wouldn't want in a front-end app.
The best option seems to be running this app on a small EC2 instance instead of Fargate, because of the large cost difference, and having your front end send it requests for authorization.
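To make that concern concrete, a config helper like the one linked typically builds its configuration from server-side environment variables. Below is a hypothetical, simplified sketch (the key names AWS_REGION and USER_TABLE are assumptions, not the Quick Start's actual keys); everything such a module returns would be readable by every visitor if it were bundled into an S3-hosted front end.

```javascript
// Hypothetical sketch of an environment-driven config helper, loosely
// modeled on the Quick Start's config.js. Key names are assumptions.
// Because values come from server-side environment variables, this module
// only makes sense behind an API, not in browser JavaScript.
function configure(environment) {
  const base = { region: process.env.AWS_REGION || 'us-east-1' };
  if (environment === 'production') {
    return { ...base, userTable: process.env.USER_TABLE, logLevel: 'warn' };
  }
  // Development defaults so the app runs without a fully populated environment.
  return { ...base, userTable: process.env.USER_TABLE || 'dev-users', logLevel: 'debug' };
}
```

Keeping this behind a small server (or Lambda) means the browser only ever sees the API's responses, never the account-level configuration itself.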

Related

Lambda functions cause race condition for shared access token | Serverless Framework | Nodejs

I have a lambda nodejs function that basically forwards requests to a third-party resource server, this third-party server requires an access token that is generated on my backend and appended to the request (Axios). Only the latest issued token works and the previously generated token becomes invalid once a new one is issued.
Problem: if two or more requests hit the backend at the same time and invoke this function, one of them races the other and ends up using an invalidated access token.
I'm using the Serverless Framework (AWS) with Node.js.
Correct me if I'm wrong, but there is no way to share a variable as in an Express app, since each function invocation is completely separate.
Should I store the token in a database? (A solution I don't personally like.)
I also assume in-memory caching is meaningless for serverless functions.
Any suggestions/solutions are appreciated.
Note: Multiple other functions use the flow for the same resource server.
It looks like you want all of your requests to be processed sequentially. In that case, you can set the function's reserved concurrency to 1, so you won't have two Lambda instances running at the same time.
That being said, it won't scale anymore, and it somewhat defeats the benefits of a serverless infrastructure.
Lambda is stateless compute, so if you need shared state, you'll need to build it. If your throughput is low, DynamoDB is cheap enough and comes with consistency guarantees that may work for you.
If not, Redis would be a good option, especially a managed solution like Redis Labs, which offers an HTTP API nicely suited to Lambda.
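One nuance on "caching has no meaning": Lambda does reuse module-level state across invocations of the same warm container, so a module-scope "single-flight" cache at least collapses the within-container race. A minimal sketch (fetchNewToken is a stand-in for your real token request):

```javascript
// "Single-flight" token cache in module scope. Concurrent invocations that
// land on the same warm Lambda container share this promise, so only one
// token request is ever in flight per container. It does NOT coordinate
// across containers -- for that you still need external storage such as a
// DynamoDB conditional write or a Redis lock.
let tokenPromise = null;

async function getToken(fetchNewToken) {
  if (!tokenPromise) {
    tokenPromise = fetchNewToken().catch((err) => {
      tokenPromise = null; // clear the cache so a later call can retry
      throw err;
    });
  }
  return tokenPromise;
}
```

Across concurrently running containers the race remains, which is why the DynamoDB or Redis options above are the complete answer; this only removes the easy case cheaply.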

Can I run a front-end and back-end on Netlify?

I want to practice creating my own RESTful API service to go along with a client-side application that I've created. My plan is to use Node and Express to create a server. On my local machine, I know how to set up a local server, but I would like to be able to host my application (client and server) online as part of my portfolio.
The data that my client application would send to the server would not be significant in size, so there wouldn't be a need for a database. It would be sufficient to just have my server save received data dynamically in an array, and I wouldn't care about having that data persist if the user exits the webpage.
Is it possible to use a service like Netlify in order to host both a client and server for my purposes? I'm picturing something similar to how I can start up a local dev server on my computer so that the front-end can interface with it. Except now I want everything hosted online for others to view. I plan to create the Express server in the same repo as the front-end code.
No, Netlify doesn't allow you to run a server or backend. However, it does let you run serverless functions in the cloud. These can run for up to 10 seconds at a time. Netlify also has a beta feature called "background functions" that can run for up to 15 minutes. Honestly, though, for a RESTful API there are surely better solutions out there.
If you are still looking for a "Netlify for backend", you can consider Qovery; they have explained why it is a good fit for their users.

Conceptual question: How do React Native, Apollo, Node, and GraphQL all work together?

I'm new to GraphQL, Apollo, AWS S3, and Redux. I've read the tutorials for each and I'm familiar with React Native, Node, Heroku, and Mongo. I'm having trouble understanding the following:
how a "GraphQL Server" is hosted for a mobile device using React Native?
can I create the GraphQL server with Node and host it on AWS S3?
how to grab that data by using Apollo/GraphQL in my React Native code and store that data locally using Apollo/Redux?
do I have to use Graphcool as the endpoint from the start instead? All I'm trying to do is pulling data from my database when the app loads (not looking to stream it, so that I am able to use the data offline).
Where should I look to get a better understanding?
I have a couple of comments for you in your exploration of new territory.
GraphQL is simply the query language that talks to your database. So you are free to run any type of API (on a server, serverless, etc.) that uses GraphQL to take in a query/mutation and interact with your database.
Graphcool is a "production-ready backend", basically back-end as a service. So you wouldn't have to worry about running a server (as I believe they run most everything on serverless infrastructure) or managing where your DB is housed.
You can run an HTTP server on AWS EC2 or serverless using AWS Lambda. (Or the same flavor with Google or Azure). Whatever you decide to use to accept requests, your endpoint will accept graphql query strings and then do stuff with the db. AWS S3 is more of static storage. You can store files there to be retrieved, or scripts that can be pulled, but S3 probably isn't where you would want any server-like code to run.
Apollo would be a tool to use on your frontend for easily interacting with your graphql server. React-Apollo
Apollo/Redux can then help you manage state throughout the app. It sounds like you'll simply load the data into app state on launch and then interact with that state without needing any further external calls.
Hopefully this was helpful.

Connecting to AWS Elasticsearch from non-AWS node.js app

I'm working on puzzling out an infrastructure-ish issue with a project I'm working on. The service that I'm developing is hosted on a transient, containerized platform w/o a stable IP — only a domain name (api.example.com). I'm utilizing Elasticsearch for search, so requests go to something like /my-search-resource and then use ES to find results to return. It's written in node and uses the supported elasticsearch driver to connect to ES.
The issue I'm having is in trying to use an AWS Elasticsearch domain. This project is bootstrapped, so I'm taking advantage of the free-tier from AWS, even though the other services are hosted/deployed on another platform (think: heroku, GCP, etc. — containerized and transient resources).
Since I can't just whitelist a particular IP, I'm not sure what I should do to give my service access to the Elasticsearch domain. Do I need to sign every request sent to the domain? That isn't ideal, since it would require monkey-patching the ES driver library to add that functionality. Ideally, I'd like to just use a username & password to connect to the domain, but I know IAM isn't really oriented toward something like that from an external service. Any ideas? Is this even possible?
In my current project we connect to AWS Elastic by using the normal elasticsearch NPM package, and then use http-aws-es to create a specific AWS connection header when connecting.
So for example we have something like this:
const es = require( 'elasticsearch' );
const httpAwsEs = require( 'http-aws-es' );

const esClient = new es.Client( {
    hosts: 'somehostonaws',
    connectionClass: httpAwsEs,
    awsConfig: {
        region: 'some-aws-region',
        accessKey: 'some-aws-access-key',
        secretKey: 'some-aws-secret-key'
    }
} );
That doesn't require the whole AWS SDK, but it allows you to connect to Elasticsearch clusters that are behind AWS auth. Is that a solution to your issue?
This is not a solution to the problem, but a few thoughts on how to approach it. We're in the same pickle at the moment: we wish to use AWS but we do not want to tie in with AWS SDK. As far as I understand it, AWS offers 3 options:
Open to public (not advisable)
Fixed IP addresses (whitelist)
AWS authentication
Option 1 is not an option.
Option 2 presents us with the problem that we have to teach whatever we use to log there to go through a proxy, so that the requests appear to come from the same IP address. Our setup is on Heroku and we use QuotaGuard for similar problems. However, I checked the modules I was going to use to interact (we're trying to log there, either to Logstash or Elasticsearch directly, using winston transports) and they offer no proxy support. Perhaps this is different in your case.
Option 3 is also not supported in any way by winston transports at this time, which would leave us to use aws-sdk modules and tie in with AWS forever, or write our own.

Scaling nodejs app with pm2

I have an app that receives data from several sources in real time using logins and passwords. After data is received, it's stored in a memory store and replaced when new data is available. I also use sessions backed by MongoDB to authenticate user requests. The problem is that I can't scale this app using pm2, since I can use only one connection to my data source per login/password pair.
Is there a way to use different login/password for each cluster or get cluster ID inside app?
Are memory values/sessions shared between clusters or is it separated? Thank you.
So if I understood this question, you have a Node.js app that connects to a third party using HTTP or another protocol, and since you only have a single credential, you cannot connect to that third party using more than one instance. To answer your question: yes, it is possible to set up your clusters to use a unique user/password combination each; the tricky part is how to assign these credentials to each cluster (assuming you don't want to hard-code them). You'd have to do this assignment when the servers start up, and perhaps use a data store to hold the credentials and introduce some sort of locking mechanism for each credential (so that each credential is unique to a particular instance).
If I were in your shoes, however, I would create a new server whose sole job is to fetch this realtime data and store it somewhere available to the cluster, such as Redis or some persistent store. That server would be standalone, just getting the data. You can also attach a RESTful API to it, so that if your other servers need to communicate with it, they can do so via HTTP or a message queue (again, Redis would work fine there as well).
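On the "get cluster ID inside the app" part of the question: in cluster mode, pm2 sets a NODE_APP_INSTANCE environment variable (0, 1, 2, ...) on each worker, which you could use to deterministically pick a credential pair. The credential values below are placeholders; in practice they'd come from environment variables or a secrets store.

```javascript
// Each pm2 cluster worker reads its own index from NODE_APP_INSTANCE and
// selects the matching login/password pair. Placeholder credentials only.
const credentials = [
  { login: 'user0', password: 'pw0' },
  { login: 'user1', password: 'pw1' },
];

function credentialsFor(instanceId) {
  if (instanceId < 0 || instanceId >= credentials.length) {
    throw new Error(`no credentials configured for instance ${instanceId}`);
  }
  return credentials[instanceId];
}

const instanceId = Number(process.env.NODE_APP_INSTANCE || 0);
```

As for the other question: pm2 cluster workers are separate OS processes, so in-memory values are not shared between them; sessions are only shared because they live in MongoDB. Any state that must be visible to all workers needs an external store the same way.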
'Realtime' is vague; are you using WebSockets? If HTTP requests are made often enough, that could also be considered 'realtime'.
Possibly your problem is like something we encountered scaling SocketStream (WebSocket) apps, where the persistent connection requires the same requests to be routed to the same process. (There are other network topologies/architectures that don't require this, but that's another topic.)
You'll need to use fork mode with 1 process only, plus a solution to make sessions sticky, e.g.:
https://www.npmjs.com/package/sticky-session
I have some example code but need to find it (it's been over a year since I deployed it).
Basically you wind up using pm2 just for its 'always-on' feature; the sticky-session module handles the Node clusterization stuff.
I may post an example later.
