I have found two ways to combine React and Express:
[OPTION 1] ExpressJS serves the React static files
Fundamentally, the idea here is to have the pre-compiled JavaScript files at your disposal before spinning up the server, and then, in Express middleware:
app.use(express.static(path.join(__dirname, '../client/build')));
And then, alongside that, have other endpoints that execute logic on the server side.
[OPTION 2] Host React files separately
Basically, decouple the frontend and backend entirely: one server just for serving the static JS files, and another Express server for running any queries against the database and returning plain JSON responses.
Which one is the recommended way? Is there an advantage of using one approach over the other?
This is an opinionated question, and so I can only give an opinionated response.
I personally recommend OPTION 1, because you'll run into Cross-Origin Resource Sharing (CORS) issues in a number of places when you decouple your front and back ends. This isn't to say that it can't be done, but it will be an annoying thing you'll have to deal with.
With option 2 you'll also most likely have to send any requests to your back end using absolute URLs, which can be tough to maintain as an application scales.
With option 1 you'll have more flexibility, less to maintain, and fewer annoying workarounds to implement.
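To make option 1 concrete, here's a minimal sketch of a single Express server that serves the React build alongside an API endpoint. The /api/hello route, the catch-all, and the port are illustrative assumptions, not part of the original question or answer.

// Minimal sketch of option 1: one Express server for both the API and
// the built React app. The example endpoint and paths are assumptions.
const express = require('express');
const path = require('path');

const app = express();

// API routes live under /api so they never collide with client-side routes
app.get('/api/hello', (req, res) => {
  res.json({ message: 'hello from the server' });
});

// serve the pre-compiled React build
app.use(express.static(path.join(__dirname, '../client/build')));

// send index.html for everything else so client-side routing keeps working
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, '../client/build', 'index.html'));
});

app.listen(3000, () => console.log('listening on 3000'));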
Related
I am trying to build a HAPI REST (API) server. I think I'd like to make a separate NodeJS server for the front end and separate the two entirely. It would be nice if they didn't know about each other at all, to simplify development (for example, both having direct access to the database - but I assume that would allow for collisions and other crazy things).
The idea is so I can scale one and not the other, or I can secure them differently (user/pass for front end, api key for back end), or replace one and not the other.
I assume I should have two different servers - how do I do this? I have seen people just create "two instances" listening on different ports, but that is still the same code, so they can't actually run on separate server instances, can they?
Perhaps I am thinking about this wrong. I assume this MUST be common, what is the regular approach?
I think you're on the right track. Have you read this part of the documentation?
There's a GitHub repo that suggests a starting point.
One strategy might be to embed a Jetty server at a custom context path in your Java app and respond to HAPI FHIR queries.
You should then be able to proxy all your requests at the server level for secure things like user auth, or open up certain resources to be queried openly from NodeJS or any REST API.
Finding out how to embed a Jetty server should be simple. Proxying requests and auth, maybe not so much.
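If you do go with two Node servers, one common way to handle the proxying is to forward API traffic from the front-end server to the back-end one. This is only a sketch under assumptions: the http-proxy-middleware package, the ports, and the /api prefix are mine, not from the original answer.

// Hedged sketch: the front-end server proxies /api traffic to a separate
// back-end API server. Package, ports, and paths are assumptions.
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// anything under /api is forwarded to the back-end API server
app.use('/api', createProxyMiddleware({
  target: 'http://localhost:4000', // hypothetical back-end address
  changeOrigin: true,
}));

// everything else is the front end's own static content
app.use(express.static('public'));

app.listen(3000, () => console.log('front-end server on 3000'));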
I'm building a Next.js project, and while I would usually just use the "Custom Express Server" method to implement my GraphQL API (using apollo-server-express), I thought it might be a good idea to decouple the Next.js project from the GraphQL API so that each server is hosted on a different machine.
But usually I would implement any session-related logic in the GraphQL API, using something like graphql-passport; I figured that's good practice because if I ever choose to add another frontend (maybe a mobile app or something) it can share the same session logic. But given that I'm server-side rendering content with Next.js, how do I forward the user's session info to the GraphQL server? Because the Next.js server shouldn't have to redo authentication, right?
Let me know if there are any flaws in the architecture too, I'm kind of new to this.
Using the Next server to run the GraphQL service is certainly not a good idea, so yes, do separate the two.
Letting the Next server SSR-render pages with user-specific content using the user's session is probably not a good idea either, unless you have some specific use case that requires the served HTML pages to already contain the user-specific data. The reasons for this are:
SSR will require lots of server-side computation, since every page will always have to be re-rendered.
NextJS is moving away (since v9.3) from the getInitialProps() way of doing things towards using getStaticProps() to generate a page that is common for all users and which can load its session-dependent data straight from the GraphQL API once it is displayed on the client device.
This approach will generally have higher performance and scale much better.
Should you really want to go the "SSR with user session data" route you start in the getServerSideProps(context) method, where context.req is the actual request which will have all your session data in it (cookies or headers).
You can then extract this session data from the req and pass it on to the GraphQL requests that require authentication.
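As a rough illustration of that, here is a hedged sketch of a getServerSideProps that forwards the incoming session cookie to the GraphQL server. The endpoint URL, the query, and the availability of a server-side fetch (e.g. node-fetch) are assumptions, not part of the original answer.

// Hedged sketch: forward the user's session cookie to the GraphQL API
// during SSR. The URL and query are placeholders.
export async function getServerSideProps(context) {
  const cookie = context.req.headers.cookie || '';

  const response = await fetch('https://graphql.example.com/graphql', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      cookie, // pass the session through so the GraphQL server handles auth itself
    },
    body: JSON.stringify({ query: '{ me { id name } }' }),
  });

  const { data } = await response.json();
  return { props: { user: data ? data.me : null } };
}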
If I want to utilise useful middleware like express.cookieParser() and suchlike, am I expected to be economical with my instances of express when splitting my NodeJS application up into different files?
For example, if I use var express = require('express') in one file, and again in many others, am I wasting resources fetching these and re-instantiating them? Or does require cache modules, or (even better) create a global instance of them?
I know the performance impact of requiring express on multiple files would probably be negligible - this is more of a question to help me understand how modules load.
When you require a module twice, the second require returns the cached exports object created by the first call.
Source: http://docs.nodejitsu.com/articles/getting-started/what-is-require
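A quick way to see this for yourself (the file names here are made up for illustration):

// a.js
var express = require('express');
module.exports = express;

// b.js
var express = require('express');
module.exports = express;

// main.js - both files received the exact same cached exports object
var fromA = require('./a');
var fromB = require('./b');
console.log(fromA === fromB); // true
console.log(require('express') === require('express')); // also true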
I have a question - I found a few answers floating around here.
Basically, I want to have a node.js server application serving up JSON documents for requests.
All templating will happen client-side.
Some of the requests will require authentication but most will not.
I'd like to eliminate the need for framework (express) session management wherever possible (as recommended by LinkedIn) to improve performance, so I thought of a couple of solutions for authentication.
1) Write custom authentication that persists the session as a document and checks it whenever a request that needs authentication is made to the node.js server. Keep all user info in HTML5 storage or cookies for the DOM to use for templating.
+ Works
- Have to write custom security to avoid session management in express/node.
2) Have 2 node.js instances. One serves everything in the public domain. One is for secure requests only. Still keep all user info in the session.
+ Simple as we can push session management onto the framework for requests that require authentication
- Has 2 node instances. May have some bad DRY.
Is the second option reasonable? Or is there another option I'm missing. Option 1 is my fallback as I'd rather not do all of the custom coding when it's already built into express.
EDIT:
To leave one possibility here: I think I can use multiple callbacks on a resource request to allow an interceptor-type pattern for validating the user in the session. This answers the first question.
From the express documentation:
Several callbacks may also be passed, useful for re-using middleware that load resources, perform validations, etc.
app.get('/user/:id', user.load, function (req, res) {
  // ...
});
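Building on that, here's a hedged sketch of the interceptor idea applied to option 1, where the session is persisted as a document. requireAuth, findSession, and the cookie name are hypothetical names of mine, not from the express docs or the original post.

// Hedged sketch: a middleware "interceptor" that loads the persisted session
// document before the handler runs. findSession and the cookie name are hypothetical.
function requireAuth(req, res, next) {
  var token = req.cookies && req.cookies.sessionToken; // set at login time
  if (!token) return res.status(401).json({ error: 'not authenticated' });

  findSession(token, function (err, session) { // look the session document up in your DB
    if (err || !session) return res.status(401).json({ error: 'not authenticated' });
    req.session = session; // attach it for the handler
    next();
  });
}

// public resource - no auth callback, no session work at all
app.get('/articles', function (req, res) {
  res.json({ articles: [] });
});

// secured resource - the interceptor runs before the handler
app.get('/account', requireAuth, function (req, res) {
  res.json({ userId: req.session.userId });
});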
This is a question involving single page web apps and my question is in bold.
WARNING:
I'm hardly an expert on this subject and please correct me if I'm wrong in part of my understanding of how I think HTTP and WebSockets work.
My understanding of how HTTP RESTful APIs work is that they are stateless. We use tools like connect.session() to inject some form of state into our apps at a higher level. Since every single request is new, we need a way to re-identify ourselves to the server, so we create a unique token that gets sent back and forth.
Connect's session middleware solves this for us in a pretty cool way. Drop it into your middleware stack and you have awesome-sauce sessions attached to each request for your entire application. Sprinkle in some handshaking and you can pass that session info to socket.io fairly easily, even more awesome. Use a RedisStore to hold the info to decouple it from your connect/express app and it's even more awesome. We're talking double rainbow awesome here.
So right now you could, in theory, have a single-page application that doesn't depend on connect/sessions, because you don't need more than one session (the initial handshake) when it comes to dealing with WebSockets. socket.io already gives you easy access to this sessionId, problem solved.
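For reference, here's one hedged sketch of that handshake step. None of this is from the original post: it assumes a connect/express-session style store, the cookie and cookie-parser packages, the default connect.sid cookie name, and a sessionStore and SESSION_SECRET already in scope.

// Hedged sketch: look up the connect session during the socket.io handshake.
// sessionStore, SESSION_SECRET, and the cookie name are assumptions.
var cookie = require('cookie');
var cookieParser = require('cookie-parser');

io.use(function (socket, next) {
  var raw = socket.handshake.headers.cookie || '';
  var cookies = cookie.parse(raw);
  var sid = cookieParser.signedCookie(cookies['connect.sid'], SESSION_SECRET);
  if (!sid) return next(new Error('no session cookie'));

  // fetch the session from the shared store (e.g. a RedisStore)
  sessionStore.get(sid, function (err, session) {
    if (err || !session) return next(new Error('unauthorized'));
    socket.session = session; // every event handler can now read the session
    next();
  });
});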
Instead of this authentication work flow:
Get the email and password from a post request.
Query your DB of choice by email to get their password hash.
Compare the hashes.
Redirect to "OK!" or "NOPE!".
If OK, store the session info and let connect.session() handle the rest for the most part.
It now becomes:
Listen for a login event.
Get the email and password from the event callback.
Query your DB of choice by email and get their password hash.
Compare the hashes.
Emit an "OK!" or "NOPE!" event.
If OK, do some stuff I'm not going to think of right now, but the same effect should be possible - something like the sketch below?
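Here's a rough, hedged sketch of that event-based flow. The event names, the findUserByEmail helper, and the use of bcrypt are my assumptions; socket.io doesn't prescribe any of this.

// Hedged sketch of the event-based login flow. findUserByEmail is a
// hypothetical data-access helper; event names are arbitrary.
const bcrypt = require('bcrypt');

io.on('connection', function (socket) {
  socket.on('login', async function ({ email, password }) {
    const user = await findUserByEmail(email); // query your DB of choice by email
    const ok = user && await bcrypt.compare(password, user.passwordHash);

    if (!ok) {
      return socket.emit('login:nope'); // the "NOPE!" event
    }

    socket.userId = user.id; // remember who this socket belongs to
    socket.emit('login:ok', { name: user.name }); // the "OK!" event
  });
});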
What else do we benefit from by using connect? Here's a list of what I commonly use:
logger for dev mode
favicon
bodyparser
static server
passport (an authentication library that depends on connect/express, similar to what everyauth offers)
The code that loads the initial single-page app would handle setting up a static server and favicon. Something like passport might be trickier to implement, but certainly not impossible. Everything else that I listed doesn't matter much; you could easily implement your own debug logger for WebSockets.
Right now, is there really anything stopping us from having a single HTTP-served index.html file that encapsulates a WebSocket connection and doesn't depend on connect at all? Would socket.io really be able to make that type of application architecture work, without setting up your own HTTP RESTful API, if you wanted a single-page app while offering cross-browser support through its auto-magical fallbacks?
The only real downside at this point is caching results on the client, right? Couldn't you incorporate local storage for that? I think creating indexable/crawlable content pages for search engines wouldn't be THAT big of a deal -- you would basically create a tool that generates static HTML files from your persistent database, right?
Check out Derby and SocketStream.
I think what you're asking is whether it is plausible (using socket.io) to create a website that is a single static page with dynamically changing content.
The answer is "yes", it can work. Several node.js web frameworks already do this although I don't know of any that use socket.io.