I have a webpack config for client and server; the server can access the bundled code so that I can call ReactDOMServer.renderToString(<App />) to produce server-side-rendered content (which then gets hydrated client side).
The server is mounted at a sub-path, so to use it locally you'd visit http://localhost:8080/some/path.
Everything works flawlessly except for asset paths! Let's say <App> renders an image such as <img src="/static/logo.svg"/> (that is, App.tsx imports ./logo.svg and uses it in an image tag). Because the server is mounted at a sub-path, the HTML URL for logo.svg should be prefixed with the base path, so the real final src should be /some/path/static/logo.svg.
I can make this work on the client by setting __webpack_public_path__ before other imports, and it works great! But how can I use that on the server? I only know the mount path per request, and __webpack_public_path__ can only be set at import time (as far as I can tell), so it feels like I'm boned. Certainly, if I change __webpack_public_path__ after startup, the new value is not used.
Are there any solutions? I'm happy to include any snippets of code to clarify the context. Currently the mount path reaches the server via a request header, and I thought it'd be easy to "just" use it to prefix assets.
For reference, the server is an Express server and the SVG file is loaded via the standard file-loader. Happy to update my question with additional information or code. I've hammered my head against this for hours, so any help, insights, thoughts, or guesses are much appreciated (I'm still fairly new to webpack et al.).
I like stupid plans. A good stupid plan to execute now is better than searching endlessly for the smart one.
So here's my stupid plan. Be warned, it's a real humdinger:
On the server only, I set __webpack_public_path__ to some arbitrary sentinel value, render the HTML output, and then search and replace the sentinel value, swapping in my actual public path.
I'm not going to claim this is elegant, but what I can say is: It works.
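A minimal sketch of that plan (the sentinel string and helper name below are mine, not from the original setup): bake a sentinel in as __webpack_public_path__ at startup, then swap it for the per-request base path after rendering.

```javascript
// Hypothetical sentinel value — anything unlikely to appear in real markup works.
const PUBLIC_PATH_SENTINEL = '__PUBLIC_PATH__/';

// At server startup, before importing any bundled code:
//   global.__webpack_public_path__ = PUBLIC_PATH_SENTINEL;

// Per request, after ReactDOMServer.renderToString has produced the HTML,
// replace every occurrence of the sentinel with the real base path.
function applyPublicPath(html, basePath) {
  // Normalize the base path to a trailing slash, then split/join to
  // replace all occurrences (String.replace with a string arg only
  // replaces the first one).
  const prefix = basePath.endsWith('/') ? basePath : basePath + '/';
  return html.split(PUBLIC_PATH_SENTINEL).join(prefix);
}
```

So a rendered <img src="__PUBLIC_PATH__/static/logo.svg"> becomes <img src="/some/path/static/logo.svg"> for a request mounted at /some/path.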
Bots are amazing, unless you're Google Analytics
After many months of learning to host my own Discord bot, I finally figured it out! I now have a Node server running on my localhost that sends and receives data from my Discord server; it works great. I can do all kinds of things I want with my Discord bot.
Given that I work with analytics every day, one project I want to figure out is how to send data to Google Analytics (specifically GA4) from this Node server.
NOTE: I have had success in sending data to my Universal Analytics property. However, as awesome as it was to finally see pageviews coming in, it was equally heartbreaking to recall that Google will be getting rid of Universal Analytics in July of this year.
I have tried the following options:
GET/POST requests to the collect endpoint
This option presented itself as impossible from the get-go. In order to send a request to the collect endpoint, a client_id must be sent along with the request itself, and this client_id is something that must be generated using Google's client-id algorithm. So, I can't just make one up.
If you consider this option possible, please let me know why.
Install googleapis npm package
At first, I thought I could just install the googleapis package and be ready to go, but that idea fell on its face immediately too. With this package I can't send data to GA; I can only read with it.
Find and install a GTM npm package
There are GTM npm packages out there, but I quickly found out that they all require a window object, which my Node server doesn't have because it isn't a browser.
How I did this for Universal Analytics
My biggest goal is to do this without using Python, Java, C++, or any other languages, because that route would require me to learn new ones. Surely it's possible with Node.js alone... no?
I eventually stumbled upon the idea of hosting a webpage as a sort of pseudo-proxy that would send data from the page to GA when accessed by something like a page scraper. It was simple: I created an HTML file with Google Tag Manager installed on it, and all I had to do was use the puppeteer npm package.
It isn't perfect, but it works and I can use Google Tag Manager to handle and manipulate input, which is wonderful.
Unfortunately, this same method will not work for GA4, because GA4 automatically excludes all identified bot traffic, and there is no way to turn that setting off. It is a very useful feature, giving GA4 quite a bit more integrity than UA, and I'm not trying to get around that fact, but it is now the bane of my entire goal.
https://support.google.com/analytics/answer/9888366?hl=en
Where to go from here?
I'm nearly at my wits' end figuring this one out. So either an npm package exists out there that I haven't found yet, or this is a futile project.
Does anyone have any experience sending data from Node.js to GA4 (or even GTM)? How did you do it?
...and this client_id is something that must be generated using Google's client id algorithm. So, I can't just make one up...
Why, of course you can. GA4 generates it pretty much the same way UA does. You don't need anything from Google to do it.
Besides, instead of mimicking requests to the collect endpoint, you may just want to go the MP (Measurement Protocol) route right away: https://developers.google.com/analytics/devguides/collection/protocol/ga4 The links #dockeryZ gave work perfectly fine. Maybe try opening them in incognito, or in a different browser? Maybe you have a plugin blocking analytics URLs.
Moreover, you don't really need to reinvent the wheel. Node already has a few packages to send events to GA4; here's one that looks good: https://www.npmjs.com/package/ga4-mp?activeTab=readme
Or you can just use gtag directly to send events. I see a lot of people doing it even on the front end: https://www.npmjs.com/package/ga-gtag Gtag has a whole API not described in there. Here's more on gtag: https://developers.google.com/tag-platform/gtagjs/reference Note how the library allows you to set the client id there.
The only caveat is that you'll have to track client ids and session ids manually. Shouldn't be too bad, though. Oh, and you will have to redefine the concept of a pageview, I guess. The obvious one is whenever someone posts in a channel that is different from the previous post in the session. Still, this will have to be defined in the code.
Don't worry about Google's bot-traffic detection. It's really primitive. Just make sure your user agent doesn't scream "bot" in it. Make something better up.
Not long ago we had server-rendered pages; then React came along for client-side rendering and single-page applications. It introduced the virtual DOM and changed the way we write our code.
We install all these React libraries as dependencies before writing our code. We can break the app into many components and have many CSS and SCSS files, including images. But in the end we build the files, make a compact bundle, and serve it from the build folder.
Express get route
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'client/build/index.html'));
});
Here's what I have understood:
The build folder is where webpack combines all the files and creates a minified bundle ready for deployment. It is basically simple HTML and JS files that every browser can understand. Since not every browser understands ES6 and beyond, we have to convert these files into plain JavaScript that every browser can understand.
Also, webpack-dev-server is only for development purposes, and we won't be running it in production.
Is the virtual DOM/real DOM just for development purposes? Or are the React libraries also transpiled while building the minified files? If the latter is the case, does React run in the background in the client's browser? I want to know how React takes care of client-side routing after building the app.
How do you manage GitHub repositories for a Node-React app? Do you keep two different repositories, one for the front end and another for the back end? What's the industry standard?
If you keep two repositories, how do you deploy the front-end code? You can't run webpack-dev-server in production, nor can you point the public static build folder at your back end (Express server), since they are separated into two repos. How does the integration of these two repositories take place (say we have two AWS EC2 instances, one for each), or does the front end get served from the front-end repo? Can you actually use something like npm serve in production?
What am I trying to do?
I want to deploy my Node-React app on AWS. I have only one repository on GitHub. I have one folder "client" inside my repo where all the React code sits with its own package.json file. All the other files for the server are inside the root folder (the server doesn't have its own folder; its files are scattered in the root). So there are two package.json files: one inside the root folder for the server and one inside the client folder. I am planning to run my Node app in a Docker container.
Please help me understand the core concepts and standard practices for code hosting and deployment, keeping a large-scale enterprise application in the picture.
I won't go into explaining all the points in your question here because #Arnav Yagnik and #PrivateOmega have both done a brilliant job of explaining most of them. I would definitely recommend you read their answers properly, and the links provided for more information, before reading this answer.
I would like to address your question of deploying a Node-React application. In production, we generally have different deployments (or "repositories", as you put it in your question) for the front end (React) and the back end (Node). This allows your back end to sit in an EC2 instance, for example, with auto-scaling to make sure it can cope with all the requests coming in.
As mentioned in the previous answers, and in your question as well, webpack compiles and minifies the React files into simple HTML and JS files that most browsers can run (I'm not going to explain the virtual DOM here because it has already been perfectly explained in other answers). You would then take these minified files and serve them from an S3 bucket, for example, because again, it is a single-page application (also discussed in the other answers): the business logic is already in the minified JS files, and it simply sends all requests to your back-end server.
Now for the front end, you can use TravisCI, for example, to deploy the build folder (the one you talk about in your question) to an EC2 instance and serve your files using NGINX, or, if you can configure a CDN deployment properly, you can serve the files from an S3 bucket for the most optimal performance.
You can think of serving the React application as sending a cryptic block of code to your user's browser. You can deploy this cryptic block of code to a publicly available S3 bucket and serve it from there. Because of webpack and minification/uglification, no one would be able to make proper sense of what your original code was, though remember that you can still access all the code in Chrome's Sources tab, for example.
I would like to address this with a different approach.
Server-rendered pages: the concept has not changed. When the server encounters a document request, it has to respond with HTML. That HTML may or may not contain scripts (either inline or from an external server address). In the question's context, you can still ship HTML that downloads the scripts you have written (which may or may not include React). In most cases you can ship an empty HTML page with script tags that download the scripts over the network and execute them; those scripts contain all the rendering logic.
To answer your questions:
1st: There is no background mode in single-threaded JS (unless we want to talk about workers, but we can leave them out of this discussion). In your code you are not interacting with any DOM directly; you are instructing your components (which extend React's) when to change their state and re-render (setState). React internally computes the virtual DOM and compares it to the real DOM to calculate the actual changes to make on the real DOM. (This is a very abstract answer; for more understanding please read the React docs. The baseline is: you are not interacting with any DOM, just instructing the React core library when to update and what the updated state is.)
2nd: If you want to support SSR (server-rendered pages), I would suggest making two folders, client (all client components and logic) and server (all server-side logic), with different package.json files, since the packages differ between the two applications. There is no industry standard here; whatever floats your boat should work, but generally making directories based on logical entities satisfies separation and maintainability, and if in the future you want to fork the server and client into separate repos, it will definitely make the process easy.
3rd: You should shun running webpack-dev-server in production. Its files are generally not obfuscated, so the payload is heavy (not to mention your source code is out there). Even if you make different repos, the server can emit HTML whose script tags request your client bundle from the client server.
How to deploy: deploy your code and run:
node server/app.js
and in app.js you can write the location block you have mentioned.
P.S.: If you just need a server with that location block, do you really need an Express server? You can upload the client build to a CDN and point your domain to serve index.html from the CDN (an S3 bucket can also be used here).
I would like to start off with clearing up the terminologies as much as I can.
Like you said, server-rendered pages were the more prominent standard in the past, but that hasn't changed with the introduction of React, because even React supports server rendering, or SSR, which means HTML pages are generated server side and then served to clients using the browser.
And client-side rendering means an HTML page is loaded into the browser, and then JavaScript code renders things on top of that HTML and makes it interactive.
And the single-page-application concept is that we have only a single HTML file, or base HTML page, which is continuously rewritten based on user interactions and data from the server.
The virtual DOM is an amazing concept introduced by React. The React library recreates the structure of all the elements (called DOM elements) of an HTML page in memory, in tree form. This enables React's algorithm, called Fiber, to reconcile the appropriate changes (from a route update or anything else) on this tree structure first, before translating them onto the real elements of the HTML page.
Babel is a transpiler that converts the latest features that browser engines haven't started supporting into code they can understand, usually ES6+ code into pre-ES6, because all browsers support that. In a React application written using JSX syntax, Babel also supports transforming the JSX into plain JavaScript.
Yes, breaking up pages into many components is possible due to the compositional nature of React components, which means we can build complex things by combining small, more focused things.
Before serving it to end users, we can't have the web application lag because of huge code size, so during the build process things like minification (removing whitespace, etc.) and other optimizations (such as combining multiple JavaScript files into one) are done, and then the compact bundle is served from the build folder, like you said.
Yes, the build folder is where webpack does the minification and combination to create a bundle as small as possible. It is basic HTML and JS files understood by every browser, and if the code contains something a particular browser doesn't support, the appropriate support code, called a polyfill, is also bundled with it. Technically you can't say browsers only understand pre-ES6 code, because a lot of browser engines have already implemented plenty of ES6 features.
webpack-dev-server is just used to serve a webpack application over a port, like a Node.js server, and gives us features like live reloading, which is needed when you constantly make changes to your application codebase. It isn't needed in production because, as we said, at production time it's just HTML and JS, and nobody ever changes those files.
The virtual DOM is a memory representation used by React code, just as we have stacks and queues, and it is not just used at development time. So, yes and no: the appropriate parts of the React source code required to run the application are also bundled before generating the production bundle.
I would say don't have a preset way of doing things, because it is totally up to the developer and the team. I have seen people use two separate repos because front-end people work on front-end things while back-end people work on back-end things. But there's also the case where everyone is a full-stack developer; you can technically have it all in a single repo with a single package.json and use the back end to serve the front-end files, though you'd have to install each React dependency manually and couldn't directly use a generator like CRA (create-react-app).
What do two repositories have to do with front-end deployment in production? You don't need to run webpack-dev-server to serve files in production. You can create a production bundle and then set up any HTTP server to serve the generated bundle.
Regarding your current scenario, I would say that instead of having two package.json files, you can go with a single package.json and install all dependencies together, or go with a monorepo approach using something like Lerna or Yarn workspaces.
But for a total beginner I would suggest two separate repositories, to encounter fewer problems.
And a bonus point, if you are not aware: you can write React in pre-ES6 code, and without JSX as well.
1) The virtual DOM basically means that you are calling a React function, not the actual function that manipulates the real DOM,
like this one
document.getElementById("demo").innerHTML = "Hello world";
modifies the actual DOM,
but this
ReactDOM.render(
  <HelloMessage name="Taylor" />,
  document.getElementById('demo')
);
If you look at this properly, you aren't doing anything directly to the DOM; you are giving the React function control to do things. Internally, React takes care of modifying that demo DOM element whenever React wants to re-render it based on its own logic, which is what they claim is optimized and why people use it in the first place. And yes, when you build your code with webpack it does include React as part of the minified code, so if you look at an error stack trace in development, you'll see React as its starting point.
2) I think it's a choice to be made, as there are no restrictions on this.
3) Coming to deployment: in general, if you want to use Node.js you might choose an Express-server type of deployment, but otherwise it's generally better to use a high-performance server like Nginx or Apache. Or, if you just don't want to get into this whole drama, people generally use Heroku-based deployment, or platforms like Netlify and surge.sh these days (it's super easy to deploy on them).
I believe others have done a pretty good job explaining the React virtual DOM. In a simple and practical way, I'll attempt to explain how I (would) manage the deployment of a dynamic website (including medium-sized enterprise systems) using Node.js and React. I'll also attempt not to bore you.
Imagine for once that React never existed and that you were building a traditional server-side-rendered application. Each time the user hits a route, the controller works with the model to perform some business logic and returns a view. In Node.js, this view is usually compiled using a template engine such as Handlebars. If you reflect for a second, it becomes obvious that the view could be any HTML content, which is then sent back to the browser as a response.
This is a typical response that could be sent back:
<html>
  <head>
    <title>Our Website</title>
    <style></style>
    <script src="/link/to/any/JS/script"></script>
  </head>
  <body>
    <h1>Hello World</h1>
  </body>
</html>
If this response hits the browser, obviously “Hello World” is displayed on the screen.
Now, with this simple approach, we can do powerful things!
OPTION 1:
We can designate one controller to handle all incoming routes app.get("*", controllerFunc) and render one view for our entire server.
OPTION 2:
We could ask multiple controllers to handle different routes and render route-specific views from our server.
OPTION 3:
We could ask multiple controllers to handle different routes and generate pages on-the-fly (i.e. dynamically) from our server.
If we were building a traditional web application, option 3 would be the only reasonable standard: pages are generated dynamically for different routes. However, with option 1 we can produce a quality single-page application, where the response sent to the browser is an empty HTML page with a built JS script that has the ability to manipulate the DOM. Yes, React! Here's what such a response might look like:
<html>
  <head>
    <title>Our Website</title>
    <style></style>
    <script src="/link/to/any/JS/script"></script>
  </head>
  <body>
    <h1>Hello World</h1>
    <div id="root"></div>
    <script async type="text/javascript" src="/link/to/our/transpiled/ReactSPA.js"></script>
    <!-- The async attribute is important to ensure that the script has access to the DOM
         whenever it loads. This also makes the script non-blocking. -->
  </body>
</html>
Clearly, we're giving all the responsibility to the generated SPA, and all routing logic is handled on the client side (see react-router-dom). On the server side, we can introduce the concept from option 2 and tweak the Node.js route handlers to listen on specific routes for any REST API communication. If you're familiar with Node.js, the order in which routes are registered, whether by app.get() or app.post(), matters.
However, using option 1, we can quickly become limited, able to serve only one single-page application from that server. Why? Because we have asked one controller to handle all non-API incoming routes and render one view. We also risk serving an unnecessarily bloated JS file: users are served the complete website when all they probably wanted was the landing page.
If we look at option 2, though, we can tweak things a lot more and serve multiple single-page applications for different routes, all from our server. This approach helps reduce the size of the JS build being sent to the browser. A typical example would be a website that has a welcome page (or an introduction directory), a login page, and a dashboard.
By assigning controllers to different routes, we can build SPAs uniquely for those routes: one SPA for the intro page, another for the login page, and another for the dashboard. Yes, the browser has to load while transitioning between the three, but at least we greatly improve the initial render time of our website. We can also use the more secure option of cookies for authorization rather than the less secure option of storing session tokens in localStorage.
In a more advanced setting, we could have dynamic websites with different React components rendered as widgets within the dynamically generated page. Actually, this is what Facebook does.
The way to build such SPAs or components is pretty simple. Start a React project and configure webpack to emit the production-ready JS file into your preferred public static directory within the server-side repo. The <script> specified in the view can then easily load these built React components, since they exist within the scope of the server side's public directory.
In essence, this means one repo with several client directories and one server directory, where the destination of the production build files generated by webpack for each client project is set to the server's public static directory. So each client-side directory is a project (either a full SPA or a simple React component used as a widget) with its own webpack.config and package.json file. In fact, you can have two separate config files, production and development, and then build with the relevant npm command for either a production or a development build.
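As a sketch, each client project's webpack.config.js might point its output at the server's public directory (the paths and file names below are assumptions, not taken from the question):

```javascript
// client/dashboard/webpack.config.js — the production build lands in the
// server's public static directory, where the server can serve it statically.
const path = require('path');

module.exports = {
  mode: 'production',
  entry: './src/index.js',
  output: {
    // Hypothetical layout: ../../server/public/static is the server's static dir.
    path: path.resolve(__dirname, '../../server/public/static'),
    filename: 'dashboard.bundle.js',
    // URL prefix under which the server exposes the bundles.
    publicPath: '/static/',
  },
};
```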
You could then go ahead and host it the way you would host any Node.js application, because the main application is the Node.js one: that's where the server is. Replace Node.js with PHP and Apache/NGINX, and the concept remains the same.
I'm still new to AWS and just following the documentation and asking questions here when I get stuck. Please excuse me if this question sounds really noobish.
So far, I've deployed the following:
EB to deploy my REST API
RDS to deploy my PostgreSQL database
Lambda functions to handle things like authentication & sending JWTs, uploading images to S3, etc.
I have my basic back end deployed (no caching yet, since I just started learning about Redis; just the bare bones so far).
I'm still developing my front end and have not even thought about how I will deploy it yet (probably another deployment on EB, since I am using universal React). I am just developing it locally, but using my production env variables now, so I am hitting my deployed API, etc.
One of the MAJOR things I have no idea how to do is detecting incoming requests from the client side to get the client's location by IP address. This is so that I can return the INITIAL results in your general location, just like Yelp, Foursquare, etc. do when you go to their sites.
For now, I am just building a web app for desktop, so I just want to worry about getting the IP address to get the general area of the user. My use case is similar to other sites you might have used which provide an initial result set for things in your area (think Foursquare or Yelp).
Here are my questions:
What would be a good way to do this? I'm thinking of handling this in my front-end universal React deployment, since it will be a Node server with rendered-page caching. Is this a terrible idea? It would work something like this:
(1) request from client comes in
(2) get the IP from the request and look up the IP location using some service (still not sure what I'm going to use; I've found a few, plus a Node.js library called node-geoip). Preferably I can get the zip code, since I am trying to avoid doing so many queries by unique location in my database; instead I'd return results for the zip code, and the front end would show an initial map with the initial results in that zip code.
(3) return to the client the rendered page with those location params if it exists; otherwise create it, send it, and cache it.
Is the above a really dumb idea? Maybe you have already done something like this, and could share your wisdom :)
Is there an AWS service which can already handle something like this for me? Perhaps there's some functionality which can already do this.
Thanks.
AGAIN - I apologize if this is long winded. I don't know anyone in real life who can help me and I feel alone :(. I appreciate the help you guys can provide.
There are two parts to this:
Getting the user's IP address. You mentioned you're using 'EB' - I presume you mean AWS ELB (Elastic Load Balancer)? If so, you need to read the X-Forwarded-For HTTP header in your app code, since otherwise what you'll really detect is the ELB's IP address. X-Forwarded-For contains the user's real IP, or rather the IP of the end connection being made (there's no telling whether this is really a VPN, a proxy, or something else, but it's as far as you can get with an IP).
Querying an IP database that can turn the address into a location object. There are tons of libraries for this. Assuming you're using Node, you can use node-geoip as you mentioned, or just search 'geoip service' on Google and find managed services, like Telize on Mashape. If you don't want to manage the DB lookup yourself or keep the data up to date, a managed service would help.
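A small helper for the first part (a sketch; with Express you could instead set app.set('trust proxy', true) and read req.ip): X-Forwarded-For is a comma-separated chain, and the left-most entry is the original client.

```javascript
// Returns the client IP, preferring X-Forwarded-For when behind a load balancer.
function clientIp(req) {
  const forwarded = req.headers['x-forwarded-for'];
  if (forwarded) {
    // Header looks like "client, proxy1, proxy2" — the first entry is the end user.
    return forwarded.split(',')[0].trim();
  }
  // No proxy in front: fall back to the socket's remote address.
  return req.socket && req.socket.remoteAddress;
}
```

You'd then feed the result into your geoip lookup of choice.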
In either case, it's likely that you'll be doing asynchronous look-ups. In that case, you might want to use async/await to get the user's full object before injecting it into your React props and ultimately rendering it as an HTML string that's sent down to the client.
You could also use a library like redial to decorate your components with data requirements, and return a Promise you can await on to know when you're okay to render.
Since you probably want to enable client routing too (i.e., where the user can click a route in their browser and the server isn't touched at all), you will probably need some way to retrieve the IP address/results based on that IP even when the server isn't involved in the initial render.
For that, you could write a REST service that retrieves the results. Or write a GraphQL back-end that gets the data. It doesn't matter how you write it, since the server will have access to the X-Forwarded-For header and can use that to retrieve the results and send back location-aware data.
FYI, I'm writing a React starter kit (called ReactNow) that uses rxjs for handling async streams. It's not ready yet, but it might help you figure out the code layout that would offer a balanced mix between rendering on the server, and writing universal code that requires some heavy lifting from the server.
I'm trying to figure out how best to architect my app. For starters, I'm trying to understand typical practices for where to put things: how the app should wire up to things like server.js, how server.js should work, and how you keep a persistent connection open for the website and any backend services or modules.
This is a general question, but let me try to be as specific as I can, since I am new to Node, as a basis to start from with this question.
Let's say I plan on designing a simple Express app.
I've got this kind of structure for example so far:
Right now in server.js, I am just playing around with trying to connect to a MySQL database. So I've got a connection pool I'm creating, one call to the store to retrieve data, the requires for the node-mysql module I'm using, etc.
app.js just has very simple code; it's not modular yet, or even production ready, but that's just me playing with the code, spiking things out. In it I have typical stuff like setting the view, var app = express();, importing an Express route definition from another module in my routes/index.js, things like that.
If I'm going to keep a database connection open, or other things open, how is that best organized/done by convention?
If you look at this example code, he's moving the var app = express() definition into server.js: https://github.com/madhums/node-express-mongoose-demo/blob/master/server.js. He keeps an open connection running and started from there, which makes sense, hence "server".
So should app.js do much? What is best practice for the scope of what it should be doing? Once I start modularizing things out into their own .js files and node modules, how does app.js morph through all those refactorings? Meaning, in the end, what is its role, and is it very thin, just used to wire stuff up?
Then what should www.js, which is now required by Express 4, contain, and what is its role?
It's kind of hard for me to start with just one aspect, so I'm going all over the place in the above. I just want to know common conventions for putting stuff in app.js vs. server.js, and the best way to keep and manage open connections to things, both in the back end and for front-end HTTP requests coming in. What should be the central point? Routes, of course, but then is app.js responsible for referencing routes?
I have found a few resources such as this, but I'm looking for more, so if you know any or have any input, please reply. I'm most interested in the discussion around app.js, server.js, connections, www.js, and where these particular parts should wire up to each other. I realize the rest is up to you, e.g., how you want to name folders.
There is no right way and (arguably) no wrong way. There are ways which are better than others, but then someone might say that they don't like this way and you should do it the other way, and so on, until your project is over the deadline.
I often refer to this blog post about best practices when developing an express app.
You could also try one of the Yeoman generators. Choose one that suits most or all of your needs.
Bottom line: there is sadly still no answer to the best structure of an app. I would recommend you pick something that works best for you (and your team) and stick with it. Consistency is the most important thing to keep in mind while developing, and the JavaScript community is clearly lacking it.
I'm creating a module that exports a method that may be called several times by any Node.js code using it. The method will usually be called from views, and it will output some HTML/CSS/JS. Some of this output, however, only needs to appear once per page, so I'd like to emit it only the first time the module is called per request. I can manage to emit it the first time the module is ever called, but since the method can be called several times across several requests while the server is up, I specifically want to run some specific code only once per page.
Furthermore, I want to do this while requiring the user to pass as little to my method as possible. If they pass the request object when creating the server, I figure I can put a variable on it that will tell me whether my method was already called. Ideally, though, I'd like to avoid even that. I'm thinking something like the following from within my module:
var http = require('http');

http.Server.on('request', function (request, response) {
  console.log('REQUEST EVENT FIRED!');
  // output one-time css
});
However, this doesn't work; I assume it's because I'm not actually pointing to the Server emitter that was (or may have been) created in the script that was originally called. I'm new to Node.js, so any ideas, clues, or help are greatly appreciated. Thanks!
Setting a variable on the request is an accepted pattern. Or on the response, if you don't even want to pass the request to your function.
One more thing you can do is indeed, as you write, have the app add a middleware and have that middleware output that thing.
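A sketch of the middleware approach (all names here are mine): the middleware resets a per-request flag, and the view helper checks it, so the one-time CSS/JS is emitted at most once per page no matter how many times the helper is called.

```javascript
// Middleware: runs once per request, so the flag resets for every new page.
function resetOnceFlag(req, res, next) {
  res.locals = res.locals || {};
  res.locals.oneTimeAssetsEmitted = false;
  next();
}

// View helper: emits the one-time block only on its first call per request.
function oneTimeAssets(res) {
  if (res.locals.oneTimeAssetsEmitted) return '';
  res.locals.oneTimeAssetsEmitted = true;
  return '<style>/* one-time css */</style><script>/* one-time js */</script>';
}
```

In Express you'd register it with app.use(resetOnceFlag) and call oneTimeAssets(res) from any view that might need the shared assets.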
I'm not sure I completely understand your "problem", but what you are trying to achieve seems to me like building a web application using Node.js. I think you should use one of the web frameworks available for Node so you can avoid reinventing the wheel (writing routing, static-file serving, etc. yourself).
The Express framework is a nice place to start. You can find tons of tutorials around the internet, and it has a strong community: http://expressjs.com/