I'm building an app on Node.js in Ubuntu 20.
Mainly, the app has to handle a series of cars that each user submits to the server through a form. Through this form the user specifies the name of the car, the model, an image of the car and other information ...
I manage the submitted images using GridFS and store them in MongoDB with all the other data.
Each time a user loads the site, a 30-row table with the uploaded cars like the one above is displayed, so on each site route request the server must manage:
reading the cars from the db
rendering the html and its minified css/js
35/40 requests for rendering the images of the cars in the table and other images
I'm thinking that these 35/40 requests for reading/rendering the images from MongoDB could be handled by a secondary Node.js server instead of the main one that manages the app.
I need to make the main app lighter to allow more users on the site, and to do so my idea was to have 2 Node applications:
The main app that serves the html pages and reads/serves all the main information, like the car names etc.
The images app that handles all the requests for uploading/rendering the images.
But my concern is: does this solution make sense, or will it only make the server busier to have two Node applications instead of one?
Node really is designed for this type of architecture, and breaking up your services into micro-services is actually a good idea when it makes sense (some will argue this point, but personally I feel the separation of concerns works well when you can scale horizontally for increased load).
Just a few comments (take with a grain of salt):
I would not store the image in the DB (I take it you are storing a base64 object for the image?)
Use an object store like S3 to store your images
Use a CDN for static resources
If you architect your stack in such a way and offload specific requests to other services, you can greatly reduce load AND increase scalability by not tethering the Node server to one file system etc.
As an example, if you were to have a stack like
Front-end server that uses say ejs to render things to the client
processing server to do image uploads
Object store to store / serve images (static resources too, js, css etc)
DB server (postgres, mongo etc)
CDN to cache static resources
Redis to cache DB queries / API responses ( if needed )
The front-end server should be able to run stateless, meaning it has no dependencies on the local server (e.g. images, css etc)
The processing server: this endpoint will just handle processing images, sending them to the object store and updating the DB
The object store will house all images
DB server to store data
The CDN will cache all requests to further reduce server load
Redis cache to cache API calls; again this is as needed and really depends on how you have it configured to pull your data, may or may not be needed
The idea here is you are creating an application that can scale out horizontally and is now a great candidate for clustering, containers etc. Because each server has no dependencies on local state, it can scale as needed to handle more load.
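As a minimal sketch of the stateless idea: the Node server stores only an object key per car and renders URLs pointing at the CDN/object store, never the image bytes themselves. The CDN base URL and the key layout below are made-up examples, not a fixed convention.

```javascript
// Hypothetical CDN base; in practice this would point at your
// CloudFront/S3 (or other object-store) distribution.
const CDN_BASE = 'https://cdn.example.com';

// Build the public URL for a stored image from its object key.
// The "cars/<id>.jpg" key layout is an assumption.
function imageUrl(objectKey) {
  return `${CDN_BASE}/${objectKey}`;
}

// The front-end server then renders table rows that reference the CDN,
// so the 35/40 image requests per page never touch the Node process:
function carRowHtml(car) {
  return `<tr><td>${car.name}</td><td><img src="${imageUrl(car.imageKey)}"></td></tr>`;
}
```

With this split, scaling the site means scaling only the cheap HTML-rendering tier; the heavy image traffic is absorbed by the CDN.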
I am building an MVC application for a real-time store stock application, where the model is Redis, the view is the EJS template engine, and the controller is Express.js.
I have already built the routes and the views, but I find it hard to define the model.
In the model package I have written the connection to the local Redis docker server, as well as some functions that return the stock amount of a product based on the store name and the product name, and another function which returns the list of store names. This data is crucial for some charts I am updating in the EJS files. Therefore, in a certain EJS file I am rendering the page with multiple pieces of data in the JSON. For example, I have a store.ejs file which requires the store names and also the product amounts. Hence, in the store.js controller file I am rendering as follows:
res.render("pages/stores", {storesName: storesName, productsAmount: productsAmount})
Before rendering the page I have to extract the data from my Redis server; however, since these Redis get methods return a promise, I can't really extract all of them but only one, making only a single get request to the Redis server. So I am sure that this isn't the correct way to do it.
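For what it's worth, the usual way around this is Promise.all, which lets you resolve several Redis reads before rendering. A sketch, assuming node-redis v4 (where client.get()/client.lRange() return promises); the key names 'stores' and 'stock:<name>' are hypothetical:

```javascript
// Gather everything the stores page needs before rendering.
// client is a connected node-redis v4 client (or any promise-based store).
async function loadStorePageData(client) {
  // Hypothetical keys: a list of store names, plus one stock key per store.
  const storesName = await client.lRange('stores', 0, -1);
  // Fire all stock lookups at once instead of awaiting them one by one.
  const productsAmount = await Promise.all(
    storesName.map((store) => client.get(`stock:${store}`))
  );
  return { storesName, productsAmount };
}

// In the controller:
// const data = await loadStorePageData(redisClient);
// res.render('pages/stores', data);
```

The model stays responsible for fetching, and the controller just awaits one function before calling res.render.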
Therefore, I will be glad to see what you think about that.
Thanks!
So I'm trying to make a full website for the first time, e-commerce. Of course the user data should be stored in a database and reached by a backend like Node.js. But what about the non-private data, like all the products?
Is there any difference between having all the products as an object inside my React code, in a Product.js file say, versus having all the products on the server and fetching them? Which one is recommended?
One difference I can think of is that fetching them from a database would make the initial load of the website faster, since the user isn't downloading all the data until they visit the products page. But that can be achieved with React's built-in lazy loading anyway.
So which one of those is recommended? And why? Thanks.
Option 1: keep products as an object in the frontend, and use lazy loading so the user doesn't download all products unless they visit the products page, instead of on the initial visit.
Option 2: fetch them from a database to the frontend directly. Normally bad practice, but from a whole different, second database, the Firebase database for example. The other database (MongoDB), which has the private data, will never be accessed from the front-end.
Option 3: fetch them from database -> to backend -> to frontend. I'm guessing this approach isn't good because it would make the load quite slow?
Option 3 is likely the best for your use case. The additional latency incurred by going through the backend server is going to be negligible at worst, possibly a couple of milliseconds (provided the database and backend server are closely located, i.e. within the same datacentre).
A public Firestore instance could work well; however, there doesn't seem to be any good reason to complicate the stack and have two databases. If you are going to use a MongoDB database anyway, you may as well commit to this route. Exposing your data via an API running on a backend server is always preferable in this case. In the simplest case, you would just have API routes returning the data in the "products" table. However, in the future the advantages of this approach will show: for example, what if you want to restrict certain products to a release date, or prevent certain users from seeing specific products? Both of these examples could be achieved through access control managed by your backend server, acting as a "gatekeeper" to the data held within your database. Likewise, you can implement rate limiting etc.
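To make the gatekeeper idea concrete, here is a minimal sketch; the releaseDate field and the route path are assumptions for illustration:

```javascript
// Backend-side access control: only return products whose release
// date has passed. The releaseDate field is a hypothetical example.
function visibleProducts(products, now = new Date()) {
  return products.filter((p) => !p.releaseDate || p.releaseDate <= now);
}

// Illustrative Express route using the filter:
// app.get('/api/products', async (req, res) => {
//   const products = await db.collection('products').find().toArray();
//   res.json(visibleProducts(products)); // the client never sees the rest
// });
```

Because the filtering happens server-side, unreleased products never reach the browser at all, which is impossible to guarantee with option 1 or 2.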
You could make use of Next.js or some other SSR (server-side rendering) technology to pre-render the page with product information embedded within the initial HTML document. This would save you the second round trip after page load. It would also mean Google search, embedded links in apps like Twitter, Messenger etc. would have the correct product metadata to display in the preview.
Normal AJAX request:

              HTTP GET
Client ------------------> Frontend Server
              HTML
Client <------------------ Frontend Server

         products_request
Client ------------------> Backend Server
                                           database query
                           Backend Server ----------------> mongodb
                                           product_data
                           Backend Server <---------------- mongodb
         products_data
Client <------------------ Backend Server
Next.js:

              HTTP GET
Client ------------------> Frontend Server
                               |
                               | products_request
                               V
                                           database query
                           Backend Server ----------------> mongodb
                                           product_data
                           Backend Server <---------------- mongodb
                               |
          HTML page            V
Client <------------------ Frontend Server
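The Next.js flow above could be sketched roughly like this (a simplified, hand-wavy version of getServerSideProps; in a real Next.js page the data loader would not be injected as a parameter, and loadProducts itself is a hypothetical helper querying MongoDB or the backend API):

```javascript
// Runs on the server for every request, so the rendered HTML already
// contains the product list and the client skips the second round trip.
async function getServerSideProps(context, loadProducts) {
  // loadProducts stands in for the real data source (mongodb / backend API).
  const products = await loadProducts();
  return { props: { products } }; // passed to the page component at render time
}
```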
I would point you here for a more in-depth Node.js, Express, Mongodb tutorial if that is the stack you want to use.
I'm currently working on an analytics webapp with a react frontend and node (express) backend.
Describing the functionality in a nutshell:
A user will be able to log in on the website and search for a YouTube username; the YouTube API is then called, the data will be stored in a MySQL db via my Express API, and the data will also be used to calculate some statistics which are afterwards displayed in a dashboard.
Now I was wondering if I should:
Call the YouTube API from the frontend, i.e. inside my React code, do the calculations, display them and then store the result in the DB via my Express API.
Or, from the React app, call an endpoint in my Express API that will then call the YouTube API, store the data in the DB and then pass the data on to my React app.
Are there any best practices or up-/downsides to either approach?
When answering questions like these, it's important to remember that the client side is different for each and every user that visits your website (their internet speed, their GPU & CPU power, etc.), but the server is most commonly held in a stable container and is much more powerful than a client.
The proper way would be the following:
1. Obtain a search query from a client
Meaning you should get the user's search query from an input or any other form of control (text area, checkbox, etc.); this way the client is doing the least business logic, as it should. The client should always focus more on UI/UX rather than business logic.
2. Send query to the server
Let the server use the query you've just obtained from the client, call the YouTube API from the server (either explicitly using Axios, or with a Node.js YouTube library), and do all the necessary calculation on the backend.
3. Send processed data to the client
Let the client receive the data in a form which is ready for use (iterations, mappings, etc.), again separating concerns: server for business logic, client for UI/UX.
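As a rough sketch of steps 2 and 3 (the endpoint path, the youtube client object and the statistics chosen are all assumptions, not a prescribed API):

```javascript
// Step 2: the server computes statistics from the raw API results.
// videos is assumed to look like [{ views: number }, ...] after the call.
function computeStats(videos) {
  const totalViews = videos.reduce((sum, v) => sum + v.views, 0);
  const avgViews = videos.length ? totalViews / videos.length : 0;
  return { totalViews, avgViews };
}

// Step 3 (illustrative Express route): the client only ever receives
// the processed result, never the raw YouTube payload.
// app.get('/api/channels/:name/stats', async (req, res) => {
//   const videos = await youtube.searchChannel(req.params.name); // hypothetical client
//   res.json(computeStats(videos));
// });
```

This also keeps your YouTube API key on the server, where it cannot be extracted from the browser bundle.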
Now to be fair, the example you have will most commonly be done all on the client side, since it is not as computationally heavy as other enterprise examples, but this is a good rule to follow for big projects, and no one would really complain if you did it this way, since it is the proper way.
I am a newbie to AWS development (but have extensive experience in traditional development).
I need to build a web app with a ReactJS frontend, NodeJs/Express backend, and MySQL. It's a SaaS app, possibly with thousands of clients. There will be a use case where we have a Parent client having hundreds of Child clients.
So, a parent-child relationship within the clients themselves. A child's settings supersede the parent's. Each client (child or parent) will have its own unique logo and style. A child may or may not override logos and styles; if the child doesn't override, it inherits from the parent client, and so on.
I can handle logos/styles/settings at the time of a client's onboarding using some configuration tool. Thus, I will upload/change the logos/styles/settings for parent and/or child clients at the time of the client's implementation. I need the ability to change these logos/styles/settings later, whenever clients demand so.
What are my options on how to design the app (again, I am a newbie to AWS)?
Storage-wise, what's the best place to store logos/styles/settings? If AWS S3, will it provide me a certain folder layout to handle parent-child, or should I dump all images/styles (css) in a single folder with the client's prefix on each item?
The other option is pulling images/styles/settings at runtime when the site renders. Thus, I would have to determine parent-child relationships for every click on the web app and determine where to grab the resources from. A little overhead at runtime, since I am pushing the parent-child logic to runtime instead of configuration time/one time.
Any thoughts/alternate designs/suggestions/pros & cons with respect to the AWS environment?
Assets are definitely best placed in Amazon S3; each asset is referred to as an object within Amazon S3. You give the object a key such as client/main.css. By doing this you could separate out each client into their own prefix (within the GUI this looks like a subfolder).
With settings it depends how sensitive they are: if they are simply for your frontend then you could store a JSON file in S3 within the same prefix as your assets. Otherwise, if there should be some security over the settings, you can use DynamoDB, which boasts "consistent single-digit millisecond latency".
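The child-supersedes-parent lookup the question describes could be sketched like this; the settings shape and the S3 key layout in the comment are assumptions, and the same merge applies whether the JSON lives in S3 or DynamoDB:

```javascript
// Merge a child's settings over its parent's: any key the child
// defines wins, everything else falls back to the parent.
function effectiveSettings(parentSettings, childSettings) {
  return { ...(parentSettings || {}), ...(childSettings || {}) };
}

// e.g. with hypothetical keys like clients/parent1/settings.json and
// clients/parent1/child42/settings.json fetched from S3:
// const settings = effectiveSettings(parentJson, childJson);
```

Resolving this once at onboarding (and re-resolving on change) lets you store the already-merged result per client, avoiding the per-click lookup overhead the question worries about.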
As Chris Williams has already mentioned, use S3 as your raw data store for images, js, css, html and other assets. Additionally, you can set up a CloudFront distribution in front of these assets to serve them quickly to your customers. CloudFront has edge support as well, so your website will be performant globally.
There are a lot of great resources on S3 + CloudFront for website content serving available online.
I'm trying to build a NodeJS REST API project based on the so-called "micro architecture" (basically multiple smaller NodeJS projects that can run totally independently, but at the same time work together).
Currently users are able to upload images from the app, and my NodeJS backend then processes and saves them appropriately.
Now, what I want to do is the following:
User selects an image to upload from the app -> the app makes a request to the "Main API" endpoint -> the Main API endpoint then forwards this request to the "Image Service" -> once the Image Service (which is a totally different server) has successfully finished, it returns the URL where the image is stored to the Main API server, which then returns the info back to the app.
My question is, how do I forward the image upload request from one server to another? Ideally, I don't want the Main API to store the image temporarily and then make a request to the Image Service.
What I'd like is to try and forward the data the Main API receives straight to the Image Service server. I guess you could say I want to "stream" the data from one place to another without having to temporarily store it on disk or in memory. I literally just want it to "tunnel" from one server to another.
Is this possible and is this an efficient way? I just want 1 central point for the app to access, I don't want it to know about this Image Service server. I'd like the app to only ever make requests to the Main API, which will then call my other little services as required.
I'm using NodeJS, Express, Multer (for image uploads) and Digital Ocean hosting (if that should make any difference at all).
What you would basically be doing is setting up a proxy server that passes requests straight through to another machine and back. There are a few libraries out there to help with this, and this article in particular, http://blog.vanamco.com/proxy-requests-in-node-js/, explains how to go about setting it up; even though they are really just trying to get around HTTPS, the same concept applies here.
In short, you get the file upload POST, and then immediately just make that same request to another server and when the response is returned, immediately return it back to the front end. Your entry point can be set up as a hub, and you can proxy requests through to other servers or even just handle them on the same server if necessary.