Node.js Express e-commerce: number of views

I'm working on an e-commerce website project. I want to count views of each product and display the count on the single-product page. I know it can easily be implemented by incrementing a counter in the Express route and then writing it to the database.
But it would be a burden on the DB connection if every single view required connecting to the DB and incrementing the counter.
I have a second solution, but I'm not sure it is better, since I don't have any experience in this area.
The solution is: use a variable to count the number of views for each item, then either send a query once a day to record this variable, or dump it to a JSON file every X minutes/hours.
What is the best way to count these views without sacrificing the performance of the website?
Any suggestions?

I would store the counter against each endpoint in a Redis server. It's in-memory, so reads and writes are fast, and you can persist it to disk too.
Check out one of the Redis clients for Node.js, such as the redis package on npm.
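A minimal sketch of the Redis approach, using the redis npm client (v4+); the product:<id>:views key scheme is an assumption for illustration, not from the question:

const { createClient } = require('redis');

const client = createClient(); // defaults to redis://localhost:6379

// INCR is atomic, so concurrent page views are counted correctly.
async function countView(productId) {
  return client.incr(`product:${productId}:views`);
}

async function getViews(productId) {
  const views = await client.get(`product:${productId}:views`);
  return Number(views) || 0;
}

(async () => {
  await client.connect();
  await countView(42);
  console.log(await getViews(42));
  await client.quit();
})();

Redis can also snapshot to disk (RDB) or keep an append-only log (AOF), which covers persistence without a round trip to the main database on every view.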

Related

Storing data temporarily in nodejs application

I'm developing a Node.js back-end application which fetches data from a third-party hotel API provider based on user input from an Angular application. Users should be able to filter and sort the received data, e.g. filtering by price or hotel rating and sorting by price, hotel name, etc., but unfortunately the API doesn't support this. So I thought I'd store that data temporarily in Node.js, but I'm not sure what the right approach is. Would Redis support this? Any good suggestion would be really appreciated.
Redis should be able to support something like this. Alternatively, you could do all of the sorting client-side and save the hotel information in local or session storage. Whichever route you go, make sure to save the entire response with a unique key so it is easy to fetch, or, if you save individual values to Redis, make sure each has a key to query against. Also keep in mind that Redis is best for caching information for short periods, rather than being a long-term store like PostgreSQL or MySQL. But for temporary responses it should be a fine approach.
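A hedged sketch of the Redis approach with the redis npm client (v4+); the key format and the 15-minute TTL are illustrative choices, not requirements:

const { createClient } = require('redis');

const client = createClient(); // call client.connect() once at startup

// Cache the whole third-party response under one key, expiring after 15 minutes.
async function cacheHotelResults(searchId, hotels) {
  await client.set(`hotels:${searchId}`, JSON.stringify(hotels), { EX: 900 });
}

// Later requests read the cached copy and sort/filter it in Node.
async function getSortedByPrice(searchId) {
  const raw = await client.get(`hotels:${searchId}`);
  if (!raw) return null; // cache miss: re-fetch from the hotel API
  return JSON.parse(raw).sort((a, b) => a.price - b.price);
}

The EX option makes Redis evict the entry automatically, which matches the temporary-storage requirement without any cleanup code.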

Sequelize - Change Watcher (one database, two different apps)

I have two different Node projects that access the same database with Sequelize.
One of the Node apps (a kind of back office) updates some tables, and the other uses the data in those tables to perform some operations.
The thing is that this data should not change constantly, and the second app needs to be as fast as possible; that's why the second app queries the tables once (when the app starts) and then stores the data in memory, so it can do the operations faster (because there is no I/O to the database).
My problem is that sometimes this data may change through the first app, and as these two apps have no contact between them (for security reasons), the only way I see is to have some "dirty" flag in some table of the database, have the first app set it after an update, and have the second app query every X seconds to check whether the flag has changed.
I don't like this approach, and that's why I'm posting this question:
Does Sequelize provide a better or fancier way to do this, like some kind of "changes/dirty" watcher?
Thanks in advance
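For reference, a minimal sketch of the dirty-flag polling described in the question; the Settings model, its dirty column, and the 5-second interval are hypothetical:

const { Sequelize, DataTypes } = require('sequelize');

const sequelize = new Sequelize('postgres://user:pass@localhost/mydb'); // placeholder credentials

// Hypothetical single-row table holding the "dirty" flag; assumed to already
// exist in the shared database.
const Settings = sequelize.define('Settings', {
  dirty: { type: DataTypes.BOOLEAN, defaultValue: false },
});

async function reloadTablesIntoMemory() {
  // App-specific: re-query the tables the second app caches in memory.
}

setInterval(async () => {
  const flag = await Settings.findOne({ where: { dirty: true } });
  if (flag) {
    await reloadTablesIntoMemory();
    await flag.update({ dirty: false }); // reset so the reload runs only once per change
  }
}, 5000);

A lower-level alternative that avoids polling entirely is PostgreSQL's LISTEN/NOTIFY, as described in the real-time messaging answer further down.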

Document download counter? Using Node.js, express, mongo, mongoose

I'm working on a medium size company's intranet. The website is hosted on-site, and will have many links to documents hosted on the same server.
Does anybody know what's the best/easiest way to keep count of downloads of each document?
Website developed using Node.js, express, mongo, mongoose.
The simplest thing to do would be to have a Mongo document holding the metadata about each file, and then increment its "downloads" field every time the file is downloaded.
Slightly harder, but probably more useful, would be to log info about each download, either in its own Mongo record or to the system logs. In that case you'd capture things like which user downloaded it and when, which you'd then count to get the total, but you could also answer more complex questions, like which user did it or which department is doing the most downloading.
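A hedged sketch of that first approach with Mongoose; the FileMeta model, its fields, and the route are assumptions for illustration:

const express = require('express');
const mongoose = require('mongoose');

const app = express();
mongoose.connect('mongodb://localhost/intranet'); // placeholder connection string

// Hypothetical metadata document for each hosted file.
const FileMeta = mongoose.model('FileMeta', new mongoose.Schema({
  path: String,
  downloads: { type: Number, default: 0 },
}));

app.get('/documents/:id', async (req, res) => {
  // $inc increments atomically on the server, so concurrent downloads don't race.
  const doc = await FileMeta.findByIdAndUpdate(
    req.params.id,
    { $inc: { downloads: 1 } },
    { new: true }
  );
  if (!doc) return res.status(404).end();
  res.download(doc.path); // stream the file to the client
});

app.listen(3000);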
To keep track of everyone using your files, you can add a logger as middleware in Express; you can then find out who accessed which file by searching the log. morgan is a good logger that helps with this.
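For instance, a sketch of the morgan setup; the log path and document directory are assumptions:

const express = require('express');
const morgan = require('morgan');
const fs = require('fs');

const app = express();
const logStream = fs.createWriteStream('downloads.log', { flags: 'a' }); // hypothetical log file

// Log every request under /documents; count per-document downloads by searching the log.
app.use('/documents', morgan('combined', { stream: logStream }));
app.use('/documents', express.static('files')); // hypothetical directory of hosted documents

app.listen(3000);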

NodeJS 6.x Express 4.x PostgreSQL 9.x dynamic routes and dynamic views with dynamic SQL

I am new to Node.js 6 and Express 4. I am wondering if something like this is possible to do. It appears wildcards can be used in Express routing. Is it possible to have a database-driven app with dynamic routes and views? What I mean is something like the following URLs:
/ <- can be anything
/xyz
/15/abc/xyz
So for the URL /, Node/Express would hit the database, dynamically take the values in the row it found for /, and output the path to the view template together with the SQL query to be used in that view template file on disk. I know there is no way to dynamically generate the HTML, because the SQL will be different for each view-template URL, so that would have to be a real file using a template engine like Handlebars, etc. It appears Node/Express can deal with routes dynamically on the fly, so this should be possible, I think.
So when Node/Express gets the URL /xyz, it will go into the database, look up the URL, and then output the SQL from the matching row and call the view template at the path stored in that row. The database could be a JSON file rather than SQL; I don't know which would be faster, since both would be in RAM.
I am wondering if anyone has ever tried this. Has anyone done anything like this, or does anyone know of a boilerplate with this kind of setup on GitHub? I can see several problems:
Handling 404 errors
Database pools: ways to reduce the opening and closing of sockets, so that 100 URL requests don't produce 1,000 socket open/close operations. Ideally there would be one open socket carrying all the SQL, or perhaps 64 sockets on a 64-CPU system, rather than a socket opened and closed every time a URL is hit.
Running the app under PM2 clustering so it uses all the CPUs, not just one.
I would welcome any input: how would you overcome the problems listed, or is there a boilerplate for something like this out there already?
What you're describing is a RESTful API. Generally, the very first item in the URL path is static if you're serving webpages, since otherwise you wouldn't be able to serve anything else from the root path, such as -- like you noticed -- error pages. So you'll commonly see URLs like www.mystore.com/products/1234.
Your #2 and #3 have nothing to do with routing. Connection pools don't work how you think: it's about reusing and managing the lifespans of many connections, which are picked up and released by your app as needed. You don't want to be sending all your SQL over one socket (since a long-running query would halt everything else until it completed), and the number of open sockets isn't limited by how many CPUs you have.
Clustering is just as possible with a RESTful app as it is with a non-RESTful one.
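A minimal sketch of what this answer describes: a static first path segment with a dynamic parameter, served through a pg connection pool so sockets are reused rather than opened per request. The products table, the hbs view engine, and the template name are assumptions:

const express = require('express');
const { Pool } = require('pg');

const app = express();
app.set('view engine', 'hbs'); // assumes the hbs (Handlebars) package is installed

// The pool keeps a small set of connections open and hands them out per query.
const pool = new Pool({ max: 10 }); // connection details come from PG* env vars

app.get('/products/:id', async (req, res) => {
  const { rows } = await pool.query(
    'SELECT * FROM products WHERE id = $1',
    [req.params.id]
  );
  if (rows.length === 0) return res.status(404).send('Not found'); // covers the 404 concern
  res.render('product', { product: rows[0] }); // a template file on disk
});

app.listen(3000);

Because the pool manages the sockets, 100 concurrent requests share at most 10 connections instead of opening one each.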

Real-Time Database Messaging

We've got an application in Django running against a PGSQL database. One of the functions we've grown to support is real-time messaging to our UI when data is updated in the backend DB.
So... for example we show the contents of a customer table in our UI, as records are added/removed/updated from the backend customer DB table we echo those updates to our UI in real-time via some redis/socket.io/node.js magic.
Currently we've rolled our own solution for this entire thing using overloaded save() methods on the Django table models. That actually works pretty well for our current functions, but as tables continue to grow into GBs of data, it is starting to slow down on some larger tables, as our engine digs through the currently 'subscribed' UIs and works out which updates need to be sent to which clients.
Curious what other options might exist here. I believe MongoDB and other no-sql type engines support some constructs like this out of the box but I'm not finding an exact hit when Googling for better solutions.
Currently we've rolled our own solution for this entire thing using overloaded save() methods on the Django table models.
Instead of working on the app level you might want to work on the lower, database level.
Add a PostgreSQL trigger after row insertion, and use pg_notify to notify external apps of the change.
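For example, the trigger side might look like this (a sketch run once from Node with the pg client; the customer table comes from the question, but the function, trigger, and channel names are assumptions, and the channel must match the one passed to addChannel below):

const { Client } = require('pg');

const client = new Client(); // connection details come from PG* env vars

const setupSql = `
CREATE OR REPLACE FUNCTION notify_customer_change() RETURNS trigger AS $$
BEGIN
  -- Send the new row as JSON on the 'customer_changes' channel.
  PERFORM pg_notify('customer_changes', row_to_json(NEW)::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS customer_change_trigger ON customer;
CREATE TRIGGER customer_change_trigger
AFTER INSERT OR UPDATE ON customer
FOR EACH ROW EXECUTE PROCEDURE notify_customer_change();
`;

(async () => {
  await client.connect();
  await client.query(setupSql);
  await client.end();
})();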
Then, on the listening side in Node.js:
const PGPubsub = require('pg-pubsub');

// Note: the connection string uses "@" and points at a database, not a table.
const pubsubInstance = new PGPubsub('postgres://username@localhost/databasename');

pubsubInstance.addChannel('channelName', function (channelPayload) {
  // Handle the notification and its payload.
  // If the payload was JSON it has already been parsed for you.
});
And you can do the same in Python: https://pypi.python.org/pypi/pgpubsub/0.0.2.
Finally, you might want to use data partitioning in PostgreSQL. Long story short, PostgreSQL already has everything you need :)
