Caching posts using redis - node.js

I have a forum that contains groups; new groups are created by users all the time. Currently I'm using node-cache with a TTL to cache groups and their content (posts, likes and comments).
The server worked great at the beginning, but performance decreased as more people started using the app, so I decided to use the Node.js Cluster module as the next step to improve performance.
node-cache will then cause a consistency problem: the same group could be cached in two workers, so if one of them changes it, the other will not know (unless you tell it).
The first solution that came to my mind is using Redis to store the whole group and its content with the help of Redis data types (sets and hash objects), but I don't know how efficient this would be.
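For illustration, a minimal sketch of what that first option might look like with ioredis (the key names, fields and TTL here are assumptions, not an actual schema):

const Redis = require("ioredis");
const redis = new Redis(); // one shared Redis instance, reachable by every worker

async function cacheGroup(group) {
  // Group metadata as a hash, e.g. group:42 -> { name, ownerId }
  await redis.hset(`group:${group.id}`, "name", group.name, "ownerId", group.ownerId);
  // Post ids as a set, e.g. group:42:posts
  if (group.postIds.length) {
    await redis.sadd(`group:${group.id}:posts`, ...group.postIds);
  }
  // Optional TTL so rarely used groups fall out of the cache
  await redis.expire(`group:${group.id}`, 60 * 10);
}

async function getGroup(groupId) {
  const meta = await redis.hgetall(`group:${groupId}`);      // {} when not cached
  const postIds = await redis.smembers(`group:${groupId}:posts`);
  return Object.keys(meta).length ? { ...meta, postIds } : null;
}

Because every worker reads and writes the same Redis keys, the cross-worker consistency problem goes away.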
The other solution is using Redis to map requests to the correct worker. In this case the cached data is distributed randomly across the workers, so when a worker receives a request related to some group, it looks up the group's owner (the worker that holds that group's instance in memory) in Redis, asks that worker for the wanted data over node-ipc, and then returns it to the user.
Is there any problem with the first solution?
The second solution does not provide fairness (all the popular groups could land in the same worker); is there a solution for this?
Any suggestions?
Thanks in advance

Related

Question about Redis Implementation on NodeJS

Why should you use Redis to optimize your NodeJS application?
Why should POST, PUT and DELETE methods never be cached?
How does the caching process work?
Why do we cache?
Things to install to use Redis on NodeJS?
What is an example of an app that uses the Redis implementation?
Is there any alternative or better than Redis?
Is it too hard to implement Redis in NodeJS?
What happens when we don’t use Redis?
Can we use Redis in any OS?
According to several sources that I have searched:
By using Redis we can use a cache database that gives clients faster data retrieval.
a. The POST method itself is semantically meant to post something to a resource. POST cannot be cached because if you do something once vs twice vs three times, then you are altering the server's resource each time. Each request matters and should be delivered to the server.
b. The PUT method itself is semantically meant to put or create a resource. It is an idempotent operation, but it won't be used for caching because a DELETE could have occurred in the meantime.
c. The DELETE method itself is semantically meant to delete a resource. It is an idempotent operation, but it won't be used for caching because a PUT could have occurred in the meantime.
We can simplify the process like this (a code sketch follows the list):
a. the client requests data X with ID "id1".
b. the system checks for data X in the cache database in RAM.
c. if data X is available in the cache database, the client gets the data from the cache in RAM.
d. if the data is unavailable in the cache database, the system retrieves it from the API, delivers it to the client, and saves it in the cache database at the same time.
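In code, that flow might look roughly like this (a sketch using ioredis; fetchFromApi, the key format and the 5-minute TTL are assumptions):

const Redis = require("ioredis");
const redis = new Redis();

// Placeholder for "retrieve the data from API" in step d.
async function fetchFromApi(id) {
  return { id, fetchedAt: Date.now() };
}

async function getData(id) {
  const cached = await redis.get(`data:${id}`);        // step b: check the cache
  if (cached !== null) {
    return JSON.parse(cached);                          // step c: cache hit, served from RAM
  }
  const fresh = await fetchFromApi(id);                 // step d: cache miss, go to the source
  await redis.set(`data:${id}`, JSON.stringify(fresh), "EX", 300); // save it with a TTL
  return fresh;
}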
To shorten the data retrieval time.
Redis in npm.
Twitter, GitHub, Weibo, Pinterest, Snapchat.
Memcached, MongoDB, RabbitMQ, Hazelcast, and Cassandra are the most popular alternatives and competitors to Redis.
The Redis community is quite large; you can find lots of tutorials and manuals. You should be fine.
No cache, so slower data queries and degraded performance.
Redis works in most POSIX systems like Linux, *BSD, and OS X, without external dependencies, but there is no official support for Windows builds.
According to many sources,
It gives clients faster retrieval of similar/repeated data; that's why it's called a cache database.
Because commands (POST, PUT, DELETE) may include many variables and thus differ for each client. They are also not worth caching. You might want to read more about CQRS.
One of the many methods, in oversimplified terms (a sketch of the invalidation step follows the list):
a. a client requests certain data A with request ID req-id-1.
b. the cache is stored in high-speed memory (RAM).
c. if another client requests data with ID req-id-1, instead of reading from slower drives, it is delivered from the cache in RAM.
d. if data A is updated, the cache entry for req-id-1 is deleted and the flow repeats from step a.
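Step d (invalidate on update) could look something like this; updateInDatabase and the key prefix are placeholders:

const Redis = require("ioredis");
const redis = new Redis();

// Placeholder for the real database write.
async function updateInDatabase(reqId, newValue) { /* ... */ }

async function updateData(reqId, newValue) {
  await updateInDatabase(reqId, newValue);
  // Drop the stale entry so the next read falls through and re-populates the cache.
  await redis.del(`cache:${reqId}`);
}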
same as answer 1.
redis or ioredis in npm, and a redis process running.
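For example, with a local Redis server running:

// npm install ioredis
const Redis = require("ioredis");
const redis = new Redis();           // connects to 127.0.0.1:6379 by default
redis.ping().then(console.log);      // prints "PONG" if the server is reachable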
Mostly apps/sites with a lot of GET requests, such as news portals. If a site is well known, there is a high probability it implements Redis.
This is an opinionated question; here's a list of Redis-like databases,
As long as you read the manual/tutorial, you should be fine.
No cache, thus saving RAM but with slower queries.
Works on most POSIX systems.

Persist data for 24 hours

I need to build a microservice that scrapes a message once a day and persists it somewhere. It does not need to be accessible after 24 hours (it can be deleted). It doesn't really matter where or how, but I need to access it from an Express.js endpoint and return the message. Currently we use Redis and MongoDB for data persistence. It feels wrong to create a whole collection for one tiny service, and I'm not sure of an application of Redis that would fulfill this task. What's my best option? Open to any suggestions, thank you!
You can use YugabyteDB and set the table's time to live to 24 hours; the data will then be deleted automatically.
Redis provides an expiration mechanism out of the box. You can associate a timeout with a key, and it will be automatically deleted after the timeout has expired. Some official documentation here.
Redis also provides logical databases, if you want to keep these expiring keys separated from the rest of your application, so you do not need to spin up another machine. Some official documentation here.
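A small sketch of that approach with ioredis (the key name and the logical database index are assumptions):

const Redis = require("ioredis");
// db: 1 keeps the expiring keys in a separate logical database (0 is the default).
const redis = new Redis({ db: 1 });

const ONE_DAY_SECONDS = 60 * 60 * 24;

async function saveDailyMessage(message) {
  // EX sets the TTL; Redis deletes the key automatically after 24 hours.
  await redis.set("daily:message", message, "EX", ONE_DAY_SECONDS);
}

async function getDailyMessage() {
  return redis.get("daily:message"); // null once the key has expired
}

The Express endpoint can then just call getDailyMessage() and return the message, or a 404 once it has expired.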

How to avoid database from being hit hard when API is getting bursted?

I have an API which allows other microservices to call on to check whether a particular product exists in the inventory. The API takes in only one parameter which is the ID of the product.
The API is served through API Gateway in Lambda and it simply queries against a Postgres RDS to check for the product ID. If it finds the product, it returns the information about the product in the response. If it doesn't, it just returns an empty response. The SQL is basically this:
SELECT * FROM inventory where expired = false and product_id = request.productId;
However, the problem is that many services are calling this particular API very heavily to check the existence of products. Not only that, the calls often come in bursts. I assume those services loop through a list of product IDs and check for their existence individually, hence the burst.
The number of concurrent calls on the API has resulted in it making many queries to the database. The rate can burst beyond 30 queries per second and there can be a few hundred thousand requests to fulfil. The queries are mostly the same, except for the product ID in the where clause. The column has been indexed and a query takes an average of only 5-8 ms to complete. Still, connections to the database occasionally time out when the rate gets too high.
I'm using Sequelize as my ORM and the error I get when it times out is SequelizeConnectionAcquireTimeoutError. There is a good chance that the burst rate was too high and maxed out the pool too.
Some options I have considered:
Using a cache layer. But I have noticed that, most of the time, 90% of the product IDs in the requests are not repeated. This would mean that 90% of the time, it would be a cache miss and it will still query against the database.
Auto-scaling up the database. But because the calls are bursty and I don't know when they may come, the autoscaling won't complete in time to avoid the timeouts. Moreover, the query is a very simple select statement and the CPU of the RDS instance hardly crosses 80% during the bursts, so I doubt scaling it would do much either.
What other techniques can I use to keep the database from being hit hard when the API is getting burst calls that are mostly unique and difficult to cache?
Use a cache warmed at boot time
You can load all the necessary columns into an in-memory data store (Redis) when the service starts. Every update in the database (via a cron job) then refreshes the cached data; a sketch follows below.
Problems: memory overhead and the extra work of keeping the cache updated.
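A rough sketch of that idea (ioredis plus a raw Sequelize query; the table and column names come from the question, everything else is an assumption):

const Redis = require("ioredis");
const { QueryTypes } = require("sequelize");
const redis = new Redis();

// At boot (and from the refresh cron job): load every non-expired product id into a Redis set.
async function warmProductCache(sequelize) {
  const rows = await sequelize.query(
    "SELECT product_id FROM inventory WHERE expired = false",
    { type: QueryTypes.SELECT }
  );
  if (rows.length) {
    await redis.sadd("products:active", ...rows.map(r => r.product_id));
  }
}

// The API handler can then answer existence checks without touching Postgres.
async function productExists(productId) {
  return (await redis.sismember("products:active", productId)) === 1;
}

This only covers the existence check; callers that need the full product row would still read it from Postgres (or you cache the rows as well).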
Limit DB calls
Create a buffer for IDs: store n IDs and then make one query for all of them, or flush the buffer every m seconds (see the sketch below).
Problems: added client response time, and extra processing to split the query result back per request.
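A sketch of the buffering idea in plain Node.js (queryByIds, the 50 ms flush window and the field names are assumptions):

let pending = new Map(); // productId -> array of resolve callbacks
let flushTimer = null;

function lookupProduct(productId) {
  return new Promise(resolve => {
    if (!pending.has(productId)) pending.set(productId, []);
    pending.get(productId).push(resolve);
    if (!flushTimer) flushTimer = setTimeout(flush, 50); // flush at most every 50 ms
  });
}

async function flush() {
  flushTimer = null;
  const batch = pending;
  pending = new Map();
  const ids = [...batch.keys()];
  const rows = await queryByIds(ids); // one SELECT ... WHERE product_id IN (ids)
  const byId = new Map(rows.map(r => [r.product_id, r]));
  for (const [id, resolvers] of batch) {
    resolvers.forEach(resolve => resolve(byId.get(id) ?? null));
  }
}

// Placeholder for the real ORM call, e.g. with Sequelize:
// Inventory.findAll({ where: { expired: false, product_id: ids } })
async function queryByIds(ids) {
  return [];
}

Note that this pattern fits a long-running server better than a per-invocation Lambda, since each Lambda execution only sees its own single request.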
Change your database
Use a NoSQL database for this data. According to this article and this one, I think choosing a NoSQL database is a better idea.
Problems: multiple data stores
Start with a covering index to handle your query. You might create an index like this for your table:
CREATE INDEX inv_lkup ON inventory (product_id, expired) INCLUDE (col, col, col);
Mention all the columns in your SELECT in the index, either in the main list of indexed columns or in the INCLUDE clause. Then the DBMS can satisfy your query completely from the index. It's faster.
You could start using AWS Lambda throttling to handle this problem. But for that to work, the consumers of your API will need to retry when they get 429 responses. That might be super-inconvenient.
Sorry to say, you may need to stop using lambda. Ordinary web servers have good stuff in them to manage burst workload.
They have an incoming connection (TCP/IP listen) queue. Each new request coming in lands in that queue, where it waits until the server software accepts the connection. When the server is busy, requests wait in that queue; under high load they wait a bit longer. In Node.js's case, if you use clustering there's just one of these incoming connection queues, and all the processes in the cluster use it.
The server software you run (to handle your API) has a pool of connections to your DBMS. That pool has a maximum number of connections in it. As your server software handles each request, it awaits a connection from the pool. If no connection is immediately available, the request handling pauses until one is, then proceeds. This too smooths out the requests to the DBMS. (Be aware that each process in a Node.js cluster has its own pool.)
Paradoxically, a smaller DBMS connection pool can improve overall performance, by avoiding too many concurrent SELECTs (or other queries) on the DBMS.
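For example, the pool size and acquire timeout can be tuned when the Sequelize instance is created (the numbers below are illustrative, not recommendations):

const { Sequelize } = require("sequelize");

// Each cluster worker (or Lambda container) gets its own pool, so size it accordingly.
const sequelize = new Sequelize("inventory", "user", "password", {
  host: "localhost",
  dialect: "postgres",
  pool: {
    max: 10,        // upper bound on concurrent connections from this process
    min: 0,
    acquire: 30000, // ms to wait for a free connection before
                    // SequelizeConnectionAcquireTimeoutError is thrown
    idle: 10000     // ms before an idle connection is released
  }
});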
This kind of server configuration can be scaled out: a load balancer will do. So will a server with more cores and more nodejs cluster processes. An elastic load balancer can also add new server VMs when necessary.

Firebase Admin - practical limit on the number of listeners

I am working with firebase-admin on Node.js, and initially we started with denormalizing most of the data. As the project grew, we started duplicating data for different views in one Node process. On one hand, this was done to simplify the client access to data, on the other hand to support more complex queries.
We are now running into a scenario where we need a lot of individual listeners. One example could be to listen to child_added on "/chats/$uid/" (for each user $uid) to compute certain statistics on each user's chats. Obviously, the number of listeners then grows with the number of users.
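For context, each of those listeners is roughly of this shape (firebase-admin Realtime Database; updateStatistics is a placeholder for the actual statistics logic):

const admin = require("firebase-admin");
admin.initializeApp(); // uses the default credentials / databaseURL from the environment

// One listener per user: react to every new chat under /chats/$uid.
function watchUserChats(uid) {
  const ref = admin.database().ref(`/chats/${uid}`);
  const handler = snapshot => {
    updateStatistics(uid, snapshot.key, snapshot.val()); // placeholder
  };
  ref.on("child_added", handler);
  // Detach later with ref.off("child_added", handler) to release the listener.
  return () => ref.off("child_added", handler);
}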
So far everything works well, but how well does this approach scale? And more importantly, is there a practical limit on the number of listeners?

Is this MEAN stack design-pattern suitable at the 1,000-10,000 user scale?

Let's say that when a user logs into a webapp, he sees a list of information.
Let's say that list of information is served by one of two dynos (via heroku), but that the list of information originates from a single mongo database (i.e., the nodejs dynos are just passing the mongo information to a user when he logs into the webapp).
Question: Suppose I want to make it possible for a user to both modify and add to that list of information.
At a scale of 1,000-10,000 users, is the following strategy suitable:
User modifies/adds to data; HTTP POST sent to one of the two nodejs dynos with the updated data.
Dyno (whichever one it may be) takes modification/addition of data and makes a direct query into the mongo database to update the data.
Dyno sends confirmation back to the client that the update was successful. (A rough sketch of these three steps follows.)
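Concretely, the three steps might look like this on a dyno (Express plus the official mongodb driver; the collection, route and field names are assumptions):

const express = require("express");
const { MongoClient, ObjectId } = require("mongodb");

const app = express();
app.use(express.json());

const client = new MongoClient(process.env.MONGODB_URI); // one shared connection pool per dyno
const items = client.db("app").collection("items");

// Step 1: the client POSTs the modified/added data to whichever dyno receives the request.
app.post("/items/:id", async (req, res) => {
  try {
    // Step 2: the dyno writes directly to the single Mongo database.
    await items.updateOne(
      { _id: new ObjectId(req.params.id) },
      { $set: req.body },
      { upsert: true }
    );
    // Step 3: confirmation back to the client.
    res.json({ ok: true });
  } catch (err) {
    res.status(500).json({ ok: false, error: err.message });
  }
});

client.connect().then(() => app.listen(process.env.PORT || 3000));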
Is this OK? Would I likely have to add more dynos (Heroku)? I'm basically worried that if a bunch of users are trying to access a single database at once, it will be slow, or that I'm somehow risking corrupting the entire database at the 1,000-10,000 person scale. Is this fear reasonable?
Short answer: Yes, it's a reasonable fear. Longer answer, depends.
MongoDB will queue the requests and handle them in the order it receives them. Depending on how much of the data is being served from memory, it may or may not be fast enough.
NodeJS follows the same pattern: it queues requests it cannot process yet and executes them when resources become available.
The only way to tell if performance is being hindered is by monitoring it, and seeing if resources consistently hit a threshold you're uncomfortable with passing. On the upside, during your discovery phase your clients will probably only notice a few milliseconds of delay.
The proper way to handle that is to spin up a new instance as resources get consumed, so the extra traffic can be absorbed.
Your database likely won't corrupt, but if your data is important (and why would you collect it if it isn't?), you should be creating a replica set. I would probably go with a replica set of data before I go with a second instance of node.
