Users last-access time with CouchDB

I am new to CouchDB, but that is not related to the problem. The question is simple, yet not clear to me.
For example: Boris was on the site 5 seconds ago, and Ivan, viewing Boris's profile, sees that.
How to correctly implement this feature (users last-access time)?
The problem is that, if we update the user's profile document in CouchDB (for example, a last_access_time property) each time a page is refreshed, then we will always have the most relevant information (with MySQL we did it this way), but on the other hand the document's _rev will be somewhere around 100,000+ by the end of the day.
So, how do you do that or do you have any ideas?

This is not a full answer but a possible optimization. It will work in addition to any other answers here.
Instead of writing the latest timestamp on every request, update it only when the stored value is more than, say, 5 or 60 seconds old.
Assume a user refreshes every second for a day. That is 86,400 updates. But if you only update the timestamp at 5-second intervals, that is 17,280 updates; at 60-second intervals it is 1,440.
You can do this on the client side. When you want to update the timestamp, fetch the current document and check the old timestamp. If it is less than 5 seconds old, don't do anything. Otherwise, update it normally.
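For concreteness, here is a minimal client-side sketch of that check against CouchDB's plain HTTP API; the database URL, the last_access_time field and the 5-second threshold are assumptions, not something from the question:

```javascript
// Minimal sketch (Node 18+ or browser fetch). All names below are assumptions.
const COUCH = 'http://localhost:5984/users';   // assumed database URL
const THRESHOLD_MS = 5000;                     // only write every 5 seconds

async function touchLastAccess(userId) {
  const doc = await (await fetch(`${COUCH}/${userId}`)).json();

  const last = doc.last_access_time ? Date.parse(doc.last_access_time) : 0;
  if (Date.now() - last < THRESHOLD_MS) return;   // still fresh, skip the write

  doc.last_access_time = new Date().toISOString();
  // Normal update: _rev is already in the fetched doc. A 409 conflict here
  // just means another request touched it first, which is fine to ignore.
  await fetch(`${COUCH}/${userId}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(doc),
  });
}
```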
You can also do it on the server side. Write an _update function in CouchDB, which you can query like e.g. POST /db/_design/my_app/_update/last-access/the_doc_id?time=2011-01-31T05:05:31.872Z. The update function will do the same thing: check the old timestamp, and either do nothing, or update it, depending on the elapsed time.
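As a rough illustration, such an update handler might look like the following; it would live under updates['last-access'] in the my_app design document from the URL above, and the last_access_time field and the 5-second threshold are assumptions:

```javascript
// Sketch of a CouchDB update handler. Field name and threshold are assumptions.
function (doc, req) {
  if (!doc) {
    return [null, 'missing'];                // unknown doc id: store nothing
  }
  var newTime = req.query.time;              // e.g. 2011-01-31T05:05:31.872Z
  var oldTime = doc.last_access_time || '1970-01-01T00:00:00.000Z';
  if (new Date(newTime) - new Date(oldTime) < 5000) {
    return [null, 'fresh enough'];           // returning null skips the write
  }
  doc.last_access_time = newTime;
  return [doc, 'updated'];                   // returning the doc creates a new _rev
}
```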

If a (large) part of the document is relatively static and a (small) part is highly dynamic, I would consider splitting it into two different documents.
Another option might be to use something better suited to high-throughput writes of small pieces of data, such as Redis or possibly MongoDB, and (if necessary) have a background task that occasionally writes the info back to CouchDB.
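As a hedged sketch of that combination, the hot data could live in a Redis hash and a background task could flush it into CouchDB every few minutes; the key names, database URL and interval below are all assumptions:

```javascript
// Hot path: one O(1) Redis write per page view. Background: flush to CouchDB.
const { createClient } = require('redis');        // node-redis v4+
const redis = createClient();

async function recordAccess(userId) {
  await redis.hSet('last_access', userId, new Date().toISOString());
}

async function flushToCouch() {
  const all = await redis.hGetAll('last_access');  // { userId: isoTime, ... }
  for (const [userId, time] of Object.entries(all)) {
    const url = `http://localhost:5984/users/${userId}`;   // assumed layout
    const doc = await (await fetch(url)).json();
    if (doc.last_access_time === time) continue;           // nothing new
    doc.last_access_time = time;
    await fetch(url, {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(doc),
    });
  }
}

async function main() {
  await redis.connect();
  setInterval(() => flushToCouch().catch(console.error), 5 * 60 * 1000);
}
main().catch(console.error);
```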

CouchDB has no problem with rapid document updates. Just do it, like MySQL. High _rev is no problem.
The only thing is, you have to be responsible about your couch from day 1. All CouchDB users must do this anyway; you may just have to do it sooner. (Applications with few updates have a lower risk of a full disk, so their developers can postpone this work.)
Poll your database and run compaction if it needs it (based on size, document count, seq_id number)
Poll your views and run compaction too
Always have enough disk capacity and I/O bandwidth to support compaction. The mathematical worst case is 2x the database size and 2x the write speed; however, most applications require less. Since you are updating documents rather than adding new ones, you will need far less.
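A minimal sketch of such a compaction poller against the CouchDB 2.x/3.x HTTP API might look like this; the size threshold, credentials and database/design-document names are assumptions:

```javascript
// Periodic compaction check. Compact when over half of the file is reclaimable.
const DB = 'http://localhost:5984/users';
const AUTH = {
  Authorization: 'Basic ' + Buffer.from('admin:password').toString('base64'),
};

async function compactIfNeeded() {
  const info = await (await fetch(DB, { headers: AUTH })).json();
  const fileSize = info.sizes.file;      // bytes on disk, including old revisions
  const activeSize = info.sizes.active;  // bytes of live data
  if (fileSize > 2 * activeSize) {
    await fetch(`${DB}/_compact`, {
      method: 'POST',
      headers: { ...AUTH, 'Content-Type': 'application/json' },
    });
    // View indexes are compacted per design document, e.g. the my_app ddoc:
    await fetch(`${DB}/_compact/my_app`, {
      method: 'POST',
      headers: { ...AUTH, 'Content-Type': 'application/json' },
    });
  }
}

setInterval(() => compactIfNeeded().catch(console.error), 60 * 60 * 1000); // hourly
```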

Related

CouchDB works slow for the first time

The problem I am facing with CouchDB is that whenever I hit a database for the first time, it responds quite slowly, though it speeds up from the second request onwards. Is there any workaround so that this glitch is removed for the first request as well?
Secondary indexes in CouchDB are not updated during document write operations (see the documentation). So the delay occurs because the view is actually being generated for the first time.
For CouchDB 3.x: look into tuning background indexing
For CouchDB 2.x and before: upgrade, and/or prefetch your views regularly so they have already been built by the time you need them.
Ah, and if you're doing mango queries, then make sure required indexes are defined in the first place so you're not rescanning the DB every time :)
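One simple way to prefetch is a small "view warmer" that queries every view on a schedule so its index is built before users need it; a sketch, with made-up database and design-document names:

```javascript
// "View warmer": query each view periodically so its index stays built.
const DB = 'http://localhost:5984/mydb';
const VIEWS = [
  ['my_app', 'by_date'],          // [designDoc, viewName] pairs to keep warm
];

async function warmViews() {
  for (const [ddoc, view] of VIEWS) {
    // Any normal query triggers an index update; limit=0 avoids fetching rows.
    await fetch(`${DB}/_design/${ddoc}/_view/${view}?limit=0`);
  }
}

setInterval(() => warmViews().catch(console.error), 5 * 60 * 1000); // every 5 min
```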

Managing constantly changing data in Database

I need some advice on how to architect my data in MongoDB. I have this app where users can view, add, edit and remove credit and debit transactions. Each row has an amount, a date, and a running balance column.
The balance column here is dynamic. For example, if someone adds a transaction dated 10-09-2017, all the amounts in the balance field thereafter need to change at that moment to reflect the new transaction. Right now, I am not saving this balance field in the database at all; I calculate it every time the user loads or reloads the page, and also when editing, deleting or adding a transaction. For now it is fast, but I assume that in the future, when the user has a lot of transactions, it will become slow, since these calculations need to be done before the user sees the data table. Is there a more efficient way to do this?
Also, I am doing the calculations on the client side, so the load is on the client's device and not on the server. I think that if it were on the server side and a lot of users started using it, the API requests would become much slower and eventually unusable. Is this the right approach?
PS: It was hard to make sure the reader understands my question, but I have tried my best. Please let me know if I should explain this in more detail or add anything.
This is not really a question about MongoDB; it is a question about user interface.
Will you really display the whole history of transactions at once?
You should either utilize pagination (simplest) or reload on scroll to load your data.
Before the balance calculation causes problems, you are more likely to run into problems because of:
Slow loading from network (almost certainly)
Slow page interaction because of DOM size (maybe)
Show the first 100 to 500 transactions and provide the user with some way to load earlier entries.
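A possible shape for that, using range-based pagination with the Node.js MongoDB driver (collection and field names are guesses about the schema):

```javascript
// Range-based pagination. `db` is a connected Db instance.
async function loadTransactions(db, userId, before = new Date(), pageSize = 200) {
  return db.collection('transactions')
    .find({ userId, date: { $lt: before } })   // only entries older than the cursor
    .sort({ date: -1 })                        // newest first
    .limit(pageSize)
    .toArray();
}
// First page: loadTransactions(db, userId). "Load earlier": pass the date
// of the last row already shown as `before`.
```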
Update - Regarding server-side balance calculation:
You could calculate balance on server-side and store it into a second collection which serves as a cache. If a transaction insertion happens in the past, you recalculate the cache. To speed this up, you can utilize snapshots:
Within a third collection, you could store the current balance in certain intervals, e.g. with the following data structure:
{ Balance: 150000, Date: 2017-02-03, LastTransactionId: 546 }
When a transaction is inserted in the past, take the most recent snapshot before that moment and recalculate the cache starting from it. This way, you can keep the number of recalculated transactions pretty small.
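A rough sketch of that recalculation with the Node.js MongoDB driver, assuming collections named transactions, balances and balance_snapshots and a credit/debit type field (none of which come from the question):

```javascript
// Recalculate the balance cache starting from the nearest snapshot.
async function recalcFrom(db, userId, changedDate) {
  const snap = await db.collection('balance_snapshots')
    .find({ userId, date: { $lte: changedDate } })
    .sort({ date: -1 })
    .limit(1)
    .next();                                       // null if no snapshot yet

  let balance = snap ? snap.balance : 0;
  const txs = db.collection('transactions')
    .find({ userId, date: { $gt: snap ? snap.date : new Date(0) } })
    .sort({ date: 1 });

  for await (const tx of txs) {
    balance += tx.type === 'credit' ? tx.amount : -tx.amount;
    await db.collection('balances')                // the cache collection
      .updateOne({ _id: tx._id }, { $set: { balance } }, { upsert: true });
  }
}
```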

Dynamodb infrequently scheduled scan

I am implementing a session table with Node.js which will grow to a huge number of items. Each hash key is a UUID representing a user.
In order to delete the expired sessions, I must scan the table on the expired attribute and delete old sessions. I am planning to do this scan once every few days, and other than that I don't really need high read capacity.
I came up with two solutions, and I would like to hear some feedback about them.
1) UpdateTable to higher capacities for just that scheduled routine, and after the scan is done, reduce the table capacities back to their original values.
2) Perform the scan, and after retrieving the 'LastEvaluatedKey' for every x MB read, pause for a delay (so as not to consume all of the read units per second), then continue the scan with 'ExclusiveStartKey'.
If you're doing a scan, option 1 is your best bet. This is the only real way to guarantee that you won't affect your application's performance while the scan is ongoing.
The only thing you need to be sure of is that you only run this operation once a day -- I believe you can only DOWNGRADE throughput units on a DynamoDB table at most a couple of times per day.
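For illustration, option 1 could look roughly like this with the AWS SDK for JavaScript v2; the table name and capacity numbers are placeholders:

```javascript
// Raise read capacity before the cleanup scan, drop it back afterwards.
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

async function setReadCapacity(readUnits) {
  await dynamodb.updateTable({
    TableName: 'sessions',
    ProvisionedThroughput: {
      ReadCapacityUnits: readUnits,
      WriteCapacityUnits: 5,          // UpdateTable requires both values
    },
  }).promise();
  // The table goes into UPDATING; wait until it is ACTIVE again.
  await dynamodb.waitFor('tableExists', { TableName: 'sessions' }).promise();
}

// Usage: await setReadCapacity(200); /* run the scan */ await setReadCapacity(5);
```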
This is an old question, but I saw it through a related question.
There is now a much better native solution: DynamoDB Time to Live
It allows you to specify one attribute per table that serves as the time-to-live value for each item. You can then set that attribute on each item to a Unix timestamp that specifies when the item should be deleted.
Within about 24 hours of that timestamp, the item will be deleted at no additional charge.
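A small sketch of using TTL for the session table in the question, again with the AWS SDK v2 and placeholder table/attribute names:

```javascript
// Enable a TTL attribute once, then store an epoch-seconds expiry per item.
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

async function enableTtl() {
  await dynamodb.updateTimeToLive({
    TableName: 'sessions',
    TimeToLiveSpecification: { AttributeName: 'expires_at', Enabled: true },
  }).promise();
}

async function putSession(userId, sessionData, ttlSeconds = 24 * 3600) {
  await docClient.put({
    TableName: 'sessions',
    Item: {
      user_id: userId,                                        // hash key (a UUID)
      data: sessionData,
      expires_at: Math.floor(Date.now() / 1000) + ttlSeconds, // epoch seconds
    },
  }).promise();
}
```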

Running query on database after a document/row is of certain age

What is the best practice for running a database query after any document in a collection reaches a certain age?
Let's say this is a Node.js web system with MongoDB, with a collection of posts. After a new post is inserted, it should be updated with some data after 60 minutes.
Would a cron job that checks all posts with (age < one hour) every minute or two be the best solution? What would be the least stressful solution if this system has >10,000 active users?
Some ideas:
Create a second collection as a queue with a "time to update" field containing the time at which the source record needs to be updated. Index it, and scan through it looking for values older than "now" (see the sketch after this answer).
Include the field mentioned above in the original document and index it the same way
You could just clear the value when done or reset it to the next 60 minutes depending on behavior (rather than inserting/deleting/inserting documents into the collection).
By keeping the update-collection distinct, you have a better chance of always keeping the entire working set of queued updates in memory (compared to storing the update info in your posts).
I'd kick off the update not as a web request to the same instance of Node but as a separate process, so as not to block user requests.
As to how you schedule it -- that's up to you and your architecture and what's best for your system. There's no right "best" answer, especially if you have multiple web servers or a sharded data system.
You might use a capped collection, although you'd run the risk of losing records that still need to be updated (though you'd gain performance).
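A rough sketch of the queue-collection idea with the Node.js MongoDB driver; the collection names, the indexed updateAt field and the actual update applied to the post are all assumptions:

```javascript
// Scan an indexed "time to update" queue every minute from a worker process.
async function processDueUpdates(db) {
  const queue = db.collection('post_updates');
  await queue.createIndex({ updateAt: 1 });          // cheap if it already exists

  const due = queue.find({ updateAt: { $lte: new Date() } });
  for await (const job of due) {
    await db.collection('posts').updateOne(
      { _id: job.postId },
      { $set: { promoted: true } }                   // whatever the 60-minute update is
    );
    await queue.deleteOne({ _id: job._id });         // or reset updateAt instead
  }
}

// setInterval(() => processDueUpdates(db).catch(console.error), 60 * 1000);
```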

Solr for constantly updating index

I have a news site with 150,000 news articles. About 250 new articles are added to the database daily, at intervals of 5-15 minutes. I understand that Solr is optimized for millions of records and my 150K won't be a problem for it. But I am worried that the frequent updates will be a problem, since the cache gets invalidated with every update. On my dev server, a cold load of a page takes 5-7 seconds (since every page runs a few MLT queries).
Will it help if I split my index into two: an archive index and a latest index? The archive index would be updated only once a day.
Can anyone suggest any ways to optimize my installation for a constantly updating index?
Thanks
My answer is: test it! Don't try to optimize yet if you don't know how it performs. Like you said, 150K is not a lot; it should be quick to build an index of that size for your tests. After that, run a couple of MLT queries from different concurrent threads (to simulate users) while you index more documents, to see how it behaves.
One setting that you should keep an eye on is auto-commit. Since you are indexing constantly, you can't commit on each document (you would bring Solr down). The value you choose for this setting lets you tune the latency of the system (how long it takes for new documents to be returned in results) while keeping the system responsive.
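One low-friction way to get that behaviour is to send updates with commitWithin instead of issuing explicit commits, for example (core name, field names and the 60-second window are placeholders):

```javascript
// Index new articles with commitWithin so Solr batches commits itself.
async function indexArticles(articles) {
  await fetch('http://localhost:8983/solr/news/update?commitWithin=60000', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(articles),  // e.g. [{ id: '1', title: '...', body: '...' }]
  });
  // No explicit commit: these docs become searchable within ~60 seconds.
}
```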
Consider using mlt=true in the main query instead of issuing per-result MoreLikeThis queries. You'll save the roundtrips and so it will be faster.
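For illustration, such a query might look like the following, with placeholder core and field names; the MoreLikeThis component returns the similar documents in the same response under a moreLikeThis section:

```javascript
// Main query with the MoreLikeThis component enabled.
async function searchWithMlt() {
  const params = new URLSearchParams({
    q: 'category:politics',
    rows: '10',
    mlt: 'true',           // enable the MLT component on this request
    'mlt.fl': 'title,body',
    'mlt.count': '5',      // 5 similar docs per result, in the same response
  });
  const res = await fetch(`http://localhost:8983/solr/news/select?${params}`);
  const data = await res.json();
  // data.moreLikeThis maps each result's id to its list of similar documents.
  return data;
}
```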
