This article explains very clearly how to implement a voting system with MongoDB, limiting each user to one vote per object.
I have one extra requirement. I need the votes of a given user to be visible for the objects displayed. For example, if I am displaying 20 tweets, and the user has voted on 3 of those tweets, I want those votes to be visible. (For example, using a green up-arrow.)
One solution is to send the client, for each object, its set of voters. Another is to send the client the set of votes the user has cast. I do not see either solution as scalable. Any suggestions?
This is something you would do client-side.
Once you have the object that contains the vote count and the array of voters, you can check whether the current user's ID is in that array while you iterate over the set of items (stories, tweets, what have you).
Does that make sense?
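A minimal sketch of that client-side check; field names like `voters` and `votedByMe` are illustrative, not a fixed schema:

```javascript
// Each item from the API is assumed to carry its list of voter IDs.
const items = [
  { _id: 't1', text: 'first tweet',  votes: 2, voters: ['alice', 'bob'] },
  { _id: 't2', text: 'second tweet', votes: 1, voters: ['carol'] },
];

// Returns true when the given user appears in the item's voters array.
function hasVoted(item, userId) {
  return Array.isArray(item.voters) && item.voters.includes(userId);
}

// While rendering the list, flag the items the current user voted on
// (e.g. to show a green up-arrow next to them).
function decorateForUser(items, userId) {
  return items.map(item => ({ ...item, votedByMe: hasVoted(item, userId) }));
}

const rendered = decorateForUser(items, 'alice');
```

Since the voters array ships with each item anyway, the per-user state costs nothing extra to render.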
Not a full answer, but a link to a good (fast!) voting library for Ruby/Mongoid. It should be easily portable to Node.js, perhaps with Mongoose.
https://github.com/vinova/voteable_mongo
I need something similar eventually; perhaps we should chat (I am martin_sunset on node.js on freenode).
I am trying to build an online doubt-solving MERN application, where students ask teachers questions.
The workflow is as follows:
A user of the app clicks on the "Ask Doubt" button associated with the teacher.
The user makes the payment.
The user is then added to the queue, where he/she waits for the doubts of the people ahead of him/her to be resolved by that teacher. (Edit: the user and the teacher then go into a chat room while the others wait in the queue.)
I also want to display the number of people already in the queue, so that the user only pays if they have enough time.
I cannot guarantee the average time for each doubt session, so I cannot ask the user to come back after some x amount of time.
Also, feel free to suggest another implementation if you feel my approach isn't good.
Although there might be other ways to solve the problem, I think using Cassandra as your database could be one solution.
You can use the teacher ID (or name) as your partition key and the timestamp as your clustering column.
When you want to get the details of the person who is next in the queue, you simply query the first student under that partition (teacher ID).
When they are done asking, you can then delete the first row.
As I said before, there might be other ways, but this is certainly one of the simpler solutions if scalability is what you want.
Edit: Since you don't have a Doubt model, you could create a Queue model consisting of an array of user IDs.
Then queue.array.length gives you the number of people in the queue, and when a user enters the chat room or finishes, you update the array by removing that user from it.
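A rough in-memory sketch of that queue model; in the real app this would be a Mongoose document per teacher, and the names here are made up:

```javascript
// One queue document per teacher, holding an array of waiting user IDs.
const queue = { teacherId: 'teacher42', array: [] };

// A paying user joins the back of the queue.
function enqueue(queue, userId) {
  queue.array.push(userId);
}

// Number of people waiting, shown to users before they decide to pay.
function queueLength(queue) {
  return queue.array.length;
}

// When the next doubt session starts, remove the user at the front
// (they move into the chat room with the teacher).
function dequeue(queue) {
  return queue.array.shift();
}

enqueue(queue, 'student1');
enqueue(queue, 'student2');
```

With Mongoose you would express the same operations atomically with `$push` and `$pop`/`$pull` updates rather than mutating a local array.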
If a user creates a new activity and wants all their followers except one to see it, how can this be implemented? Do we simply push the activity and then immediately delete it from the specific follower's timeline feed? This seems like a hack.
https://github.com/GetStream/stream-js/issues/210
This use case hasn't come up before. Why would someone want everyone except one person to see a post? Do they want that person to unfollow them? Are there "rings" or levels of people to choose from when posting? If that's the case, you can create separate feeds with follows to them for those levels (you will likely need to use the TO field as well, since fanout only goes one level deep).
There's no built-in mechanism to specify which feeds to fan out to and which not to. The fanout is intended to happen as fast as possible (milliseconds), so doing those kinds of checks wouldn't be optimal. Your solution of quickly deleting from that feed will work.
I am building a social network app, using Node.js (Express) and MongoDB as the backend.
Now, I want to list all posts according to different sorting criteria.
A user can optionally set his business category. I also store each user's current location in the user document.
I need to list the posts of all users, including friends, according to the sorting criteria below:
posts from nearby friends (lowest distance first)
posts from users who have the same business category as the logged-in user, if the logged-in user has set one
posts from users with a different business category
posts from users who have not set a business category (latest first, created-at descending)
How should I structure the friend system in order to achieve the above post sorting?
Any help would be appreciated.
Thanks
First, you should pay attention to memory: this structure grows at N² speed, where N is the number of posts.
This task seems quite complex, so I suggest adding some logic on the backend.
For example, look at CQRS+ES (I think it is a suitable technique in your case). Posts become events, and you maintain a sorted list for every user on the read side, updating all these structures on every post. (You can read about CQRS here: http://cqrs.nu/ and check a simple CQRS framework with Express support here: https://reimagined.github.io/resolve )
In addition, I recommend limiting the post count in every user's list.
Hello Stackoverflow,
I've been writing APIs for quite some time now, and it has come to working with one of the bigger ones. I started wondering how to shape this API, as I have often seen on bigger platforms that one big entity (for example, a product page in a shop) is loaded in separate pieces (we can see that the item body has loaded, but comments are still being fetched, etc.).
Usually what I've done is attach the comments as a relation in the SQL query, so my frontend queried a single API endpoint like:
http://api.example.com/items/:id
And it returned all necessary data like seller info, photos etc.
Logically, seller info and photos are small pieces of data (an item can only have one seller and, say, no more than 10 photos), but the comments might be a far larger collection, with a relationship of their own (the comment author).
Does it make sense to split that one endpoint into two independent endpoints, like:
http://api.example.com/items/:id
http://api.example.com/items/:id/comments
What are the downsides of this approach? Is it common practice? Or have I misunderstood some concept?
One downside might be that two requests are performed, but on the other hand, the first endpoint should return data faster (as it is lighter than also fetching n comments), so the page can be displayed sooner, with a spinner for the comments section. This way I would also be able to paginate the comments.
Are there any improvements that might be included in this separation of endpoints? Or am I totally wrong, and should it be done a different way?
I think it is a good approach if:
The number of comments on one item can be large, because with this approach you can paginate them more easily.
You are going to need to access the comments of an item without needing the rest of the item's information.
Either of these conditions justifies the decision, and yes, it is a common approach.
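A sketch of what the two endpoints might return, with in-memory data standing in for the database; the handler names, fields, and page size are hypothetical:

```javascript
const db = {
  items: { '42': { id: '42', title: 'Blue mug', seller: 'acme', photos: ['a.jpg'] } },
  comments: {
    '42': Array.from({ length: 25 }, (_, i) => ({ id: i + 1, text: `comment ${i + 1}` })),
  },
};

// GET /items/:id — light payload: the item plus its small relations
// (seller, photos), so the page can render immediately.
function getItem(id) {
  return db.items[id] || null;
}

// GET /items/:id/comments?page=… — heavier, paginated payload fetched
// separately while the comments section shows a spinner.
function getComments(id, page = 1, perPage = 10) {
  const all = db.comments[id] || [];
  const start = (page - 1) * perPage;
  return { page, perPage, total: all.length, comments: all.slice(start, start + perPage) };
}
```

In Express these would simply become two routes, `app.get('/items/:id', …)` and `app.get('/items/:id/comments', …)`, wrapping functions like these.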
I've wondered about this for some time now: how do web forums implement the option to highlight something you haven't read? How does the forum know?
Since most web forums have a function to show you all posts since your last visit, they must save the time you last visited one of their pages in your user data in the database.
But that doesn't explain how individual topics are still highlighted after you've read just one.
A many-to-many table connecting a user to a topic/post, with flags for read/favorite, etc.
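A sketch of that relation, modeled in memory; in SQL this would be a join table like (user_id, topic_id, read, favorite), and the names here are illustrative:

```javascript
// Map key "userId:topicId" -> per-pair flags, mimicking rows in a
// many-to-many table between users and topics.
const userTopicFlags = new Map();

function markRead(userId, topicId) {
  const key = `${userId}:${topicId}`;
  const flags = userTopicFlags.get(key) || { read: false, favorite: false };
  flags.read = true;
  userTopicFlags.set(key, flags);
}

// A topic is unread when there is no row for the pair, or the row's
// read flag is still false.
function isUnread(userId, topicId) {
  const flags = userTopicFlags.get(`${userId}:${topicId}`);
  return !flags || !flags.read;
}

markRead('u1', 't9');
```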
Many web forums store a huge list of the last time you looked at each topic you've viewed.
This gets out of hand quickly, but there are mitigations; see Determining unread items in a forum.
Keeping track of which posts a visitor has read is of course not that big a deal, since the number of posts a visitor has read is very likely much smaller than the number not read. So if you know which posts a visitor has read, you also know which posts they didn't read. To make this less computationally intensive, you would normally do this only over a certain period of time, say the last two weeks; everything before that is considered read.
Usually, this list of "unread" items only shows changes that have been made since the last time you logged out.
Use the user's last activity date/time to mark items as "unread" (any activity in a topic after that time marks it "unread"). Then store, in a session variable, a list of the topic IDs the user has viewed since the last login. Combining these two gives you a relatively accurate list of unread topics.
Of course, this data would be lost on log-out or session expiry and the cycle would start again, but without sacrificing an unnecessary number of SQL queries.
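The combination described above can be sketched as follows; a topic is unread when its last activity is newer than the user's last login and it hasn't been viewed in the current session (all names and timestamps are illustrative):

```javascript
// topics: [{ id, lastActivity }], lastLoginTime: number,
// viewedThisSession: Set of topic IDs from the session variable.
function unreadTopics(topics, lastLoginTime, viewedThisSession) {
  return topics
    .filter(t => t.lastActivity > lastLoginTime) // activity since last login
    .filter(t => !viewedThisSession.has(t.id))   // not yet viewed this session
    .map(t => t.id);
}

const topics = [
  { id: 'a', lastActivity: 50 },  // older than last login: read
  { id: 'b', lastActivity: 150 }, // new, but viewed this session: read
  { id: 'c', lastActivity: 200 }, // new and not viewed: unread
];
const unread = unreadTopics(topics, 100, new Set(['b']));
```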
On the custom forum I used to work with, we used a combination of your last visit time (updated every time you viewed another page, usually stored in a cookie) and a "mark read" button on each topic that added a row to a SQL table containing your UserID, the TopicID, and a date/time.
Thus, to show new topics, we would look at your last visit date; anything created after that point in time was a new topic.
Once you entered a topic you had clicked "mark read" on, it would show only the initial post and any replies dated after you clicked the mark-read button. If you have fewer viewers and performance to spare, you could simply add an entry to the table for every topic the user clicks on, at the moment they click it.
Another option, and I have actually seen this done in a vBulletin installation, is to store a comma-separated list of viewed topic IDs client-side in a cookie.
Server-side, the only thing stored was the time of the user's previous visit. The forum system used this in conjunction with the information in the user's cookie to show a topic as read when either:
its last modified date (i.e. its last post) is older than the user's previous visit, or
its topic ID is found in the user's cookie as one the user has visited this session.
I'm not saying it's a good idea, but I thought I'd mention it as an alternative; the obvious way to do it has already been stated in other answers, i.e. store it server-side as a relation (many-to-many) table.
I guess it does have the advantage of putting less of the burden of keeping that information on the server.
The downsides are that it is tied to the session, so once a new session starts, everything that occurred before the last session is considered 'already read'. Another downside is that a cookie can only hold so much information, and a user may view hundreds of topics in a session, so this approaches the cookie's storage limit.
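The two conditions of that cookie-based scheme can be sketched like this; the cookie format and function names are illustrative:

```javascript
// The cookie carries a comma-separated list of topic IDs viewed this
// session; the server only remembers the time of the previous visit.
function parseViewedCookie(cookieValue) {
  return new Set(cookieValue ? cookieValue.split(',') : []);
}

// A topic shows as read when its last post predates the previous visit,
// or its ID appears in the session cookie.
function isTopicRead(topic, previousVisitTime, viewedIds) {
  return topic.lastPostTime <= previousVisitTime || viewedIds.has(topic.id);
}

const viewed = parseViewedCookie('12,97');
```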
One more approach:
Make sure your stylesheet shows a clear difference between visited and unvisited links, taking advantage of the fact that browsers remember visited pages persistently.
For this to work, however, you would need consistent URLs for topics, and most forum systems don't tend to provide this. Another downside is that users may clear their history, or use more than one browser. This puts the measure into the 'not highly reliable' category; you would probably use it only to augment whatever other mechanism you use to track viewed topics.