Getstream: How to filter fake likes?

I have a user feed. If a user posts an activity, the same follower can like it two times and the like count increases. How do I avoid that?
When I post an activity, followers can like it multiple times.

The best way to avoid that is to not send duplicate reactions to Stream in the first place; the React library already enforces this. We do not currently enforce uniqueness for reaction kinds server-side, but support for this will be added soon.
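Until server-side uniqueness lands, you can guard on your own backend before calling the Stream API. A minimal sketch of such a guard (the function and the in-memory set are illustrative, not part of the Stream SDK):

```javascript
// Track which (userId, activityId, kind) reactions have already been sent,
// so a second "like" from the same follower is dropped before it reaches Stream.
const sentReactions = new Set();

function shouldSendReaction(userId, activityId, kind) {
  const key = `${userId}:${activityId}:${kind}`;
  if (sentReactions.has(key)) return false; // duplicate -- skip the API call
  sentReactions.add(key);
  return true;
}
```

In production you would back this with a unique index in your own database rather than in-process memory, so the guard survives restarts and works across servers.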

Related

Managing messages in a chat room with MongoDB and GraphQL

I am wondering how to manage messages in my chat room. My assumptions are:
1. There is a rooms collection with fields like id, messages, participants
2. There can be many rooms with many participants
Now, I have doubts:
1. Should I have a separate collection with messages (id, author, text, where author is a reference to the users collection)?
2. Or should I keep simple objects in messages instead of documents with refs?
I can imagine that the messages collection will be huge (if it is not cleared). Will Mongo handle it? Or is there a better way of doing that?
Regards
It depends on the scale of what you're building.
I would say that what meatsacks like you and me consider huge amounts is often peanuts for database systems (be it relational or a NoSQL datastore).
It's hard to say without knowing anything about the project, but I suspect you'll be better off if you design your data model based on correctness/usefulness first, and worry about performance as a next step.
Based on the entities you describe (rooms, messages, participants, users, ...), I'm picturing an application such as Discord. In such a case I would treat rooms and users as first-order entities, and both participants and messages as (big) ordered lists of data belonging to a room (while each entry in both also holds a reference to its own user/author).
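Concretely, that suggests messages in their own collection, each referencing its room and its author, so message history can grow independently of the room documents. A sketch of the document shapes (all names are illustrative):

```javascript
// Illustrative document shapes for the chat data model: messages live in
// their own collection and reference both their room and their author.
function makeRoom(id, name) {
  return { _id: id, name, participants: [] }; // participants: user ids
}

function makeMessage(id, roomId, authorId, text) {
  return { _id: id, room_id: roomId, author_id: authorId, text };
}

// Fetching a room's history is then a query on the messages collection,
// e.g. db.messages.find({ room_id }).sort({ _id: -1 }).limit(50)
```

With an index on `room_id`, paging through even a very large messages collection stays cheap, which is why the separate-collection design usually scales better than embedding messages inside room documents.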

Should I create a new route or use a variable to decide which operation to do?

I am making a route for showing transactions to users. For this I have a route transactions, but I am also using this route for showing filtered transactions.
The question is: should I use the same route for both fetching all transactions and fetching filtered transactions (the latter is itself broken into many categories), or should I have a different route for each?
Will there be any performance difference between the two approaches, or none at all (which is what I suspect)?
From a RESTful point of view, if your transactions are always split into categories, I think you could have
/transactions
for all transactions and
/category/:categoryId/transactions
for the transactions of one category.
But if you are going to fetch transactions of several categories at once, it is probably a better approach to have only
/transactions?categories=categoryId1,categoryId2,...
and filter by categories with a query parameter.
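With the single-route approach, the handler only needs to parse the optional categories parameter, where a missing value means "all transactions". A sketch of that parsing step, assuming a comma-separated parameter format (the helper name is illustrative):

```javascript
// Parse a comma-separated `categories` query value into a filter list.
// A missing or empty value means no filtering -- return all transactions.
function buildCategoryFilter(categoriesParam) {
  if (!categoriesParam) return null; // null => fetch all transactions
  return categoriesParam
    .split(',')
    .map(id => id.trim())
    .filter(id => id.length > 0);
}
```

The route handler can then branch on the result: `null` runs the unfiltered query, a non-empty array becomes a `category IN (...)` condition. There is no meaningful performance difference between one route and two; the database query is the same either way.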

Two streams for inter-related models?

If we have users and posts, and I can follow a user (and see all their posts) or follow a particular post (and see all its edits/updates), would each post be pushed to two separate streams, one for the user and another for the post?
My concern is that if a user follows an idea, and also the user feed, their aggregated activity feed could show multiple instances of the same idea, one from each feed.
Every unique activity will appear at most once in a feed. To give the activity the exact same internal ID across feeds, you might try using the to field: this adds the activity to the different feed groups with the same activity UUID.
If that is not possible, you can still make the activity unique by sending the same time and foreign_id values in each copy; that makes the activity unique as well.
Cheers!
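In other words, the one payload should carry the `to` target plus identical foreign_id and time values, so Stream can recognize the copies as the same activity. A sketch of building such a payload (the field names follow Stream's activity format; the helper and the feed-group names are illustrative):

```javascript
// Build one activity payload that fans out to both feeds via `to`,
// keeping foreign_id and time identical so the copies share one identity.
function buildPostActivity(userId, postId, now) {
  return {
    actor: `user:${userId}`,
    verb: 'post',
    object: `post:${postId}`,
    foreign_id: `post:${postId}`,
    time: now.toISOString(),
    to: [`post:${postId}`], // also deliver to the post's own feed group
  };
}
```

An aggregated feed that follows both the user feed and the post feed will then see one activity, not two.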

Aggregate feed, removing duplicates in getstream

I have followed the advice here: stackoverflow aggregate answer
I am grouping posts together (shares for the same post together, likes for the same post together, regular posts as single activities). What I'm noticing, however, is that I end up with duplicates for a user: if a user shares a post and also likes it, it shows up twice on their getstream feed. Right now I have to do filtering in my own backend service in a certain order (if you share a post, remove the activity if you also liked it; if you like a post, remove the regular post). Is there a better way to solve this problem of duplicates?
One idea that comes to mind: when you post a share activity, make sure you send both a foreign_id and a time (sending both avoids duplicates in our system). Then, if the user also likes the post, you could store a like counter in the activity metadata and send an update with the same foreign_id, incrementing the like count.
Keep in mind that updates don't propagate to aggregated feeds or notification feeds, though, so you'd still want to push that like activity to those feeds, too.
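The suggestion above amounts to updating the original share activity in place rather than posting a second activity for the same post. Sketched as plain data (the like_count metadata field is an assumption, not a Stream built-in):

```javascript
// Given an existing share activity, fold a new like into its metadata
// instead of creating a duplicate activity for the same post.
function applyLike(activity) {
  return {
    ...activity,
    like_count: (activity.like_count || 0) + 1,
    // foreign_id and time stay unchanged, so re-sending this object
    // updates the existing activity instead of inserting a new one.
  };
}
```

Because foreign_id and time are preserved, sending the result back as an activity update modifies the share already in the feed, so the flat feed never shows the share and the like as two entries.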

Database Design for "Likes" in a social network (MongoDB)

I'm building a photo/video sharing social network using MongoDB. The social network has a feed, profiles, and a follower model. I basically followed a similar approach to this article for my "social feed" design. Specifically, I used the fan-out-on-write-with-buckets approach when users post stories.
My issue is when a user "likes" a story. I'm currently also using the fan-out-on-write approach, which basically increments/decrements a story's "like count" in every user's feed. I think this might be a bad design, since users "like" more frequently than they post: users can quickly saturate the server by liking and unliking a popular post.
What design pattern do you recommend here? Should I use fan-out on read? Keep using fan-out on write with background workers? If the solution is background workers, what approach do you recommend for them? I'm using Node.js.
Any help is appreciated!
Thanks,
Henri
I think the best approach is:
1. increase/decrease a counter in your database to keep track of the number of likes
2. insert each like as a single document into a collection called 'likes', where you track the id of the user who liked the story and the id of the liked story.
Then, if you just need the number of likes, you can read the counter, which is really fast; if instead you need to know who the likes came from, you query the 'likes' collection by story id and get the ids of all users who liked the story.
The documents I am talking about in the 'likes' collection will look like this:
{
  _id: 'dfggsdjtsdgrhtd',
  story_id: 'ertyerdtyfret',
  user_id: 'sdrtyurertyuwert'
}
You can store the counter in the story's document itself:
{
  ...
  likes: 56
}
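The two writes per like can then be expressed as one $inc on the story plus one insert into the likes collection. A sketch that just builds the operation objects (the collection and field names are illustrative):

```javascript
// Build the two MongoDB writes performed when a user likes a story:
// 1) increment the denormalized counter on the story document,
// 2) record who liked what in the `likes` collection.
function buildLikeWrites(storyId, userId) {
  return {
    counterUpdate: {
      filter: { _id: storyId },
      update: { $inc: { likes: 1 } }, // $inc: -1 on unlike
    },
    likeInsert: { story_id: storyId, user_id: userId, active: true },
  };
}
```

Both writes touch a single document each, so unlike the fan-out-on-write design, the cost of a like no longer grows with the number of followers.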
You can also keep track of the last likes in the story's document itself (for example the last 1000, because MongoDB documents are limited to 16 MB, and if your application scales a lot you will run into problems storing potentially unlimited data in a single document). With this approach you can still easily query the 'likes' collection to get the older likes.
When someone unlikes a story you can simply remove the like document from the 'likes' collection. A better approach (e.g. if you send a notification when someone's story is liked) is to just mark the document as unliked, so that if the same user likes the story again you can see that the like was already inserted and avoid sending another notification.
Example, first-time insert:
{
  _id: 'dfggsdjtsdgrhtd',
  story_id: 'ertyerdtyfret',
  user_id: 'sdrtyurertyuwert',
  active: true
}
When unliked, update it to:
{
  _id: 'dfggsdjtsdgrhtd',
  story_id: 'ertyerdtyfret',
  user_id: 'sdrtyurertyuwert',
  active: false
}
When a like is added, check whether a document with the same story id and user id already exists. If it does and active is false, the user already liked and unliked the story, so when they like it again you won't re-send the already-sent notification.
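That like/unlike flow can be sketched with an in-memory map standing in for the 'likes' collection (the function names and the `notified` return value are illustrative):

```javascript
// In-memory stand-in for the `likes` collection, keyed by story+user.
// A notification fires only the first time a given user likes a given story.
const likes = new Map();

function like(storyId, userId) {
  const key = `${storyId}:${userId}`;
  const existing = likes.get(key);
  if (existing) {
    existing.active = true;
    return { notified: false }; // re-like: document exists, no new notification
  }
  likes.set(key, { story_id: storyId, user_id: userId, active: true });
  return { notified: true }; // first like: send the notification
}

function unlike(storyId, userId) {
  const existing = likes.get(`${storyId}:${userId}`);
  if (existing) existing.active = false; // keep the document, just flip the flag
}
```

Against MongoDB, `like` would be an upsert on `{ story_id, user_id }` (backed by a unique compound index) and `unlike` a `$set: { active: false }` update, with the counter from the previous snippet adjusted alongside each call.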
