Does Getstream support "seen" and "unseen" posts?
Essentially, I'd like to be able to show the user the number of new posts that have been posted to the feed since the last time they visited it.
After they see the new posts on the feed, reset the number of unseen posts to 0.
I'm aware that the notification feed has similar capabilities, but best-practices-wise it doesn't seem like a good idea to use that instead of a flat feed (maybe I'm wrong).
UPDATE SCENARIO
Every user has a (global_feed_notifications:user_uuid) that follows (global_feed_flat:1)
A user adds an activity to their (user_posts_flat:user_uuid)
The activity has a to:["global_feed_flat:1"]
The expectation is that (global_feed_notifications:user_uuid) would receive the activity as an unseen and unread notification due to a fanout.
UPDATE
The scenario failed.
export function followDefaultFeedsOnStream(userapp) {
  const streamClient = stream.connect(STREAM_KEY, STREAM_SECRET);
  const globalFeedNotifications = streamClient.feed(feedIds.globalFeedNotifications, userapp);
  globalFeedNotifications.follow(feedIds.globalFeedFlat, '1');
}

export function addPostToStream(userapp, post) {
  const streamClient = stream.connect(STREAM_KEY, STREAM_SECRET);
  const userPosts = streamClient.feed(feedIds.userPosts, userapp);
  // expansion point: if posts are allowed to be friends-only,
  // calculate the value of the 'to' field from post.friends_only or post.private
  const activity = {
    actor: `user:${userapp}`,
    verb: 'post',
    object: `post:${post.uuid}`,
    post_type: post.post_type,
    foreign_id: `foreign_id:${post.uuid}`,
    to: [`${feedIds.globalFeedFlat}:1`],
    time: new Date()
  };
  userPosts.addActivity(activity)
    .then(function(response) {
      console.log(response);
    })
    .catch(function(err) {
      console.log(err);
    });
}
UPDATE
Well I'm not sure what happened but it suddenly started working after a day.
Unread and unseen indicators are only supported on notification feeds. You could set the aggregation format to {{ id }} to avoid any actual aggregation but still leverage the power of the unread and unseen indicators.
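For reference, a rough sketch of reading and then resetting the counters with the JavaScript client, based on the scenario above (the variable names and the limit are illustrative; mark_seen and mark_read are options of the notification feed's get call):

const streamClient = stream.connect(STREAM_KEY, STREAM_SECRET);
const globalFeedNotifications = streamClient.feed(feedIds.globalFeedNotifications, userUuid);

// The response of a notification feed includes unseen/unread counts.
globalFeedNotifications.get({ limit: 10 })
  .then(function(response) {
    console.log('unseen:', response.unseen, 'unread:', response.unread);
    // After the user has looked at the feed, reset the counters.
    return globalFeedNotifications.get({ limit: 10, mark_seen: true, mark_read: true });
  })
  .catch(function(err) {
    console.log(err);
  });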
Related
After having watched numerous videos about GCP Firestore, I'm still asking myself what the best way is to work with a huge amount of data coming from Firestore.
With something like 800,000 products, I would like to display all of them in a Quasar datatable like this:
How do I make it work in real time, i.e. listen for each item's changes, without exceeding my quota and paying high bills for careless code?
Backend:
app.get('/products/get', async (req, res) => {
  const snapshot = await db.collection('products3P').limit(20).get()
  const result = snapshot.docs.map(doc => doc.data())
  res.json(result)
})
Frontend:
<q-table title="Products"
  :rows="rows"
  :columns="columns"
  :filter="filter"
  row-key="name"
  no-data-label="No results"
/>
// Fetching results from backend once page is loaded:
api.get('/api/products/get').then(({ data }) => {
  data.forEach(i => {
    rows.value.push({
      name: i.name,
      sku: i.sku,
      model: i.model,
      brand: i.brand,
      description: i.description
    })
  })
})
Binance's Market page is a perfect example. What are the best solutions to make datatables this efficient?
Any links or suggestions will be highly appreciated.
You can paginate and show only 20 (or however many you prefer) items at a time. When the user changes the page, load the next set of items and show them. Quasar's tables have a loading state which you can use while the next documents are loading for the first time.
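A rough sketch of what that could look like on the backend with a cursor, reusing the route above (the startAfterName query parameter and ordering by name are assumptions for illustration):

app.get('/products/get', async (req, res) => {
  // startAfterName is the last product name of the previous page (assumed parameter).
  const { startAfterName } = req.query
  let query = db.collection('products3P').orderBy('name').limit(20)
  if (startAfterName) {
    query = query.startAfter(startAfterName)
  }
  const snapshot = await query.get()
  res.json(snapshot.docs.map(doc => doc.data()))
})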
To show realtime updates, you can listen to documents of those 20 items only.
db.collection("products3P").orderBy("somefield")
.onSnapshot((querySnapshot) => {
const products = [];
querySnapshot.forEach((doc) => {
products.push(doc.data().name);
});
console.log("Current products: ", products.join(", "));
// Update the Table
});
You can set that array (of objects) as the table rows (rows.value) and render the data. You can make sure the changes are just existing documents being updated, and not new ones being added (if you want to), by checking the change type.
querySnapshot.docChanges().forEach((change) => {
  if (change.type === "modified") {
    console.log("Modified product: ", change.doc.data());
  }
  // else don't update that array
})
To update the changes in the existing array (rows), you can follow this answer on updating an item in an array.
Do note that your server is not involved in this process. If you need to do this through your server, then you'll have to use WebSockets, but using Firestore directly as shown above is definitely easier. Also make sure you detach the listeners of the previous products.
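For instance, a minimal sketch of detaching the previous listener when the page changes (where you keep the unsubscribe handle is up to you):

let unsubscribe = null

function listenToPage(query) {
  // Detach the listener of the previous page, if any.
  if (unsubscribe) {
    unsubscribe()
  }
  // onSnapshot returns an unsubscribe function.
  unsubscribe = query.onSnapshot((querySnapshot) => {
    // Update the table rows here, as shown above.
  })
}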
I am building an API and came across an issue that I have a few ideas of how to solve, but I was wondering what is the most optimal one. The issue is the following:
I have a Product model which has, for the sake of simplicity, one field called totalValue.
I have another model called InventoryItem; whenever it is updated, the totalValue of the related Product must also be updated.
For example, if the current totalValue of a product is say $1000, when someone purchases 10 screws at a cost of $1 each, a new InventoryItem record will be created:
InventoryItem: {
  costPerItem: 1,
  quantity: 10,
  relatedToProduct: "ProductXYZ"
}
At the moment that item is created, the totalValue of the respective ProductXYZ must be updated to $1100.
The question is what is the most efficient and user-friendly way to do this?
Two ways come to mind (keep in mind that the code below is somewhat pseudo-code; I have intentionally omitted parts of it that are irrelevant to the problem at hand):
When the new InventoryItem is created, it also queries the database for the product and updates it, so both things happen in the same function that creates the inventory item:
async function createInventoryItem(req, res) {
  const item = { ...req.body };
  const newInventoryItem = await new InventoryItem({ ...item }).save();
  const foundProduct = await Product.findOne({ name: item.relatedToProduct }).exec();
  foundProduct.totalValue = foundProduct.totalValue + item.costPerItem * item.quantity;
  await foundProduct.save();
  res.json({ newInventoryItem, newTotalOfProduct: foundProduct.totalValue });
}
That would work; my problem with it is that I would no longer have "a single source of truth", since with this approach the code that updates a given Product ends up scattered all over the project.
The second approach that comes to mind is that, when I receive the request to create the item, I create the item and then make an internal request to the other endpoint that handles product updates, something like:
async function createInventoryItem(req, res) {
  const item = { ...req.body };
  const newInventoryItem = await new InventoryItem({ ...item }).save();
  const totalCostOfNewInventoryItem = item.costPerItem * item.quantity;
  // THIS is the part that I don't know how to do
  const putResponse = putrequest("/api/product/update", {
    product: item.relatedToProduct,
    addToTotalValue: totalCostOfNewInventoryItem
  });
  res.json({ newInventoryItem, newTotalOfProduct: putResponse.totalValue });
}
This second approach solves the problem of the first one, but I don't know how to implement it; I'm guessing it is a form of request chaining or rerouting? I'm also guessing that the second approach would not carry a performance penalty, since Node will be sending requests to itself, so no time is lost reaching servers across the world.
I am pretty sure that the second approach is the one I have to take (or is there another way that I am currently not aware of? I am open to any suggestions; I am aiming for performance), but I am unsure of exactly how to implement it.
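If you do go with the internal request, one way to sketch it is with an HTTP client such as axios; the base URL/port and the exact shape of the update endpoint are assumptions here, not something defined in your snippets:

const axios = require('axios');

async function createInventoryItem(req, res) {
  const item = { ...req.body };
  const newInventoryItem = await new InventoryItem({ ...item }).save();
  const totalCostOfNewInventoryItem = item.costPerItem * item.quantity;

  // Call our own update endpoint (assumed to be served by the same app on port 3000).
  const putResponse = await axios.put('http://localhost:3000/api/product/update', {
    product: item.relatedToProduct,
    addToTotalValue: totalCostOfNewInventoryItem
  });

  res.json({ newInventoryItem, newTotalOfProduct: putResponse.data.totalValue });
}

That said, a plain shared function (or a Mongoose middleware hook on InventoryItem) that both the route handler and the product-update endpoint call would keep the single source of truth without the extra HTTP round trip.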
I'm trying to make a messaging system that writes each message to a mongo entry. I'd like the message entry to reflect the user that sends the message, and the actual message content. This is the message schema:
const MessageSchema = new Schema({
  id: {
    type: String,
    required: true
  },
  messages: {
    type: Array,
    required: true
  },
  date: {
    type: Date,
    default: Date.now
  }
});
And this is where I either create a new entry, or append to an existing one:
Message.findOne({ id: chatId }).then(message => {
  if (message) {
    Message.update.push({ messages: { 'name': user.name, 'message': user.message } })
  } else {
    const newMessage = new Message(
      { id: chatId },
      { push: { messages: { 'name': user.name, 'message': user.message } } }
    )
    newMessage
      .save()
      .catch(err => console.log(err))
  }
})
I'd like the end result to look something like this:
id: '12345'
messages: [
{name: 'David', message: 'message from David'},
{name: 'Jason', message: 'message from Jason'},
etc.
]
Is something like this possible, and if so, any suggestions on how to get this to work?
This question contains lots of topics (in my mind at least). I really want to try to break it down to its core components:
Design
As David noted (first comment), there is a design problem here: an ever-growing array as a subdocument is not ideal (please refer to this blog post for more details).
On the other hand, when we imagine what a separate collection of messages would look like, it would be something like this:
_id: ObjectId('...') // how do I identify the message
channel_id: 'cn247f9' // the message belong to a private chat or a group
user_id: 1234 // which user posted this message
message: 'hello or something' // the message itself
Which is also not that great, because we keep repeating the channel and user ids over time. This is why the bucket pattern is used.
So... what is the "best" approach here?
Concept
The most relevant question right now is: "which features and loads is this chat supposed to support?" I mean, many chats only support displaying messages, without any further complexity (like searching inside a message). Keeping that in mind, there is a chance that we are storing information in our database that is practically irrelevant.
This is (almost) like storing binary data (such as an image) inside our db. We can do this, but with no actual good reason. So, if we are not going to support full-text search inside our messages, there is no point in storing the messages inside our db at all.
But what if we do want to support full-text search? Well, who said that we need to give this task to our database? We can easily download messages (using pagination) and run the search on the client side itself (while the keyword is not found, download the previous page and search it), taking the load off our database!
So it seems that messages are not ideal for storage in the database in terms of size, functionality and load (you may consider this conclusion a shocking one).
ReDesign
Use a hybrid approach where messages are stored in a separate collection with pagination (the bucket pattern supports this, as described here)
Store the message bodies outside your database (since you are using Node.js you may consider using chunk-store), keeping only a reference to them in the database itself
Set your page size to something relevant to your application's needs, and add calculated fields (for instance: the number of messages currently in the page) to ease database load as much as possible
Schema
channels:
  _id: ObjectId
  pageIndex: Int32
  isLastPage: Boolean
  // The number of items here should not exceed the page size.
  // When it does, a new document is created with an incremented pageIndex value.
  // Suggestion: update the previous page's isLastPage field to ease querying of the next page.
  messages: [
    { userId: ObjectId, link: String, timestamp: Date }
  ]
  messagesCount: Int32
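To make the bucket pattern concrete, here is a minimal sketch of appending a message with the Node.js driver. The channelId field (to tie pages to a chat), PAGE_SIZE and the collection name are assumptions layered on the schema above, and the concurrency handling is deliberately naive:

const PAGE_SIZE = 50; // assumed page size

async function appendMessage(db, channelId, userId, link) {
  const message = { userId, link, timestamp: new Date() };

  // Try to push into the channel's last page while it still has room.
  const result = await db.collection('channels').updateOne(
    { channelId, isLastPage: true, messagesCount: { $lt: PAGE_SIZE } },
    { $push: { messages: message }, $inc: { messagesCount: 1 } }
  );

  if (result.modifiedCount === 0) {
    // The last page is full (or the channel has no page yet):
    // close the old page and open a new one with an incremented pageIndex.
    const lastPage = await db.collection('channels').findOne({ channelId, isLastPage: true });
    if (lastPage) {
      await db.collection('channels').updateOne({ _id: lastPage._id }, { $set: { isLastPage: false } });
    }
    await db.collection('channels').insertOne({
      channelId,
      pageIndex: lastPage ? lastPage.pageIndex + 1 : 0,
      isLastPage: true,
      messages: [message],
      messagesCount: 1
    });
  }
}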
Final Conclusion
I know it seems like complete overkill for such a "simple" question, but Dawid Esterhuizen convinced me that designing your database to support your future loads from the very beginning is crucial, and always better than oversimplifying the db design.
The bottom line is that the question "which features and loads is this chat supposed to support?" still needs to be answered if you intend to design your db efficiently (i.e. to find the Goldilocks zone where your design suits your application needs in the most optimal way).
I am running an iOS app where I display a list of users that are currently online.
I have an API endpoint where I return 10 (or N) users randomly, so that you can keep scrolling and always see new users. Therefore I want to make sure I don't return a user that I already returned before.
I cannot use a cursor or normal pagination, as the users have to be returned randomly.
I tried 2 things, but I am sure there is a better way:
At first, what I did was send the IDs of the users that were already seen in the request parameters,
ex:
But if the user keeps scrolling and has gone through 200 profiles, then the list gets long and it doesn't look clean.
Then, in the database, I tried adding a field to each user, "online_profiles_already_sent", where I would store an array of the IDs that were already sent to that user (I am using MongoDB).
I can't figure out how to do it in a better/cleaner way
EDIT:
I found a way to do it with MySQL, using RAND(seed)
but I can't figure out if there is a way to do the same thing with Mongo
PHP MySQL pagination with random ordering
Thank you :)
I think the only way you will be able to guarantee that users see unique users every time is to store the list of users that have already been seen. Even in the RAND example that you linked to, there is a possibility of intersection with a previous user list, because RAND won't necessarily exclude previously returned users.
Random Sampling
If you do want to go with random sampling, consider Random record from MongoDB, which suggests using an aggregation and the $sample operator. The implementation would look something like this:
const {
  MongoClient
} = require("mongodb");

const
  DB_NAME = "weather",
  COLLECTION_NAME = "readings",
  MONGO_DOMAIN = "localhost",
  MONGO_PORT = "32768",
  MONGO_URL = `mongodb://${MONGO_DOMAIN}:${MONGO_PORT}`;

(async function () {
  const client = await MongoClient.connect(MONGO_URL),
    db = await client.db(DB_NAME),
    collection = await db.collection(COLLECTION_NAME);

  const randomDocs = await collection
    .aggregate([{
      $sample: {
        size: 5
      }
    }])
    .map(doc => {
      return {
        id: doc._id,
        temperature: doc.main.temp
      }
    })
    .toArray();

  randomDocs.forEach(doc => console.log(`ID: ${doc.id} | Temperature: ${doc.temperature}`));

  client.close();
}());
Cache of Previous Users
If you go with maintaining a list of previously viewed users, you could write an implementation using the $nin filter, storing the _id of each previously viewed user.
Here is an example using a weather database that I have, returning entries 5 at a time until all have been printed:
const {
  MongoClient
} = require("mongodb");

const
  DB_NAME = "weather",
  COLLECTION_NAME = "readings",
  MONGO_DOMAIN = "localhost",
  MONGO_PORT = "32768",
  MONGO_URL = `mongodb://${MONGO_DOMAIN}:${MONGO_PORT}`;

(async function () {
  const client = await MongoClient.connect(MONGO_URL),
    db = await client.db(DB_NAME),
    collection = await db.collection(COLLECTION_NAME);

  let previousEntries = [], // Track ids of things we have seen
    empty = false;

  while (!empty) {
    const findFilter = {};

    if (previousEntries.length) {
      findFilter._id = {
        $nin: previousEntries
      }
    }

    // Get items 5 at a time
    const docs = await collection
      .find(findFilter, {
        limit: 5,
        projection: {
          main: 1
        }
      })
      .map(doc => {
        return {
          id: doc._id,
          temperature: doc.main.temp
        }
      })
      .toArray();

    // Keep track of already seen items
    previousEntries = previousEntries.concat(docs.map(doc => doc.id));

    // Are we still getting items?
    console.log(docs.length);
    empty = !docs.length;

    // Print out the docs
    docs.forEach(doc => console.log(`ID: ${doc.id} | Temperature: ${doc.temperature}`));
  }

  client.close();
}());
I have encountered the same issue and can suggest an alternate solution.
TL;DR: Grab all the ObjectIds in the collection on first load, randomize their order using Node.js, and reuse that order later on.
Disadvantage: slow first load if you have millions of records
Advantage: subsequent requests are probably quicker than with the other solution
Let's get into the details :)
To explain this better, I will make the following assumptions:
Assumptions:
The programming language used is Node.js
The solution works for other programming languages as well
You have 4 objects in total in your collection
The pagination limit is 2
Steps:
On first execution:
Grab all the ObjectIds.
Note: I have considered performance; this step takes a split second for a collection of 10,000 documents. If you are dealing with millions of records, then maybe use some form of partition logic first, or use the other solution listed.
db.getCollection('my_collection').find({}, {_id:1}).map(function(item){ return item._id; });
OR
db.getCollection('my_collection').find({}, {_id:1}).map(function(item){ return item._id.valueOf(); });
Result:
ObjectId("FirstObjectID"),
ObjectId("SecondObjectID"),
ObjectId("ThirdObjectID"),
ObjectId("ForthObjectID"),
Randomize the retrieved array using Node.js (a shuffle sketch is included at the end of this answer).
Result:
ObjectId("ThirdObjectID"),
ObjectId("SecondObjectID"),
ObjectId("ForthObjectID"),
ObjectId("FirstObjectID"),
Store this randomized array:
If this is a server-side script that randomizes pagination for each user, consider storing it in a cookie / session
I suggest a cookie (with an expiry tied to browser close) for scaling purposes
On each retrieval:
Retrieve the stored array
Grab the items for the current page (e.g. the first 2 ids)
Find the documents for those ids using find with $in
db.getCollection('my_collection')
.find({"_id" : {"$in" : [ObjectId("ThirdObjectID"), ObjectId("SecondObjectID")]}});
Using Node.js, sort the retrieved documents to match the order of the ids for that page
There you go! A randomized MongoDB query for pagination :)
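For completeness, the randomize step from the first-execution phase can be done with a plain Fisher-Yates shuffle in Node.js (a generic sketch, independent of the MongoDB driver):

// Fisher-Yates shuffle: randomizes the array of ObjectIds in place.
function shuffle(ids) {
  for (let i = ids.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [ids[i], ids[j]] = [ids[j], ids[i]];
  }
  return ids;
}

// allObjectIds is the array produced by the find({}, {_id: 1}) step above.
const shuffledIds = shuffle(allObjectIds);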
I'm handling charges and customer subscriptions with Stripe, and I want to package this handling as a Hoodie plugin.
Payments and customer registrations and subscriptions appear normally in the Stripe Dashboard, but what I also want to do is update my _users database in CouchDB, to make sure the customer information is saved somewhere.
What I want to do is update the stripeCustomerId field in the org.couchdb.user:user/bill document, in the _users database that is created when logging in with Hoodie. And if possible, create this field if it does not exist.
In the hoodie-plugin documentation, the update function seems pretty ambiguous to me.
// update a document in db
db.update(type, id, changed_attrs, callback)
I assume that type is the one mentioned in CouchDB's documentation, or the one we specify when we add a document with db.add(type, attrs, callback), for example.
id seems to be the doc id in CouchDB. In my case it is org.couchdb.user:user/bill, but I'm not sure that this is the id I'm supposed to pass to my update function.
I assume that changed_attrs is a JavaScript object with the updated or new attributes in it, but here again I have my doubts.
So I tried this in my worker.js:
function handleCustomersCreate(originDb, task) {
  var customer = {
    card: task.card
  };

  if (task.plan) {
    customer.plan = task.plan;
  }

  stripe.customers.create(customer, function(error, response) {
    var db = hoodie.database(originDb);
    var o = {
      id: 'bill',
      stripeCustomerId: 'updatedId'
    };

    hoodie.database('_users').update('user', 'bill', o, function(error) {
      console.log('Error when updating');
      addPaymentCallback(error, originDb, task);
    });

    db.add('customers.create', {
      id: task.id,
      stripeType: 'customers.create',
      response: response,
    }, function(error) {
      addPaymentCallback(error, originDb, task);
    });
  });
}
And among other messages, I got this error log:
TypeError: Converting circular structure to JSON
And my document is not updated: the stripeCustomerId field stays null.
I tried to JSON.stringify my o object, but it doesn't change anything.
I hope that some of you are better informed than I am about this db.update function.
Finally, I decided to join the Hoodie official IRC channel, and they solved my problem quickly.
Actually, user docs need an extra API; to update them you have to use hoodie.account instead of hoodie.database(name).
The full syntax is:
hoodie.account.update('user', user.id, changedAttrs, callback)
where user.id is actually the account name set in the Hoodie sign-up form, and changedAttrs is an actual JS object, as I thought.
Kudos to gr2m for the fix ;)
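For reference, the failing call in the worker above would then become something along these lines (a sketch; 'bill' is kept from the original example, and storing the Stripe customer id from response.id instead of the 'updatedId' placeholder is an assumption):

stripe.customers.create(customer, function(error, response) {
  // hoodie.account.update is the user-doc API; 'bill' is the Hoodie account name
  hoodie.account.update('user', 'bill', { stripeCustomerId: response.id }, function(error) {
    if (error) {
      console.log('Error when updating', error);
    }
    addPaymentCallback(error, originDb, task);
  });
});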