Thanks for your help. I am new to Firebase and am designing an application with Node.js. What I want is that every time a change is detected in a document, a function is invoked that creates or updates the file system according to the new data structure in the Firebase document. Everything works fine, but the problem is that if the document is updated with two or more attributes, the makeBotFileSystem function is invoked that same number of times, which causes performance problems and file-overwriting problems, since I generate or update multiple files.
I would like to wait until the whole document has finished updating before handling the change, rather than reacting attribute by attribute. Is there any way to do this? This is my code:
let botRef = firebasebotservice.db.collection('bot');
botRef.onSnapshot(querySnapshot => {
  querySnapshot.docChanges().forEach(change => {
    if (change.type === 'modified') {
      console.log('bot-changes ' + change.doc.id);
      const botData = change.doc.data();
      botData.botId = change.doc.id;
      // Here I create or update the file system structure according to the data changes
      fsbotservice.makeBotFileSystem(botData);
    }
  });
});
The onSnapshot function will notify you any time a document changes. If property changes are committed one by one instead of updating the document all at once, you will receive multiple snapshots, one per commit.
One way to partially solve this is to change the code that updates the document so that it commits all property changes in a single operation; then you only receive one snapshot.
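For example, with the Firestore API a single update() call that carries several fields produces one snapshot on the listener side, while separate calls produce one snapshot each (a sketch; docRef and the field names are placeholders):

// two writes -> two snapshots on the listener side
await docRef.update({ foo: 1 });
await docRef.update({ bar: 2 });

// one write carrying both fields -> one snapshot
await docRef.update({ foo: 1, bar: 2 });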
Nonetheless, you should design the function triggered by the snapshot so that it can handle multiple document changes without breaking. Document updates will arrive whether they come from single or multiple property changes, so your code has to cope with both. IMHO the real problem is how the filesystem update is performed rather than how many snapshots are received.
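If you cannot change the writer, another option is to debounce the rebuild on the listener side so that a burst of snapshots for the same document collapses into a single call to makeBotFileSystem. A minimal sketch, assuming a short quiet period (500 ms here, an arbitrary choice) is acceptable before the filesystem is rebuilt:

const pendingRebuilds = new Map(); // botId -> timer

botRef.onSnapshot(querySnapshot => {
  querySnapshot.docChanges().forEach(change => {
    if (change.type !== 'modified') return;
    const botId = change.doc.id;
    // reset the timer: another change for this bot arrived within the quiet period
    clearTimeout(pendingRebuilds.get(botId));
    pendingRebuilds.set(botId, setTimeout(() => {
      pendingRebuilds.delete(botId);
      const botData = change.doc.data();
      botData.botId = botId;
      fsbotservice.makeBotFileSystem(botData); // runs once per burst of changes
    }, 500));
  });
});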
You should use the docChanges() method like this:
db.collection("cities").onSnapshot(querySnapshot => {
  let changes = querySnapshot.docChanges();
  for (let change of changes) {
    var data = change.doc.data();
    console.log(data);
  }
});
In the Blazor application I am developing, I have code that performs the basic CRUD operations.
<ButtonRowTemplate>
    <Button Color="Color.Success" Clicked="context.NewCommand.Clicked">New</Button>
    <Button Color="Color.Primary" Clicked="context.EditCommand.Clicked">Edit</Button>
    <Button Color="Color.Danger" Clicked="context.DeleteCommand.Clicked">Delete</Button>
    <Button Color="Color.Link" Clicked="context.ClearFilterCommand.Clicked">Clear Filter</Button>
</ButtonRowTemplate>
The code works, but only in the cached memory (context). The changes are not propagated to the DB. The APIs I have written in the past use a straight update command similar to:
replaceResponse = await this.container.ReplaceItemAsync<T>(itemBody, itemBody.Id, new PartitionKey(itemBody.someID));
I am trying to sync the cache/context with the DB. I have unsuccessfully tried to tie a database update in with the cache update using code similar to the code above. Is this correct, or is there a more serial way of updating the cache and then pushing those context changes to the DB? In a sense, committing those changes.
I should also add that I am using Microsoft.Azure.DocumentDB.Core. I would like to get the code working before moving to the Microsoft.Azure.Cosmos library. I appreciate any assistance, as I am getting back into front-end development using Blazor.
I have a solution, but the question in my mind is: is it right? More specifically, was there an easier way to do it, like committing the changes from the cache?
WHAT I DID
First, I did a little more digging into the Blazorise DataGrid component. There is a parameter called RowInserted. I believe this triggers a function after a row is added to the cache.
<DataGrid TItem="PowderInfo"
          Data="@powderlist"
          @bind-SelectedRow="@selectedLoad"
          RowInserted="AddNewDoc"
Then I added a new function called AddNewDoc to the page code.
protected async Task AddNewDoc()
{
    // grab the row the grid just added to the local cache
    var item = this.powderlist.ElementAt(this.powderlist.Count() - 1);
    item.id = this.powderlist.Count().ToString();
    await getPowderData.AddNewDocument(item);
}
item is an instance of my document type.
powderlist is my local cache.
this.powderlist.ElementAt gets the cached item that was just added.
item.id is populated using the count value.
Finally, a call to the DAL's AddNewDocument with the new document.
public async Task AddNewDocument(PowderInfo item)
{
    await CosmosDBRepository<PowderInfo>.CreateItemAsync(item, "Powders");
}
And now my database is updated. But again, since the cache is coming from the DAL, was there another way to update the DB?
We have a big enterprise Node.js application with multiple microservices that can access a DB entity called context in parallel. At some point we started to have concurrency issues: two separate microservices loaded the same context, made different changes, and saved, resulting in loss of data. Because of this we rewrote our DB layer to use mongoose with full optimistic concurrency enabled (via the schema option optimisticConcurrency). This works fine, and now when we get a version error we reload the latest version of the context, reapply all the changes, and save again. The problem is that this reapplication duplicates code. Our general approach can be expressed by the following pseudo code:
let document = await Context.findOne(...);
document.foo = 'bar';
document.bar = 'foo';
try {
  await document.save();
} catch (mongooseVersionError) {
  // DUPLICATE CODE HERE, DOING THE SAME ASSIGNMENTS (foo, bar) AGAIN!
  document = await Context.findOne(...);
  document.foo = 'bar';
  document.bar = 'foo';
  await document.save();
}
What we would like instead is some automated tracking of all changes on a document, so that when we get a mongoose versioning error we can reapply those changes automatically. Any idea how to do this in the most elegant way? Does mongoose support something like this out of the box? I know we could check document attributes one by one in a pre-save hook via the Document.prototype.isModified() method, but that seems like a laborious and inflexible approach.
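One pattern that avoids the duplication without tracking individual fields is to express the changes once, as a function, and wrap the whole load-modify-save cycle in a retry helper that re-applies that function whenever a VersionError occurs. A minimal sketch, not something mongoose provides out of the box (saveWithRetry and maxRetries are my own names):

const mongoose = require('mongoose');

// loadDocument: async () => Document
// applyChanges: (document) => void, the single place where the assignments live
async function saveWithRetry(loadDocument, applyChanges, maxRetries = 3) {
  for (let attempt = 0; ; attempt++) {
    const document = await loadDocument();
    applyChanges(document);
    try {
      return await document.save();
    } catch (err) {
      if (!(err instanceof mongoose.Error.VersionError) || attempt >= maxRetries) {
        throw err;
      }
      // stale version: loop around, reload the latest document, re-apply
    }
  }
}

// usage (inside an async function):
await saveWithRetry(
  () => Context.findOne({ _id: contextId }),
  document => {
    document.foo = 'bar';
    document.bar = 'foo';
  }
);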
I am trying to build a logging mechanism to log changes made to a record. I currently log the previous and the new record. However, as the site is very busy, I expect the log file to grow seriously huge. To avoid this, I plan to capture only the modified fields.
Is there a way to capture only the modifications made to a record (in React), so that my {request.body} will have fewer fields?
My server side is built with Node.js and the client side is React.
One approach you might want to consider is to add an onChange (web) or onTextChanged (native) listener to each text field and store the form updates in local state/variables.
Then, when the user takes an action (submit, etc.), you can send only the updated data to the logging module.
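A minimal sketch of that idea, assuming a controlled form component (RecordForm, onSave, and the title field are placeholders, not part of any library):

import { useState } from 'react';

function RecordForm({ record, onSave }) {
  // holds only the fields the user actually edited
  const [changes, setChanges] = useState({});

  const handleChange = e =>
    setChanges(prev => ({ ...prev, [e.target.name]: e.target.value }));

  const handleSubmit = e => {
    e.preventDefault();
    onSave(changes); // the request body now carries modified fields only
  };

  return (
    <form onSubmit={handleSubmit}>
      <input name="title" defaultValue={record.title} onChange={handleChange} />
      <button type="submit">Save</button>
    </form>
  );
}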
The best way I found, and what works for me, is this: on the API server side, where I handle the update request, before hitting the database I compute the difference between the previous record and {request.body} using lodash, and send the result to my database update function.
var _ = require('lodash');

// returns the fields of `object` whose values differ from `base`,
// recursing into nested objects
const difference = (object, base) => {
  function changes(object, base) {
    return _.transform(object, function (result, value, key) {
      if (!_.isEqual(value, base[key])) {
        result[key] = (_.isObject(value) && _.isObject(base[key]))
          ? changes(value, base[key])
          : value;
      }
    });
  }
  return changes(object, base);
};

module.exports = difference;
I saved the above code in a file named diff.js and included it in my server-side file.
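For illustration, here is a sketch of how diff.js might be wired into an update route (Express-style; Record, logChange, and the route are hypothetical placeholders for my actual handler):

const difference = require('./diff');

app.put('/records/:id', async (req, res) => {
  const previous = await Record.findById(req.params.id).lean();
  // keep only the fields that actually changed
  const modifiedFields = difference(req.body, previous);
  await logChange(req.params.id, modifiedFields); // the log entry stays small
  await Record.updateOne({ _id: req.params.id }, { $set: modifiedFields });
  res.sendStatus(204);
});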
It worked well.
Thanks for giving me the idea...
I have a waterfall of dialogs in Bot Framework SDK3.
Each dialog does something until the flow reaches a dialog that calls tableSvc.retrieveEntity, which correctly identifies the entity to be retrieved (according to the given PartitionKey and RowKey) from Azure Table storage...
...but the entity that is retrieved (I check it with console.log) is outdated: it is one step (the few seconds that pass during the user's conversation with the bot) behind the actual data stored in Azure Tables, which is the real data that needs to be retrieved in this dialog...
The conversation is not closed yet (it will be later); it is important to store and retrieve up-to-date data at this stage...
How do I get the up-to-date data in this dialog?
Well, for those of you who have had a similar problem...
I guess it has to do with the event loop of Node.js...
I'm not sure whether it is a bullet-proof solution or a temporary hack, but I set it up as shown below and it works. (When I try setTimeout with 0 ms it does not work for me; when I set it to 500 ms it works, so I guess 1000 ms is a safe temporary hack... before I find a better solution.)
If someone knows a better, more robust solution, please update this thread.
setTimeout(() => {
  tableSvc.retrieveEntity('table', pkey, rkey, function(error, result, response) {
    if (!error) {
      var res1 = result.Data._;
      // now it prints the actual data stored in 'table', not its previous (outdated) version
      console.log(res1);
    } else {
      console.log('Some error happened...');
    }
  });
}, 1000);
I just started with Meteor.js, and I'm struggling with its publish method. Below is one publish method.
// Server side
Meteor.publish('topPostsWithTopComments', function() {
  var topPostsCursor = Posts.find({}, {sort: {score: -1}, limit: 30});
  var userIds = topPostsCursor.map(function(p) { return p.userId; });
  return [
    topPostsCursor,
    Meteor.users.find({'_id': {$in: userIds}})
  ];
});
// Client side
Meteor.subscribe('topPostsWithTopComments');
Now I don't understand how I can use the published data on the client. I mean, I want to use the data returned by topPostsWithTopComments.
The problem is detailed below.
When a new post enters the top 30 list, two things need to happen:
The server needs to send the new post to the client.
The server needs to send that post’s author to the client.
Meteor is observing the Posts cursor (topPostsCursor) returned in the array, and so will send the new post down as soon as it's added, ensuring the client receives the new post straight away.
However, consider the Meteor.users cursor. Even though the cursor itself is reactive, it is now using an outdated value for the userIds array (which is a plain old non-reactive variable), which means its result set will be out of date as well.
As far as that cursor is concerned, there is no need to re-run the query, so Meteor will happily continue to publish the same 30 authors for the original 30 top posts ad infinitum.
So unless the whole code of the publication runs again (to construct a new list of userIds), the cursor is no longer going to return the correct information.
Basically, what I need is: if any change happens in Posts, the publication should have the updated users list, without querying the users collection again. I found some useful mrt modules:
link1 |
link2 |
link3
Please share your views!
-Neelesh
When you publish data on the server you're just publishing what the client is allowed to query. This is for security. After you subscribe to your publication you still need to query what the publication returned.
if (Meteor.isClient) {
  Meteor.subscribe('topPostsWithTopComments');
  // this returns all the records published with topPostsWithTopComments from the Posts collection
  var posts = Posts.find({});
}
If you wanted to only publish posts that the current user owns you would want to filter them out in the publish method on the server and not on the client.
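For instance, a server-side publication that restricts the record set to the logged-in user's own posts might look like this (a sketch; 'myPosts' is a made-up publication name):

Meteor.publish('myPosts', function() {
  // this.userId is the id of the currently logged-in user inside a publication
  return Posts.find({ userId: this.userId });
});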
I think @Will Brock already answered your question, but maybe it becomes clearer with an abstract example.
Let's construct two collections named collectiona and collectionb.
// server and client
CollectionA = new Meteor.Collection('collectiona');
CollectionB = new Meteor.Collection('collectionb');
On the server you could now call Meteor.publish with 'collectiona' and 'collectionb' separately to publish both record sets to the client. This way the client could then also separately subscribe to them.
But instead you can also publish multiple record sets in a single call to Meteor.publish by returning multiple cursors in an array. Just like in the standard publishing procedure you can of course define what is being sent down to the client. Like so:
if (Meteor.isServer) {
  Meteor.publish('collectionAandB', function() {
    // constrain records from 'collectiona': limit the number of documents to one
    var onlyOneFromCollectionA = CollectionA.find({}, {limit: 1});
    // all cursors in the array are published
    return [
      onlyOneFromCollectionA,
      CollectionB.find()
    ];
  });
}
Now on the client there is no need to subscribe to 'collectiona' and 'collectionb' separately. Instead you can simply subscribe to 'collectionAandB':
if (Meteor.isClient) {
  Meteor.subscribe('collectionAandB', function () {
    // callback runs once collections A and B are ready on the client
    // only one document of collection A will be available here
    console.log(CollectionA.find().fetch());
    // all documents from collection B will be available here
    console.log(CollectionB.find().fetch());
  });
}
So the thing to understand is that no array containing the two cursors is sent to the client. Returning an array of cursors from the function passed to Meteor.publish merely tells Meteor to publish all cursors contained in the array. You still need to query the individual records using your collection handles on the client (see @Will Brock's answer).
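Note that returning an array still does not make the userIds list from your original publication reactive: the child query is built once when the publication runs. Packages such as reywood:publish-composite exist for exactly that case; they re-run the child queries when the parent documents change. A sketch, assuming that package is installed:

Meteor.publishComposite('topPostsWithTopComments', {
  find: function() {
    // parent: the top 30 posts, observed reactively
    return Posts.find({}, { sort: { score: -1 }, limit: 30 });
  },
  children: [{
    find: function(post) {
      // child: re-run per post whenever the parent set changes
      return Meteor.users.find({ _id: post.userId });
    }
  }]
});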