In the Blazor application I am developing, I have code that performs the basic CRUD operations.
<ButtonRowTemplate>
    <Button Color="Color.Success" Clicked="context.NewCommand.Clicked">New</Button>
    <Button Color="Color.Primary" Clicked="context.EditCommand.Clicked">Edit</Button>
    <Button Color="Color.Danger" Clicked="context.DeleteCommand.Clicked">Delete</Button>
    <Button Color="Color.Link" Clicked="context.ClearFilterCommand.Clicked">Clear Filter</Button>
</ButtonRowTemplate>
The code works, but only against the in-memory cache (context); the changes are not propagated to the DB. The APIs I have written in the past use a straight update command similar to:
replaceResponse = await this.container.ReplaceItemAsync<T>(itemBody, itemBody.Id, new PartitionKey(itemBody.someID));
I am trying to sync the cache/context with the DB. I have unsuccessfully tried to tie in a database update in parallel with the cache update, using code similar to the code above. Is this correct, or is there a more serial way of updating the cache and then pushing those context changes to the DB? In a sense, committing those changes.
I should also add that I am using Microsoft.Azure.DocumentDB.Core. I would like to get the code working before moving to the Microsoft.Azure.Cosmos library. I appreciate any assistance, as I am getting back into front-end development using Blazor.
I have a solution, but the question in my mind is: is it right? More specifically, was there an easier way to do it, such as committing the changes from the cache?
WHAT I DID
First, I did a little more digging into the Blazorise DataGrid component. There is a parameter called RowInserted; I believe this triggers a function after a row is added to the cache.
<DataGrid TItem="PowderInfo"
Data="#powderlist"
#bind-SelectedRow="#selectedLoad"
RowInserted="AddNewDoc"
Then I added a new function called AddNewDoc to the page code.
protected async Task AddNewDoc()
{
    // The grid has already added the new row to the local cache,
    // so grab that last element and push it to the DB.
    var item = this.powderlist.ElementAt(this.powderlist.Count() - 1);
    item.id = this.powderlist.Count().ToString();
    await getPowderData.AddNewDocument(item);
}
item is an instance of my document type.
powderlist is my local cache.
this.powderlist.ElementAt gets the cached item that was just added.
item.id is populated using the count value.
Finally, a call to the DAL's AddNewDocument with the new document.
public async Task AddNewDocument(PowderInfo item)
{
    await CosmosDBRepository<PowderInfo>.CreateItemAsync(item, "Powders");
}
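For reference, a minimal sketch of what CosmosDBRepository<PowderInfo>.CreateItemAsync might look like on Microsoft.Azure.DocumentDB.Core; the endpoint, key, database name, and static-repository shape are assumptions, not taken from my actual code:
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;

public static class CosmosDBRepository<T> where T : class
{
    // Placeholders: substitute your own endpoint, key, and database name.
    private const string EndpointUrl = "https://<account>.documents.azure.com:443/";
    private const string AuthKey = "<auth-key>";
    private const string DatabaseId = "MyDatabase";

    private static readonly DocumentClient Client =
        new DocumentClient(new Uri(EndpointUrl), AuthKey);

    public static async Task CreateItemAsync(T item, string collectionId)
    {
        // CreateDocumentAsync inserts a new document into the named collection.
        await Client.CreateDocumentAsync(
            UriFactory.CreateDocumentCollectionUri(DatabaseId, collectionId),
            item);
    }
}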
And now my database is updated. But again, since the cache is coming from the DAL, was there another way to update the DB?
Related
We have a big enterprise Node.js application with multiple microservices that can access a DB entity called context in parallel. At some point we started to have concurrency issues: two separate microservices loaded the same context, made different changes, and saved, resulting in lost data. Because of this we rewrote our DB layer to use mongoose with full optimistic concurrency enabled (via the schema option optimisticConcurrency). This works fine: now, when we get a version error, we reload the latest version of the context, reapply all the changes, and save again. The problem is that this reapplication duplicates code. Our general approach can be expressed by the following pseudocode:
let document = Context.find(...);
document.foo = 'bar';
document.bar = 'foo';
try {
    document.save();
} catch (mongooseVersionError) {
    let document = Context.find(...);
    // DUPLICATE CODE HERE, DOING THE SAME ASSIGNMENTS (foo, bar) AGAIN!
    document.foo = 'bar';
    document.bar = 'foo';
    document.save();
}
What we would like instead is some automated tracking of all changes on the document, so that when we get a mongoose versioning error we can reapply those changes automatically. Any idea how to do this in the most elegant way? Does mongoose support something like this out of the box? I know we could check document attributes one by one in a pre-save hook via the Document.prototype.isModified() method, but that seems like a laborious and inflexible approach.
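For what it's worth, mongoose does not ship such automated reapplication out of the box; one common pattern is to express the mutations as a function and re-run it on a freshly loaded document whenever a VersionError is thrown. A minimal sketch, where saveWithRetry, loadDocument, and the findById call are illustrative names, not part of our codebase:
const mongoose = require('mongoose');

// Re-run `applyChanges` on a freshly loaded document whenever optimistic
// concurrency rejects the save, so the assignments are written only once.
async function saveWithRetry(loadDocument, applyChanges, maxRetries = 3) {
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
        const document = await loadDocument();
        applyChanges(document);
        try {
            return await document.save();
        } catch (err) {
            // Only retry on a version conflict, and only up to maxRetries times.
            if (!(err instanceof mongoose.Error.VersionError) || attempt === maxRetries) {
                throw err;
            }
        }
    }
}

// Usage (inside an async function): the assignments live in one place,
// with no duplication in a catch block.
await saveWithRetry(
    () => Context.findById(id),
    (document) => {
        document.foo = 'bar';
        document.bar = 'foo';
    }
);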
Thanks for your help; I am new to Firebase. I am designing an application with Node.js. What I want is that every time a change is detected in a document, a function is invoked that creates or updates the file system according to the new data structure in the Firebase document. Everything works fine, but the problem is that if the document is updated with two or more attributes, the makeBotFileSystem function is invoked the same number of times. This causes performance problems and file-overwriting problems, since I generate or update multiple files.
I would like to wait until all of the document's information has finished updating, rather than reacting attribute by attribute. Is there any way to do this? This is my code:
let botRef = firebasebotservice.db.collection('bot');
botRef.onSnapshot(querySnapshot => {
    querySnapshot.docChanges().forEach(change => {
        if (change.type === 'modified') {
            console.log('bot-changes ' + change.doc.id);
            const botData = change.doc.data();
            botData.botId = change.doc.id;
            // HERE I CREATE OR UPDATE THE FILESYSTEM STRUCTURE, ACCORDING TO THE DATA CHANGES
            fsbotservice.makeBotFileSystem(botData);
        }
    });
});
The onSnapshot function will notify you any time a document changes. If property changes are committed one by one instead of updating the document all at once, you will receive multiple snapshots.
One way to partially solve this is to change the code that updates the document so that it commits all property changes in a single operation; that way you only receive one snapshot.
Nonetheless, you should design the function triggered by the snapshot so that it can handle multiple document changes without breaking. Document updates will happen whether they come as single or multiple property changes, so your code should be able to handle both. IMHO the problem is the filesystem update rather than how many snapshots are received.
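For example, instead of committing each property separately, a single update() call writes them all at once, so onSnapshot fires only one 'modified' change (botId and the field names below are illustrative, not from the original code):
// One write instead of several: the listener receives a single snapshot.
await firebasebotservice.db.collection('bot').doc(botId).update({
    name: newName,
    intents: newIntents,
    responses: newResponses
});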
You should use the docChanges() method, like this:
db.collection("cities").onSnapshot(querySnapshot => {
let changes = querySnapshot.docChanges();
for (let change of changes) {
var data = change.doc.data();
console.log(data);
}
});
An ObservableCollection is bound to a GridView in a UWP project. If I try to clear and re-add data, it fails with an error because the collection can only be modified on the UI thread.
I have set up Service Broker with SQL to notify the app when there is a change to the data. This is working correctly. However, every time I try to clear and modify the ObservableCollection, I get an exception.
using (SqlDataReader dr = cmd.ExecuteReader())
{
    while (dr.Read())
    {
        EmployeeLists.Add(new Employee { Name = dr[0].ToString(), Loc = dr[2].ToString() });
    }
}
This is the code I'm using initially to populate the ObservableCollection. Listening for changes works, but how do I apply those changes and sync them to the ObservableCollection?
I have tried clearing the EmployeeLists ObservableCollection and then re-adding everything. It seems clunky, and it doesn't work anyway, because it says I cannot modify the collection from another thread. I have tried several solutions online, but I'm not that familiar with async programming. Can anyone point me in the right direction?
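One direction worth trying (a sketch only; the OnDataChanged handler and employee list parameter are illustrative): marshal the collection updates back to the UI thread with the CoreDispatcher, so the Service Broker callback never touches the bound collection directly.
using System.Collections.Generic;
using Windows.ApplicationModel.Core;
using Windows.UI.Core;

// Called from the Service Broker notification, which runs on a background thread.
private async void OnDataChanged(IEnumerable<Employee> latestEmployees)
{
    // RunAsync hops to the UI thread, where the bound collection may be modified.
    await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
        CoreDispatcherPriority.Normal, () =>
        {
            EmployeeLists.Clear();
            foreach (var employee in latestEmployees)
            {
                EmployeeLists.Add(employee);
            }
        });
}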
I am trying to build a logging mechanism to log changes made to a record. I currently log the previous and the new record. However, as the site is very busy, I expect the log file to grow very large. To avoid this, I plan to capture only the modified fields.
Is there a way to capture only the modifications made to a record (in React), so my {request.body} will have fewer fields?
My server side is built with Node.js and the client side is React.
One approach you might want to consider is to add an onChange (universal) or onTextChanged (native) listener to each text field and store the form updates in local state/variables.
Then, when the user performs an action (submit, etc.), you can send the updated data to the logging module.
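A rough sketch of that idea (the component and field names are made up for illustration):
import { useState } from 'react';

function RecordForm({ record, onSave }) {
    // Only the fields the user actually edits end up in `changes`.
    const [changes, setChanges] = useState({});

    const handleChange = (e) =>
        setChanges((prev) => ({ ...prev, [e.target.name]: e.target.value }));

    const handleSubmit = (e) => {
        e.preventDefault();
        onSave(changes); // the request body now carries only the modified fields
    };

    return (
        <form onSubmit={handleSubmit}>
            <input name="title" defaultValue={record.title} onChange={handleChange} />
            <input name="notes" defaultValue={record.notes} onChange={handleChange} />
            <button type="submit">Save</button>
        </form>
    );
}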
The best way I found, and what works for me, is this: on the API server side, where I handle the update request, before hitting the database I compute the difference between the previous record and {request.body} using lodash, and send the result to my database update function.
var _ = require('lodash');

// Returns an object containing only the keys of `object` whose values differ
// from `base`, recursing into nested objects so only changed leaves remain.
const difference = (object, base) => {
    function changes(object, base) {
        return _.transform(object, function (result, value, key) {
            if (!_.isEqual(value, base[key])) {
                result[key] = (_.isObject(value) && _.isObject(base[key]))
                    ? changes(value, base[key])
                    : value;
            }
        });
    }
    return changes(object, base);
};

module.exports = difference;
I saved the above code in a file named diff.js and included it in my server-side file.
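For illustration, this is roughly how the diff can be wired into an update route; Express and Mongoose are assumed here, and Record and logChanges are made-up names, not from my actual code:
const difference = require('./diff');

app.put('/records/:id', async (req, res) => {
    const previous = await Record.findById(req.params.id).lean();
    // Log and persist only the fields that actually changed.
    const changedFields = difference(req.body, previous);
    await logChanges(req.params.id, changedFields);
    await Record.updateOne({ _id: req.params.id }, { $set: changedFields });
    res.sendStatus(204);
});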
It worked well.
Thanks for giving me the idea...
I have an ASP.NET MVC 3 (.NET 4) web application.
This app fetches data from an Oracle database and mixes some of that information with another SQL database.
Many tables are joined together, and a lot of database reading is involved.
I have already optimized the fetching side as best I could, and I don't have problems with that.
I've used caching to save information I don't need to fetch over and over.
Now I would like to build a responsive interface. My goal is to present the users with the filtered order headers and load the order lines in the background.
I want to do that because I need to manage all the order lines as a whole, because of some calculations.
What I have done so far is use jQuery to make an Ajax call to my action, where I fetch the order headers and save them in a cache (System.Web.Caching.Cache).
When that Ajax call succeeds, I fire off another Ajax call to fetch the lines (and, once again, save the result in a cache).
It works quite well.
Now I was trying to figure out if I can move some of this logic from the client to the server.
When my action is called, I want to fetch the order header, start a new thread responsible for fetching the order lines, and return the result to the client.
In a test app I tried both ThreadPool.QueueUserWorkItem and Task.Factory, but I need the generated thread to access my cache.
I've put together a test app and done something like this:
TEST 1
[HttpPost]
public JsonResult RunTasks01()
{
    var myCache = System.Web.HttpContext.Current.Cache;
    myCache.Remove("KEY1");
    ThreadPool.QueueUserWorkItem(o => MyFunc(1, 5000000, myCache));
    return (Json(true, JsonRequestBehavior.DenyGet));
}
TEST 2
[HttpPost]
public JsonResult RunTasks02()
{
    var myCache = System.Web.HttpContext.Current.Cache;
    myCache.Remove("KEY1");
    Task.Factory.StartNew(() =>
    {
        MyFunc(1, 5000000, myCache);
    });
    return (Json(true, JsonRequestBehavior.DenyGet));
}
MyFunc creates a list of items and saves the result in the cache; pretty silly, but it's just a test.
I would like to know if someone has a better solution, or knows of any implications of accessing the cache from a separate thread.
Is there anything I need to be aware of, should avoid, or could improve?
Thanks for your help.
One possible issue I can see with your approach is that System.Web.HttpContext.Current might not be available in the separate thread, since that thread could run later, after the request has finished. I would recommend using the classes in the System.Runtime.Caching namespace, introduced in .NET 4.0, instead of the old HttpContext.Cache.
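A minimal sketch of that suggestion, with the question's MyFunc stubbed in as a list builder (the class name, cache key, and cache policy are assumptions):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Caching;
using System.Threading.Tasks;

public static class OrderLinesLoader
{
    // Stand-in for the question's MyFunc: build a list of items.
    private static List<int> BuildItems(int start, int count) =>
        Enumerable.Range(start, count).ToList();

    public static void StartBackgroundLoad()
    {
        Task.Factory.StartNew(() =>
        {
            var items = BuildItems(1, 5000000);
            // MemoryCache.Default lives outside the request lifetime, so it is
            // safe to write to after the HTTP request has completed.
            MemoryCache.Default.Set("KEY1", items,
                new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(10) });
        });
    }
}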