Trouble updating ObservableCollection when bound to GridView - UWP C#

An ObservableCollection is bound to a GridView in a UWP project. When I try to clear and re-add data, it fails with an exception because the collection can only be modified on the UI thread.
I have set up SQL Server Service Broker to notify the app when the data changes, and that part is working correctly. However, every time I try to clear and repopulate the ObservableCollection, an exception is thrown.
using (SqlDataReader dr = cmd.ExecuteReader())
{
    while (dr.Read())
    {
        EmployeeLists.Add(new Employee { Name = dr[0].ToString(), Loc = dr[2].ToString() });
    }
}
This is the code I use initially to populate the ObservableCollection. Listening for changes is working, but how do I apply those changes and keep the ObservableCollection in sync?
I have tried clearing the EmployeeLists ObservableCollection and then adding everything again. It seems clunky, but it doesn't work anyway, because it says I cannot modify the collection from another thread. I have tried several solutions online, but I'm not that familiar with async programming. Can anyone point me in the right direction?
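For what it's worth, the usual pattern is to do the database read on the background thread and marshal only the collection mutation onto the UI thread. A minimal sketch, assuming the EmployeeLists collection and query from above; RefreshEmployeesAsync is a hypothetical method on the page class, and the CoreApplication.MainView dispatcher lookup is one common way to reach the UI thread from a callback:
// Sketch: read on the background thread, mutate the collection on the UI thread.
// Assumes EmployeeLists and Employee from the question.
private async Task RefreshEmployeesAsync(SqlCommand cmd)
{
    // Build the new list off the UI thread first.
    var fresh = new List<Employee>();
    using (SqlDataReader dr = cmd.ExecuteReader())
    {
        while (dr.Read())
        {
            fresh.Add(new Employee { Name = dr[0].ToString(), Loc = dr[2].ToString() });
        }
    }

    // Only the ObservableCollection mutation has to run on the UI thread.
    var dispatcher = Windows.ApplicationModel.Core.CoreApplication.MainView.CoreWindow.Dispatcher;
    await dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        EmployeeLists.Clear();
        foreach (var employee in fresh)
        {
            EmployeeLists.Add(employee);
        }
    });
}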

Related

CRUD operations on a Cosmos DB from a Blazor Server App

In the Blazor application I am developing, I have code that performs the basic CRUD operations.
<ButtonRowTemplate>
    <Button Color="Color.Success" Clicked="context.NewCommand.Clicked">New</Button>
    <Button Color="Color.Primary" Clicked="context.EditCommand.Clicked">Edit</Button>
    <Button Color="Color.Danger" Clicked="context.DeleteCommand.Clicked">Delete</Button>
    <Button Color="Color.Link" Clicked="context.ClearFilterCommand.Clicked">Clear Filter</Button>
</ButtonRowTemplate>
The code works, but only against the cached in-memory context; the changes are not propagated to the DB. The APIs I have written in the past use a straight update command similar to:
replaceResponse = await this.container.ReplaceItemAsync<T>(itemBody, itemBody.Id, new PartitionKey(itemBody.someID));
I am trying to sync the cache / context with the DB. I have unsuccessfully tried to tie in a database update in parallel with the cache update using code similar to the code above. Is this correct or is there a more serial way of updating the cache and then pushing those context changes to the DB? In a sense, committing those changes.
I should also add that I am using the Microsoft.Azure.DocumentDB.Core. I would like to get the code working before moving to the Microsoft.Azure.Cosmos library. I appreciate any assistance as I am getting back into front end development using Blazor.
I have a solution, but the question in my mind is: is it right? More specifically, was there an easier way to do it, like committing the changes from the cache?
WHAT I DID
First, I did a little more digging into the Blazorise DataGrid component. There is a parameter called RowInserted; I believe this triggers a function after a row is added to the cache.
<DataGrid TItem="PowderInfo"
          Data="@powderlist"
          @bind-SelectedRow="@selectedLoad"
          RowInserted="AddNewDoc">
Then I added a new function called AddNewDoc to the page code.
protected async Task AddNewDoc()
{
    // Grab the row the grid just added to the local cache.
    var item = this.powderlist.ElementAt(this.powderlist.Count() - 1);
    item.id = this.powderlist.Count().ToString();
    await getPowderData.AddNewDocument(item);
}
item is an instance of my document type.
powderlist is my local cache.
this.powderlist.ElementAt gets the cached item that was just added.
item.id is populated using the count value.
Finally, a call to the DAL's AddNewDocument with the new document.
public async Task AddNewDocument(PowderInfo item)
{
    await CosmosDBRepository<PowderInfo>.CreateItemAsync(item, "Powders");
}
And now my database is updated. But again, since the cache is coming from the DAL, was there another way to update the DB?
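For what it's worth, one way to avoid fishing the new row out of the cache by index is to let the grid hand you the affected item. Blazorise's DataGrid also exposes RowUpdated and RowRemoved callbacks alongside RowInserted, so edits and deletes can be pushed to the DB the same way. A sketch under that assumption; UpdateDoc, DeleteDoc, and the two DAL methods they call are hypothetical counterparts to AddNewDocument:
// Sketch only: RowUpdated/RowRemoved are the Blazorise DataGrid callbacks
// that complement RowInserted; ReplaceDocument and DeleteDocument are
// hypothetical DAL methods mirroring AddNewDocument.
protected async Task UpdateDoc(SavedRowItem<PowderInfo, Dictionary<string, object>> e)
{
    // e.Item is the row the grid just finished editing in its cache.
    await getPowderData.ReplaceDocument(e.Item);
}

protected async Task DeleteDoc(PowderInfo item)
{
    // Propagate the removal to Cosmos DB as well.
    await getPowderData.DeleteDocument(item.id);
}
These would be wired up in the markup as RowUpdated="UpdateDoc" and RowRemoved="DeleteDoc", next to the existing RowInserted attribute.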

mongoose optimistic concurrency - how to track changes on document when versioning error handling is needed

We have a big enterprise Node.js application with multiple microservices that can access a DB entity called context in parallel. At some point we started to have concurrency issues: two separate microservices loaded the same context, made different changes, and saved, resulting in data loss. Because of this we rewrote our DB layer to use mongoose with full optimistic concurrency enabled (via the schema option optimisticConcurrency). This works fine: when we get a version error, we reload the latest version of the context, reapply all the changes, and save again. The problem is that this reapplication duplicates code. Our general approach can be expressed by the following pseudo code:
let document = await Context.findOne(...);
document.foo = 'bar';
document.bar = 'foo';
try {
  await document.save();
} catch (mongooseVersionError) {
  // Reload the latest version and reapply the same changes.
  document = await Context.findOne(...);
  // DUPLICATE CODE HERE, DOING THE SAME ASSIGNMENTS (foo, bar) AGAIN!
  document.foo = 'bar';
  document.bar = 'foo';
  await document.save();
}
What we would like instead is some automated tracking of all changes on the document, so that when we get a mongoose versioning error we can reapply those changes automatically. Any idea how to do this in the most elegant way? Does mongoose support something like this out of the box? I know we could check the document's attributes one by one in a pre-save hook via the Document.prototype.isModified() method, but that seems like quite a laborious and inflexible approach.
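One option, short of mongoose tracking the changes for you, is to express the assignments once as a function and retry it on conflict. A minimal sketch, assuming the Context model from above; updateWithRetry and the retry limit are made up for illustration:
// Sketch: write the changes once as a mutator function and reapply it on
// version conflicts. mongoose throws a VersionError when optimistic
// concurrency detects a conflicting save.
async function updateWithRetry(query, mutate, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const document = await Context.findOne(query);
    mutate(document); // all assignments live in one place, no duplication
    try {
      await document.save();
      return document;
    } catch (err) {
      if (err.name !== 'VersionError' || attempt === maxRetries) throw err;
    }
  }
}

// Usage: the assignments are written exactly once.
await updateWithRetry({ _id: contextId }, (doc) => {
  doc.foo = 'bar';
  doc.bar = 'foo';
});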

Firebase doc changes

Thanks for your help. I am new to Firebase and am designing an application with Node.js. Every time a change is detected in a document, a function is invoked that creates or updates the file system according to the new data structure in the Firebase document. Everything works fine, but if the document is updated with two or more attributes, the makeBotFileSystem function is invoked the same number of times. This causes performance and file-overwriting problems, since I generate or update multiple files.
I would like to wait until the document has finished updating before reacting, rather than reacting attribute by attribute. Is there any way to do this? This is my code:
let botRef = firebasebotservice.db.collection('bot');
botRef.onSnapshot(querySnapshot => {
  querySnapshot.docChanges().forEach(change => {
    if (change.type === 'modified') {
      console.log('bot-changes ' + change.doc.id);
      const botData = change.doc.data();
      botData.botId = change.doc.id;
      // Here I create or update the filesystem structure according to the data changes.
      fsbotservice.makeBotFileSystem(botData);
    }
  });
});
The onSnapshot function will notify you any time a document changes. If property changes are committed one by one instead of updating the document all at once, you will receive multiple snapshots.
One way to partially solve this is to change the code that updates the document so that it commits all property changes in a single operation; that way you only receive one snapshot.
Nonetheless, you should design the function triggered by the snapshot so that it can handle multiple document changes without breaking. Document updates will happen whether they arrive as single or multiple property changes, so your code should be able to handle them. IMHO the problem is the filesystem update rather than how many snapshots are received.
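If the writers can't be changed, a per-document debounce on the receiving side gets close to "wait until the document is finished updating". A sketch, assuming the botRef listener from the question; the 500 ms quiet window is an arbitrary choice:
// Sketch: coalesce bursts of snapshots so makeBotFileSystem runs once per
// quiet period per bot document, instead of once per modified attribute.
const pendingRebuilds = new Map();

function scheduleRebuild(botData) {
  clearTimeout(pendingRebuilds.get(botData.botId));
  pendingRebuilds.set(botData.botId, setTimeout(() => {
    pendingRebuilds.delete(botData.botId);
    fsbotservice.makeBotFileSystem(botData);
  }, 500));
}
Inside the 'modified' branch above, you would call scheduleRebuild(botData) instead of invoking makeBotFileSystem directly.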
You should use the docChanges() method like this:
db.collection("cities").onSnapshot(querySnapshot => {
  let changes = querySnapshot.docChanges();
  for (let change of changes) {
    const data = change.doc.data();
    console.log(data);
  }
});

NSFetchedResultsController with external changes?

I'm reading a Core Data database in a WatchKit extension and changing the store from the parent iPhone application. I'd like to use NSFetchedResultsController to drive changes to the watch UI, but NSFetchedResultsController in the extension doesn't respond to changes made to the store by the parent application. Is there any way to get the secondary process to respond to changes made in the first process?
Some things to try/consider:
Do you have App Groups enabled?
If so, is your data store in a location shared between your host app and the extension?
If so does deleting the cached data, as referenced here help?
Read this answer to a very similar question: https://stackoverflow.com/a/29566287/1757229
Also make sure you set stalenessInterval to 0.
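For reference, that is a one-line setting on the context (stalenessInterval is a real NSManagedObjectContext property; the snippet assumes a managedObjectContext variable):
// A staleness interval of 0 tells Core Data to always fetch fresh values
// instead of trusting its row cache, which matters when another process
// writes to the same store.
managedObjectContext.stalenessInterval = 0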
I faced the same problem. My solution applies if you want to update the watch app on main app updates, but it could be easily extended to go both ways.
Note that I'm using a simple extension on NSNotificationCenter in order to post and observe Darwin notifications more easily.
1. Post the Darwin notification
In my CoreData store manager, whenever I save the main managed object context, I post a Darwin notification:
notificationCenter.addObserverForName(NSManagedObjectContextDidSaveNotification, object: self.managedObjectContext, queue: NSOperationQueue.mainQueue(), usingBlock: { [weak s = self] notification in
    if let moc = notification.object as? NSManagedObjectContext where moc == s?.managedObjectContext {
        notificationCenter.postDarwinNotificationWithName(IPCNotifications.DidUpdateStoreNotification)
    }
})
2. Listen for the Darwin notification (but only on Watch)
I listen for the same Darwin notification in the same class, but I make sure I am in the actual watch extension (to avoid refreshing the context that was just updated). I'm not using a framework (I must also target iOS 7), so I just added the same CoreDataManager to both the main app and the watch extension. To determine where I am, I use a compile-time flag.
#if WATCHAPP
notificationCenter.addObserverForDarwinNotification(self, selector: "resetContext", name: IPCNotifications.DidUpdateStoreNotification)
#endif
3. Reset context
When the watch extension receives the notification, it resets the managed object context and sends an internal notification telling the FRCs to update themselves. I'm not sure why, but it didn't work without a small delay (suggestions are welcome).
func resetContext() {
    self.managedObjectContext?.reset()
    delay(1) {
        NSNotificationCenter.defaultCenter().postNotificationName(Notifications.ForceDataReload, object: self.managedObjectContext?.persistentStoreCoordinator)
    }
}
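Note that delay here is not a Foundation API; presumably it is a small GCD wrapper along these lines (an assumption, written in the same Swift 2 era style as the rest of the answer):
// Hypothetical helper: run a closure on the main queue after a delay.
func delay(delay: Double, closure: () -> ()) {
    let when = dispatch_time(DISPATCH_TIME_NOW, Int64(delay * Double(NSEC_PER_SEC)))
    dispatch_after(when, dispatch_get_main_queue(), closure)
}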
4. Finally, update the FRCs
In my case, I was embedding a plain FRC in a data structure so I added the observer outside of the FRC scope. Anyway you could easily subclass NSFetchedResultsController and add the following line in its init method (remember to stop observing on dealloc)
NSNotificationCenter.defaultCenter().addObserver(fetchedResultController, selector: "forceDataReload:", name: CoreDataStore.Notifications.ForceDataReload, object: fetchedResultController.managedObjectContext.persistentStoreCoordinator)
and
extension NSFetchedResultsController {
    func forceDataReload(notification: NSNotification) {
        var error: NSError?
        if !self.performFetch(&error) {
            Log.error("Error performing fetch update after forced data reload request: \(error)")
        }
        self.delegate?.controllerDidChangeContent?(self)
    }
}
At WWDC ‘17, Apple introduced a number of new Core Data features, one of which is Persistent History Tracking or NSPersistentHistory. But as of the time of writing, its API is still undocumented. Thus, the only real reference is the What’s New in Core Data WWDC session.
More info and an example here

Error registering SharePoint WebDeleting event receiver in some environments

I am trying to register a WebDeleting event receiver within SharePoint. This works fine in my development environment, but not in several staging environments. The error I get back is "Value does not fall within the expected range.". Here is the code I use:
SPSecurity.RunWithElevatedPrivileges(delegate()
{
    using (SPSite elevatedSite = new SPSite(web.Site.ID))
    {
        using (SPWeb elevatedWeb = elevatedSite.OpenWeb(web.ID))
        {
            try
            {
                elevatedWeb.AllowUnsafeUpdates = true;
                SPEventReceiverDefinition eventReceiver = elevatedWeb.EventReceivers.Add(new Guid(MyEventReciverId));
                eventReceiver.Type = SPEventReceiverType.WebDeleting;
                Type eventReceiverType = typeof(MyEventHandler);
                eventReceiver.Assembly = eventReceiverType.Assembly.FullName;
                eventReceiver.Class = eventReceiverType.FullName;
                eventReceiver.Update();
                elevatedWeb.AllowUnsafeUpdates = false;
            }
            catch (Exception ex)
            {
                // Do stuff...
            }
        }
    }
});
I realize that I can do this through a feature element file (I am trying that approach now), but would prefer to use the above approach.
The errors I consistently get in the ULS logs are:
03/11/2010 17:16:57.34 w3wp.exe (0x09FC) 0x0A88 Windows SharePoint Services Database 6f8g Unexpected Unexpected query execution failure, error code 3621. Additional error information from SQL Server is included below. "The statement has been terminated." Query text (if available): "{?=call proc_InsertEventReceiver(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}"
03/11/2010 17:16:57.34 w3wp.exe (0x09FC) 0x0A88 Windows SharePoint Services General 8e2s Medium Unknown SPRequest error occurred. More information: 0x80070057
Any ideas?
UPDATE - Some interesting things I have learned...
I modified my code to do the following:
I called EventReceivers.Add() without the GUID, since most examples I see do not pass one.
I gave the event receiver a Name and Sequence number, since most examples do that.
I deployed this change along with some extra trace statements that go to the ULS logs. After enough iisresets and clearing my assembly from the GAC, I started to see my new trace statements in the ULS logs, and I no longer got the error!
So I started working back towards my original code to see exactly which change had helped. I eventually ended up back at the original version from source control, and it still worked :-S.
So the answer is clearly some caching issue. However, when I was originally trying to get it to work, I tried IISRESETs, restarting some SharePoint services such as OWSTimer (which, I believe, runs the event handler, but probably isn't involved in the event registration where I am getting the error), and even a reboot to make sure no assembly caching was going on - and that did not help at the time.
The only thing I have to go on is that a sequence of steps such as the following may have helped:
Clear the GAC of the assembly that contains the registration code and event handler class.
Do an IISRESET.
Uninstall the WSP.
Do an IISRESET.
Install the WSP.
Do an IISRESET.
To get it working I never did a reboot or restarted SharePoint services, but I had done those prior to getting it working (before changing my code).
I suppose I could dig more with Reflector to see what I can find, but I believe you hit a dead end (unmanaged code) pretty quickly. I wonder what could be holding on to the old DLL? I can't imagine that SQL Server would be in some way. Even so, a reboot would have fixed that (the entire farm, including SQL Server, is on the same machine in this environment).
So, it appears that the whole problem was creating the event receiver by providing the GUID.
EventReceiverDefinition eventReceiver = elevatedWeb.EventReceivers.Add(new Guid(MyEventReciverId));
Now I am doing:
EventReceiverDefinition eventReceiver = elevatedWeb.EventReceivers.Add();
Unfortunately this means that when I want to find out whether the event is already registered, I must do something like the code below instead of a single one-liner.
// Had to use the below instead of: web.EventReceivers[new Guid(MyEventReceiverId)]
private SPEventReceiverDefinition GetWebDeletingEventReceiver(SPWeb web)
{
    Type eventReceiverType = typeof(MyEventHandler);
    string eventReceiverAssembly = eventReceiverType.Assembly.FullName;
    string eventReceiverClass = eventReceiverType.FullName;
    SPEventReceiverDefinition eventReceiver = null;
    foreach (SPEventReceiverDefinition eventReceiverIter in web.EventReceivers)
    {
        if (eventReceiverIter.Type == SPEventReceiverType.WebDeleting)
        {
            if (eventReceiverIter.Assembly == eventReceiverAssembly && eventReceiverIter.Class == eventReceiverClass)
            {
                eventReceiver = eventReceiverIter;
                break;
            }
        }
    }
    return eventReceiver;
}
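For what it's worth, the same lookup can be collapsed with LINQ; a sketch, which needs using System.Linq and assumes the same MyEventHandler type:
// Equivalent LINQ form of the loop above: find the WebDeleting receiver
// registered with this handler's assembly and class names.
private SPEventReceiverDefinition GetWebDeletingEventReceiver(SPWeb web)
{
    Type eventReceiverType = typeof(MyEventHandler);
    return web.EventReceivers
        .Cast<SPEventReceiverDefinition>()
        .FirstOrDefault(r => r.Type == SPEventReceiverType.WebDeleting
            && r.Assembly == eventReceiverType.Assembly.FullName
            && r.Class == eventReceiverType.FullName);
}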
It's still not clear why things seemed to linger and require some cleanup (iisreset, reboots, etc.) so if anyone else has this problem keep that in mind.
