Data tracking in DocumentDB - Azure

I am trying to keep a history of data (at least one step back) in DocumentDB.
For example, say I have a property called Name in a document, with the value "Pieter". If I now change it to "Sam", I have to maintain the history that it was previously "Pieter".
As of now I am thinking of a pre-trigger. Are there any other solutions?

Cosmos DB (formerly DocumentDB) now offers change tracking via its Change Feed. With the Change Feed, you can listen for changes on a particular collection, ordered by time of modification within each partition.
The change feed is accessible via:
Azure Functions
DocumentDB (SQL) SDK
Change Feed Processor Library
For example, here's a snippet from the Change Feed documentation on reading from the Change Feed for a given partition (the full code example is in that doc):
IDocumentQuery<Document> query = client.CreateDocumentChangeFeedQuery(
    collectionUri,
    new ChangeFeedOptions
    {
        PartitionKeyRangeId = pkRange.Id,
        StartFromBeginning = true,
        RequestContinuation = continuation,
        MaxItemCount = -1,
        // Set reading time: only show change feed results modified since StartTime
        StartTime = DateTime.Now - TimeSpan.FromSeconds(30)
    });

while (query.HasMoreResults)
{
    FeedResponse<dynamic> readChangesResponse = query.ExecuteNextAsync<dynamic>().Result;

    foreach (dynamic changedDocument in readChangesResponse)
    {
        Console.WriteLine("document: {0}", changedDocument);
    }

    checkpoints[pkRange.Id] = readChangesResponse.ResponseContinuation;
}
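If you'd rather not manage partition ranges and continuation tokens yourself, the Azure Functions option from the list above is the simplest: a Cosmos DB trigger invokes your function with each batch of changed documents. A minimal sketch, assuming the Functions Cosmos DB trigger binding; the database, collection, and connection-setting names are placeholders, not from the question:
[FunctionName("TrackDocumentChanges")]
public static void Run(
    [CosmosDBTrigger(
        databaseName: "mydb",                      // placeholder
        collectionName: "mycollection",            // placeholder
        ConnectionStringSetting = "CosmosDBConnection",
        LeaseCollectionName = "leases",
        CreateLeaseCollectionIfNotExists = true)]
    IReadOnlyList<Document> changedDocuments,
    ILogger log)
{
    foreach (var doc in changedDocuments)
    {
        // The change feed delivers the *new* version of each document; to keep
        // "one step back", write the previous value to a history collection here.
        log.LogInformation($"Changed document id: {doc.Id}");
    }
}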

If you're trying to build an audit log, I'd suggest looking into Event Sourcing. Building your domain from events ensures a correct log. See https://msdn.microsoft.com/en-us/library/dn589792.aspx and http://www.martinfowler.com/eaaDev/EventSourcing.html
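As a rough illustration of the idea in DocumentDB terms (the event class and collection names here are hypothetical, not from either article): instead of overwriting Name, you append one immutable event per change and derive both the current state and the full history from the event stream:
// Hypothetical event type: one immutable record per change.
public class NameChangedEvent
{
    public string StreamId { get; set; }     // the logical document the event belongs to
    public string OldValue { get; set; }     // "Pieter"
    public string NewValue { get; set; }     // "Sam"
    public DateTime OccurredAt { get; set; }
}

// Appending to an "events" collection keeps the full history for free;
// the current Name is the NewValue of the latest event in the stream.
await client.CreateDocumentAsync(
    UriFactory.CreateDocumentCollectionUri("mydb", "events"),   // placeholder names
    new NameChangedEvent
    {
        StreamId = "customer-42",
        OldValue = "Pieter",
        NewValue = "Sam",
        OccurredAt = DateTime.UtcNow
    });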

Related

Changing a value in an Azure Cosmos DB

I've inherited a project at work that uses Azure Cosmos DB. It's completely new to me. In Cosmos DB we have a bunch of user preferences that are saved. I've discovered a typo in the settings that I need to fix; however, I cannot figure out how to modify the value.
So far I've found the query explorer and I want to run this query:
UPDATE c
SET c.Setting = REPLACE(c.Setting, 'N*m', 'N-m')
but Query Explorer only supports SELECT, not UPDATE.
I tried to use Azure Storage Explorer, but when I try to access the document I get nothing except a modal saying "Hold on! We are still working on this." Seriously, Microsoft?
My current thinking is to upload a stored procedure and run that, but I'm not sure where to start. My other thought is to write a small C# application that iterates through each user document and updates them individually. Something like this:
currId = 0;
databaseId = ...;
collectionId = ...;
collectionLink = ...;

while (currId < maxUserId)
{
    var response = await client.ReadDocumentAsync(UriFactory.CreateDocumentUri(databaseId, collectionId, currId.ToString()));

    if (response.Resource != null)
    {
        var upserted = response.Resource;
        // Replace just the bad token rather than overwriting the whole setting
        upserted.SetPropertyValue("Setting", upserted.GetPropertyValue<string>("Setting").Replace("N*m", "N-m"));
        response = await client.UpsertDocumentAsync(collectionLink, upserted);
    }
    currId++;
}
But boy if that doesn't seem like a dumb idea...
What's the best way to update a single value in a CosmosDB Document?
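For what it's worth, here's a sketch of a variant of the loop above that queries only the affected documents instead of walking IDs, then fixes each one in place. It reuses names from the snippet above and is untested, so treat it as a starting point rather than a known-good answer:
// Find only the documents that actually contain the typo...
var affected = client.CreateDocumentQuery<Document>(
        UriFactory.CreateDocumentCollectionUri(databaseId, collectionId),
        "SELECT * FROM c WHERE CONTAINS(c.Setting, 'N*m')",
        new FeedOptions { EnableCrossPartitionQuery = true })
    .ToList();

// ...then apply the same REPLACE the SQL above was trying to do.
foreach (var doc in affected)
{
    doc.SetPropertyValue("Setting", doc.GetPropertyValue<string>("Setting").Replace("N*m", "N-m"));
    await client.ReplaceDocumentAsync(doc.SelfLink, doc);
}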

How to Speed Up Contract API CustomerID Search?

I'm trying to search the existing Customers and return the CustomerID if it exists. This is the code I'm using, which works:
var CustomerToFind = new Customer
{
    MainContact = new Contact
    {
        Email = new StringSearch { Value = emailIn }
    }
};

var sw = new Stopwatch();
sw.Start();

// see if any results
var result = (Customer)soapClient.Get(CustomerToFind);

sw.Stop();
Debug.WriteLine(sw.ElapsedMilliseconds);
However, I'm finding it extremely slow, to the point of being unusable. For example, on the DEMO dataset, on my i7-6700K @ 4 GHz with 24 GB RAM and an SSD, running SQL Server 2016 Developer Edition locally, a simple email search takes 3-4 seconds. On my production dataset with 10k Customer records, it takes over 60 seconds and times out.
Is this typical using contract-based SOAP? Screen-based SOAP seems much faster, almost instant. If I perform a SQL SELECT on the database tables in Microsoft Management Studio, I can also return the result instantly.
Is there a better, quick way to query whether a Customer with email address "test@test.com" exists, and to return the CustomerID?
Try using GetList instead of Get. It's better suited for "search for something" scenarios.
When using GetList there are two further optimizations, depending on which endpoint you're using. In the Default/5.30.001 endpoint there's a second parameter to GetList, which you should set to false. In the Default/6.00.001 endpoint there's no second parameter, but there is an additional property on the entity itself, called ReturnBehavior. Either set it to OnlySpecified and then add *Return to the required fields, like this:
var CustomerToFind = new Customer
{
    ReturnBehavior = ReturnBehavior.OnlySpecified,
    CustomerID = new StringReturn(),
    MainContact = new Contact
    {
        Email = new StringSearch { Value = emailIn }
    }
};
or set it to OnlySystem and then use the ID on the returned entity to request the full entity.
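For completeness, a sketch of the call itself; the OfType cast and null handling are assumptions about the generated SOAP proxy (GetList returns an Entity[]), not code from the question:
// Default/6.00.001-style call: GetList takes only the entity to search by.
var results = soapClient.GetList(CustomerToFind);

var customer = results.OfType<Customer>().FirstOrDefault();
if (customer != null)
{
    Debug.WriteLine("CustomerID: " + customer.CustomerID.Value);
}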

DocumentDb optimizing the resume of a non-partitioned change feed

I am creating a non-partitioned change feed that I want to resume, i.e. poll for new changes every X seconds. The checkpoint variable below holds the last response continuation token.
private string checkpoint;

private async Task ReadEvents()
{
    FeedResponse<dynamic> feedResponse;
    do
    {
        feedResponse = await client.ReadDocumentFeedAsync(commitsLink, new FeedOptions
        {
            MaxItemCount = this.subscriptionOptions.MaxItemCount,
            RequestContinuation = checkpoint
        });

        if (feedResponse.ResponseContinuation != null)
        {
            checkpoint = feedResponse.ResponseContinuation;
        }

        // Code to process docs goes here...
    } while (feedResponse.ResponseContinuation != null);
}
Note the "if" block around the checkpoint assignment. Without it, the continuation would be set back to null once the feed is drained, and since a null request continuation pulls the first set of documents in the change feed, the polling cycle would effectively restart from the beginning.
The downside, however, is that each polling loop replays the previous set of documents rather than just any additional changes. Is there anything I can do to optimize this further, or is this a limitation of the change feed API?
To read the change feed, you must use CreateDocumentChangeFeedQuery (which never resets the ResponseContinuation) instead of ReadDocumentFeed (which sets it to null when there are no more results).
See https://learn.microsoft.com/en-us/azure/documentdb/documentdb-change-feed#working-with-the-rest-api-and-sdk for a sample.
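Adapted to the polling loop above, that looks roughly like this (a sketch assuming the same client, commitsLink, subscriptionOptions, and checkpoint members; on a partitioned collection you would also need to set PartitionKeyRangeId):
private async Task ReadEvents()
{
    var query = client.CreateDocumentChangeFeedQuery(commitsLink, new ChangeFeedOptions
    {
        MaxItemCount = this.subscriptionOptions.MaxItemCount,
        RequestContinuation = checkpoint,   // null on the first run
        StartFromBeginning = true
    });

    while (query.HasMoreResults)
    {
        FeedResponse<dynamic> feedResponse = await query.ExecuteNextAsync<dynamic>();

        // Code to process docs goes here...

        // Unlike ReadDocumentFeedAsync, this stays non-null when the feed is drained,
        // so the next polling cycle resumes where this one left off.
        checkpoint = feedResponse.ResponseContinuation;
    }
}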

"Order By" When Retrieving From Acumatica Web Service API

I was wondering if there was a way to add an "Order By" clause when retrieving data from Acumatica through the Web Service API?
IN202500Content IN202500 = oScreen.IN202500GetSchema();
oScreen.IN202500Clear();

Command[] oCmd = new Command[]
{
    IN202500.StockItemSummary.ServiceCommands.EveryInventoryID,
    IN202500.StockItemSummary.InventoryID,
    IN202500.StockItemSummary.Description,
    IN202500.StockItemSummary.ItemStatus,
    IN202500.GeneralSettingsItemDefaults.ItemClass,
    IN202500.GeneralSettingsItemDefaults.LotSerialClass,
    IN202500.PriceCostInfoPriceManagement.DefaultPrice,
};

Filter[] oFilter = new Filter[]
{
    new Filter
    {
        Field = new Field
        {
            ObjectName = IN202500.StockItemSummary.InventoryID.ObjectName,
            FieldName = "LastModifiedDateTime"
        },
        Condition = FilterCondition.GreaterOrEqual,
        Value = SyncDate
    }
};

String[][] sReturn = oScreen.IN202500Export(oCmd, oFilter, iMaxRecords, true, false);
I would like to sort the results, for example by DefaultPrice, so that I can retrieve the top 200 most expensive items in my list (using iMaxRecords = 200 in this case).
I haven't seen any parameter that allows me to do the sorting.
I ran into this when I developed a round-robin assignment system, and the short answer is that with the Acumatica API you can't sort the results; you have to do it outside of the API. (This info came from a friend closely tied to the Acumatica product.)
I came up with two options:
Query your DB directly. There are always reasons not to do this, but it is much faster than pulling the result from the API, and it lets you bypass the BQL Acumatica uses and write a SQL statement that does EXACTLY what you want, producing a result that is easier to work with than the jagged array Acumatica sends.
Use some LINQ to build a second string[][] that is sorted by price, then trim it to the top 200 (you would need all results from Acumatica first), as in the rough sketch below.
// This is rough but should get you there. It assumes DefaultPrice comes back
// as the last column of each row; adjust priceIndex to match your oCmd order.
int priceIndex = 6;

string[][] MaxPriceList = sReturn.OrderByDescending(innerArray =>
{
    // Guard against short/null rows and unparseable prices; sort those last.
    if (innerArray != null && innerArray.Length > priceIndex)
    {
        decimal price;
        if (decimal.TryParse(innerArray[priceIndex], out price))
            return price;
    }
    return decimal.MinValue;
}).Take(200).ToArray(); // keep the 200 most expensive rows

Orchard background task not persisting PartRecords to the database

I'm trying to use a background task to gather Likes/Comments from the Facebook Graph API and use that data to drive our blog's trending articles.
Here the trendingModels have already been populated and are being used to fill in TrendingPart.GraphId and TrendingPart.TrendingValue.
I'm not getting any exceptions, and the properties on TrendingPart point to the fields in the TrendingPartRecord.
Yet nothing persists to the database. Any ideas why?
_orchardService is an IOrchardServices
var articleParts = _orchardService.ContentManager.GetMany<TrendingPart>(
    trendingModels.Select(r => r.OrchardId).ToList(),
    VersionOptions.Published,
    QueryHints.Empty);

// Cycle through the records and update them from the matching model
foreach (var articlePart in articleParts)
{
    ArticleTrendingModel trendingModel = trendingModels.Where(r => r.OrchardId == articlePart.Id).FirstOrDefault();

    if (trendingModel != null)
    {
        // Not persisting to the database, WHY?
        // What's missing?
        // If I'm understanding things properly nHibernate should push this to the db autoMagically.
        articlePart.GraphId = trendingModel.GraphId;
        articlePart.TrendingValue = trendingModel.TrendingValue;
    }
}
Edit:
It's probably worth noting that I can update and publish the fields on the TrendingPart in the admin panel, but the saved changes don't appear in the MyModule_TrendingPartRecord table.
The solution was to change my service to a transient dependency using ITransientDependency.
The service was holding a reference to the part records, and because it was treated as a singleton it was never disposed, so the push to the database was never made.
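In Orchard terms that just means deriving the service interface from the ITransientDependency marker. A hypothetical sketch (the service name is illustrative, not from the question):
// Marking the interface with ITransientDependency makes Orchard resolve a fresh
// instance per resolution, so stale references to part records don't outlive the
// unit of work and NHibernate can flush the changes.
public interface ITrendingService : ITransientDependency
{
    void UpdateTrendingParts(IEnumerable<ArticleTrendingModel> trendingModels);
}

public class TrendingService : ITrendingService
{
    private readonly IOrchardServices _orchardService;

    public TrendingService(IOrchardServices orchardService)
    {
        _orchardService = orchardService;
    }

    public void UpdateTrendingParts(IEnumerable<ArticleTrendingModel> trendingModels)
    {
        // ...the GetMany/update loop from the question goes here...
    }
}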
