Add or replace entity in Azure Table Storage

I'm working with Windows Azure Table Storage and have a simple requirement: add a new row, overwriting any existing row with that PartitionKey/RowKey. However, saving the changes always throws an exception, even if I pass in the ReplaceOnUpdate option:
tableServiceContext.AddObject(TableName, entity);
tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
If the entity already exists it throws:
System.Data.Services.Client.DataServiceRequestException: An error occurred while processing this request. ---> System.Data.Services.Client.DataServiceClientException: <?xml version="1.0" encoding="utf-8" standalone="yes"?>
<error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
<code>EntityAlreadyExists</code>
<message xml:lang="en-AU">The specified entity already exists.</message>
</error>
Do I really have to manually query for the existing row first and call DeleteObject on it? That seems very slow. Surely there is a better way?

As you've found, you can't just add another item with the same row key and partition key, so you will need to run a query to check whether the item already exists. In situations like this I find it helpful to look at the Azure REST API documentation to see what is available to the storage client library. You'll see that there are separate methods for inserting and updating, and that ReplaceOnUpdate only has an effect when you're updating, not inserting.
While you could delete the existing item and then add the new one, you could just update the existing one (saving you one round trip to storage). Your code might look something like this:
var existsQuery =
    from e in tableServiceContext.CreateQuery<MyEntity>(TableName)
    where e.PartitionKey == objectToUpsert.PartitionKey
       && e.RowKey == objectToUpsert.RowKey
    select e;

MyEntity existingObject = existsQuery.FirstOrDefault();

if (existingObject == null)
{
    tableServiceContext.AddObject(TableName, objectToUpsert);
}
else
{
    existingObject.Property1 = objectToUpsert.Property1;
    existingObject.Property2 = objectToUpsert.Property2;
    tableServiceContext.UpdateObject(existingObject);
}

tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
EDIT: While correct at the time of writing, with the September 2011 update Microsoft have updated the Azure Table API to include two upsert commands: Insert Or Replace Entity and Insert Or Merge Entity.

In order to operate on an existing object NOT managed by the TableServiceContext, with either DeleteObject or SaveChanges with the ReplaceOnUpdate option, you need to call AttachTo to attach the object to the TableServiceContext, instead of calling AddObject, which instructs the context to attempt an insert.
http://msdn.microsoft.com/en-us/library/system.data.services.client.dataservicecontext.attachto.aspx
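A minimal sketch of that pattern, reusing the names from the question. Note this issues an update of an existing row: if the entity does not exist yet, the save fails with a ResourceNotFound error, so it complements rather than replaces the insert path.

// Attach the entity as an existing one instead of adding it; the "*" ETag
// makes the subsequent replace unconditional (no If-Match concurrency check).
tableServiceContext.AttachTo(TableName, objectToUpsert, "*");
tableServiceContext.UpdateObject(objectToUpsert);
tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);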

In my case it was not possible to simply remove the item first, so I do it like this. This results in one SaveChanges call to the server that first removes the existing object and then adds the new one, removing the need to copy property values:
var existing = from e in _ServiceContext.AgentTable
               where e.PartitionKey == item.PartitionKey
                  && e.RowKey == item.RowKey
               select e;

_ServiceContext.IgnoreResourceNotFoundException = true;

var existingObject = existing.FirstOrDefault();
if (existingObject != null)
{
    _ServiceContext.DeleteObject(existingObject);
}

_ServiceContext.AddObject(AgentConfigTableServiceContext.AgetnConfigTableName, item);
_ServiceContext.SaveChangesWithRetries();

_ServiceContext.IgnoreResourceNotFoundException = false;

Insert Or Replace and Insert Or Merge were added to the API in September 2011. Here is an example using Storage API 2.0, which is easier to understand than the way it is done in the 1.7 API and earlier.
public void InsertOrReplace(ITableEntity entity)
{
    retryPolicy.ExecuteAction(
        () =>
        {
            try
            {
                TableOperation operation = TableOperation.InsertOrReplace(entity);
                cloudTable.Execute(operation);
            }
            catch (StorageException e)
            {
                string message = "InsertOrReplace entity failed.";
                if (e.RequestInformation.HttpStatusCode == 404)
                {
                    message += " Make sure the table is created.";
                }
                // do something with message
            }
        });
}

The Storage API does not allow more than one operation per entity (delete+insert) in a group transaction:
An entity can appear only once in the transaction, and only one operation may be performed against it.
see MSDN: Performing Entity Group Transactions
So in fact you need to read first and decide on insert or update.
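That said, once the upsert support from the September 2011 update is available, a batched Insert Or Replace avoids the read entirely while still satisfying the one-operation-per-entity rule, since each entity appears exactly once. A sketch assuming the classic Microsoft.WindowsAzure.Storage SDK and a hypothetical collection of entities sharing one partition key:

// All entities in one batch must share a PartitionKey, and a batch holds
// at most 100 operations.
var batch = new TableBatchOperation();
foreach (ITableEntity entity in entitiesWithSamePartitionKey) // hypothetical collection
{
    batch.InsertOrReplace(entity); // upsert: no read-before-write required
}
cloudTable.ExecuteBatch(batch);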

You may use the UpsertEntity and UpsertEntityAsync methods of the TableClient in the official Microsoft Azure.Data.Tables package.
A fully working example is available at https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-dotnet/blob/main/2-completed-app/AzureTablesDemoApplicaton/Services/TablesService.cs:
public void UpsertTableEntity(WeatherInputModel model)
{
    TableEntity entity = new TableEntity();
    entity.PartitionKey = model.StationName;
    entity.RowKey = $"{model.ObservationDate} {model.ObservationTime}";

    // The other values are added like items to a dictionary
    entity["Temperature"] = model.Temperature;
    entity["Humidity"] = model.Humidity;
    entity["Barometer"] = model.Barometer;
    entity["WindDirection"] = model.WindDirection;
    entity["WindSpeed"] = model.WindSpeed;
    entity["Precipitation"] = model.Precipitation;

    _tableClient.UpsertEntity(entity);
}
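One detail worth noting: UpsertEntity defaults to merge semantics, which leaves properties missing from the new entity untouched on an existing row. To overwrite the whole entity (the equivalent of the older Insert Or Replace operation), pass the update mode explicitly:

// TableUpdateMode.Merge is the default; Replace overwrites the entire entity.
_tableClient.UpsertEntity(entity, TableUpdateMode.Replace);

// Async variant:
// await _tableClient.UpsertEntityAsync(entity, TableUpdateMode.Replace);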

Related

How to solve "Changes cannot be saved to the database from the event handler"?

In POOrderEntry as a POLine is created or deleted, I need to push a reference back to a custom DAC that originates the PO Line. For instance, if the PO Line is deleted, my custom DAC has the reference removed in Events.RowDeleted via:
using (PXTransactionScope ts = new PXTransactionScope())
{
    Base.Caches[typeof(MyDAC)].SetValueExt<MyDAC.pOType>(row, null);
    Base.Caches[typeof(MyDAC)].SetValueExt<MyDAC.pONbr>(row, null);
    Base.Caches[typeof(MyDAC)].SetValueExt<MyDAC.pOLineNbr>(row, null);
    Base.Caches[typeof(MyDAC)].Update(row);
    Base.Caches[typeof(MyDAC)].Persist(PXDBOperation.Update);
    ts.Complete(Base);
}
I have tried to allow the normal Persist to save the values, but it doesn't unless I call Persist myself (last line of my example above). The result is an Acuminator error: "Changes cannot be saved to the database from the event handler". As I look at this, I wonder if it should be in a Long Operation instead of a transaction scope, but the error from Acuminator tells me I'm doing this wrong. What is the proper way to push my update back to "MyDAC" for each PO Line?
I have also tried initializing a graph instance for MyDAC's graph, but I get a warning about creating a PXGraph in an event handler so I can't "legally" call the graph where MyDAC is maintained.
My code compiles and functions as desired, but the error from Acuminator tells me there must be a more proper way to accomplish this.
You can add a view to the graph extension.
Then, in the RowDeleted handler, use your view.Update(row) to update your custom DAC.
During the base graph's Persist your records will be committed, as long as no errors are raised in other events.
The way you have it now commits your changes even though the row being deleted might, in the end, never be deleted.
Also, with this change there is no need for PXTransactionScope.
An example might look something like this...
public class POOrderEntryExtension : PXGraphExtension<POOrderEntry>
{
    public PXSelect<MyDac> MyView;

    protected virtual void _(Events.RowDeleted<POLine> e)
    {
        // get your row to update from e.Row
        var myRow = PXSelect...

        myRow.pOType = null;
        myRow.pONbr = null;
        myRow.pOLineNbr = null;
        MyView.Update(myRow);
    }
}

ServiceStack: Update<T>(...) produces 'duplicate entry'

I have tried reading the docs, but I don't get why the Update method produces a "Duplicate entry" MySQL error.
The docs says
In its most simple form, updating any model without any filters will update every field, except the Id which is used to filter the update to this specific record:
So I try it, and pass in an object, like below. A row with id 2 already exists.
using (var _db = _dbFactory.Open())
{
    Customer coreObject = new Customer(...);
    coreObject.Id = 2;
    coreObject.ObjectName = "a changed value";
    _db.Update<Customer>(coreObject); // <-- error "duplicate entry"
}
Yes, there are options using .Save and such, but what am I missing with the .Update? As I read it, it should use its Id property to update the row in the db, not insert a new row?
The issue is that your method works with a generic object T, but your Update call tells OrmLite to update the concrete Customer type:
public void MyTestMethod<T>(T coreObject) where T : CoreObject
{
    long id = 0;
    using (var _db = _dbFactory.Open())
    {
        id = _db.Insert<T>(coreObject, selectIdentity: true);
        if (DateTime.Now.Ticks == 0)
        {
            coreObject.Id = (uint)id;
            _db.Delete(coreObject);
        }
        if (DateTime.Now.Ticks == 0)
        {
            _db.DeleteById<Customer>(id);
        }
        if (DateTime.Now.Ticks == 0)
        {
            coreObject.Id = (uint)id;
            coreObject.ObjectName = "a changed value";
            _db.Update<Customer>(coreObject);
        }
    }
}
OrmLite therefore assumes that you're using a different/anonymous object to update the Customer table, similar to:
db.Update<Customer>(new { Id = id, ObjectName = "a changed value", ... });
Which, as it doesn't have a WHERE filter, will attempt to update every row in the table, setting each one's Id to the same primary key value, hence the duplicate entry error.
What you instead want is to update the same entity, either passing in the generic type T or having it inferred by not passing in any type, e.g.:
_db.Update<T>(coreObject);
_db.Update(coreObject);
This uses OrmLite's behavior of updating an entity by setting each field except the primary keys, which are instead used in the WHERE expression to limit the update to just that entity.
New Behavior in v5.1.1
To prevent accidental misuse like this I've added an Update API overload in this commit which will use the Primary Key as a filter when using an anonymous object to update an entity, so your previous usage:
_db.Update<Customer>(coreObject);
Will add the Primary Key to the WHERE filter instead of including it in the SET list. This change is available from v5.1.1 that's now available on MyGet.

Deleting entries in Azure Table Storage that are a certain number of days old (automation)

I want to automate deleting entries older than one month from Azure Table Storage.
Right now I have one year of entries in Azure Table Storage that I delete manually, but for the future I need this process to be automated.
If you have the PartitionKey and RowKey, you can delete entries directly, as the documentation describes here: https://msdn.microsoft.com/en-us/library/dd135727.aspx. Otherwise you will need to select the entries first, determine their (PartitionKey, RowKey), and then delete them.
Basically, to delete old entities in Azure Table Storage, you can first query them, filtering on DateTime properties, and then delete them in a loop.
And regarding
but for the future, I need to automate this process
If you have an Azure App Service, you can schedule WebJobs to satisfy your requirement.
Additionally, you can also leverage a Function App to implement the delete operations, and use Azure Scheduler to call your API on a schedule.
Any further concern, please feel free to let me know.
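As a rough illustration of the Function App route, here is a sketch of a timer-triggered function using the newer Azure.Data.Tables client; the schedule, table name, and connection setting name are placeholders:

using System;
using Azure.Data.Tables;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TableCleanupFunction
{
    // Runs once a day at midnight UTC; the CRON expression is illustrative.
    [FunctionName("TableCleanup")]
    public static void Run([TimerTrigger("0 0 0 * * *")] TimerInfo timer, ILogger log)
    {
        var client = new TableClient(
            Environment.GetEnvironmentVariable("StorageConnectionString"), "mytable");

        DateTimeOffset cutoff = DateTimeOffset.UtcNow.AddMonths(-1);

        // The Timestamp filter is translated to an OData query, so the
        // filtering happens server-side and results stream back in pages.
        foreach (TableEntity entity in client.Query<TableEntity>(e => e.Timestamp < cutoff))
        {
            client.DeleteEntity(entity.PartitionKey, entity.RowKey);
            log.LogInformation($"Deleted {entity.PartitionKey}/{entity.RowKey}");
        }
    }
}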
You need to query the whole table, find the entities whose Timestamp is older than 1 month, and delete all of them. A batch operation may be useful here since it supports deleting multiple entities with the same partition key in one request.
Query entities REST API: https://msdn.microsoft.com/en-us/library/azure/dd179421.aspx
Delete entity REST API: https://msdn.microsoft.com/en-us/library/azure/dd135727.aspx
Perform entity group transaction in REST API: https://msdn.microsoft.com/en-us/library/azure/dd894038.aspx
If you're using C#, you can refer to this documentation: https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-tables/#delete-an-entity
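To sketch that batching idea with the newer Azure.Data.Tables client (the classic SDK's TableBatchOperation follows the same constraints), group the stale entities by partition key and submit deletes in chunks of up to 100; client here is a TableClient as in the sketch above:

DateTimeOffset cutoff = DateTimeOffset.UtcNow.AddMonths(-1);
var oldEntities = client.Query<TableEntity>(e => e.Timestamp < cutoff).ToList();

// An entity group transaction is limited to one partition and 100 operations.
foreach (var partition in oldEntities.GroupBy(e => e.PartitionKey))
{
    foreach (var chunk in partition.Chunk(100)) // Enumerable.Chunk, .NET 6+
    {
        var actions = chunk
            .Select(e => new TableTransactionAction(TableTransactionActionType.Delete, e))
            .ToList();
        client.SubmitTransaction(actions);
    }
}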
Take a look at the following PowerShell solution for querying a storage table. You can incorporate this into some type of automation solution to run on a regular basis.
https://github.com/chriseyre2000/Powershell/tree/master/Azure2
This thread might be a little old, but I recently wrote a library to manage the Azure Tables lifecycle.
Source/docs: https://github.com/pflajszer/AzureTablesLifecycleManager
Usage for your use case (more examples are provided on the source page):
Using IQueryBuilder
public async Task<DataTransferResponse<T>> DoSomethingWithDataOlderThanAYearUsingQueryBuilder<T>(int option) where T : class, ITableEntity, new()
{
    // this will return all the tables since it's an empty query:
    var tableQuery = new QueryBuilder();

    // this will return all the data older than 1 year ago:
    var dataQuery = new QueryBuilder()
        .AppendCondition(ODataPredefinedFilters.TimestampLessThanOrEqual(DateTime.Now.AddYears(-1)));

    var dtr = new DataTransferResponse<T>();

    switch (option)
    {
        case 1:
            // this will move all the data that match the above filters to a new table:
            var newTableName = "someNewTable";
            newTableName.EnsureValidAzureTableName();
            dtr = await _api.MoveDataBetweenTablesAsync<T>(tableQuery, dataQuery, newTableName);
            break;
        case 2:
            // ...or delete it permanently:
            dtr = await _api.DeleteDataFromTablesAsync<T>(tableQuery, dataQuery);
            break;
        case 3:
            // ...or just fetch the data:
            dtr = await _api.GetDataFromTablesAsync<T>(tableQuery, dataQuery);
            break;
        default:
            break;
    }

    return dtr;
}
Using LINQ:
public async Task<DataTransferResponse<T>> DoSomethingWithDataOlderThanAYearUsingLINQExpression<T>(int option) where T : class, ITableEntity, new()
{
    // this query will return all the tables:
    Expression<Func<TableItem, bool>> tableQuery = x => true;

    // this query will return all data in the above tables that matches the
    // condition (all data older than 1 year ago):
    Expression<Func<T, bool>> dataQuery = x => x.Timestamp < DateTime.Now.AddYears(-1);

    var dtr = new DataTransferResponse<T>();

    switch (option)
    {
        case 1:
            // Moving the data to a new table:
            var newTableName = "newTableName";
            newTableName.EnsureValidAzureTableName();
            dtr = await _api.MoveDataBetweenTablesAsync<T>(tableQuery, dataQuery, newTableName);
            break;
        case 2:
            // this call will delete the data that match the above filters:
            dtr = await _api.DeleteDataFromTablesAsync<T>(tableQuery, dataQuery);
            break;
        case 3:
            // ...or just fetch the data:
            dtr = await _api.GetDataFromTablesAsync<T>(tableQuery, dataQuery);
            break;
        default:
            break;
    }

    return dtr;
}

LINQ queries and OptionSet labels, missing formatted value

I'm doing a simple LINQ query to retrieve a label from an OptionSet. It looks like the formatted value for the option set is missing. Does anyone know why it isn't being generated?
Best Regards
Sorry for the unclear post. I discovered the problem, and the reason the key is missing from FormattedValues.
The issue is with the way you retrieve the property. With this query:
var invoiceDetails = from d in xrmService.InvoiceSet
                     where d.InvoiceId.Value.Equals(invId)
                     select new
                     {
                         name = d.Name,
                         paymenttermscode = d.PaymentTermsCode
                     };
I was retrieving the correct int value for the option set, but what I needed was only the text. I changed the query this way:
var invoiceDetails = from d in xrmService.InvoiceSet
                     where d.InvoiceId.Value.Equals(invId)
                     select new
                     {
                         name = d.Name,
                         paymenttermscode = d.FormattedValues["paymenttermscode"]
                     };
In this case I got an error stating that the key was not present. After many attempts, I tried to select both the int value and the option set text, and that worked just fine.
var invoiceDetails = from d in xrmService.InvoiceSet
                     where d.InvoiceId.Value.Equals(invId)
                     select new
                     {
                         name = d.Name,
                         paymenttermscode = d.PaymentTermsCode,
                         paymenttermscodeValue = d.FormattedValues["paymenttermscode"]
                     };
My guess is that to retrieve the correct text associated with that option set, in that specific entity, you need to retrieve the int value too.
I hope this will be helpful.
Best Regards
Your question is rather confusing for a couple of reasons. I'm going to assume that what you mean when you say you're trying to "retrieve a label from an OptionSet" is that you're attempting to get the text value of a particular OptionSetValue, and that you're not querying the OptionSetMetadata directly to retrieve the actual LocalizedLabels text value. I'm also assuming "formatted value for the option set is missing" refers to the FormattedValues collection. If these assumptions are correct, I refer you to this: CRM 2011 - Retrieving FormattedValues from joined entity
The option set metadata has to be queried.
Here is an extension method that I wrote:
public static class OrganizationServiceHelper
{
    public static string GetOptionSetLabel(this IOrganizationService service, string optionSetName, int optionSetValue)
    {
        RetrieveOptionSetRequest retrieve = new RetrieveOptionSetRequest
        {
            Name = optionSetName
        };

        try
        {
            RetrieveOptionSetResponse response = (RetrieveOptionSetResponse)service.Execute(retrieve);
            OptionSetMetadata metaData = (OptionSetMetadata)response.OptionSetMetadata;
            return metaData.Options
                .Where(o => o.Value == optionSetValue)
                .Select(o => o.Label.UserLocalizedLabel.Label)
                .FirstOrDefault();
        }
        catch { }

        return null;
    }
}
RetrieveOptionSetRequest and RetrieveOptionSetResponse are in Microsoft.Xrm.Sdk.Messages.
Call it like this:
string label = service.GetOptionSetLabel("wim_continent", 102730000);
If you are going to be querying the same option set multiple times, I recommend that you write a method that returns the OptionSetMetadata instead of the label; then query the OptionSetMetadata locally. Calling the above extension method multiple times will result in the same query being executed over and over.
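A sketch of that caching idea, written as a companion method for the same helper class (the dictionary shape is one option among several):

// Fetch the option set once and cache the value -> label pairs locally,
// avoiding a metadata round trip per lookup.
public static Dictionary<int, string> GetOptionSetLabels(this IOrganizationService service, string optionSetName)
{
    var response = (RetrieveOptionSetResponse)service.Execute(
        new RetrieveOptionSetRequest { Name = optionSetName });

    var metaData = (OptionSetMetadata)response.OptionSetMetadata;
    return metaData.Options
        .Where(o => o.Value.HasValue)
        .ToDictionary(o => o.Value.Value, o => o.Label.UserLocalizedLabel.Label);
}

// Usage: one metadata query, then local lookups.
// var labels = service.GetOptionSetLabels("wim_continent");
// string label = labels[102730000];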

How can I update a content item (draft) from a background task in Orchard?

I have a simple IBackgroundTask implementation that performs a query and then either performs an insert or one or more updates depending on whether a specific item exists or not. However, the updates are not persisted, and I don't understand why. New items are created just as expected.
The content item I'm updating has a CommonPart and I've tried authenticating as a valid user. I've also tried flushing the content manager at the end of the Sweep method. What am I missing?
This is my Sweep, slightly edited for brevity:
public void Sweep()
{
    // Authenticate as the site's super user
    var superUser = _membershipService.GetUser(_orchardServices.WorkContext.CurrentSite.SuperUser);
    _authenticationService.SetAuthenticatedUserForRequest(superUser);

    // Create a dummy "Person" content item
    var item = _contentManager.New("Person");
    var person = item.As<PersonPart>();
    if (person == null)
    {
        return;
    }

    person.ExternalId = Random.Next(1, 10).ToString();
    person.FirstName = GenerateFirstName();
    person.LastName = GenerateLastName();

    // Check if the person already exists
    var matchingPersons = _contentManager
        .Query<PersonPart, PersonRecord>(VersionOptions.AllVersions)
        .Where(record => record.ExternalId == person.ExternalId)
        .List().ToArray();

    if (!matchingPersons.Any())
    {
        // Insert new person and quit
        _contentManager.Create(item, VersionOptions.Draft);
        return;
    }

    // There is at least one matching person; update it
    foreach (var updatedPerson in matchingPersons)
    {
        updatedPerson.FirstName = person.FirstName;
        updatedPerson.LastName = person.LastName;
    }

    _contentManager.Flush();
}
Try adding _contentManager.Publish(updatedPerson.ContentItem). If you do not want to publish but just save, you don't need to do anything more: changes in Orchard are saved automatically, unless the ambient transaction is aborted. The call to Flush is not necessary at all. This is the case both during a regular request and in a background task.
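Applied to the update branch of the Sweep above, a minimal sketch (assuming PersonPart exposes its ContentItem through the standard ContentPart property):

// Update and publish each matching person; no Flush needed, since Orchard
// commits the ambient transaction when the task completes successfully.
foreach (var updatedPerson in matchingPersons)
{
    updatedPerson.FirstName = person.FirstName;
    updatedPerson.LastName = person.LastName;
    _contentManager.Publish(updatedPerson.ContentItem);
}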
