Azure Mobile Services Soft Delete Issue / Practices

With soft delete turned on, if I add a single record on the client, push, delete that record, push, and then attempt to add (and push) a new record with the same primary key as the initial record, I get an exception. It appears that EntityDomainManager simply attempts a new insert without checking whether the soft-deleted record should be updated instead.
However, if I turn off soft delete in the domain manager constructor, everything works fine.
We are using incremental sync, and as I understand it soft delete is required to make that work, so that mobile and server don't end up with different pictures of what's current.
What is the recommended approach? A custom EntityDomainManager (or other DomainManager)? If so, more clarity on the interactions between the table controller and the domain manager would be useful.
I have constructed the custom domain manager below, which seems to work, but I would appreciate any guidance or suggestions.
public class CustomEntityDomainManager<TData> : EntityDomainManager<TData> where TData : class, ITableData
{
    public CustomEntityDomainManager(DbContext context, HttpRequestMessage request, ApiServices services)
        : base(context, request, services)
    {
    }

    public CustomEntityDomainManager(DbContext context, HttpRequestMessage request, ApiServices services, bool enableSoftDelete)
        : base(context, request, services, enableSoftDelete)
    {
    }

    public async override Task<TData> InsertAsync(TData data)
    {
        if (data == null)
        {
            throw new ArgumentNullException("data");
        }

        // If soft delete is enabled and the data has been provided with an id...
        if (EnableSoftDelete && data.Id != null)
        {
            // ...check whether a record with that id exists and is soft deleted;
            // if so, physically remove it before attempting the insert.
            // Save the old value of IncludeDeleted, since we need to query deleted rows.
            var oldIncludeDeleted = IncludeDeleted;
            try
            {
                IncludeDeleted = true;
                var existingData = await this.Lookup(data.Id).Queryable.FirstOrDefaultAsync();

                // If the record exists and is soft deleted, truly delete it.
                if (existingData != null && existingData.Deleted)
                {
                    this.Context.Set<TData>().Remove(existingData);
                }
            }
            finally
            {
                IncludeDeleted = oldIncludeDeleted;
            }
        }

        if (data.Id == null)
        {
            data.Id = Guid.NewGuid().ToString("N");
        }

        return await base.InsertAsync(data);
    }
}

This behavior is by design: we require that you do an explicit undelete before doing the update.
The solution you've presented is fine. You can also move the code into your table controller, assuming you only need this behavior in one table. If you need it in multiple tables, the custom domain manager is the best approach.
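For the single-table case, a minimal sketch of the controller-level variant might look like this (TodoItem and MobileServiceContext are hypothetical placeholders; the purge logic mirrors the custom domain manager above):
public class TodoItemController : TableController<TodoItem>
{
    private MobileServiceContext context; // hypothetical EF context

    protected override void Initialize(HttpControllerContext controllerContext)
    {
        base.Initialize(controllerContext);
        context = new MobileServiceContext();
        DomainManager = new EntityDomainManager<TodoItem>(context, Request, Services, enableSoftDelete: true);
    }

    public async Task<IHttpActionResult> PostTodoItem(TodoItem item)
    {
        // If a soft-deleted row with the same id exists, physically remove it
        // first so the insert does not collide with the tombstone. Querying the
        // DbSet directly bypasses the domain manager's soft-delete filter.
        if (item.Id != null)
        {
            var existing = await context.Set<TodoItem>()
                .FirstOrDefaultAsync(t => t.Id == item.Id);
            if (existing != null && existing.Deleted)
            {
                context.Set<TodoItem>().Remove(existing);
                await context.SaveChangesAsync();
            }
        }

        TodoItem current = await InsertAsync(item);
        return CreatedAtRoute("Tables", new { id = current.Id }, current);
    }
}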


How do I attach data to custom fields on INTranCost during release of POReceiptEntry?

I need to attach custom data to new fields added to INTranCost when the PO Receipt is released.
Following the breadcrumbs, it seems that the POReceiptEntry Release action eventually calls INDocumentRelease.ReleaseDoc, which in turn creates INTranCost. I tried extending both POReceiptEntry and INDocumentRelease to add an INTranCost_RowInserted event that publishes a PXTrace message, but the trace doesn't appear, telling me that I'm not hitting the event I expected. (Which explains why the real business logic I need didn't fire either.)
protected virtual void _(Events.RowInserted<INTranCost> e)
{
    PXTrace.WriteInformation("This is it!");
}
Of course, I want to put real code in this spot, but I am just trying to make sure I'm hitting the event properly. This approach has worked on pretty much everything else I've done, including attaching similar data to INTranExt fields, but I cannot get it to work for INTranCost so that I can write to INTranCostExt. At this point, I can't determine whether the problem is location (which graph to extend) or a special methodology required for this case.
I also tried overriding events and putting a breakpoint on the code, but it's as if I'm not even in the same process. (Yes, I checked that I am connected to the right Acumatica instance and that I have no errors.)
Which event in which graph is required to capture the creation of INTranCost for a PO Receipt, so that custom fields in INTranCostExt can be updated?
Using the Request Profiler, I was able to determine that I was close but not deep enough. While the INTranCost record to insert is built in the INDocumentRelease file, the actual insert is processed by the INReleaseProcess graph in that same file.
I only need to execute this "push" of data captured on the POLine when the INTranCost record is created. LineNbr is a key field and is never changed after it is set, and the primary key links me back to the INTran easily; that in turn leads back through the POReceiptLine to the POLine, where the data whose "current value" must be captured at posting time is maintained. Since I need to update the DAC extension, I need an event that lets an existing DAC.Update apply my values, so I added an event handler on INTranCost_LineNbr_FieldUpdated, since that value should never be "updated" after it is initially set.
Code that accomplished the task:
public class INReleaseProcess_Extension : PXGraphExtension<INReleaseProcess>
{
    public override void Initialize()
    {
        base.Initialize();
    }

    protected virtual void _(Events.FieldUpdated<INTranCost.lineNbr> e)
    {
        INTranCost row = (INTranCost) e.Row;
        INTran tran = PXSelect<INTran,
            Where<INTran.docType, Equal<Required<INTran.docType>>,
                And<INTran.refNbr, Equal<Required<INTran.refNbr>>,
                And<INTran.lineNbr, Equal<Required<INTran.lineNbr>>>>>>
            .SelectSingleBound(Base, null, row.DocType, row.RefNbr, (int?) e.NewValue);

        if (tran?.POReceiptType != null && tran?.POReceiptNbr != null)
        {
            PXResultset<POReceiptLine> Results = PXSelectJoin<POReceiptLine,
                InnerJoin<POLine, On<POLine.orderType, Equal<POReceiptLine.pOType>,
                    And<POLine.orderNbr, Equal<POReceiptLine.pONbr>,
                    And<POLine.lineNbr, Equal<POReceiptLine.pOLineNbr>>>>,
                InnerJoin<POOrder, On<POOrder.orderType, Equal<POLine.orderType>,
                    And<POOrder.orderNbr, Equal<POLine.orderNbr>>>>>,
                Where<POReceiptLine.receiptType, Equal<Required<POReceiptLine.receiptType>>,
                    And<POReceiptLine.receiptNbr, Equal<Required<POReceiptLine.receiptNbr>>,
                    And<POReceiptLine.lineNbr, Equal<Required<POReceiptLine.lineNbr>>>>>>
                .SelectSingleBound(Base, null, tran.POReceiptType, tran.POReceiptNbr, tran.POReceiptLineNbr);

            if (Results != null)
            {
                foreach (PXResult<POReceiptLine, POLine, POOrder> result in Results)
                {
                    POReceiptLine receipt = result;
                    POLine line = result;
                    POOrder order = result;
                    POLineExt pOLineExt = PXCache<POLine>.GetExtension<POLineExt>(line);
                    INTranCostExt iNTranCostExt = PXCache<INTranCost>.GetExtension<INTranCostExt>(row);
                    if (pOLineExt != null && iNTranCostExt != null)
                    {
                        Base.Caches[typeof(INTranCost)].SetValueExt<INTranCostExt.usrField>(row, pOLineExt.UsrField);
                    }
                }
            }
        }
    }
}

What does the "check in memory" mean in Orchard CMS?

I tried to customize the queries executed by Orchard.ContentManagement.DefaultContentManager, but the following piece of code (*1) renders my efforts useless:
class DefaultContentManager
{
    ...
    public virtual ContentItem Get(int id, VersionOptions options, QueryHints hints) {
        ...
        // implementation of the query comes here
        ...
        // *1 -> no record means content item is not in db
        if (versionRecord == null) {
            // check in memory
            var record = _contentItemRepository.Get(id);
            if (record == null) {
                return null;
            }
            versionRecord = GetVersionRecord(options, record);
            if (versionRecord == null) {
                return null;
            }
        }
        ...
    }
}
The query is executed correctly and returns no data (which was my goal), but afterwards a second attempt (*1) is still made to get the content item.
Why is this piece of code there? What is its purpose? And why does the comment say "check in memory" when the repository (a DB table) is then queried?
At this point it has already been verified that the item doesn't exist in the database, but it may have just been created from code during the same request. In that case the NHibernate session has the item, but the database doesn't have it yet. The repository hits the session, not the DB directly, so if the item is there it will be retrieved, but that happens in memory.
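A minimal sketch of the scenario that fallback covers (hypothetical code running within a single request; "Person" is a made-up content type):
// Earlier in the same request: the item is created in code, so it lives
// in the NHibernate session but has not been flushed to the database yet.
var item = _contentManager.New("Person");
_contentManager.Create(item);

// Later in the same request: the version query against the database finds
// nothing, but _contentItemRepository.Get(id) resolves the record from the
// session's first-level cache, so the item is still returned.
var fetched = _contentManager.Get(item.Id);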

Processing an emaillist async in MVC4

I'm trying to make my MVC4 website check whether people should be alerted by email because they haven't done something.
I'm having a hard time figuring out how to approach this. I checked whether the shared hosting platform would let me set up some sort of cron job, but that is not available.
So now my idea is to perform this check on each page request, which already seems suboptimal (because of the overhead). But I thought that by running it asynchronously it would not get in the way of people just visiting the site.
I first tried to do this in the Application_BeginRequest method in Global.asax, but that gets called multiple times per page request, so it didn't work.
Next I found that I can make a global filter that executes in OnResultExecuted, which seemed promising, but it's still a no-go.
The problem I get there is that I'm using MVCMailer to send the mails, and when I execute it I get the error: {"Value cannot be null.\r\nParameter name: httpContext"}
This probably means that the mailer needs the HTTP context.
The code I now have in my global filter is the following:
public override void OnResultExecuted(ResultExecutedContext filterContext)
{
    base.OnResultExecuted(filterContext);
    HandleEmptyProfileAlerts();
}

private void HandleEmptyProfileAlerts()
{
    new Thread(() =>
    {
        bool active = false;
        new UserMailer().AlertFirst("bla#bla.com").Send();

        DB db = new DB();
        DateTime CutoffDate = DateTime.Now.AddDays(-5);
        var ProfilesToAlert = db.UserProfiles
            .Where(x => x.CreatedOn < CutoffDate
                && !x.ProfileActive
                && x.AlertsSent.Where(y => y.AlertType == "First").Count() == 0)
            .ToList();

        foreach (UserProfile up in ProfilesToAlert)
        {
            if (active)
            {
                new UserMailer().AlertFirst(up.UserName).Send();
                up.AlertsSent.Add(new UserAlert { AlertType = "First", DateSent = DateTime.Now, UserProfileID = up.UserId });
            }
            else
            {
                System.Diagnostics.Debug.WriteLine(up.UserName);
            }
        }
        db.SaveChanges();
    }).Start();
}
So my question is, am I going about this the right way, and if so, how can I make sure that MVCMailer gets the right context?
The usual way to do this kind of thing is to have a single background thread that periodically does the checks you're interested in.
You would start the thread from Application_Start(). It's common to use a database to queue and store work items, although it can also be done in memory if it's better for your app.
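A minimal sketch of that pattern, reusing the question's hypothetical DB, UserMailer, and UserAlert types (a System.Threading.Timer fires the check periodically without holding a thread between runs):
public class MvcApplication : System.Web.HttpApplication
{
    private static System.Threading.Timer alertTimer;

    protected void Application_Start()
    {
        // ... the usual route/filter registration ...

        // Run the alert check immediately, then every 15 minutes.
        alertTimer = new System.Threading.Timer(_ => HandleEmptyProfileAlerts(),
            null, TimeSpan.Zero, TimeSpan.FromMinutes(15));
    }

    private static void HandleEmptyProfileAlerts()
    {
        try
        {
            using (var db = new DB())
            {
                DateTime cutoffDate = DateTime.Now.AddDays(-5);
                var profilesToAlert = db.UserProfiles
                    .Where(x => x.CreatedOn < cutoffDate
                        && !x.ProfileActive
                        && !x.AlertsSent.Any(y => y.AlertType == "First"))
                    .ToList();

                foreach (var up in profilesToAlert)
                {
                    // MVCMailer resolves views through HttpContext by default, so a
                    // background thread may still need a context-free mail path here.
                    new UserMailer().AlertFirst(up.UserName).Send();
                    up.AlertsSent.Add(new UserAlert { AlertType = "First", DateSent = DateTime.Now, UserProfileID = up.UserId });
                }
                db.SaveChanges();
            }
        }
        catch (Exception ex)
        {
            System.Diagnostics.Debug.WriteLine(ex);
        }
    }
}
Bear in mind that IIS can recycle the application pool at any time, so a timer inside the web app is best-effort; persisting queued work items in a database, as suggested above, makes the checks resilient across restarts.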

How can I update a content item (draft) from a background task in Orchard?

I have a simple IBackgroundTask implementation that performs a query and then either performs an insert or one or more updates depending on whether a specific item exists or not. However, the updates are not persisted, and I don't understand why. New items are created just as expected.
The content item I'm updating has a CommonPart and I've tried authenticating as a valid user. I've also tried flushing the content manager at the end of the Sweep method. What am I missing?
This is my Sweep, slightly edited for brevity:
public void Sweep()
{
    // Authenticate as the site's super user
    var superUser = _membershipService.GetUser(_orchardServices.WorkContext.CurrentSite.SuperUser);
    _authenticationService.SetAuthenticatedUserForRequest(superUser);

    // Create a dummy "Person" content item
    var item = _contentManager.New("Person");
    var person = item.As<PersonPart>();
    if (person == null)
    {
        return;
    }
    person.ExternalId = Random.Next(1, 10).ToString();
    person.FirstName = GenerateFirstName();
    person.LastName = GenerateLastName();

    // Check if the person already exists
    var matchingPersons = _contentManager
        .Query<PersonPart, PersonRecord>(VersionOptions.AllVersions)
        .Where(record => record.ExternalId == person.ExternalId)
        .List().ToArray();

    if (!matchingPersons.Any())
    {
        // Insert the new person and quit
        _contentManager.Create(item, VersionOptions.Draft);
        return;
    }

    // There is at least one matching person; update it
    foreach (var updatedPerson in matchingPersons)
    {
        updatedPerson.FirstName = person.FirstName;
        updatedPerson.LastName = person.LastName;
    }

    _contentManager.Flush();
}
Try adding _contentManager.Publish(updatedPerson). If you do not want to publish but just save, you don't need to do anything more, as changes in Orchard are saved automatically unless the ambient transaction is aborted. The call to Flush is not necessary at all. This is the case both during a regular request and in a background task.
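Applied to the Sweep method above, the update loop would then look like this (a sketch; PersonPart exposes its underlying item through ContentItem):
foreach (var updatedPerson in matchingPersons)
{
    updatedPerson.FirstName = person.FirstName;
    updatedPerson.LastName = person.LastName;
    // Publish the draft so the changes become the published version.
    _contentManager.Publish(updatedPerson.ContentItem);
}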

Add or replace entity in Azure Table Storage

I'm working with Windows Azure Table Storage and have a simple requirement: add a new row, overwriting any existing row with that PartitionKey/RowKey. However, saving the changes always throws an exception, even if I pass in the ReplaceOnUpdate option:
tableServiceContext.AddObject(TableName, entity);
tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
If the entity already exists it throws:
System.Data.Services.Client.DataServiceRequestException: An error occurred while processing this request. ---> System.Data.Services.Client.DataServiceClientException: <?xml version="1.0" encoding="utf-8" standalone="yes"?>
<error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
<code>EntityAlreadyExists</code>
<message xml:lang="en-AU">The specified entity already exists.</message>
</error>
Do I really have to manually query for the existing row first and call DeleteObject on it? That seems very slow. Surely there is a better way?
As you've found, you can't just add another item that has the same row key and partition key, so you will need to run a query to check whether the item already exists. In situations like this I find it helpful to look at the Azure REST API documentation to see what is available to the storage client library. You'll see that there are separate methods for inserting and updating; ReplaceOnUpdate only has an effect when you're updating, not inserting.
While you could delete the existing item and then add the new one, you could just update the existing one (saving you one round trip to storage). Your code might look something like this:
var existsQuery = from e in tableServiceContext.CreateQuery<MyEntity>(TableName)
                  where e.PartitionKey == objectToUpsert.PartitionKey
                     && e.RowKey == objectToUpsert.RowKey
                  select e;

MyEntity existingObject = existsQuery.FirstOrDefault();

if (existingObject == null)
{
    tableServiceContext.AddObject(TableName, objectToUpsert);
}
else
{
    existingObject.Property1 = objectToUpsert.Property1;
    existingObject.Property2 = objectToUpsert.Property2;
    tableServiceContext.UpdateObject(existingObject);
}

tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
EDIT: While the above was correct at the time of writing, with the September 2011 update Microsoft added two upsert commands to the Azure table API: Insert Or Replace Entity and Insert Or Merge Entity.
In order to operate on an existing object NOT tracked by the table context, with either DeleteObject or SaveChanges with the ReplaceOnUpdate option, you need to call AttachTo to attach the object to the context, instead of calling AddObject, which instructs the context to attempt an insert.
http://msdn.microsoft.com/en-us/library/system.data.services.client.dataservicecontext.attachto.aspx
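A minimal sketch of that approach, assuming entity already has its PartitionKey and RowKey set (the "*" ETag makes the subsequent update unconditional):
// Attach the entity as if it had been queried from the table,
// then mark it as modified and push a full replace.
tableServiceContext.AttachTo(TableName, entity, "*");
tableServiceContext.UpdateObject(entity);
tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);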
In my case it was not possible to remove it first, so I do it like this instead; this results in one transaction to the server, which first removes the existing object and then adds the new one, removing the need to copy property values:
var existing = from e in _ServiceContext.AgentTable
               where e.PartitionKey == item.PartitionKey
                  && e.RowKey == item.RowKey
               select e;

_ServiceContext.IgnoreResourceNotFoundException = true;

var existingObject = existing.FirstOrDefault();
if (existingObject != null)
{
    _ServiceContext.DeleteObject(existingObject);
}

_ServiceContext.AddObject(AgentConfigTableServiceContext.AgetnConfigTableName, item);
_ServiceContext.SaveChangesWithRetries();

_ServiceContext.IgnoreResourceNotFoundException = false;
Insert/Merge or Update was added to the API in September 2011. Here is an example using the Storage API 2.0, which is easier to understand than the way it was done in the 1.7 API and earlier:
public void InsertOrReplace(ITableEntity entity)
{
    retryPolicy.ExecuteAction(
        () =>
        {
            try
            {
                TableOperation operation = TableOperation.InsertOrReplace(entity);
                cloudTable.Execute(operation);
            }
            catch (StorageException e)
            {
                string message = "InsertOrReplace entity failed.";
                if (e.RequestInformation.HttpStatusCode == 404)
                {
                    message += " Make sure the table is created.";
                }
                // do something with message
            }
        });
}
The Storage API does not allow more than one operation per entity (delete+insert) in a group transaction:
An entity can appear only once in the transaction, and only one operation may be performed against it.
see MSDN: Performing Entity Group Transactions
So in fact you need to read first and decide on insert or update.
You may use the UpsertEntity and UpsertEntityAsync methods of the TableClient in the official Microsoft Azure.Data.Tables package.
A fully working example is available at https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-dotnet/blob/main/2-completed-app/AzureTablesDemoApplicaton/Services/TablesService.cs:
public void UpsertTableEntity(WeatherInputModel model)
{
    TableEntity entity = new TableEntity();
    entity.PartitionKey = model.StationName;
    entity.RowKey = $"{model.ObservationDate} {model.ObservationTime}";

    // The other values are added like items to a dictionary
    entity["Temperature"] = model.Temperature;
    entity["Humidity"] = model.Humidity;
    entity["Barometer"] = model.Barometer;
    entity["WindDirection"] = model.WindDirection;
    entity["WindSpeed"] = model.WindSpeed;
    entity["Precipitation"] = model.Precipitation;

    _tableClient.UpsertEntity(entity);
}
