I'm getting a random Exception when I try to update an entity on a storage table. The exception I get is
System.Data.Services.Client.DataServiceRequestException: An error occurred while processing this request. ---> System.Data.Services.Client.DataServiceClientException: {"odata.error":{"code":"UpdateConditionNotSatisfied","message":{"lang":"en-US","value":"The update condition specified in the request was not satisfied.\nRequestId:2a205f10-0002-013b-028d-0bbec8000000\nTime:2015-10-20T23:17:16.5436755Z"}}} ---
I know that this might be a concurrency issue, but the thing is that there's no other process accessing that entity.
From time to time I get dozens of these exceptions; after I restart the server, it starts working fine again.
public static class StorageHelper
{
    static TableServiceContext tableContext;
    static CloudStorageAccount storageAccount;
    static CloudTableClient CloudClient;

    static StorageHelper()
    {
        storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
        CloudClient = storageAccount.CreateCloudTableClient();
        tableContext = CloudClient.GetTableServiceContext();
        tableContext.IgnoreResourceNotFoundException = true;
    }

    public static void Save(int myId, string newProperty, string myPartitionKey, string myRowKey)
    {
        var entity = (from j in tableContext.CreateQuery<MyEntity>("MyTable")
                      where j.PartitionKey == myPartitionKey
                      select j).FirstOrDefault();

        if (entity != null)
        {
            entity.MyProperty = newProperty;
            tableContext.UpdateObject(entity);
            tableContext.SaveChanges();
        }
        else
        {
            entity = new MyEntity
            {
                PartitionKey = myPartitionKey,
                RowKey = myRowKey,
                MyProperty = newProperty
            };
            tableContext.AddObject("MyTable", entity);
            tableContext.SaveChanges();
        }
    }
}
The code you've posted uses the very old table layer, which is now obsolete. We strongly recommend you update to a newer version of the storage library and use the new table layer. See this StackOverflow question for more information. Also note that if you're using a very old version of the storage library, it will eventually stop working, as the service version it targets is going to be deprecated on the service side.
We do not recommend that customers reuse TableServiceContext objects as has been done here. They contain a variety of tracking state that can cause performance issues as well as other adverse effects. Limitations like these are part of the reason we recommend (as described above) moving to the newer table layer. See the how-to for more information.
On table entity update operations you must send an If-Match header carrying an ETag. The library will set this for you if you set the entity's ETag value. To update regardless of the entity's ETag on the service, use "*".
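For illustration, here is a minimal sketch of such an unconditional update written against the newer table layer, assuming MyEntity derives from TableEntity and reusing the names from the question:

var table = CloudClient.GetTableReference("MyTable");

// Retrieve returns the stored entity together with its current ETag.
var retrieve = TableOperation.Retrieve<MyEntity>(myPartitionKey, myRowKey);
var entity = (MyEntity)table.Execute(retrieve).Result;

if (entity != null)
{
    entity.MyProperty = newProperty;
    entity.ETag = "*"; // "*" = update regardless of the ETag on the service
    table.Execute(TableOperation.Replace(entity));
}
else
{
    table.Execute(TableOperation.Insert(new MyEntity
    {
        PartitionKey = myPartitionKey,
        RowKey = myRowKey,
        MyProperty = newProperty
    }));
}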
I suggest considering the Transient Fault Handling Application Block from Microsoft's Enterprise Library, and retrying when your application encounters such transient faults in Azure, to avoid having to restart the server every time the same exception occurs.
https://msdn.microsoft.com/en-us/library/hh680934(v=pandp.50).aspx
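A minimal sketch of that retry (assuming the block's Azure storage detection strategy; the retry count and interval here are arbitrary):

using Microsoft.Practices.TransientFaultHandling;

// Retry up to 3 times, waiting 1 second between attempts, but only for
// errors the detection strategy classifies as transient.
var retryPolicy = new RetryPolicy<StorageTransientErrorDetectionStrategy>(
    new FixedInterval(3, TimeSpan.FromSeconds(1)));

retryPolicy.ExecuteAction(() =>
    StorageHelper.Save(myId, newProperty, myPartitionKey, myRowKey));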
While updating your entity, set ETag = "*".
Your modified code should look something like this -
if (entity != null)
{
    entity.MyProperty = newProperty;

    // Re-attach with a "*" ETag so the update is unconditional
    // (no If-Match check against the entity's stored ETag).
    tableContext.Detach(entity);
    tableContext.AttachTo("MyTable", entity, "*");

    tableContext.UpdateObject(entity);
    tableContext.SaveChanges();
}
I am reading some examples of how to implement concurrency control in DDD.
At first, I thought it was the responsibility of the infrastructure, or the repository, because the domain should only be responsible for the core business. But I have seen many examples in which the domain entity has a property for the version.
Then, in the consumer, in the application layer, it is used this way:
public void MyApplicationMethod()
{
    Order myOrder = _orderRepository.Get(1);
    myOrder.UpdateComments("another comments");

    // If I arrive here, I could update, so I can increase the version.
    myOrder.IncreaseVersion();

    _orderRepository.Commit();
}
But I don't know why it is the consumer of the domain that has to increase the version. Perhaps the reason is that concurrency control, or version control, is not the domain's responsibility, because it is not really a business rule. But if that were true, why have a version property and a method to increase it at all?
And if it is acceptable for the domain to know about the version and expose the method, why delegate increasing the version to the client? Why not increase the version in the update method of the entity? Something like this:
public class Order
{
    private string _comments;
    private int _version;
    private bool _isVersionIncreased = false;

    private void IncreaseVersion()
    {
        if (_isVersionIncreased == false)
        {
            _version += 1;
            _isVersionIncreased = true;
        }
    }

    public void UpdateComments(string paramComments)
    {
        _comments = paramComments;
        IncreaseVersion();
    }
}
In this way I can simplify the code in the consumer. Also, if the entity is modified many times, the _isVersionIncreased flag ensures the version is increased only once, even when I update several properties through different methods. The drawback is that if I want to update the same entity again after a commit, I would have to reset the flag or load the entity again from the repository. But I normally create a new context for each action, so it is not a big problem.
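For reference, whichever layer increases the number, I assume the actual conflict check lives in the infrastructure at commit time, something like this hypothetical SQL-backed repository (table and column names are made up):

using System;
using System.Data;

public class OrderRepository
{
    private readonly IDbConnection _connection;

    public OrderRepository(IDbConnection connection)
    {
        _connection = connection;
    }

    // The entity has already increased its version in memory (see the Order
    // class above), so the stored row must still hold the previous value.
    public void Commit(int id, string comments, int version)
    {
        using (var cmd = _connection.CreateCommand())
        {
            cmd.CommandText =
                "UPDATE Orders SET Comments = @comments, Version = @version " +
                "WHERE Id = @id AND Version = @version - 1";
            AddParameter(cmd, "@comments", comments);
            AddParameter(cmd, "@version", version);
            AddParameter(cmd, "@id", id);

            if (cmd.ExecuteNonQuery() == 0)
                throw new InvalidOperationException("Concurrency conflict detected.");
        }
    }

    private static void AddParameter(IDbCommand cmd, string name, object value)
    {
        var parameter = cmd.CreateParameter();
        parameter.ParameterName = name;
        parameter.Value = value;
        cmd.Parameters.Add(parameter);
    }
}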
In summary, I would like to know what a good way to control concurrency in DDD is, and what problems my solution (increasing the version in the entity method instead of doing it in the consumer/application layer) could have.
Thanks.
Suppose I create a model
public class Foo : TableEntity
{
    public int OriginalProperty { get; set; }
}
I then deploy a service that periodically updates the values of OriginalProperty with code similar to...
// use model-based query
var query = new TableQuery<Foo>().Where(…);

// get the (one) result
var row = (await table.ExecuteQueryAsync(query)).Single();

// modify and write it back
row.OriginalProperty = some_new_value;
await table.ExecuteAsync(TableOperation.InsertOrReplace(row));
At some later time I decide I want to add a new property to Foo for use by a different service.
public class Foo : TableEntity
{
    public int OriginalProperty { get; set; }
    public int NewProperty { get; set; }
}
I make this change locally and start updating a few records from my local machine without updating the original deployed service.
The behaviour I am seeing is that changes I make to NewProperty from my local machine are lost as soon as the deployed service updates the record. Of course this makes sense in some ways. The service is unaware that NewProperty has been added and has no reason to preserve it. However my understanding was that the TableEntity implementation was dictionary-based so I was hoping that it would 'ignore' (i.e. preserve) newly introduced columns rather than delete them.
Is there a way to configure the query/insertion to get the behaviour I want? I'm aware of DynamicTableEntity but it's unclear whether using this as a base class would result in a change of behaviour for model properties.
Just to be clear, I'm not suggesting that continually fiddling with the model or having multiple client models for the same table is a good habit to get into, but it's definitely useful to be able to occasionally add a column without worrying about redeploying every service that might touch the affected table.
You can use InsertOrMerge instead of InsertOrReplace. A Merge operation only writes the properties present on the entity you send and leaves any other stored properties intact, whereas Replace rewrites the entire entity.
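Applied to the write in the question, the change is a single line:

row.OriginalProperty = some_new_value;

// Merge sends only the properties this model declares; columns the model
// doesn't know about (e.g. NewProperty written by the newer client) survive.
await table.ExecuteAsync(TableOperation.InsertOrMerge(row));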
I'm getting a MissingFieldException for multiple OrmLite operations:
using (var db = DbFactory.Open())
{
    var exp = db.From<Product>();

    if (filter.Field1 != null)
        exp.Where(w => w.Field1 == filter.Field1);

    if (filter.Field2 != null)
        exp.Where(w => w.Field2 == filter.Field2);

    return db.LoadSelect(exp);
}
Also occurs with a simple AutoQuery RDBMS service.
[Api("Query.")]
[Route("/query", "GET")]
public class QueryTransaction : QueryDb<Transaction, TransactionQueryRecord>,
    IJoin<Transaction, Application>
{
    [ApiMember(IsRequired = false, ParameterType = "query")]
    public string TimeZoneId { get; set; }
}
The stack trace is the following:
System.MissingFieldException: Field not found: 'ServiceStack.OrmLite.OrmLiteConfig.UseParameterizeSqlExpressions'.
at ServiceStack.OrmLite.SqlServer.SqlServerOrmLiteDialectProvider.SqlExpression[T]()
at ServiceStack.OrmLite.OrmLiteExecFilter.SqlExpression[T](IDbConnection dbConn)
at ServiceStack.OrmLite.OrmLiteReadExpressionsApi.From[T](IDbConnection dbConn)
at ServiceStack.TypedQuery`2.CreateQuery(IDbConnection db, IQueryDb dto, Dictionary`2 dynamicParams, IAutoQueryOptions options)
at ServiceStack.AutoQuery.CreateQuery[From,Into](IQueryDb`2 dto, Dictionary`2 dynamicParams, Request req
I think OrmLite is trying to find the configuration property OrmLiteConfig.UseParameterizeSqlExpressions, but it doesn't exist in version 4.0.60.
When I run my integration tests with AppSelfHostBase everything is OK, but when I try in the browser it sometimes works and other times throws the exception.
Missing method or field exceptions like this are an indication that you're mixing and matching dirty .dlls of different versions. OrmLiteConfig.UseParameterizeSqlExpressions was removed a while ago, after OrmLite switched to parameterized queries; this error indicates that you have an old .dll that still references it.
When you upgrade your ServiceStack projects you need to upgrade all dependencies and make sure every ServiceStack dependency references the same version (e.g. v4.0.60, or the current latest, v4.5.0). You can check the NuGet /packages folder to see the different versions your solution uses. Deleting all but the latest version and rebuilding your solution will produce build errors showing which projects were still referencing the older packages; update those so that all projects use the same version.
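As an illustration, after consolidating, every project's packages.config should reference a single ServiceStack version. The exact package set will differ, but it should look something like:

<packages>
  <package id="ServiceStack" version="4.5.0" targetFramework="net45" />
  <package id="ServiceStack.OrmLite.SqlServer" version="4.5.0" targetFramework="net45" />
  <package id="ServiceStack.Server" version="4.5.0" targetFramework="net45" />
</packages>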
I am new to service fabric and started by looking at the MSDN articles covering the topic. I began by implementing the Hello World sample here.
I changed their original RunAsync implementation to:
var myDictionary = await this.StateManager.GetOrAddAsync<IReliableDictionary<int, DataObject>>("myDictionary");

while (!cancellationToken.IsCancellationRequested)
{
    DataObject dataObject;

    using (var tx = this.StateManager.CreateTransaction())
    {
        var result = await myDictionary.TryGetValueAsync(tx, 1);
        if (result.HasValue)
            dataObject = result.Value;
        else
            dataObject = new DataObject();

        dataObject.UpdateDate = DateTime.Now;

        //ServiceEventSource.Current.ServiceMessage(
        //    this,
        //    "Current Counter Value: {0}",
        //    result.HasValue ? result.Value.ToString() : "Value does not exist.");

        await myDictionary.AddOrUpdateAsync(tx, 1, dataObject, ((k, o) => dataObject));
        await tx.CommitAsync();
    }

    await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
}
I also introduced a DataObject type and have exposed an UpdateDate property on that type.
[DataContract(Namespace = "http://www.contoso.com")]
public class DataObject
{
    [DataMember]
    public DateTime UpdateDate { get; set; }
}
When I run the app (F5 in Visual Studio 2015), a dataObject instance (keyed as 1) is not found in the dictionary, so I create one, set UpdateDate, add it to the dictionary, and commit the transaction. During the next loop it finds the dataObject (keyed as 1), sets UpdateDate, updates the object in the dictionary, and commits the transaction. Perfect.
Here's my question: when I stop and restart the service project (F5 in Visual Studio 2015), I would expect the dataObject (keyed as 1) to be found on the first iteration of RunAsync, but it's not. I would expect all state to have been flushed to its replica.
Do I have to do anything for the stateful service to flush its internal state to its primary replica?
From what I've read, it makes it sound as though all of this is handled by service fabric and that calling commit (on the transaction) is sufficient. If I locate the primary replica (in Service Fabric Explorer->Application View) I can see that the RemoteReplicator_xxx LastACKProcessedTimeUTC is updated once I commit the transaction (when stepping through).
Any help is greatly appreciated.
Thank you!
-Mark
This is a function of the default local development experience in Visual Studio. If you watch the Output window closely after hitting F5, you'll see a message to this effect: the deployment script detects that there's an existing app of the same type and version already registered, so it removes it and deploys the new one. In doing that, the data associated with the old application is removed.
You have a couple of options to deal with this.
In production, you would perform an application upgrade to safely roll out the updated code while maintaining the state. But constantly updating your versions while doing quick iteration on your dev box can be tedious.
An alternative is to flip the project property "Preserve Data on Start" to "Yes". This will automatically bump all versions of the generated application package (without touching the versions in your source) and then perform an app upgrade on your behalf.
Note that because of some of the system checks inherent in the upgrade path, this deployment option is likely to be a bit slower than the default remove-and-replace. However, when you factor in the time it takes to recreate the test data, it's often a wash.
You need to think of a ReliableDictionary as holding collections of objects, as opposed to a collection of references. That is, when you add an "object" to the dictionary, you must consider that you are handing the object off completely, and you must not alter its state anymore. When you ask a ReliableDictionary for an "object", it gives you back a reference to its internal object. The reference is returned for performance reasons, and you are free to READ the object's state. (It would be great if the CLR supported read-only objects, but it doesn't.) However, you MUST NOT MODIFY the object's state (or call any methods that would modify it), as you would be modifying the internal data structures of the dictionary and corrupting its state.
To modify the object's state, you MUST make a copy of the object pointed to by the returned reference. You can do this by serializing/deserializing the object or by some other means (such as creating a whole new object and copying the old state over). Then, you write the NEW OBJECT into the dictionary. In a future version of Service Fabric, we intend to improve ReliableDictionary's APIs to make this required pattern of use more discoverable.
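Applied to the RunAsync loop from the question, the pattern looks roughly like this (a sketch reusing the question's DataObject):

using (var tx = this.StateManager.CreateTransaction())
{
    var result = await myDictionary.TryGetValueAsync(tx, 1);

    // result.Value references the dictionary's internal object: read it if
    // you need to, but never mutate it. Put the modified state in a NEW object.
    var updated = new DataObject { UpdateDate = DateTime.Now };

    await myDictionary.SetAsync(tx, 1, updated);
    await tx.CommitAsync();
}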
I have a web project deployed in azure using colocated caching. I have 2 instances of this web role.
I am using Entity framework 5 and upon fetching some entities from the db, I cache them using colocated caching.
My entities are defined in a class library called Drt.BusinessLayer.Entities.
However when I visit my web app, I get the error:
The deserializer cannot load the type to deserialize because type 'System.Data.Entity.DynamicProxies.Country_4C17F5A60A033813EC420C752F1026C02FA5FC07D491A3190ED09E0B7509DD85' could not be found in assembly 'EntityFrameworkDynamicProxies-Drt.BusinessLayer.Entities, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. Check that the type being serialized has the same contract as the type being deserialized and the same assembly is used.
Also sometimes I get this too:
Assembly 'EntityFrameworkDynamicProxies-Drt.BusinessLayer.Entities, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' is not found.
It appears that there is an error getting the entities out of the cache and deserialized. Since there are 2 instances of my web role, instance1 might place some entity objects in the cache and instance2 might get them out. I was expecting this to work, but I am unsure why I am getting this error.
Can anyone help/advise?
I ran into the same issue. At least in my case, the problem was the DynamicProxies with which the EF wraps all the model classes. In other words, you might think you're retrieving a Country class, but under the hood, EF is actually dynamically generating a class that's called something like Country_4C17F5A60A033813EC420C752F1026C02FA5FC07D491A3190ED09E0B7509DD85. The last part of the name is obviously generated at run-time, and it can be expected to remain static throughout the life of your application - but (and this is the key) only on the same instance of the app domain. If you've got two machines accessing the same out-of-process cache, one will be storing an object of the type Country_4C17F5A60A033813EC420C752F1026C02FA5FC07D491A3190ED09E0B7509DD85, but that type simply won't exist on the other machine. Its dynamic Country class will be something like Country_JF7ASDF8ASDF8ADSF88989ASDF8778802348JKOJASDLKJQAWPEORIU7879243AS, and so there won't be any type into which it can deserialize the serialized object. The same thing will happen if you restart the app domain your web app is running in.
I'm sure the big brains at MS could come up with a better solution, but the one I've been using is to do a "shallow clone" of my EF objects before I cache them. The C# method I'm using looks like this:
public static class TypeHelper
{
    public static T ShallowClone<T>(this T obj) where T : class
    {
        if (obj == null) return null;

        var newObj = Activator.CreateInstance<T>();

        // Copy public value-type and string fields only.
        var fields = typeof(T).GetFields();
        foreach (var field in fields)
        {
            if (field.IsPublic && (field.FieldType.IsValueType || field.FieldType == typeof(string)))
            {
                field.SetValue(newObj, field.GetValue(obj));
            }
        }

        // Copy readable/writable value-type and string properties;
        // navigation properties (references) are deliberately skipped,
        // which is what keeps the attached object graph out of the cache.
        var properties = typeof(T).GetProperties();
        foreach (var property in properties)
        {
            if ((property.CanRead && property.CanWrite) &&
                (property.PropertyType.IsValueType || property.PropertyType == typeof(string)))
            {
                property.SetValue(newObj, property.GetValue(obj, null), null);
            }
        }

        return newObj;
    }
}
This takes care of two problems at once: (1) It ensures that only the EF object I'm specifically interested in gets cached, and not the entire object graph - sometimes huge - to which it's attached; and (2) The object that it caches is of a common type, and not the dynamically generated type: Country and not Country_4C17F5A60A033813EC420C752F1026C02FA5FC07D491A3190ED09E0B7509DD85.
It's certainly not perfect, but it does seem a reasonable workaround for many scenarios.
It would in fact be nice, though, if the good folks at MS were to come up with a way to cache EF objects without this.
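Usage before caching is then just (the Id property and dataCache handle are illustrative):

// 'country' was fetched through EF, so its runtime type is a dynamic proxy;
// the clone is a plain Country that any app domain can deserialize.
Country cached = country.ShallowClone();
dataCache.Put("country:" + cached.Id, cached);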
I'm not familiar with azure-caching in particular, but I'm guessing you need to hydrate your entities completely before passing them to anything that does serialization, which is something a distributed or out-of-process cache would do.
So, just call .Include() on all relationships when you're fetching an entity, or disable lazy loading (and proxy creation), and you should be fine.
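For example, with EF5 (the Countries set, Regions navigation property, and dataCache handle are made up for illustration):

using (var context = new MyDbContext())
{
    // Option 1: no proxies, so cached entities are plain Country objects
    // that any other instance can deserialize.
    context.Configuration.ProxyCreationEnabled = false;

    // Option 2: eagerly load everything consumers of the cached object
    // will need, so nothing tries to lazy-load after deserialization.
    var country = context.Countries
        .Include(c => c.Regions)
        .First(c => c.Id == id);

    dataCache.Put("country:" + id, country);
}

(The lambda overload of Include() comes from the System.Data.Entity namespace.)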