I am new to service fabric and started by looking at the MSDN articles covering the topic. I began by implementing the Hello World sample here.
I changed their original RunAsync implementation to:
var myDictionary = await this.StateManager.GetOrAddAsync<IReliableDictionary<int, DataObject>>("myDictionary");
while (!cancellationToken.IsCancellationRequested)
{
    DataObject dataObject;
    using (var tx = this.StateManager.CreateTransaction())
    {
        var result = await myDictionary.TryGetValueAsync(tx, 1);
        if (result.HasValue)
            dataObject = result.Value;
        else
            dataObject = new DataObject();

        dataObject.UpdateDate = DateTime.Now;

        //ServiceEventSource.Current.ServiceMessage(
        //    this,
        //    "Current Counter Value: {0}",
        //    result.HasValue ? result.Value.ToString() : "Value does not exist.");

        await myDictionary.AddOrUpdateAsync(tx, 1, dataObject, (k, o) => dataObject);
        await tx.CommitAsync();
    }
    await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
}
I also introduced a DataObject type and have exposed an UpdateDate property on that type.
[DataContract(Namespace = "http://www.contoso.com")]
public class DataObject
{
    [DataMember]
    public DateTime UpdateDate { get; set; }
}
When I run the app (F5 in Visual Studio 2015), a dataObject instance (keyed as 1) is not found in the dictionary, so I create one, set UpdateDate, add it to the dictionary, and commit the transaction. During the next loop, it finds the dataObject (keyed as 1), sets UpdateDate, updates the object in the dictionary, and commits the transaction. Perfect.
Here's my question. When I stop and restart the service project (F5 in Visual Studio 2015), I would expect that on the first iteration of RunAsync the dataObject (keyed as 1) would be found, but it's not. I would expect all state to have been flushed to its replica.
Do I have to do anything for the stateful service to flush its internal state to its primary replica?
From what I've read, it makes it sound as though all of this is handled by service fabric and that calling commit (on the transaction) is sufficient. If I locate the primary replica (in Service Fabric Explorer->Application View) I can see that the RemoteReplicator_xxx LastACKProcessedTimeUTC is updated once I commit the transaction (when stepping through).
Any help is greatly appreciated.
Thank you!
-Mark
This is a function of the default local development experience in Visual Studio. If you watch the Output window closely after hitting F5, you'll see what's happening: the deployment script detects that an app of the same type and version is already registered, so it removes it and deploys the new one. In doing that, the data associated with the old application is removed.
You have a couple of options to deal with this.
In production, you would perform an application upgrade to safely roll out the updated code while maintaining the state. But constantly updating your versions while doing quick iteration on your dev box can be tedious.
An alternative is to flip the project property "Preserve Data on Start" to "Yes". This will automatically bump all versions of the generated application package (without touching the versions in your source) and then perform an app upgrade on your behalf.
Note that because of some of the system checks inherent in the upgrade path, this deployment option is likely to be a bit slower than the default remove-and-replace. However, when you factor in the time it takes to recreate the test data, it's often a wash.
You need to think of a ReliableDictionary as holding collections of objects as opposed to a collection of references. That is, when you add an “object” to the dictionary, you must think of it as handing the object off completely; you must not alter the object's state anymore. When you ask a ReliableDictionary for an “object”, it gives you back a reference to its internal object. The reference is returned for performance reasons and you are free to READ the object's state. (It would be great if the CLR supported read-only objects, but it doesn't.) However, you MUST NOT MODIFY the object's state (or call any methods that would modify the object's state), as you would be modifying the internal data structures of the dictionary, corrupting its state.
To modify the object's state, you MUST make a copy of the object pointed to by the returned reference. You can do this by serializing/deserializing the object or by some other means (such as creating a whole new object and copying the old state to the new object). Then, you write the NEW OBJECT into the dictionary. In a future version of Service Fabric, we intend to improve ReliableDictionary's APIs to make this required pattern of use more discoverable.
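As a minimal sketch of that copy-then-write pattern, reusing the DataObject type and dictionary from the question (SetAsync is used here in place of AddOrUpdateAsync purely for brevity):

using (var tx = this.StateManager.CreateTransaction())
{
    var result = await myDictionary.TryGetValueAsync(tx, 1);

    // Treat result.Value as read-only: build a NEW object (copying any fields
    // you need from result.Value) instead of mutating the returned reference.
    var copy = new DataObject();
    copy.UpdateDate = DateTime.Now;

    // Write the new object back; the dictionary's internal object is never touched.
    await myDictionary.SetAsync(tx, 1, copy);
    await tx.CommitAsync();
}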
Suppose I create a model
public class Foo : TableEntity {
    public int OriginalProperty { get; set; }
}
I then deploy a service that periodically updates the values of OriginalProperty with code similar to...
// use model-based query
var query = new TableQuery<Foo>().Where(…);
// get the (one) result
var row = (await table.ExecuteQueryAsync(query)).Single();
// modify and write it back
row.OriginalProperty = some_new_value;
await table.ExecuteAsync(TableOperation.InsertOrReplace(row));
At some later time I decide I want to add a new property to Foo for use by a different service.
public class Foo : TableEntity {
    public int OriginalProperty { get; set; }
    public int NewProperty { get; set; }
}
I make this change locally and start updating a few records from my local machine without updating the original deployed service.
The behaviour I am seeing is that changes I make to NewProperty from my local machine are lost as soon as the deployed service updates the record. Of course this makes sense in some ways. The service is unaware that NewProperty has been added and has no reason to preserve it. However my understanding was that the TableEntity implementation was dictionary-based so I was hoping that it would 'ignore' (i.e. preserve) newly introduced columns rather than delete them.
Is there a way to configure the query/insertion to get the behaviour I want? I'm aware of DynamicTableEntity but it's unclear whether using this as a base class would result in a change of behaviour for model properties.
Just to be clear, I'm not suggesting that continually fiddling with the model or having multiple client models for the same table is a good habit to get into, but it's definitely useful to be able to occasionally add a column without worrying about redeploying every service that might touch the affected table.
You can use InsertOrMerge instead of InsertOrReplace.
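For example, against the query code from the question, a minimal change would be:

// Merge sends only the properties present on the local model, so columns
// the model doesn't know about (like NewProperty) are preserved.
row.OriginalProperty = some_new_value;
await table.ExecuteAsync(TableOperation.InsertOrMerge(row));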
Can we use value objects in commands?
Suppose I have a Shop (aggregate) which contains an Address value object.
I put some validation logic for the address in the Address value object's constructor.
So if I use that Address object in the command (CreateShopCmd), it gets validated when the command is created, but from what I've read, that validation should live in the command handler.
The problem is that I would then have to duplicate the validation in the command handler (since it already exists in the Address constructor), and if I don't, the validation happens when I create the Address object in the event handler and assign it to the Shop aggregate, which is incorrect.
So please guide me.
Below is a code example:
@Aggregate
@AggregateRoot
public class Shop {
    @AggregateIdentifier
    private ShopId shopId;
    private String shopName;
    private Address address;

    @CommandHandler
    public Shop(CreateShopCmd cmd) {
        // Validation logic here, if not using the Address in the cmd
        // Fire an event after validation
        ShopRegistredEvt shopRegistredEvt = new ShopRegistredEvt();
        AggregateLifecycle.apply(shopRegistredEvt);
    }

    @EventSourcingHandler
    public void on(ShopRegistredEvt evt) {
        this.shopName = evt.getShopName();
        // Validation happens here if not done in the cmd at the time of making
        // the Address object - this is wrong
        this.address = new Address(evt.getCity(), evt.getCountry(), evt.getZipCode());
    }
}
public class CreateShopCmd {
    private String shopId;
    private String shopName;
    private String city;
    private String zipCode;
    private String country;
}

public class ShopCreatedEvent {
    private String shopId;
    private String shopName;
    private String city;
    private String zipCode;
    private String country;
}
There is nothing conceptually wrong with using Value Objects in Commands or Events. However, you should use them with caution.
The structure of a Message may change over time. If you have used Value Objects extensively inside your messages, it may become less clear how a change in one of the value objects changes the structure of different messages.
For Value Objects that represent a "common" concept, such as an Address, this is not so much of a problem. But as soon as the Value Objects become more domain-specific, this may come up as an issue.
This is a very good question and I have been thoroughly thinking about embedding value objects in commands or not. I came to the conclusion you should definitely not use Value Objects in commands:
Commands are part of the application layer; they are supposed to be as simple as possible, avoiding any typed objects, and they work best using literals (think serialization). What happens when an external system wants to plug into your hexagon (application layer) and send commands to your application, do they need your command library to be able to use the objects and the structure defined? Hell no! You don't want that, so keep commands simple.
Another reason is, as DmitriBodiu said, that VOs contain business logic and validation; they belong to the domain layer, so do not ever put them in commands. The application service will do the translation and be responsible for throwing validation errors back to the client for any non-conforming commands.
There is nothing wrong with your design; it's actually how Vaughn Vernon (the author of the Implementing Domain Driven Design - IDDD book) did it in his repository. You might want to check the application layer at this link:
https://github.com/VaughnVernon/IDDD_Samples/blob/master/iddd_identityaccess/src/main/java/com/saasovation/identityaccess/application/IdentityApplicationService.java
Notice how he reconstructs every object, from flat commands to value objects belonging to the domain layer:
@Transactional
public void changeUserContactInformation(ChangeContactInfoCommand aCommand) {
    User user = this.existingUser(aCommand.getTenantId(), aCommand.getUsername());
    this.internalChangeUserContactInformation(
        user,
        new ContactInformation(
            new EmailAddress(aCommand.getEmailAddress()),
            new PostalAddress(
                aCommand.getAddressStreetAddress(),
                aCommand.getAddressCity(),
                aCommand.getAddressStateProvince(),
                aCommand.getAddressPostalCode(),
                aCommand.getAddressCountryCode()),
            new Telephone(aCommand.getPrimaryTelephone()),
            new Telephone(aCommand.getSecondaryTelephone())));
}
Commands must not contain business logic, so they cannot carry a value object.
I wouldn't suggest using Value Objects in commands, because commands are part of the application layer while Value Objects are kept in the Domain Layer. You can use your Value Objects in Domain Events though, because if the domain model changes, modifying your domain event won't be that painful; the modification is done in the same bounded context. You should never use Value Objects in integration events, though.
Short answer: Have you ever thought about Integer, String, Boolean, etc.? Those are Value Objects, too. The only difference is that you didn't create them yourself. Now try to build a Command without any Value Objects ;-)
Long answer:
In general I don't see any issue with Value Objects within Commands. As long as you follow a few simple guidelines:
The most important code in your application is your Domain Model. The Domain Model defines the data structures it expects for Command handling. This means: The only reason to change your Command Model is if your Domain Model requires this change. The same applies to your Value Objects: Value Objects only change if this change is required by your Domain Model. No exceptions!
Commands can in general fail either because of business constraints, or because of invalid data (or because of optimistic locking, or whatever).
As said above: Integers and Strings are Value Objects, too. If you only use basic types within your Command, it will already throw an exception if you try new SetAgeCommand(aggId, "foo"), because a String cannot be assigned to an int. The same applies if you don't provide an Aggregate ID to your UpdatePersonCommand. These are not business constraints, but very basic data and type validation. Your Command will never be created if you pass malformed data.
Now let's say you have a PersonAge Value Object. It doesn't matter where you construct this object, because in any case it must throw an Exception if you try to construct it with a negative number: -5 cannot be assigned to PersonAge - looks familiar? As long as you can make sure that your code created those Value Object instances, you know for sure that they are valid.
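As a minimal sketch of such a self-validating Value Object (a hypothetical type, shown in C# although the question uses Java):

// Hypothetical PersonAge Value Object: the constructor is the only way to
// obtain an instance, so an invalid age can never exist.
public sealed class PersonAge
{
    public int Value { get; }

    public PersonAge(int value)
    {
        if (value < 0)
            throw new ArgumentOutOfRangeException(nameof(value), "PersonAge cannot be negative");
        Value = value;
    }
}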
Business rules should be checked by the Command Handler within your Domain Model. In general business constraints are specific to your Domain, and most often they rely on the data within your Aggregate. Take for example SendMoneyCommand. Your Money Value Object can validate if it's a valid currency, but it cannot validate if the user's bank account has enough money to execute the transaction. This is a business validation and it's part of your Domain Model.
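To make the distinction concrete, here is a rough sketch (hypothetical types, in C#): the Money Value Object enforces data validity, while the aggregate enforces the business rule that depends on its own state:

public sealed class Money
{
    public decimal Amount { get; }
    public string Currency { get; }

    public Money(decimal amount, string currency)
    {
        // Data validation: a Money instance can never hold a malformed currency.
        if (string.IsNullOrWhiteSpace(currency) || currency.Length != 3)
            throw new ArgumentException("currency must be a 3-letter ISO code", nameof(currency));
        Amount = amount;
        Currency = currency;
    }
}

public class BankAccount
{
    private decimal balance;

    // Business validation: this depends on aggregate state, so it lives here,
    // not in the Value Object.
    public void Send(Money money)
    {
        if (money.Amount > balance)
            throw new InvalidOperationException("insufficient funds");
        balance -= money.Amount;
    }
}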
And a word regarding Events: I'd suggest only using very basic Value Objects inside your events, for example String, Integer, Date, etc. - basically every kind of Value Object that will never change. The reason behind this: business requirements can change. For example, maybe your Domain Model requires your Address Value Object to change, and it's now required to provide geo-coordinates. This will implicitly change your NewAddressAddedEvent. But your already persisted Events didn't have this requirement, so you're unable to construct Address Value Objects from your past event data: the new Address Value Object will throw an Exception if no geo-coordinates are provided.
There are (at least) two solutions for this problem:
Versioned Events: after modifying your Address Value Object, you now have a NewAddressAddedEvent_Version2 which uses the new Address Value Object, while the old NewAddressAddedEvent must use a backup copy of the old Address Value Object.
Write a script that "repairs" your event database by adding geo-coordinates to every Event that uses the Address Value Object. Then you can throw away the old NewAddressAddedEvent.
That's OK as long as the value objects are conceptually a part of your message contract, and not used in entities.
And if they are a part of your entity, don't expose them as public properties of your message or you'll be in trouble.
I'm getting a random Exception when I try to update an entity on a storage table. The exception I get is
System.Data.Services.Client.DataServiceRequestException: An error occurred while processing this request. ---> System.Data.Services.Client.DataServiceClientException: {"odata.error":{"code":"UpdateConditionNotSatisfied","message":{"lang":"en-US","value":"The update condition specified in the request was not satisfied.\nRequestId:2a205f10-0002-013b-028d-0bbec8000000\nTime:2015-10-20T23:17:16.5436755Z"}}} ---
I know that this might be a concurrency issue, but the thing is that there's no other process accessing that entity.
From time to time I get dozens of these exceptions, I restart the server and it starts working fine again.
public static class StorageHelper
{
    static TableServiceContext tableContext;
    static CloudStorageAccount storageAccount;
    static CloudTableClient CloudClient;

    static StorageHelper()
    {
        storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
        CloudClient = storageAccount.CreateCloudTableClient();
        tableContext = CloudClient.GetTableServiceContext();
        tableContext.IgnoreResourceNotFoundException = true;
    }

    public static void Save(int myId, string newProperty, string myPartitionKey, string myRowKey)
    {
        var entity = (from j in tableContext.CreateQuery<MyEntity>("MyTable")
                      where j.PartitionKey == myPartitionKey
                      select j).FirstOrDefault();
        if (entity != null)
        {
            entity.MyProperty = newProperty;
            tableContext.UpdateObject(entity);
            tableContext.SaveChanges();
        }
        else
        {
            entity = new MyEntity();
            entity.PartitionKey = myPartitionKey;
            entity.RowKey = myRowKey;
            entity.MyProperty = newProperty;
            tableContext.AddObject("MyTable", entity);
            tableContext.SaveChanges();
        }
    }
}
The code you've posted uses the very old table layer which is now obsolete. We strongly recommend you update to a newer version of the storage library and use the new table layer. See this StackOverflow question for more information. Also note that if you're using a very old version of the storage library these will eventually stop working as the service version they're using is going to be deprecated service side.
We do not recommend that customers reuse TableServiceContext objects as has been done here. They contain a variety of tracking that can cause performance issues as well as other adverse effects. These kinds of limitations are part of the reason we recommend (as described above) moving to the newer table layer. See the how-to for more information.
On table entity update operations you must send an if-match header indicating an etag. The library will set this for you if you set the entity's etag value. To update regardless of the etag the entity has on the service, use "*".
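For instance, with the newer table layer (assuming your entity derives from TableEntity), an unconditional update looks roughly like this:

// The wildcard etag makes the If-Match header match any server-side version,
// so the replace succeeds even if the entity changed since it was read.
entity.ETag = "*";
await table.ExecuteAsync(TableOperation.Replace(entity));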
I suggest you consider using the Transient Fault Handling Application Block from Microsoft's Enterprise Library to retry when your application encounters such transient faults in Azure, rather than restarting the server every time the same exception occurs.
https://msdn.microsoft.com/en-us/library/hh680934(v=pandp.50).aspx
While updating your entity, set the etag to "*".
Your modified code should look something like this (re-attaching with a wildcard etag, since the old TableServiceContext tracks etags itself) -
if (entity != null)
{
    entity.MyProperty = "newProperty";
    // Re-attach with a wildcard etag so the update is unconditional
    tableContext.Detach(entity);
    tableContext.AttachTo("MyTable", entity, "*");
    tableContext.UpdateObject(entity);
    tableContext.SaveChanges();
}
Suppose that I have 2 aggregate roots (AR) in my domain and invoking some method on the 1st requires access to an instance of the 2nd. In DDD how and where should retrieval and creation of the 2nd AR happen?
Here's a contrived example: a TravelerEntity that needs access to a SuitcaseEntity. I'm looking for an answer that doesn't pollute the domain layer with infrastructure code.
public class TravelerEntity {
    // null if traveler has no suitcase yet.
    private String suitcaseId = ...;
    ...

    // Returns an empty suitcase ready for packing. Caller
    public SuitcaseEntity startTrip(SuitcaseRepository repo) {
        SuitcaseEntity suitcase;
        if (suitcaseId == null) {
            suitcase = new SuitcaseFactory().create();
            suitcase = repo.save(suitcase);
            suitcaseId = suitcase.getId();
        } else {
            suitcase = repo.findOne(suitcaseId);
        }
        suitcase.emptyContents();
        return suitcase;
    }
}
An application layer service handling the start trip request would get the appropriate SuitcaseRepository implementation via DI, get the TravelerEntity via a TravelerRepository implementation and call its startTrip() method.
The only alternative I thought of was to move SuitcaseEntity management to a domain service, but I don't want to create the suitcase before starting the trip, and I don't want to end up with an anemic TravelerEntity.
I'm a little uncertain about one AR creating and saving another AR. Is this OK since the repo and factory encapsulate specifics about the 2nd AR? Is there a danger I'm missing? Is there a better alternative?
I'm new enough to DDD to question my thinking on this. And the other questions I found about ARs seem to focus on identifying them properly, not on managing their lifecycles in relation to one another.
Ideally TravelerEntity wouldn't manipulate a SuitcaseRepository because it shouldn't know about an external thing where suitcases are stored, only about its own internals. Instead, it could new up a SuitCase and add it to its internal [list of] suitcases. If you wanted that to work with ORMs without explicitly adding the suitcase to the repository though, you'd have to store the whole suitcase object in TravelerEntity.suitcaseList and not just its ID, which conflicts with the "store references to other ARs as IDs" best practice.
Moreover, TravelerEntity.startTrip() returning a suitcase seems a bit artificial and not very explicit, and you'll be in trouble if you need to return other entities created by startTrip(). So a good solution could be to have TravelerEntity emit a SuitcaseAdded event with the suitcase data in it once it has added the suitcase to its list. An application service could subscribe to the event, add the suitcase to SuitcaseRepository and commit the transaction, effectively saving both the new suitcase and the modified traveler to the database.
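A rough sketch of that shape (hypothetical names throughout, in C# rather than the question's Java):

public class SuitcaseAdded
{
    public string TravelerId { get; set; }
    public string SuitcaseId { get; set; }
}

public class StartTripHandler
{
    private readonly ITravelerRepository travelers;   // injected (hypothetical interface)
    private readonly ISuitcaseRepository suitcases;   // injected (hypothetical interface)

    public StartTripHandler(ITravelerRepository travelers, ISuitcaseRepository suitcases)
    {
        this.travelers = travelers;
        this.suitcases = suitcases;
    }

    public void StartTrip(string travelerId)
    {
        var traveler = travelers.FindOne(travelerId);
        traveler.StartTrip(); // records a SuitcaseAdded event internally

        // React to the event and persist the new aggregate; the new suitcase
        // and the modified traveler are then committed together.
        foreach (var evt in traveler.DequeuePendingEvents().OfType<SuitcaseAdded>())
        {
            suitcases.Save(new SuitcaseEntity(evt.SuitcaseId));
        }
        travelers.Save(traveler);
    }
}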
Alternatively, you could place startTrip() in a Domain Service instead of an Entity. There it might be more legit to use SuitcaseRepository since a domain service is allowed know about multiple domain entities and the overall domain process going on.
First of all, persistence is not the domain's job, so I would get rid of all the repositories from the domain models and create a service that uses them.
Second of all, you should rethink your design. Why should the StartTrip method of a Traveller return a SuitCase?
A Traveller either has a suitcase or doesn't. Once you have retrieved the Traveller, you should already have their SuitCases too.
public class StartTripService {
    private readonly ITravellerRepository travellerRepo; // injected (hypothetical interface)

    public void StartTrip(int travellerId) {
        var traveller = travellerRepo.Get(travellerId);
        traveller.StartTrip();
    }
}
I am trying to write a method to create a database and run migrations on it, given the connection string.
I need the multiple connections because I record an audit log in a separate database.
I get the connection strings out of app.config using code like
ConfigurationManager.ConnectionStrings["Master"].ConnectionString;
The code works with the first connection string defined in my app.config but not the others, which leads me to think that it is somehow getting the connection string from app.config in a manner I don't understand.
My code to create the database if it does not exist is
private static Context MyCreateContext(string ConnectionString)
{
    // put the connection string where the factory method can get it
    AppDomain.CurrentDomain.SetData("ConnectionString", ConnectionString);

    var factory = new ContextFactory();

    // I know I need this line - but I can't see how what follows actually uses it
    Database.SetInitializer(new MigrateDatabaseToLatestVersion<Context, DataLayer.Migrations.Configuration>());

    var context = factory.Create();
    context.Database.CreateIfNotExists();
    return context;
}
The code in the Migrations.Configuration is
public sealed class Configuration : DbMigrationsConfiguration<DataLayer.Context>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = false;
    }
}
The context factory code is
public class ContextFactory : IDbContextFactory<Context>
{
    public Context Create()
    {
        var s = (string)AppDomain.CurrentDomain.GetData("ConnectionString");
        return new Context(s);
    }
}
Thus I am setting the connection string before creating the context.
Where can I be going wrong, given that the connection strings are all the same except for the database name, and the migration code runs with one connection string but doesn't run with the others?
I wonder if my problem has to do with understanding how Database.SetInitializer actually works. I am guessing it involves reflection or generics. How do I make the call to SetInitializer tie to my actual context?
I have tried the following code but the migrations do not run
private static Context MyCreateContext(string ConnectionString)
{
    Database.SetInitializer(new MigrateDatabaseToLatestVersion<Context, DataLayer.Migrations.Configuration>());
    var context = new Context(ConnectionString);
    context.Database.CreateIfNotExists();
    return context;
}
This question appears to be related
UPDATE:
I can get the migrations working if I refer to the connection string using
public MyContext() : base("MyContextConnection") - which points to <connectionStrings> in the config
I was also able to get migrations working on different instances of the context if I created a ContextFactory class and passed the connection to it by referencing a global. (See my answer to the related question link.)
Now I am wondering why it has to be so hard.
I'm not sure exactly what problems you're facing, but let me try.
The easiest way to provide a connection - and be sure it works that way...
1) Use your 'DbContext' class name - and define a connection in the app.config (or web.config). That's easiest; you should have a connection there that matches your context class name.
2) If you put it into the DbContext via constructor - then be consistent and use that one. I'd also suggest reading the connection from the config - and again naming it 'the same' as your context class (use the connection 'name', not the actual string).
3) If none is present - EF/CF makes a 'default' one - based on your provider and your context's class name - which usually isn't what you want.
You shouldn't customize with initializers for that reason - initializers should be agnostic and serve another purpose. Set up the connection in the .config - or directly on your DbContext.
Also check this: Entity Framework Code First - How do I tell my app to NOW use the production database once development is complete instead of creating a local db?
Always check 'where your data' goes - before doing anything.
For how the initializer actually works - check this other post of mine, I made a thorough example
How to create initializer to create and migrate mysql database?
Notes: (from the comments)
Connection shouldn't be very dynamic - config is the right place for it to be, unless you have a good reason.
Constructor should work fine too.
CreateIfNotExists doesn't work well together with the 'migration' initializer. You can just use the MigrateDatabaseToLatestVersion initializer; don't 'mix' them.
Or - put something like public MyContext() : base("MyContextConnection") - which points to <connectionStrings> in the config
To point to connection - just use its 'name' and put that into constructor.
Or use something like ConfigurationManager.ConnectionStrings["CommentsContext"].ConnectionString
Regarding entertaining 'multiple databases' with migrations (local and remote from one app) - not exactly related - but this link - Migration not working as I wish... Asp.net EntityFramework
Update:
(further discussion here - Is adding a class that inherits from something a violation of the solid principles if it changes the behavior of code?)
It is getting interesting here. I did manage to reproduce the problems you're facing. Here is a short breakdown of what I think is happening:
First, this worked 'happily':
Database.SetInitializer(new CreateAndMigrateDatabaseInitializer<MyContext, MyProject.Migrations.Configuration>());
for (var flip = false; true; flip = !flip)
{
    using (var db = new MyContext(flip ? "Name=MyContext" : "Name=OtherContext"))
    {
        // insert some records...
        db.SaveChanges();
    }
}
(I used a custom initializer from my other post, which controls migration/creation 'manually'.)
That worked fine w/o an Initializer. Once I switched that on, I ran into some curious problems.
I deleted the Db-s (two, one for each connection). I expected it either to not work, or to create one db, then the other on the next pass (like it did w/o migrations, with just the 'Create' initializer).
What happened, to my surprise, is that it actually created both databases on the first pass??
Then, being a curious person :), I put breakpoints on the MyContext ctor and debugged through the migrator/initializer. Again with empty/no db-s etc.
It created the first instance on my call within the flip. Then, on the first access to the 'model', it invoked the initializer. The migrator took over (having had no db-s). During migrator.Update(), it actually constructs the MyContext (I'm guessing via the generic param in Configuration) - and calls the 'default' empty ctor. That had the 'other connection/name' by default - and created the other Db as well.
So, I think this explains what you're experiencing - and why you had to create the 'Factory' to support the Context creation. That seems to be the only way, along with setting some AppDomain-wide 'connection string' (which you did well, actually) which isn't 'overridden' by the default ctor call.
The solution that I see is: you just need to run everything through the factory - and 'flip' connections in there (no need for a static connection, as long as your factory is a singleton).
You can supply a configuration in the MigrateDatabaseToLatestVersion constructor.
If you set the initializer in the DbContext you can also pass a 'true' to use the current connection string.
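Putting those two together, a minimal sketch (using the Context and Configuration types from the question; useSuppliedContext is the EF6 constructor parameter, and connectionString is a hypothetical local variable):

// 'useSuppliedContext: true' tells the initializer to run migrations against
// the connection of whichever context instance triggers initialization,
// instead of constructing a fresh context through the default ctor/factory.
Database.SetInitializer(
    new MigrateDatabaseToLatestVersion<Context, DataLayer.Migrations.Configuration>(useSuppliedContext: true));

using (var context = new Context(connectionString))
{
    context.Database.Initialize(force: false); // runs pending migrations on this connection
}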