How do I prevent duplicate entries using the UnitOfWork pattern with code first Entity Framework? - c#-4.0

I am using the Unit of Work and Generic Repository pattern. Here is the statement that checks for a duplicate entry:
int id = int.Parse(beer.id); // id comes from the item we're hoping to insert
if (_unitOfWork.BeerRepository.GetByID(id) == null)
{
    // create a new model br
    _unitOfWork.BeerRepository.Insert(br);
    _unitOfWork.save();
}
Apparently this is failing to check to see if the beer is already in the database because I get this inner exception:
Violation of PRIMARY KEY constraint 'PK_Beers_3214EC2703317E3D'. Cannot insert duplicate key in object 'dbo.Beers'. The statement has been terminated.
I also get this message:
An error occurred while saving entities that do not expose foreign
key properties for their relationships. The EntityEntries property
will return null because a single entity cannot be identified as the
source of the exception. Handling of exceptions while saving can be
made easier by exposing foreign key properties in your entity types.
See the InnerException for details.
The UnitOfWork class has my BeerRecommenderContext which implements DbContext and the UnitOfWork has a generic repository for each entity:
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

namespace BeerRecommender.Models
{
    public class GenericRepository<TEntity> where TEntity : class
    {
        internal BeerRecommenderContext context;
        internal DbSet<TEntity> dbSet;

        public GenericRepository(BeerRecommenderContext context)
        {
            this.context = context;
            this.dbSet = context.Set<TEntity>();
        }

        public virtual IEnumerable<TEntity> Get(
            Expression<Func<TEntity, bool>> filter = null,
            Func<IQueryable<TEntity>, IOrderedQueryable<TEntity>> orderBy = null,
            string includeProperties = "")
        {
            IQueryable<TEntity> query = dbSet;

            if (filter != null)
            {
                query = query.Where(filter);
            }

            foreach (var includeProperty in includeProperties.Split(
                new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries))
            {
                query = query.Include(includeProperty);
            }

            if (orderBy != null)
            {
                return orderBy(query).ToList();
            }
            else
            {
                return query.ToList();
            }
        }

        public virtual TEntity GetByID(object id)
        {
            return dbSet.Find(id);
        }

        public virtual void Insert(TEntity entity)
        {
            dbSet.Add(entity);
        }

        public virtual void Delete(object id)
        {
            TEntity entityToDelete = dbSet.Find(id);
            Delete(entityToDelete);
        }

        public virtual void Delete(TEntity entityToDelete)
        {
            if (context.Entry(entityToDelete).State == EntityState.Detached)
            {
                dbSet.Attach(entityToDelete);
            }
            dbSet.Remove(entityToDelete);
        }

        public virtual void Update(TEntity entityToUpdate)
        {
            dbSet.Attach(entityToUpdate);
            context.Entry(entityToUpdate).State = EntityState.Modified;
        }
    }
}

I have a similar usage of repository using code-first. Occasionally, I would see conflicts like the one you described. My issue was with change tracking across multiple processes. Are you inserting items into the database inside one process (using a single entity context)?
If you are, you should look at the merge options available with Entity Framework. If you are using the default merge option (AppendOnly), then you could be querying the in-memory context instead of going to the database. This could cause the behaviour you are describing.
Unfortunately, as far as I understand, not all of the merge options are exposed to Code-First yet. You can choose the default (AppendOnly) or NoTracking, which will go to the database every time.
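As an illustration only (this exact method is not part of the repository above), one way to make the duplicate check bypass the tracked cache is to query the database directly, e.g. with AsNoTracking. The Exists helper and the Beer key property name (Id) below are assumptions:
// Hypothetical addition to GenericRepository<TEntity>: an existence check
// that always queries the database rather than the change tracker's cache.
public virtual bool Exists(Expression<Func<TEntity, bool>> predicate)
{
    return dbSet.AsNoTracking().Any(predicate);
}

// Possible usage at the call site from the question (assumes the key property is named Id):
int id = int.Parse(beer.id);
if (!_unitOfWork.BeerRepository.Exists(b => b.Id == id))
{
    _unitOfWork.BeerRepository.Insert(br);
    _unitOfWork.save();
}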
Hope this helps,
Davin

Related

DDD entity with complex creation process

How should entities with a complex creation process be created in DDD? Example:
Entity
- Property 1
- Property 2: value depends on what was provided in Property 1
- Property 3: value depends on what was provided in Property 1
- Property 4: value depends on what was provided in Property 1, 2 and 3
I have two ideas, but both look terrible:
Create entity with invalid state
Move creation process to service
We are using a REST API, so in the first scenario we would have to persist the entity in an invalid state, and in the second scenario we move the logic outside of the entity.
You can use the Builder Pattern to solve this problem.
You can make a Builder that holds the logic for the dependencies between properties and raises exceptions, returns errors, or has a mechanism to tell the client what the next valid steps are.
If you are using an object-oriented language, the builder can also return different concrete classes based on the combination of these properties.
Here's a very simplified example. We will store a configuration for EventNotifications that can either listen on some Endpoint (IP, port) or poll.
enum Mode { None, Poll, ListenOnEndpoint }

public class EventListenerNotification {
    public Mode Mode { get; set; }
    public Interval PollInterval { get; set; }
    public Endpoint Endpoint { get; set; }
}

public class Builder {
    private Mode mMode = Mode.None;
    private Interval mInterval;
    private Endpoint mEndpoint;

    public Builder WithMode(Mode mode) {
        this.mMode = mode;
        return this;
    }

    public Builder WithInterval(Interval interval) {
        VerifyModeIsSet();
        VerifyModeIsPoll();
        this.mInterval = interval;
        return this;
    }

    public Builder WithEndpoint(Endpoint endpoint) {
        VerifyModeIsSet();
        VerifyModeIsListenOnEndpoint();
        this.mEndpoint = endpoint;
        return this;
    }

    public EventListenerNotification Build() {
        VerifyState();
        var entity = new EventListenerNotification();
        entity.Mode = this.mMode;
        entity.PollInterval = this.mInterval;
        entity.Endpoint = this.mEndpoint;
        return entity;
    }

    private void VerifyModeIsSet() {
        if (this.mMode == Mode.None) {
            throw new InvalidModeException("Set mode first");
        }
    }

    private void VerifyModeIsPoll() {
        if (this.mMode != Mode.Poll) {
            throw new InvalidModeException("Mode should be Poll");
        }
    }

    private void VerifyModeIsListenOnEndpoint() {
        if (this.mMode != Mode.ListenOnEndpoint) {
            throw new InvalidModeException("Mode should be ListenOnEndpoint");
        }
    }

    private void VerifyState() {
        // validate properties based on Mode
        if (this.mMode == Mode.Poll) {
            // validate interval
        }
        if (this.mMode == Mode.ListenOnEndpoint) {
            // validate Endpoint
        }
    }
}
enum BuildStatus { NotStarted, InProgress, Errored, Finished }

public class BuilderWithStatus {
    private readonly List<Error> mErrors = new List<Error>();
    // fields mirror the Builder above
    private Mode mMode = Mode.None;
    private Interval mInterval;

    public BuildStatus Status { get; private set; }
    public IReadOnlyList<Error> Errors { get { return mErrors; } }

    public BuilderWithStatus WithInterval(Interval interval) {
        if (this.mMode != Mode.Poll) {
            this.mErrors.Add(new Error("Mode should be Poll"));
            this.Status = BuildStatus.Errored;
        }
        else {
            this.mInterval = interval;
        }
        return this;
    }

    // rest is the same as above, but instead of throwing you record the error
    // and set a status
}
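For completeness, a hypothetical call site for the throwing Builder above might look like this (the Interval constructor shown is an assumption):
// Hypothetical usage of the throwing Builder from the example above.
var notification = new Builder()
    .WithMode(Mode.Poll)
    .WithInterval(new Interval(TimeSpan.FromSeconds(30))) // assumed Interval constructor
    .Build();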
Here are some resources with more information and other mechanisms that you can use:
https://martinfowler.com/articles/replaceThrowWithNotification.html
https://martinfowler.com/eaaDev/Notification.html
https://martinfowler.com/bliki/ContextualValidation.html
Take a look at chapter 6 of the Evans book, which specifically talks about the life cycle of entities in the domain model.
Creation is usually handled with a factory, which is to say a function that accepts data as input and returns a reference to an entity.
in second scenario we move logic outside of the entity.
The simplest answer is for the "factory" to be some method associated with the entity's class - i.e., the constructor, or some other static method that is still part of the definition of the entity in the domain model.
But the problem is that creation of the entity requires several steps.
OK, so what you have is a protocol, which is to say a state machine, where you collect information from the outside world, and eventually emit a new entity.
The instance of the state machine, with the data that it has collected, is also an entity.
For example, creating an actionable order might require a list of items, and shipping addresses, and billing information. But we don't necessarily need to collect all of that information at the same time - we can get a little bit now, and remember it, and then later when we have all of the information, we emit the submitted order.
It may take some care with the domain language to distinguish the tracking entity from the finished entity (which itself is probably an input to another state machine....)
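To illustrate that idea (this sketch is not from the original answer; OrderDraft, OrderItem, Address, BillingInfo and Order are all hypothetical names), the tracking entity might look like:
// Hypothetical "tracking entity": collects order data over several requests
// and emits the finished Order entity once the protocol is complete.
public class OrderDraft
{
    private readonly List<OrderItem> items = new List<OrderItem>();
    private Address shippingAddress;
    private BillingInfo billingInfo;

    public void AddItem(OrderItem item) { items.Add(item); }
    public void SetShippingAddress(Address address) { shippingAddress = address; }
    public void SetBillingInfo(BillingInfo billing) { billingInfo = billing; }

    public bool IsComplete
    {
        get { return items.Count > 0 && shippingAddress != null && billingInfo != null; }
    }

    // The draft itself is an entity and can be persisted in a valid (if incomplete) state.
    public Order Submit()
    {
        if (!IsComplete)
            throw new InvalidOperationException("Order draft is not complete yet");
        return new Order(items, shippingAddress, billingInfo);
    }
}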

Unable to use multiple instances of MobileServiceClient concurrently

I structured my project into multiple mobile services, grouped by the application type eg:
my-core.azure-mobile.net (user, device)
my-app-A.azure-mobile.net (sales, order, invoice)
my-app-B.azure-mobile.net (inventory & parts)
I'm using custom authentication for all my services, and I implemented my own SSO by setting the same master key to all 3 services.
Things went well when I tested using REST client, eg. user who "logged in" via custom api at my-core.azure-mobile.net is able to use the returned JWT token to access restricted API of the other mobile services.
However, in my Xamarin project, only the first MobileServiceClient object created (note: in order of creation) works properly (e.g. returns results from a given table). The client objects are created using their own URL and key respectively, and stored in a dictionary.
If I create the client object for app-A and then the one for app-B, I am able to perform CRUD+Sync on the sales/order/invoice entities, while CRUD+Sync operations on the inventory/part entities just hang. The situation is reversed if I swap the client object creation order.
I wonder whether there are any internal static variables used within MobileServiceClient that cause this behaviour, or whether this is a genuine bug?
=== code snippet ===
public class AzureService
{
    IDictionary<String, MobileServiceClient> services = new Dictionary<String, MobileServiceClient>();

    public MobileServiceClient Init (String key, String applicationURL, String applicationKey)
    {
        return services[key] = new MobileServiceClient (applicationURL, applicationKey);
    }

    public MobileServiceClient Get(String key)
    {
        return services [key];
    }

    public void InitSyncContext(MobileServiceSQLiteStore offlineStore)
    {
        // Uses the default conflict handler, which fails on conflict
        // To use a different conflict handler, pass a parameter to InitializeAsync.
        // For more details, see http://go.microsoft.com/fwlink/?LinkId=521416
        var syncHandler = new MobileServiceSyncHandler ();
        foreach (var client in services) {
            client.Value.SyncContext.InitializeAsync (offlineStore, syncHandler);
        }
    }

    public void SetAuthenticationToken(String uid, String token)
    {
        var user = new MobileServiceUser(uid);
        foreach (var client in services) {
            client.Value.CurrentUser = user;
            client.Value.CurrentUser.MobileServiceAuthenticationToken = token;
        }
    }

    public void ClearAuthenticationToken()
    {
        foreach (var client in services) {
            client.Value.CurrentUser = null;
        }
    }
}
=== more code ===
public class DatabaseService
{
    public static MobileServiceSQLiteStore LocalStore = null;
    public static string Path { get; set; }
    public static ISet<IEntityMappingProvider> Providers = new HashSet<IEntityMappingProvider> ();

    public static void Init (String dbPath)
    {
        LocalStore = new MobileServiceSQLiteStore(dbPath);
        foreach (var provider in Providers) {
            var types = provider.GetSupportedTypes ();
            foreach (var t in types) {
                JObject item = null;
                // omitted detail to create JObject using reflection on given type
                LocalStore.DefineTable(tableName, item);
            }
        }
    }
}
=== still code ===
public class AzureDataSyncService<T> : IAzureDataSyncService<T>
{
    public MobileServiceClient ServiceClient { get; set; }

    public virtual Task<List<T>> GetAll()
    {
        try
        {
            var theTable = ServiceClient.GetSyncTable<T>();
            return theTable.ToListAsync();
        }
        catch (MobileServiceInvalidOperationException msioe)
        {
            Debug.WriteLine("GetAll<{0}> EXCEPTION TYPE: {1}, EXCEPTION:{2}", typeof(T).ToString(), msioe.GetType().ToString(), msioe.ToString());
        }
        catch (Exception e)
        {
            Debug.WriteLine("GetAll<{0}> EXCEPTION TYPE: {1}, EXCEPTION:{2}", typeof(T).ToString(), e.GetType().ToString(), e.ToString());
        }
        List<T> theCollection = Enumerable.Empty<T>().ToList();
        return Task.FromResult(theCollection);
    }
}
=== code ===
public class UserService : AzureDataSyncService<User>
{
}

public class PartService : AzureDataSyncService<Part>
{
}

const string coreApiURL = @"https://my-core.azure-mobile.net/";
const string coreApiKey = @"XXXXX";
const string invApiURL = @"https://my-inventory.azure-mobile.net/";
const string invApiKey = @"YYYYY";

public async void Foo ()
{
    DatabaseService.Providers.Add (new CoreDataMapper());
    DatabaseService.Providers.Add (new InvDataMapper ());
    DatabaseService.Init (DatabaseService.Path);

    var coreSvc = AzureService.Instance.Init ("Core", coreApiURL, coreApiKey);
    var invSvc = AzureService.Instance.Init ("Inv", invApiURL, invApiKey);
    AzureService.Instance.InitSyncContext (DatabaseService.LocalStore);
    AzureService.Instance.SetAuthenticationToken("AAA", "BBB");

    UserService.Instance.ServiceClient = coreSvc;
    PartService.Instance.ServiceClient = invSvc;

    var x = await UserService.GetAll(); // this will work
    var y = await PartService.GetAll(); // but not this
}
It's OK to use multiple MobileServiceClient objects, but not with the same local database. The offline sync feature uses a set of system tables to keep track of table operations and errors, and using the same local store across multiple sync contexts is not supported.
I'm not totally sure why it is hanging in your test, but it's possible that there is a lock on the local database file and the other sync context is waiting to get access.
You should instead use a different local database file for each service and do push and pull on each sync context. With your particular example, you just need to move LocalStore out of DatabaseService and into a dictionary in AzureService.
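A minimal sketch of that change, following the naming in the question's code (the per-service database file name convention below is an assumption):
// Hypothetical reworking of AzureService: one local store (and one sync context) per service.
IDictionary<String, MobileServiceSQLiteStore> stores = new Dictionary<String, MobileServiceSQLiteStore>();

public MobileServiceClient Init(String key, String applicationURL, String applicationKey)
{
    var client = new MobileServiceClient(applicationURL, applicationKey);
    services[key] = client;
    stores[key] = new MobileServiceSQLiteStore(key + ".db3"); // e.g. "Core.db3", "Inv.db3"
    return client;
}

public async Task InitSyncContextsAsync()
{
    var syncHandler = new MobileServiceSyncHandler();
    foreach (var entry in services) {
        // Table definitions for stores[entry.Key] would need to happen before this (omitted).
        await entry.Value.SyncContext.InitializeAsync(stores[entry.Key], syncHandler);
    }
}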
In general, it seems like an unusual design to use multiple services from the same client app. Is there a particular reason that the services need to be separated from each other?

ServiceStack ORMLite

We are migrating our sproc-based solution over to ORMLite, and so far it has been pretty painless. Today I wrote the following method:
public AppUser GetAppUserByUserID(int app_user_id)
{
    var dbFactory = new OrmLiteConnectionFactory(this.ConnectionString, SqlServerOrmLiteDialectProvider.Instance);
    AppUser item = null;
    var rh = new RedisHelper();
    var id = CacheIDHelper.GetAppUserID( app_user_id );
    item = rh.Get<AppUser>(id);
    if (item == null)
    {
        try
        {
            using (var db = dbFactory.OpenDbConnection())
            {
                item = db.Single<AppUser>("application_user_id={0}", app_user_id);
                rh.Set<AppUser>(item, id);
            }
        }
        catch (Exception ex)
        {
            APLog.error(ex, "Error retrieving user!");
        }
    }
    return item;
}
I have removed some of the extraneous fields, but they are basically:
[Alias("application_user")]
public class AppUser : APBaseObject
{
[Alias("application_user_id")]
[AutoIncrement]
public int? UserID
{
get;
set;
}
[Alias("application_user_guid")]
public string UserGUID
{
get;
set;
}
//MORE FIELDS here.
}
The challenge is that the only field that is populated is the ID field, and I already know that ID because I am passing it into the method.
I did get the last SQL called and ran that against the DB directly and all of the fields were being referenced correctly.
I stepped through the code in the debugger, and everything came back correctly, except that the only field returned was the ID.
Thoughts?
I had a similar issue which was caused by my class properties not mapping to the database properly. My exact issue was caused by a nullable int field in the database while the corresponding class property was defined as 'int' instead of 'int?'.
Perhaps you have a similar issue?
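For illustration (the column and property names here are made up, not from the question), the mismatch looks like this:
// Hypothetical illustration: the database column allows NULL,
// e.g. application_user_age INT NULL in SQL Server.

// Does not populate correctly -- the property is non-nullable:
// [Alias("application_user_age")]
// public int Age { get; set; }

// Maps correctly -- the nullable property matches the nullable column:
[Alias("application_user_age")]
public int? Age { get; set; }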

Using factory pattern for modeling similar subscriptions

I have the following question that's been nagging at me for quite some time.
I'd like to model the following domain entity "Contact":
public class Contact : IEntity<Contact>
{
    private readonly ContactId _Id;
    public ContactId Id
    {
        get { return this._Id; }
    }

    private CoreAddress _CoreAddress;
    public CoreAddress CoreAddress
    {
        get { return this._CoreAddress; }
        set
        {
            if (value == null)
                throw new ArgumentNullException("CoreAddress");
            this._CoreAddress = value;
        }
    }

    private ExtendedAddress _ExtendedAddress;
    public ExtendedAddress ExtendedAddress
    {
        get { return this._ExtendedAddress; }
        set
        {
            if (value == null)
                throw new ArgumentNullException("ExtendedAddress");
            this._ExtendedAddress = value;
        }
    }

    private readonly IList<ContactExchangeSubscription> _Subscriptions
        = new List<ContactExchangeSubscription>();
    public IEnumerable<ContactExchangeSubscription> Subscriptions
    {
        get { return this._Subscriptions; }
    }

    public Contact(ContactId Id, CoreAddress CoreAddress, ExtendedAddress ExtendedAddress)
    {
        Validations.Validate.NotNull(Id);
        this._Id = Id;
        this._CoreAddress = CoreAddress;
        this._ExtendedAddress = ExtendedAddress;
    }
}
As you can see it has a collection of subscriptions. A subscription is modeled like this:
public class ContactExchangeSubscription
{
    private ContactId _AssignedContact;
    public ContactId AssignedContact
    {
        get { return this._AssignedContact; }
        set
        {
            if (value == null)
                throw new ArgumentNullException("AssignedContact");
            this._AssignedContact = value;
        }
    }

    private User _User;
    public User User
    {
        get { return this._User; }
        set
        {
            Validations.Validate.NotNull(value, "User");
            this._User = value;
        }
    }

    private ExchangeEntryId _EntryId;
    public ExchangeEntryId EntryId
    {
        get { return this._EntryId; }
        set
        {
            if (value == null)
                throw new ArgumentNullException("EntryId");
            this._EntryId = value;
        }
    }

    public ContactExchangeSubscription(ContactId AssignedContact, User User, ExchangeEntryId EntryId)
    {
        this._AssignedContact = AssignedContact;
        this._User = User;
        this._EntryId = EntryId;
    }
}
Now I've been thinking that I shouldn't model a storage technology (Exchange) in my domain; after all, we might want to switch our application to other subscription providers. The property "EntryId" is specific to Exchange. A subscription would always need a User and a ContactId, though.
Is there a better way to model the Subscription? Should I use a factory or abstract factory for the Subscription type to cover other types of subscriptions, should the need arise?
EDIT: So let's toss an abstract factory in the ring and introduce some interfaces:
public interface IContactSubscriptionFactory
{
    IContactSubscription Create();
}

public interface IContactSubscription
{
    ContactId AssignedContact { get; }
    User User { get; }
}
How would a concrete factory for a ContactExchangeSubscription be coded? Remember that this type will need the EntryId field, so it has to take an additional constructor parameter. How do you handle different constructor parameters on different sub-types in factories in general?
I think the answer is staring you in the face in that you need to work against an interface, making it easier to introduce new subscription providers (if that's the right term) in the future. I think this is more of an OO design question than DDD.
public interface ISubscriptionProvider
{
    ContactId AssignedContact { get; }
    User User { get; }
}
And the code in your Contact becomes
private readonly IList<ISubscriptionProvider> _subscriptions
    = new List<ISubscriptionProvider>();

public IEnumerable<ISubscriptionProvider> Subscriptions
{
    get { return _subscriptions; }
}
With regards to using a factory; the purpose of a factory is to construct your domain objects when a creation strategy is required. For example a SubscriptionProviderFactory could be used within your repository when you rehydrate your aggregate and would make the decision to return the ContactExchangeSubscription (as an ISubscriptionProvider) or something else based on the data passed into it.
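As a rough sketch of that idea (SubscriptionData, its properties, and the provider-type discriminator are assumptions, and it assumes ContactExchangeSubscription implements ISubscriptionProvider):
// Hypothetical factory used by a repository while rehydrating the aggregate.
public class SubscriptionProviderFactory
{
    public ISubscriptionProvider Create(SubscriptionData data)
    {
        // Decide which concrete subscription to build based on the persisted data.
        switch (data.ProviderType)
        {
            case "Exchange":
                return new ContactExchangeSubscription(data.ContactId, data.User, new ExchangeEntryId(data.EntryId));
            default:
                throw new NotSupportedException("Unknown subscription provider: " + data.ProviderType);
        }
    }
}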
One final point, though perhaps this is just because of the way you have shown your example: I would say you're not really following DDD. The lack of behaviour, with all your properties having public getters and setters, suggests you're falling into the trap of building an Anemic Domain Model.
After some research I came up with this. Code first, explanation below:
public interface IContactFactory<TSubscription> where TSubscription : IContactSubscription
{
    Contact Create(ContactId Id, CoreAddress CoreAddress, ExtendedAddress ExtendedAddress, TSubscription Subscription);
}

public class ContactFromExchangeFactory : IContactFactory<ContactExchangeSubscription>
{
    public Contact Create(ContactId Id, CoreAddress CoreAddress, ExtendedAddress ExtendedAddress, ContactExchangeSubscription ExchangeSubscription)
    {
        Contact c = new Contact(Id, CoreAddress, ExtendedAddress);
        c.AddSubscription(ExchangeSubscription);
        return c;
    }
}
I realized that I don't need a factory for the ContactSubscription but rather for the Contact itself.
I learned some things about factories along the way:
They are only to be used when creating (really) new entities, not when rebuilding them from a SQL DB for example
They live in the domain layer (see above!)
Factories are more suitable for similar objects that differ in behaviour rather than data
I welcome comments and better answers.

Retrieving values of ReadOnly fields from DynamicData DetailsView in Edit Mode on Updating using LinqDataSource

I have several tables in my database that have read-only fields that get set on Inserting and Updating, namely: AddDate (DateTime), AddUserName (string), LastModDate (DateTime), LastModUserName (string).
All of the tables that have these values have been set to inherit from the following interface:
public interface IUserTrackTable
{
    string AddUserName { get; set; }
    DateTime AddDate { get; set; }
    string LastModUserName { get; set; }
    DateTime LastModDate { get; set; }
}
As such, I have the following method on the Edit.aspx page:
protected void DetailsDataSource_Updating(object sender, LinqDataSourceUpdateEventArgs e)
{
    IUserTrackTable newObject = e.NewObject as IUserTrackTable;
    if (newObject != null)
    {
        newObject.LastModUserName = User.Identity.Name;
        newObject.LastModDate = DateTime.Now;
    }
}
However, by the time it hits this method, the e.OriginalObject has already lost the values for all four fields, so a ChangeConflictException gets thrown during the actual Update. I have tried adding the four column names to the DetailsView1.DataKeyNames array in the Init event handler:
protected void Page_Init(object sender, EventArgs e)
{
    // other things happen before this
    var readOnlyColumns = table.Columns.Where(c => c.Attributes.SingleOrDefaultOfType<ReadOnlyAttribute>(ReadOnlyAttribute.Default).IsReadOnly).Select(c => c.Name);
    DetailsView1.DataKeyNames = DetailsView1.DataKeyNames.Union<string>(readOnlyColumns).ToArray<string>();
    DetailsView1.RowsGenerator = new CustomFieldGenerator(table, PageTemplates.Edit, false);
    // other things happen after this
}
I've tried making that code run only on PostBack, and still nothing. I'm at a loss for how to get the values for all of the columns to make the round trip.
The only thing the CustomFieldGenerator is doing is handling the ReadOnlyAttribute, following the details on C# Bits.
UPDATE: After further investigation, the values make the round trip to the DetailsView_ItemUpdating event. All of the values are present in the e.OldValues dictionary. However, they are lost by the time it gets to the LinqDataSource_Updating event.
Obviously, there are "solutions" such as making those columns not participate in concurrency checks, or other workarounds that involve hard-coding, but the ideal solution would dynamically add the appropriate information where needed so that this stays a dynamic solution.
Hi Drovani, I assume you want data auditing (see Steve Sheldon's A Method to Handle Audit Fields in LINQ to SQL). I would do this in the model; in EF4 you can do it like this:
partial void OnContextCreated()
{
    // Register the handler for the SavingChanges event.
    this.SavingChanges += new EventHandler(context_SavingChanges);
}

private static void context_SavingChanges(object sender, EventArgs e)
{
    // handle auditing
    var context = (ObjectContext)sender;
    AuditingHelperUtility.ProcessAuditFields(context.ObjectStateManager.GetObjectStateEntries(EntityState.Added));
    AuditingHelperUtility.ProcessAuditFields(context.ObjectStateManager.GetObjectStateEntries(EntityState.Modified), InsertMode: false);
}

internal static class AuditingHelperUtility
{
    internal static void ProcessAuditFields(IEnumerable<ObjectStateEntry> list, bool InsertMode = true)
    {
        foreach (var item in list)
        {
            IAuditable entity = item.Entity as IAuditable;
            if (entity != null)
            {
                if (InsertMode)
                {
                    entity.InsertedBy = GetUserId();
                    entity.InsertedOn = DateTime.Now;
                }
                entity.UpdatedBy = GetUserId();
                entity.UpdatedOn = DateTime.Now;
            }
        }
    }
}
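The IAuditable interface referenced above is not shown in the answer; a plausible shape, matching the properties the snippet uses, would be (the string type for the user fields is an assumption):
// Assumed shape of the IAuditable interface used by the snippet above.
public interface IAuditable
{
    string InsertedBy { get; set; }
    DateTime InsertedOn { get; set; }
    string UpdatedBy { get; set; }
    DateTime UpdatedOn { get; set; }
}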
Sadly, this is not possible with EF v1.
