2 part question... I have several resource files (.resx) used in my solution primarily for translation of strings. For example, Errors.resx, Validation.resx, and Enums.resx.
Part1 : If I didn't have the Enums resource file, I'd assume that I should place all resource files in the UI layer, probably within its own assembly (like 'Company.App1.MVCApp.Resources'), and reference it from the web app (Company.App1.MVCApp)... would I be correct in placing the resource files in the UI layer?
Part 2: The Enums.resx file contains descriptive strings tied to enum members (using the Description attribute). In my UI, and sometimes in Domain services, I will need to access the descriptive strings, possibly translated. I thought about storing this somewhere in the Core/Domain layer, maybe somewhere like Company.App1.Core.Resources...? Or should I create an abstraction in the Core layer and then implement the ResourceManager somewhere in the Infrastructure layer in order to stick to proper Onion Architecture?
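To illustrate what I mean (a simplified example with invented names), each enum member carries a key that is looked up in Enums.resx:

using System.ComponentModel;

public enum OrderStatus
{
    [Description("OrderStatus_Pending")]   // key into Enums.resx
    Pending,

    [Description("OrderStatus_Shipped")]
    Shipped
}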
Part1 : In the application I'm currently working on, there isn't one resx file per concern (Enums, Errors...) but one resx file per project. If you take error messages, for instance, purely UI error messages go in the UI resx file, domain error messages go in the Domain resx, and so on. IMO resource files are best placed closest to the code where the localized strings are used. Having most localization files in the UI project tightly couples localization to the way your application is rendered, which might be problematic if you want to reuse localization in another context than that of your main UI.
Part2 : If you only need to access localized enum members in the Domain layer, then you could have a specific helper in the domain layer that derives from or uses System.Resources.ResourceManager to find the localized string. However, I find it handy to have some kind of general-purpose localization helper in an independent layer that centralizes all the localization logic that is more complex than just a Properties.Resources.[...] and is able to search in all resx files of your solution if need be.
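As a rough sketch of such a general-purpose helper (the resource class names are taken from the question; everything else is invented), it could walk all the ResourceManagers of the solution:

using System.Globalization;
using System.Resources;

public static class Localizer
{
    // One ResourceManager per resx file in the solution.
    private static readonly ResourceManager[] Managers =
    {
        Errors.ResourceManager,
        Validation.ResourceManager,
        Enums.ResourceManager
    };

    public static string Find(string key, CultureInfo culture = null)
    {
        foreach (var manager in Managers)
        {
            // GetString returns null when the key is missing,
            // so we fall through to the next resx file.
            var value = manager.GetString(key, culture);
            if (value != null)
                return value;
        }
        return key; // fall back to the key itself
    }
}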
We're refactoring our solution to a Domain Driven Design structure. Our consultants, who implement the software, should be able to customize some behavior of the application (for specific customers' needs). For example, they want to add custom defined properties to the presentation (user input forms) and save the custom data together with the entities defined in our DDD projects.
However, the domain objects should preferably not contain a customData property. I don't want to mix those and let the domain object know that there's something like custom data. I'm saving and fetching the entities through Repositories.
How do I make this scenario possible? One possible solution would be:
Query the entity using its Repository.
Query the CustomPropertiesRepository separately by entity ID.
Combine the two query results.
When saving the form, split the data again across the two repositories.
The disadvantage of this is that I have to query twice, even though it should be one document.
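A rough sketch of what I mean (all type names here are invented):

using System;
using System.Collections.Generic;

public class Order
{
    public Guid Id { get; set; }
    // ... real domain members, no knowledge of custom data
}

public interface IOrderRepository
{
    Order Get(Guid id);
    void Save(Order order);
}

public interface ICustomPropertiesRepository
{
    IDictionary<string, object> GetFor(Guid entityId);
    void SaveFor(Guid entityId, IDictionary<string, object> properties);
}

// The combined view the presentation layer works with.
public class OrderView
{
    public Order Order { get; set; }
    public IDictionary<string, object> CustomProperties { get; set; }
}

public class OrderAppService
{
    private readonly IOrderRepository _orders;
    private readonly ICustomPropertiesRepository _customProperties;

    public OrderAppService(IOrderRepository orders,
                           ICustomPropertiesRepository customProperties)
    {
        _orders = orders;
        _customProperties = customProperties;
    }

    public OrderView Get(Guid id)
    {
        // Two queries: one per repository, combined outside the domain.
        return new OrderView
        {
            Order = _orders.Get(id),
            CustomProperties = _customProperties.GetFor(id)
        };
    }

    public void Save(OrderView view)
    {
        // Split again on the way back in.
        _orders.Save(view.Order);
        _customProperties.SaveFor(view.Order.Id, view.CustomProperties);
    }
}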
Any advice on this problem?
In general, dynamic properties are better suited to data-centric design; in my opinion this practice is not suitable for DDD.
In DDD the code must reflect the knowledge of the domain; it must be simple and explicit.
Before thinking about the best way to persist a dynamic property, you must solve the problem at the design level:
1- There are three possible artifacts for a property: aggregate root, entity, or value object.
2- Usually a dynamic property brings with it a functional need (calculation, validation, etc.). Where will you implement this functionality? Whether in the aggregate root or in a domain service, you will be compelled to recompile your code, and there the dynamic property loses its meaning - unless you consider using a business rules engine, which introduces a whole new paradigm with its own complications, and some of your business logic would then live outside your aggregates and domain services.
I thought that a vocabulary was a special type of directory, or that at least a directory could provide a source for a vocabulary. It seems not. What I want to achieve is to plug my taxonomy server into Nuxeo. In other words, I would like Nuxeo to use taxonomies that are defined externally. Isn't the directory abstraction meant for this? The taxonomy server provides REST services for external access.
Yes, the directory abstraction is designed to abstract lists like yours. You need to implement a new Nuxeo component and implement a org.nuxeo.ecm.directory.DirectoryFactory and a org.nuxeo.ecm.directory.Directory as well as a org.nuxeo.ecm.directory.Session. It's not as easy as it should be and involves a few classes, but it's quite feasible.
You can take the SQL implementation in nuxeo-platform-directory-sql as an example to get an idea of what's needed.
I'm planning my first architecture that uses DTOs. I'm now exploring how to map the modified client-side domain objects back to the DTOs that were originally retrieved from the data service. I must map back to the original object graph, instead of instantiating a new one, in order to use WCF Data Services Client Library's change tracking feature.
To put it in general terms, I need a tool that maps instances and (recursively) their sub-instances (collectively called the "source graph") to existing instances and (recursively) sub-instances (collectively called the "target graph") in a manner that is (nearly) 100% convention, rather than configuration, based.
The specific required functionality that I can think of is:
Replace single-valued properties within the target graph with their corresponding values from the source graph.
Synchronize collection pairs: elements that were added to a collection within the source graph should then be added to the corresponding collection within the target graph; elements removed from a collection within the source graph should then be removed from the corresponding collection within the target graph.
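To make #2 concrete, here is roughly what I would otherwise have to hand-code for each collection pair (a sketch with invented names, matching elements by a key selector):

using System;
using System.Collections.Generic;
using System.Linq;

public static class GraphSync
{
    // Hypothetical helper: synchronize one collection pair by a shared key.
    public static void SyncCollection<T>(
        ICollection<T> target, ICollection<T> source, Func<T, object> key)
    {
        var sourceKeys = new HashSet<object>(source.Select(key));

        // Remove from the target what was removed from the source graph.
        foreach (var stale in target.Where(t => !sourceKeys.Contains(key(t))).ToList())
            target.Remove(stale);

        // Add to the target what was added to the source graph.
        var targetKeys = new HashSet<object>(target.Select(key));
        foreach (var added in source.Where(s => !targetKeys.Contains(key(s))).ToList())
            target.Add(added);
    }
}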
When it comes to mapping DTOs, it seems many people use AutoMapper, so I had assumed this task would be easy using that tool. Upon looking at the details, though, I have doubts it will fit my requirements: what I've found indicates AutoMapper won't handle #1 very well, and likewise won't help much with #2.
I don't want to try bending AutoMapper to my purposes if it will lead to a lot of configuration code. That would defeat the purpose of using a convention-based tool in the first place. So I'm wondering: what's a better tool for the job?
I recently switched to Entity Framework 5. Now I want to generate the POCO classes from an existing database, and I need both lazy loading and change tracking, so all the scalar properties should be virtual as well as the navigation properties.
Adding a new ADO.NET Entity Data Model results in an .edmx file and some other .cs and .tt files.
Firstly, I wonder why the generated POCO classes by default do not meet the requirements of a change tracking proxy, i.e. the scalar properties are not virtual.
Secondly, how can I generate proxy-enabled POCO classes?
PS: I accepted Slauma's answer as the best and only answer so far, but I don't agree with the first part of it. Here is my argument.
Slauma talks about two problems with proxies: restrictions and performance.
About the restrictions on proxy-enabled entities:
When the classes are generated in the DB First approach by Entity Framework, the rules that the classes must follow to enable change-tracking proxies are not that important, because they are not restrictive at all. Who really cares whether the navigation collections are IList or HashSet? Talking about the restrictions makes sense only when there are previously designed classes in the application and the tables are to be generated from them.
Complex properties are not supported in DB First, so we can exclude them from our discussion.
About the performance:
In the referenced article, and in the other experiments I have studied so far, the results are not convincing enough to reject proxies in favor of snapshots. First, the experiments were done on a large number of entities, around 10,000. It is not improbable that a batch process in your application (not in the database) works on a large number of entities, but in those cases better approaches are available, such as stored procedures.
Second, depending on the type of the application and its needs, we usually deal with a small number of entities - for example when the Repository pattern is implemented and used - and then there is no difference between the performance of proxies and snapshots.
Interestingly, in the referenced experiment, re-assigning the same value to the properties was the only case where the performance of proxies dramatically fails. But who really does this? It is very easy to avoid repeatedly notifying the change tracker. Again, that significant problem arises only when a large number of entities is involved.
Firstly, I wonder why the generated POCO classes by default do not meet the requirements of a change tracking proxy, i.e. scalar properties are not virtual.
Using change tracking proxies is not recommended as the default change tracking strategy. It is explained in more detail in this blog post. In essence, the main reason to use change tracking proxies - better performance compared to snapshot based change tracking - is not always guaranteed - and sometimes it's even worse - and the list of disadvantages is longer than for snapshot based change tracking.
In the past the T4 templates that generated POCO entities indeed marked all properties - including scalar properties - as virtual and prepared the entities for proxy based change tracking. For the reasons described in the blog this has been changed for the newer templates, including the DbContext Generator for EF 5, as mentioned in this comment below the blog post linked above. Now, only navigation properties are marked as virtual, but not scalar properties, which allows lazy loading but is not sufficient for change tracking proxies.
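To illustrate (with invented entity names): the first class below is the shape the current templates generate, which supports lazy loading only; the second is what change tracking proxies require.

using System.Collections.Generic;

// Shape generated by the current EF 5 templates (invented names):
// only navigation properties are virtual.
public class Order
{
    public int Id { get; set; }                      // scalar: not virtual
    public decimal Total { get; set; }               // scalar: not virtual
    public virtual Customer Customer { get; set; }   // navigation: virtual
}

// What a change tracking proxy needs: every property virtual.
public class ProxyFriendlyOrder
{
    public virtual int Id { get; set; }
    public virtual decimal Total { get; set; }
    public virtual Customer Customer { get; set; }
}

public class Customer
{
    public virtual ICollection<Order> Orders { get; set; }
}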
Secondly, how can I generate proxy-enabled POCO classes?
I am not aware of any available T4 template that would do this, but it is quite easy to modify the default template to mark the scalar properties as virtual too:
In your project you should have two files with a .tt extension: YourModelContainer.tt and YourModelContainer.Context.tt. Open the YourModelContainer.tt file.
In this file you'll find a method called Property:
public string Property(EdmProperty edmProperty)
{
    return string.Format(
        CultureInfo.InvariantCulture,
        "{0} {1} {2} {{ {3}get; {4}set; }}",
        Accessibility.ForProperty(edmProperty),
        _typeMapper.GetTypeName(edmProperty.TypeUsage),
        _code.Escape(edmProperty),
        _code.SpaceAfter(Accessibility.ForGetter(edmProperty)),
        _code.SpaceAfter(Accessibility.ForSetter(edmProperty)));
}
Change the line with...
Accessibility.ForProperty(edmProperty),
...to...
AccessibilityAndVirtual(Accessibility.ForProperty(edmProperty)),
That's it.
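With that change, the generated scalar properties should come out with the virtual modifier, e.g.:

public virtual int Id { get; set; }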
Just to mention it, in case you are not familiar with it: there is a second kind of Database-First approach available, namely reverse engineering an existing database to a Code-First model. This approach doesn't use a T4 template at all, but creates a Code-First model and a context with Fluent API mapping. It is useful if you want to customize and extend the model classes (you could then also add the virtual modifiers manually) and proceed with the Code-First workflow (and Code-First Migrations) in the future to update and evolve your database schema.
I've started learning about DDD and wanted to know how others have organised their projects.
I've started off by organising around my AggregateRoots:
MyApp.Domain (namespace for domain model)
MyApp.Domain.Product
- Product
- IProductService
- IProductRepository
- etc
MyApp.Domain.Comment
- Comment
- ICommentService
- ICommentRepository
- etc
MyApp.Infrastructure
- ...
MyApp.Repositories
- ProductRepository : IProductRepository
- etc
The problem I've bumped into with this is that I have to reference my domain Product as MyApp.Domain.Product.Product or Product.Product. I also get a conflict with my LINQ data model for Product... I have to use ugly lines of code to distinguish between the two, such as MyApp.Domain.Product.Product and MyApp.Repositories.Product.
I am really interested to see how others have organised their solutions for DDD...
I am using Visual Studio as my IDE.
Thanks a lot.
I try to keep things very simple whenever I can, so usually something like this works for me:
Myapp.Domain - All domain specific classes share this namespace
Myapp.Data - Thin layer that abstracts the database from the domain.
Myapp.Application - All "support code", logging, shared utility code, service consumers etc
Myapp.Web - The web UI
So classes will be for example:
Myapp.Domain.Sales.Order
Myapp.Domain.Sales.Customer
Myapp.Domain.Pricelist
Myapp.Data.OrderManager
Myapp.Data.CustomerManager
Myapp.Application.Utils
Myapp.Application.CacheTools
Etc.
The idea I try to keep in mind as I go along is that the "domain" namespace is what captures the actual logic of the application. So what goes there is what you can talk to the "domain experts" (The dudes who will be using the application) about.
If I am coding something because of something that they have mentioned, it should be in the domain namespace, and whenever I code something that they have not mentioned (like logging, tracing errors etc) it should NOT be in the domain namespace.
Because of this I am also wary about making too complicated object hierarchies. Ideally a somewhat simplified drawing of the domain model should be intuitively understandable by non-coders.
To this end I don't normally start out by thinking about patterns in much detail. I try to model the domain as simply as I can get away with, following just standard object-oriented design guidelines. What needs to be an object? How are they related?
DDD in my mind is about handling complex software, but if your software is not itself very complex to begin with you could easily end up in a situation where the DDD way of doing things adds complexity rather than removes it.
Once you have a certain level of complexity in your model you will start to see how certain things should be organised, and then you will know which patterns to use, which classes are aggregates etc.
In my example, Myapp.Domain.Sales.Order would be an aggregate root in the sense that when it is instanced it will likely contain other objects, such as a customer object and collection of order lines, and you would only access the order lines for that particular order through the order object.
However, in order to keep things simple, I would not have a "master" object that only contains everything else and has no other purpose, so the order class will itself have values and properties that are useful in the application.
So I will reference things like:
Myapp.Domain.Sales.Order.TotalCost
Myapp.Domain.Sales.Order.OrderDate
Myapp.Domain.Sales.Order.Customer.PreferredInvoiceMethod
Myapp.Domain.Sales.Order.Customer.Address.Zip
etc.
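As a rough sketch (member details invented for illustration), that order aggregate might look like:

using System;
using System.Collections.Generic;
using System.Linq;

namespace Myapp.Domain.Sales
{
    public class Order
    {
        private readonly List<OrderLine> _lines = new List<OrderLine>();

        public DateTime OrderDate { get; set; }
        public Customer Customer { get; set; }

        // Order lines are reachable only through the aggregate root.
        public IEnumerable<OrderLine> Lines
        {
            get { return _lines; }
        }

        public void AddLine(OrderLine line)
        {
            _lines.Add(line);
        }

        // The root itself carries useful values, not just containment.
        public decimal TotalCost
        {
            get { return _lines.Sum(l => l.Cost); }
        }
    }

    public class OrderLine
    {
        public string Product { get; set; }
        public decimal Cost { get; set; }
    }

    public class Customer
    {
        public string PreferredInvoiceMethod { get; set; }
    }
}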
I like having the domain in the root namespace of the application, in its own assembly:
Acme.Core.dll [root namespace: Acme]
This neatly represents the fact that the domain is in scope of all other portions of the application. (For more, see The Onion Architecture by Jeffrey Palermo).
Next, I have a data assembly (usually with NHibernate) that maps the domain objects to the database. This layer implements repository and service interfaces:
Acme.Data.dll [root namespace: Acme.Data]
Then, I have a presentation assembly declaring elements of my UI-pattern-of-choice:
Acme.Presentation.dll [root namespace: Acme.Presentation]
Finally, there is the UI project (assuming a web app here). This is where the composition of the elements in preceding layers takes place:
Acme.Web [root namespace: Acme.Web]
Although you're a .NET developer, the Java reference implementation of the cargo app from DDD, by Eric Evans and Citerus, is a good resource.
In the doc'd code, you can see the DDD-organization into bounded contexts and aggregates in action, right in the Java packages.
Additionally, you might consider Billy McCafferty's S#arp Architecture. It's an ASP.NET MVC, NHibernate/Fluent NHibernate implementation that is built with DDD in mind.
Admittedly, you will still need to apply a folder/namespace solution to provide the contexts. But couple the Evans approach with S#arp Architecture and you should be well on your way.
Let us know what you are going with. I am on the same path as well, and not far from you!
Happy coding,
Kurt Johnson
Your domain probably has a name, so you should use this name as the namespace.
I usually put the repository implementation and data access details in a namespace called Persistence under the domain namespace.
The application uses its own name as its namespace.
I'd check out codecampserver since the setup there is quite common.
They have a core project in which they include both the application and domain layers. I.e. the insides of the onion (http://jeffreypalermo.com/blog/the-onion-architecture-part-1/).
I actually like to break the core apart into separate projects to control the direction of dependency. So typically I have:
MyNamespace.SomethingWeb <-- multiple UIs
MyNamespace.ExtranetWeb <-- multiple UIs
MyNamespace.Application <-- Evans' application layer with classes like CustomerService
MyNamespace.Domain
MyNamespace.Domain.Model <-- entities
MyNamespace.Domain.Services <-- domain services
MyNamespace.Domain.Repositories
MyNamespace.Infrastructure <-- repo implementation etc.
MyNamespace.Common <-- A project which all other projects have a dependency to which has things like a Logger, Util classes, etc.