Subject may be unclear, but I'd like to expose two API calls that are almost identical, like so:
Routes
.Add<GameConsole>("/consoles", "GET")
.Add<GameConsole>("/consoles/count", "GET");
What I have now is "/consoles" giving me a list of all GameConsole objects from my repository. What I'd like to add is "/consoles/count", which gives me a count of all the GameConsole objects from my repository.
But since the service can only map one DTO in the routes, I can only have:
public object Get(GameConsole request)
{
    return mRepository.GetConsoles();
}
Not sure I truly understand the limitations of only having one route map to a DTO; is there a way around this? As a side note, it seems odd that I have to pass the DTO to my service method even though it's not being used at all (is mapping to the route its only purpose?)
Since the 2 routes don't contain any mappings to variables and are both registered with the same Request DTO, you won't be able to tell the matching route from the Request DTO alone, e.g:
public object Get(GameConsole request)
{
    return mRepository.GetConsoles();
}
i.e. you would need to introspect base.Request and look at .PathInfo, RawUrl or AbsoluteUri to distinguish between them.
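For example, a minimal sketch of that introspection (assuming a standard ServiceStack Service, and that GetConsoles() returns a list):

public object Get(GameConsole request)
{
    // PathInfo tells us which of the two identical-DTO routes was matched
    if (base.Request.PathInfo.EndsWith("/count"))
        return mRepository.GetConsoles().Count;

    return mRepository.GetConsoles();
}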
If it mapped to a variable, e.g:
Routes
.Add<GameConsole>("/consoles", "GET")
.Add<GameConsole>("/consoles/{Action}", "GET");
Then you can distinguish the requests by looking at the populated request.Action.
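Assuming the GameConsole DTO declares a string Action property for the {Action} placeholder to populate, that check could look like:

public object Get(GameConsole request)
{
    // "/consoles/count" populates Action with "count"; "/consoles" leaves it null
    if (request.Action == "count")
        return mRepository.GetConsoles().Count;

    return mRepository.GetConsoles();
}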
But if they have different behaviors and return different responses then they should just be 2 separate services, e.g:
Routes
.Add<GameConsole>("/consoles", "GET")
.Add<GameConsoleCount>("/consoles/count", "GET");
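With separate request DTOs, each route gets its own service method; a sketch (the GameConsoleCount DTO and the count logic are assumed):

public class GameConsoleCount { }

public object Get(GameConsole request)
{
    return mRepository.GetConsoles();
}

public object Get(GameConsoleCount request)
{
    return mRepository.GetConsoles().Count;
}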
The other option is to have only a single coarse-grained service that returns the combined dataset of both (i.e. one that also contains the count), so that the same service can fulfill both requests.
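A sketch of that coarse-grained shape (the response type and property names are illustrative, assuming GetConsoles() returns a List<GameConsole>):

public class GameConsolesResponse
{
    public List<GameConsole> Results { get; set; }
    public int Count { get; set; }
}

public object Get(GameConsole request)
{
    var consoles = mRepository.GetConsoles();
    return new GameConsolesResponse { Results = consoles, Count = consoles.Count };
}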
In very similar situations, I have been creating a subclass DTO for each separate routing service, inheriting the shared elements.
It has been working very well.
So the pattern is:
public class SharedRequestDto
{
    public string CommonItem { set; get; }
    public string CommonId { set; get; }
}
then
[Route("/api/mainservice")]
public class MainServiceRequest : SharedRequestedDto
{
}
[Route("/api/similarservice")]
public class SimilarServiceRquest : SharedRequestDto
{
public string AddedItem { set; get; }
}
This allows differing but similar DTOs to be routed to individual services to process them. There is no need to perform introspection.
You can still use common code when necessary behind the concrete services because they can assume that their request object parameter is a SharedRequestDto.
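For example, shared logic can target the base type directly (a sketch; the helper and its validation rule are made up for illustration):

public static class RequestHelpers
{
    // works for MainServiceRequest and SimilarServiceRequest alike
    public static void ValidateCommonFields(SharedRequestDto request)
    {
        if (string.IsNullOrEmpty(request.CommonId))
            throw new ArgumentException("CommonId is required");
    }
}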
It probably is not the right solution for every use case, but it is effective, especially since many of my DTOs are in families that share a great deal of data.
I am trying to document an existing API using the OpenAPI spec (specifically using Swashbuckle and ASP.NET Core).
For many of the endpoints, the API uses a single query parameter which is a filter object (holding the actual parameters) that is base64-encoded.
I have successfully added the Swashbuckle library and can generate a swagger.json.
The generated spec however does not correctly describe the endpoints described above. Rather, the property names of the filter object are stated as query parameters, and thus autogenerated clients based off the spec do not work.
The spec mentions base64 only in relation to the format of String and File, not Object.
Is it possible (and if so, how) to describe this type of endpoint in OpenAPI?
Is it possible (and if so, how) to generate this description correctly using Swashbuckle?
EDIT
In response to a comment (and probably necessary for answering subquestion 2):
An endpoint in the API source may look something like:
[HttpGet("")]
public async Task<IActionResult> Query([FromQuery] ThingFilter filter)
{
var results = await _dataContext.ThingService.Search(filter);
return Ok(results);
}
And a ThingFilter might be something like:
public class ThingFilter
{
    public string Freetext { get; set; }
    public List<PropertyFilter> PropertyFilters { get; set; }
}
Startup.cs also registers a custom model binder that handles the conversion from base64.
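Such a binder might look roughly like this (a sketch only; the class name is made up, it assumes the query string carries base64-encoded JSON, and it uses System.Text.Json for brevity):

using System;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.ModelBinding;

public class Base64JsonModelBinder : IModelBinder
{
    public Task BindModelAsync(ModelBindingContext bindingContext)
    {
        // read the raw query-string value, e.g. ?filter=eyJGcmVldGV4dCI6Li4u
        var value = bindingContext.ValueProvider
            .GetValue(bindingContext.ModelName).FirstValue;

        if (string.IsNullOrEmpty(value))
        {
            bindingContext.Result = ModelBindingResult.Failed();
            return Task.CompletedTask;
        }

        // decode base64, then deserialize the JSON into the target filter type
        var json = Encoding.UTF8.GetString(Convert.FromBase64String(value));
        var model = JsonSerializer.Deserialize(json, bindingContext.ModelType);
        bindingContext.Result = ModelBindingResult.Success(model);
        return Task.CompletedTask;
    }
}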
Hi, I have what is probably a common problem that I don't think can be entirely solved by Autofac or any IoC container. It may be a design problem that I need some fresh input on.
I have the classic MVC web solution with EF 6. It's been implemented in a true DDD style with an anti-corruption layer, three bounded contexts, and cross-cutting concerns moved out to infrastructure projects. It has been a real pleasure to see all the pieces fall into place. We also added Commands for CUD operations in the Domain.
Now here is the problem. The customer wants a change log that tracks every entity's properties, so when updates are done we need to save the before and after values into the change log. We have implemented that successfully in an ILoggerService that wraps a Microsoft test utility we use to detect changes. But I (my role is Software Architect) took the decision to decorate our generic repositories with a ChangeTrackerRepository that has a dependency on ILoggerService. This works fine. The decorator tracks the Add(…) and Modify(…) methods in our IRepository<TEntity>.
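In outline, the decorator looks something like this (a sketch reconstructed from the description above; the ILoggerService method name is made up, and your IRepository<TEntity> will have more members than shown):

public class ChangeTrackerRepository<TEntity> : IRepository<TEntity>
    where TEntity : class
{
    private readonly IRepository<TEntity> _inner;
    private readonly ILoggerService _loggerService;

    public ChangeTrackerRepository(IRepository<TEntity> inner, ILoggerService loggerService)
    {
        _inner = inner;
        _loggerService = loggerService;
    }

    public void Add(TEntity entity)
    {
        _loggerService.TrackChanges(entity); // hypothetical logging call
        _inner.Add(entity);
    }

    public void Modify(TEntity entity)
    {
        _loggerService.TrackChanges(entity); // hypothetical logging call
        _inner.Modify(entity);
    }

    // ...the remaining IRepository<TEntity> members simply delegate to _inner...
}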
The problem is that we also have custom repositories with custom queries like this:
public class CounterPartRepository : Repository<CounterPart>, ICounterPartRepository
{
    public CounterPartRepository(ManagementDbContext unitOfWork)
        : base(unitOfWork)
    {}

    public CounterPart GetAggregate(Guid id)
    {
        return GetSet().CompleteAggregate().SingleOrDefault(s => s.Id == id);
    }

    public void DeleteCounterPartAddress(CounterPartAddress address)
    {
        RemoveChild(address);
    }

    public void DeleteCounterPartContact(CounterPartContact contact)
    {
        RemoveChild(contact);
    }
}
We also have simple repositories that just close the generic repository and get the proper EF bounded context injected (Unit of Work pattern):
public class AccrualPeriodTypeRepository : Repository<AccrualPeriodType>, IAccrualPeriodTypeRepository
{
    public AccrualPeriodTypeRepository(ManagementDbContext unitOfWork)
        : base(unitOfWork)
    {
    }
}
When we decorate AccrualPeriodTypeRepository with Autofac through a generic decorator, we can easily inject that repo into a command handler like this:
public AddAccrualPeriodCommandHandler(IRepository<AccrualPeriod> accrualRepository)
This works fine.
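For reference, the generic-decorator registration I mean looks something like this in a recent Autofac (a sketch; older Autofac versions need the keyed fromKey/toKey overload instead):

var builder = new ContainerBuilder();

// register the open generic repository as the implementation to be wrapped
builder.RegisterGeneric(typeof(Repository<>))
    .As(typeof(IRepository<>));

// wrap every IRepository<T> resolution in the change-tracking decorator
builder.RegisterGenericDecorator(
    typeof(ChangeTrackerRepository<>),
    typeof(IRepository<>));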
But how do we also decorate CounterPartRepository?
I have gone through several solutions in my head and they all end up at a dead end.
1) Manually decorating every custom repository generates so many custom decorators that it becomes nearly unmaintainable.
2) Decorate the closed Repository<TEntity> with the extended custom queries. This smells bad; shouldn't they be part of that repository?
3) If we consider 2)… maybe skip our Services and rely only on IRepository for operating on our Aggregate Roots, plus IQueryHandler (see this article: https://cuttingedge.it/blogs/steven/pivot/entry.php?id=92).
I need some fresh input on what I think is a common problem: how to decorate your repositories when you have both custom closed repositories and simple closed repositories, with both inheriting from the same Repository<TEntity>.
Have you considered decorating command handlers instead of decorating repositories?
Repos are too low level, and it is not their responsibility to know what should be logged and how.
What about the following:
1) You have your command handlers in a way:
public class DeleteCounterPartAddressHandler : IHandle<DeleteCounterPartAddressCommand>
{
    // this might be set by a DI container, or passed to a constructor
    public ICounterPartRepository Repository { get; set; }

    public void Handle(DeleteCounterPartAddressCommand command)
    {
        // in DDD you always want to read an aggregate
        // and save the aggregate as a whole
        var counterPart = Repository.GetAggregate(command.CounterPartId);
        counterPart.DeleteAddress(command.AddressId);
        Repository.Save(counterPart);
    }
}
2) Now you can simply use the Chain of Responsibility pattern to "decorate" your handlers with logging, transactions, whatever:
public class LoggingHandler<T> : IHandle<T>
{
    // any logger will do; log4net's ILog is shown as an example
    private static readonly ILog _log = LogManager.GetLogger(typeof(LoggingHandler<T>));

    private readonly IHandle<T> _innerHandler;

    public LoggingHandler(IHandle<T> innerHandler)
    {
        _innerHandler = innerHandler;
    }

    public void Handle(T command)
    {
        // Obviously you do it properly, but you get the idea
        _log.Info("Before");
        _innerHandler.Handle(command);
        _log.Info("After");
    }
}
Now you have just one piece of code responsible for logging, and you can compose it with any command handler; if you ever want to log a particular command you just "wrap" it with the logging handler, and it is still your IHandle<T>, so the rest of the system is not impacted.
And you can do it with other concerns too (threading, queueing, transactions, multiplexing, routing, etc.) without messing around and plumbing this stuff here and there.
Concerns are very well separated this way.
It is also much better (to me) because you log at the level of a real business operation, rather than at the low-level repository.
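Composed by hand (a container can wire this up for you; repository and command are assumed to be in scope), the pieces fit together like so:

IHandle<DeleteCounterPartAddressCommand> handler =
    new LoggingHandler<DeleteCounterPartAddressCommand>(
        new DeleteCounterPartAddressHandler { Repository = repository });

handler.Handle(command);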
Hope it helps.
P.S. In DDD you really want your repositories to only expose aggregate-level methods, because Aggregates are supposed to take care of their invariants (and nothing else; no services, no repositories), and because an Aggregate represents a transaction boundary.
Really, it is up to the Repository how to get the Aggregate from persisted storage and how to persist it back; from the outside it should look like you ask someone for an object and it gives you an object you can call behaviors on.
So normally you would only get an aggregate from the repository, call its behavior(s) and then save it back. Which really means that your repositories would mostly have GetById and Save methods, not internals like "UpdateThatPartOfAnAggregate".
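In other words, the repository surface from the example above can stay as small as this (a sketch):

public interface ICounterPartRepository
{
    CounterPart GetById(Guid id);
    void Save(CounterPart counterPart);
}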
I am just starting out with domain driven design and have a project for my domain which is structured like this:
Domain
/Entities
/Boundaries
/UserStories
As I understand DDD, apart from the boundaries through which the outside world communicates with the domain, everything in the domain should be invisible. All of the examples I have seen of entity classes within a domain have a public access modifier; for example, here I have an entity named Message:
public class Message
{
    private string _text;

    public string Text
    {
        get { return _text; }
        set { _text = value; }
    }

    public Message()
    {
    }

    public bool IsValid()
    {
        // Do some validation on text
    }
}
Would it not be more correct if the entity class and its members were marked as internal so it is only accessible within the domain project?
For example:
internal class Message
{
    private string _text;

    internal string Text
    {
        get { return _text; }
        set { _text = value; }
    }

    internal Message()
    {
    }

    internal bool IsValid()
    {
        // Do some validation on text
    }
}
I think there's a confusion here: the Bounded Context is a concept which defines the context in which a model is valid; there aren't classes actually named Boundary. Maybe those are objects for anti-corruption purposes, but really the Aggregate Root should deal with that, or some entry point in the Bounded Context.
I wouldn't structure a Domain like this; it's artificial. You should structure the Domain according to what makes sense in the real-world process. You're using DDD to model a real-world process in code, and I haven't heard anyone outside software development talking about Entities or Value Objects. They talk about Orders, Products, Prices, etc.
Btw, that Message is almost certainly a value object, unless the Domain really needs to identify each Message uniquely. Here the Message is a Domain concept; I hope you don't mean a command or an event. And you should put the validation in the constructor or in the method where the new value is given.
In fairness, this code is way too simplistic; perhaps you've picked the wrong example. As for the classes being internal or public, they might be one or the other; it isn't a rule, it depends on many things. At one extreme you'll have the approach where almost every object is internal but implements a public interface common to the application, which can be highly inefficient.
A rule of thumb: if the class is used outside the Domain assembly, make it public; if it's something internally used by the Domain and/or implements a public interface, make it internal.
Just wanted to get the group's thoughts on how to handle configuration details of entities.
What I'm thinking of specifically is high-level settings which might be admin-changed: the sort of thing that you might ultimately store in the app or web.config, but which from the DDD perspective should be set somewhere in the objects explicitly.
For sake of argument, let's take as an example a web-based CMS or blog app.
A given blog Entry entity has any number of instance settings like Author, Content, etc.
But you also might want to set (for example) default Description or Keywords that all entries in the site should start with if they're not changed by the author. Sure, you could just make those constants in the class, but then the site owner couldn't change the defaults.
So my thoughts are as follows:
1) use class-level (static) properties to represent those settings, and set them when the app starts up, either from the DB or from the web.config,
or
2) use a separate entity for holding the settings, possibly a dictionary; either use it directly or have it be a member of the Entry class.
What strikes you all as the easiest / most flexible? My concern about the first one is that it doesn't strike me as very pluggable (if I end up wanting to add more features), since changing an entity's class methods would make me change the app itself as well (which feels like an OCP violation). The second one feels heavier, though, especially if I then have to cast or parse values out of a dictionary.
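For concreteness, option 1 might look something like this (a sketch; the property names are illustrative):

public class Entry
{
    // class-level defaults, set once at startup from the DB or web.config
    public static string DefaultDescription { get; set; }
    public static string DefaultKeywords { get; set; }

    // per-entry instance settings
    public string Author { get; set; }
    public string Content { get; set; }
    public string Description { get; set; }
}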
I would say that whether a value is configurable or not is irrelevant from the Domain Model's perspective - what matters is that it is externally defined.
Let's say that you have a class that must have a Name. If the Name is always required, it must be encapsulated as an invariant irrespective of the source of the value. Here's a C# example:
public class MyClass
{
    private string name;

    public MyClass(string name)
    {
        if (name == null)
        {
            throw new ArgumentNullException("name");
        }
        this.name = name;
    }

    public string Name
    {
        get { return this.name; }
        set
        {
            if (value == null)
            {
                throw new ArgumentNullException("value");
            }
            this.name = value;
        }
    }
}
A class like this effectively protects the invariant: Name must not be null. Domain Models must encapsulate invariants like this without any regard to which consumer will be using them - otherwise, they would not meet the goal of Supple Design.
But you asked about default values. If you have a good default value for Name, how do you communicate that default value to MyClass?
This is where Factories come in handy. You simply separate the construction of your objects from their implementation. This is often a good idea in any case. Whether you choose an Abstract Factory or Builder implementation is less important, but Abstract Factory is a good default choice.
In the case of MyClass, we could define the IMyClassFactory interface:
public interface IMyClassFactory
{
    MyClass Create();
}
Now you can define an implementation that pulls the name from a config file:
public class ConfigurationBasedMyClassFactory : IMyClassFactory
{
    public MyClass Create()
    {
        var name = ConfigurationManager.AppSettings["MyName"];
        return new MyClass(name);
    }
}
Make sure that code that needs instances of MyClass uses IMyClassFactory to create them instead of newing them up manually.
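The consuming side might then look like this (a sketch; the consumer class is made up for illustration):

public class MyClassConsumer
{
    private readonly IMyClassFactory factory;

    public MyClassConsumer(IMyClassFactory factory)
    {
        this.factory = factory;
    }

    public void DoWork()
    {
        // the consumer never knows or cares where Name came from
        MyClass myClass = this.factory.Create();
        // ...
    }
}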
I'm wondering if it's bad practice to use log4net directly in my domain objects. I'll be using ELMAH for my exceptions on the ASP.NET MVC application side, but for some informational purposes I'd like to log some data about the domain model itself.
Given the following domain object:
public class Buyer
{
    private int _ID;
    public int ID
    {
        get { return _ID; }
        set { _ID = value; }
    }

    private IList<SupportTicket> _SupportTickets = new List<SupportTicket>();
    public IList<SupportTicket> SupportTickets
    {
        get
        {
            return _SupportTickets.ToList<SupportTicket>().AsReadOnly();
        }
    }

    public void AddSupportTicket(SupportTicket ticket)
    {
        if (!SupportTickets.Contains(ticket))
        {
            _SupportTickets.Add(ticket);
        }
    }
}
Is adding logging behavior in the AddSupportTicket method a bad idea? Essentially it'd look like this:
public class Buyer
{
    protected static readonly ILog log = LogManager.GetLogger(typeof(Buyer));

    public Buyer()
    {
        log4net.Config.XmlConfigurator.Configure();
    }

    private int _ID;
    public int ID
    {
        get { return _ID; }
        set { _ID = value; }
    }

    private IList<SupportTicket> _SupportTickets = new List<SupportTicket>();
    public IList<SupportTicket> SupportTickets
    {
        get
        {
            return _SupportTickets.ToList<SupportTicket>().AsReadOnly();
        }
    }

    public void AddSupportTicket(SupportTicket ticket)
    {
        if (!SupportTickets.Contains(ticket))
        {
            _SupportTickets.Add(ticket);
        }
        else
        {
            log.Warn("Duplicate Ticket Not Added.");
        }
    }
}
I have used log4net and log4J directly in domain objects. This has good side effects and bad ones.
+: Logging in the domain object is simple and straightforward to code and you know you can take advantage of log4net features.
--: It means the program making use of the domain objects needs to pay attention to log4net configuration, which may or may not be a problem.
--: You cannot link your domain object to a different log4net version than the calling program is using. I've seen a lot of conflicts with one item linked against log4net 1.2.0.10 and another linked against an earlier release.
Not logging in your domain object at all is a bad idea. The alternative is, as others have suggested, dependency injection, or an external framework (such as commons-logging for log4j) that allows plugging in different logging frameworks, or creating an interface that does the logging and logging against that interface. (The code using your domain object would then need to supply an appropriate instance of that interface for logging purposes.)
If you are going to log from your domain objects and you use an IoC container which you might want to swap out, I would recommend the Service Locator pattern (you could look at the S#arp Architecture project for a nice implementation of a SafeServiceLocator that wraps Microsoft's ServiceLocator with more informative error messages).
I would also like to suggest that you consider whether you want to log the type of error you show in your example. I would tend to want to have the domain object throw an exception in that case and let the caller decide whether that was something that was expected by the application (and hence shouldn't be logged) or whether that represents a situation that the caller wants to deal with in some way.
This is a classic question!
The good way of doing this would be to introduce a class member of an ILogger type and abstract the logging behind this interface. In your class, wherever you make a call to log something, do it through this interface. Then inject this dependency at run-time with one of the implementations, using one of the available IoC containers or dependency injection frameworks. By default you can use a log4net implementation of this interface.
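A minimal sketch of that shape (ILogger here is your own abstraction, not any particular framework's type):

public interface ILogger
{
    void Warn(string message);
}

public class Log4NetLogger : ILogger
{
    private static readonly log4net.ILog log =
        log4net.LogManager.GetLogger(typeof(Log4NetLogger));

    public void Warn(string message)
    {
        log.Warn(message);
    }
}

public class Buyer
{
    private readonly ILogger _logger;

    public Buyer(ILogger logger)
    {
        _logger = logger; // injected at run-time by your container
    }

    // AddSupportTicket can now call _logger.Warn(...) without a hard log4net dependency
}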
Here is a long list of available dependency injection frameworks:
http://www.hanselman.com/blog/ListOfNETDependencyInjectionContainersIOC.aspx
I think logging is a cross cutting concern, so it's best done in an aspect-oriented fashion. If you're using a framework like Spring.NET it's available to you.