Without getting into all of the gory details, I am trying to design a service-based solution that will be consumed by several client applications. The solution allows admins to create and modify document templates which are used by regular users to perform data entry. It is my intent to make the application a learning tool for best practices, techniques, etc.
And, at the same time, I have to accommodate an ever-shifting environment because the 'powers that be' cannot ever stick to their decisions regarding technologies and tools. For example, I am using Linq-to-SQL today because they aren't ready to go to EF4, but there is also discussion about switching over to NHibernate. So, I have to make the code as persistence-ignorant as possible to minimize the work required should we change OR/M tools.
At this point, I am also limited to using the partial class approach to extend the Linq-to-SQL classes so they implement interfaces defined in my business layer. I cannot go with POCOs because management insists that we leverage all of the built-in tooling, so I must support the Linq-to-SQL designer.
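To illustrate, the partial-class approach looks roughly like this (ISession and its members are placeholders; the other half of Session lives in the designer-generated file):

    // Business layer: a persistence-ignorant contract.
    public interface ISession
    {
        int TemplateId { get; set; }
        string UserName { get; set; }
        System.DateTime LastActivity { get; set; }
    }

    // Data layer: the designer generates "public partial class Session" with
    // the mapped column properties; this half only declares that the generated
    // class fulfills the business-layer contract. If the generated property
    // names and types already match the interface, nothing else is needed here.
    public partial class Session : ISession
    {
    }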
That said, my service interface has a StartSession method that accepts a template identifier in its signature. The operation flows like this:
If a session already exists in the database for the current user and specified template, update the record to show the current activity. If not, create a new session object.
The session is associated with an instance of the template; call it the "form". So if the session is new, I need to retrieve the template information to create the new "form", associate it with the session, then save the session to the database. On the other hand, if the session already existed, then I also need to load the "form" with the data the user entered and stored in the session previously.
Finally, the session (with form definition and data) is returned to the caller.
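In code, the flow I have in mind looks something like this (just a sketch; the unit-of-work and repository members are described further down, and names like GetByUserAndTemplate and Form.CreateFrom are hypothetical):

    public Session StartSession(string userName, int templateId)
    {
        // Look for an existing session for this user/template pair.
        var session = _unitOfWork.Sessions.GetByUserAndTemplate(userName, templateId);

        if (session == null)
        {
            // New session: retrieve the template and build the "form" from it.
            var template = _unitOfWork.Templates.GetById(templateId);
            session = new Session(userName, Form.CreateFrom(template));
            _unitOfWork.Sessions.Add(session);
        }
        else
        {
            // Existing session: rehydrate the form with previously entered data.
            session.Form.LoadData(session.SavedData);
        }

        // Either way, record the current activity and persist.
        session.LastActivity = System.DateTime.UtcNow;
        _unitOfWork.Commit();

        return session;
    }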
My first objective is to create clean separation between the logical layers of my application. The second is to maintain persistence ignorance (as mentioned above). Third, I have to be able to test everything so all dependencies must be externalized for easy mocking. I am using Unity as an IoC tool to help in this area.
To accomplish this, I have defined my service class and data contracts as needed to support the service interface. The service class will have a dependency injected from the business layer that actually performs the work. And here's where it has gotten messy for me.
I've been trying to go the Unit of Work and Repository route to help with persistence ignorance. I have an ITemplateRepository and an ISessionRepository which I can access from my IUnitOfWork implementation. The service class gets an instance of my SessionManager class (in my BLL) injected. The SessionManager receives the IUnitOfWork implementation through constructor injection and delegates all persistence to the UoW, but I find myself playing a shell game with the various logic.
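The shape of those abstractions, roughly (member names are my own):

    public interface ITemplateRepository
    {
        Template GetById(int templateId);
    }

    public interface ISessionRepository
    {
        Session GetByUserAndTemplate(string userName, int templateId);
        void Add(Session session);
    }

    public interface IUnitOfWork : System.IDisposable
    {
        ITemplateRepository Templates { get; }
        ISessionRepository Sessions { get; }
        void Commit();
    }

    public class SessionManager
    {
        private readonly IUnitOfWork _unitOfWork;

        // Unity resolves IUnitOfWork to the Linq-to-SQL implementation today,
        // and to whatever OR/M is flavor of the month tomorrow.
        public SessionManager(IUnitOfWork unitOfWork)
        {
            _unitOfWork = unitOfWork;
        }

        // StartSession (sketched earlier) lives here and only ever
        // touches the interfaces above.
    }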
Should all of the logic described above be in the SessionManager class or perhaps the UoW implementation? I want as little logic as possible in the repository implementations because changing the data access platform could result in unwanted changes to the application logic. Since my repository is working against an interface, how do I best go about creating the new session (keeping in mind that a valid session has a reference to the template, er, form being used)? Would it be better to still use POCOs even though I have to support the designer and use a tool like AutoMapper inside the repository implementation to handle translating the objects?
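To make that last option concrete, a POCO-returning repository with AutoMapper inside might look like this (a sketch only; SessionPoco is a hypothetical persistence-ignorant class, MyDataContext is the designer-generated context, and the AutoMapper configuration is assumed to happen at startup):

    using System.Linq;
    using AutoMapper;

    public class LinqToSqlSessionRepository : ISessionRepository
    {
        private readonly MyDataContext _context;   // designer-generated DataContext
        private readonly IMapper _mapper;          // AutoMapper, configured at startup

        public LinqToSqlSessionRepository(MyDataContext context, IMapper mapper)
        {
            _context = context;
            _mapper = mapper;
        }

        public SessionPoco GetByUserAndTemplate(string userName, int templateId)
        {
            var row = _context.Sessions.SingleOrDefault(
                s => s.UserName == userName && s.TemplateId == templateId);

            // Translate the Linq-to-SQL entity into the persistence-ignorant
            // POCO so nothing above this layer ever sees a DataContext type.
            return row == null ? null : _mapper.Map<SessionPoco>(row);
        }

        public void Add(SessionPoco session)
        {
            _context.Sessions.InsertOnSubmit(_mapper.Map<Session>(session));
        }
    }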
Ugh!
I know I am just stuck in analysis paralysis, so a little nudge is probably all I need. What would be ideal is if someone could provide an example of how you would solve the problem given the business rules and architectural constraints I've defined.
If you don't use POCOs then you're not really going to be data-store agnostic. And using POCOs will allow you to get your system up and running with memory-based repositories, which is what you'll likely want to use for your unit tests anyhow.
AutoMapper sounds nice, but I wouldn't consider it a deal breaker. Mapping POCOs to EF4, LinqToSql, or NHibernate isn't that time-consuming unless you have hundreds of tables. When/if your POCOs begin to diverge from your persistence layer, you might find that AutoMapper won't really fit the bill.
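For example, a memory-based repository is tiny once your interfaces traffic in POCOs (assuming a repository shape like the one in your question):

    using System.Collections.Generic;
    using System.Linq;

    public class InMemorySessionRepository : ISessionRepository
    {
        private readonly List<SessionPoco> _sessions = new List<SessionPoco>();

        public SessionPoco GetByUserAndTemplate(string userName, int templateId)
        {
            return _sessions.SingleOrDefault(
                s => s.UserName == userName && s.TemplateId == templateId);
        }

        public void Add(SessionPoco session)
        {
            _sessions.Add(session);
        }
    }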
I am trying to understand the purpose of injecting service providers into a NestJS controller. The documentation explains how to use them; that's not the issue here: https://docs.nestjs.com/providers
What I am trying to understand is this: in most traditional web applications, regardless of platform, a lot of the logic that goes into a NestJS service would normally go right into the controller. Why did NestJS decide to move the provider into its own class/abstraction? What design advantage does the developer gain here?
Nest draws inspiration from Angular, which in turn drew inspiration from enterprise application frameworks like .NET and Java's Spring Boot. In these frameworks, the biggest concerns are Separation of Concerns (SoC) and the Single Responsibility Principle (SRP), which mean that each class deals with a specific function and, for the most part, can do so without really knowing much about other parts of the application (which leads to loosely coupled designs).
You could, if you wanted, put all of your business logic in a controller and call it a day. After all, that would be the easy thing to do, right? But what about testing? You'd need to send in a full request object for each piece of functionality you want to test. You could then make a request factory that builds these requests for you so it's easier to test, but now you're also looking at needing to test the factory to make sure it produces requests correctly (so now you're testing your test code). If you break apart the controller and the service, the controller can be tested to verify that it just returns whatever the service returns, and that's that. The service can then take a specific input (like from the @Body() decorator in NestJS) and have a much easier input to work with and test.
By splitting the code up, the developer gains flexibility in maintenance and testing, and some autonomy if you are on a team: with interfaces set up, you know what kind of behavior you'll be getting from an injected service without needing to know how the service works in the first place. If you still aren't convinced, you can also read up on modular programming, coupling, and Inversion of Control.
We have recently decided to adopt DDD on my team for our new projects because of its many obvious benefits (we come from the Active-Record pattern school), and there are a couple of things that are still unclear.
Say I have an entity Transaction that depends on the following entities (each of which in turn depends on many other entities):
1. Customer
2. Account
3. Currency
When I make use of factories to instantiate a Transaction entity to pass to a Domain Service for some fancy business rules, do I really make that many queries to set up all these dependent instances?
If I have overloads in my factory that skip such dependencies, then those will be null in some cases, and it will become too complicated to differentiate when I can access those properties and when I cannot. With the Active-Record pattern I just use lazy loading and have them load only on demand. Any ideas with DDD?
EDIT:
In my scenario, “Transaction” seems to be the best candidate for an Aggregate root. I have defined a method in my Application Service, “InitiateTransaction” (I also have a “FinalizeTransaction”, as the flow involves a redirect to PayPal), which takes as parameters the DTOs needed to carry AccountId, CurrencyId, LanguageId, and various other foreign keys, as well as Transaction attributes.
When calling my Domain Services (Transaction Processor and Fraud Rule Evaluator), I need to specify the “Transaction” Aggregate with all dependencies loaded (“Transaction.Customer”, “Transaction.Currency”, etc.).
So if I am correct, the steps required are (sketched in code after the list):
1. Call some repository(ies) to retrieve Customer, Currency etc.
2. Call TransactionFactory with dependencies specified above to get a Transaction object
3. Call Domain Services with fully loaded Transaction object for business rules to take place
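Something like this, I imagine (the repository, factory, and domain-service fields are my own invention):

    public Transaction InitiateTransaction(InitiateTransactionDto dto)
    {
        // Step 1: load the dependencies from their repositories.
        var customer = _customerRepository.GetById(dto.CustomerId);
        var account  = _accountRepository.GetById(dto.AccountId);
        var currency = _currencyRepository.GetById(dto.CurrencyId);

        // Step 2: let the factory assemble the aggregate.
        var transaction = _transactionFactory.Create(customer, account, currency, dto);

        // Step 3: hand the fully loaded aggregate to the domain services.
        _fraudRuleEvaluator.Evaluate(transaction);
        _transactionProcessor.Process(transaction);

        _transactionRepository.Add(transaction);
        return transaction;
    }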
Correct? Additionally, my concern was about steps 1 and 2.
If “Customer”, “Currency”, and the other Entities/Value Objects that “Transaction” depends on have other dependencies in turn, do I try to set those up as well? It seems to me that if I do, I will end up with very bloated code in my Application Service that is not very reusable to place in a separate method. However, if I don't, and just retrieve those from a repository with a “GetById(id)” as you suggested, my code could end up buggy: say I need the property “Transaction.Customer.CreatedByUser”, which returns a “User” instance; it will be null because repositories only load flat instances.
EDIT:
I ended up using GetById(id) to load only the dependencies I knew were needed in my Services. I am not a big fan of accidentally accessing null instances due to flat loading, but I have my unit tests to protect me from taking that to production!!
I highly doubt that Currency is an entity; however, it's important to model things according to how they are defined and used by the real domain. Forget factories or other implementation details like the db; you need to make sure you have defined the concepts right.
Once you've done that, you'll have identified the aggregate root as well. Btw, the entities should encapsulate the relevant business rules. Use Services to implement use cases, i.e. to manage the interaction between the domain objects and other parts such as the repository.
You should keep EVERYTHING related to the db and CRUD in the repository, and have the repo work only with aggregate roots. Also, for querying purposes, you should use CQRS so that all queries are done against a read model. For domain purposes, a Get(id) is 99% enough, and that method returns an aggregate root.
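In code, that comes down to something like this (treating Currency as a value object is an assumption; verify it against how the business actually talks about currencies):

    // Currency as a value object: no identity, defined entirely by its value.
    public sealed class Currency
    {
        public string IsoCode { get; private set; }

        public Currency(string isoCode)
        {
            IsoCode = isoCode;
        }

        public override bool Equals(object obj)
        {
            var other = obj as Currency;
            return other != null && other.IsoCode == IsoCode;
        }

        public override int GetHashCode()
        {
            return IsoCode.GetHashCode();
        }
    }

    // The repository deals only in the aggregate root; Get(id) returns the
    // whole Transaction, never a bare Customer or Currency.
    public interface ITransactionRepository
    {
        Transaction Get(System.Guid id);
        void Save(Transaction transaction);
    }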
Be aware that DDD is very tricky; the most difficult part is modeling the domain correctly, and all the buzzwords are useless if the model is wrong.
I am new to Liferay. Can anyone please suggest some way to generate the service.xml for an existing database? (There is a discussion of this on the Liferay website.) I hope people have developed some way to do this, or that Liferay has developed some plugin for it.
I see no particular use in introducing Service Builder to large existing databases: you can connect Service Builder entities to "legacy datasources" or "legacy tables" (those make good search terms), but service.xml generation has not been done, AFAIK.
Some problems with this approach are:
Service Builder makes certain assumptions about operations in a database. It is designed to encapsulate all the different databases that Liferay runs on, and thus might not use every database to its fullest extent.
If you have a large existing database, you probably have a lot of existing business logic to make sure correct data goes in and out of the database. You might even work with stored procedures etc.
While you can make Service Builder work with stored procedures, you'd have to introduce custom SQL to work around Service Builder's assumptions. The same goes for explicit foreign key relationships, etc.
My recommendation is rather to put a proper interface on the existing business logic, e.g. web services, JSON, REST, whatever is popular, and then use this interface in Liferay's portlets.
Another option might be to bring the existing persistence code into Liferay and just expose services without making use of the persistence features of Service Builder. For this you'd just define empty <entity> blocks (with names, etc.). This will generate the appropriate DoSomethingLocalService but omit the persistence implementation, and you can wire your existing code into these services.
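Roughly like this (package, namespace, and entity names are just examples):

    <service-builder package-path="com.example.legacy">
        <namespace>Legacy</namespace>
        <!-- No <column> elements: Service Builder still generates the
             DocumentLocalService scaffolding, but no persistence layer
             worth speaking of, which you then back with your own code. -->
        <entity name="Document" local-service="true" remote-service="false" />
    </service-builder>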
You can go through the link below to understand Service Builder in Liferay:
https://www.liferay.com/documentation/liferay-portal/6.0/development/-/ai/service-build-2
The link below also has a sample Service Builder portlet:
https://www.liferay.com/community/forums/-/message_boards/message/17609606
Hope it helps!
Not done yet, AFAIK. Since Liferay does not directly support all database features, like foreign keys, one-to-n mappings, etc., such reverse engineering is a challenge. But you can give it a try.
Service Builder is generally a nice feature for creating relatively small databases and simple business logic, while giving you the advantage that your tables will be auto-generated when you deploy your portlet, and you get finders (search by X attribute) with no effort. If this is the case with your database, it will be much easier to create a new service.xml from scratch.
Other than that, I think that having an extended database in Liferay's Service Builder will introduce more problems and slow down development while you're implementing complex business logic, creating custom finders whenever you need to query on a join of tables, and so on. So it seems quite normal to me that a conversion of a database to Service Builder is not available.
In other words, if your database is too large to write out in service.xml, you shouldn't use Service Builder in the first place.
I am researching to start a new project based on Liferay.
It relies on a system that will require its own data model and a certain agility and flexibility in data management, as well as in its visualization.
These are my options:
Using Liferay Expando fields to define my own data models. I would have to build the entire view layer myself.
Using the Liferay ECMS, adding patches that create structures and hooks allowing me to define master-detail data models. This makes the view side much easier (Velocity templates), but it is perhaps the "dirtiest" approach.
Generating the data layer and access to services with Hibernate and Spring (using a service factory, for example).
Liferay Service Builder would be similar to the option of creating the platform with Hibernate and Spring.
CRUD generation systems such as OpenXava or XMLPortletFactory.
And now my question: what is your advice? What advantages or disadvantages do you think each option would provide?
Thanks in advance.
I can't speak for the other CRUD generation systems but I can tell you about the Liferay approaches.
I would take a hybrid approach.
First, I would create the required data models as best I can with the current requirements in Liferay Service Builder and maintain them there as much as possible. This would require that you rebuild and redeploy your plugin every time you change the data model, but it would greatly enhance performance compared to all the other Liferay approaches you've mentioned. Service Builder in that regard is much more rigid and cannot be changed via the GUI.
However, in the event that for some reason you cannot use Service Builder to redefine your data models and you need certain aspects of them to be changed via the GUI, you can also use Expandos to extend the models you've created with Service Builder. So, it is the best of both worlds.
As for the other option, using the ECMS would be a specialized case, and I would only take this approach if there is a particular requirement it satisfies (like integration with the ECMS).
With that said, Liferay provides you many different ways to create your application. It ultimately depends on how you're going to use your application.
I have an entity factory which requires access to the file system to construct its objects. I have created an IFileSystem interface, which is being injected into the factory.
Is this the correct way to use an infrastructure service? If so, is it recommended to do the same for the entity itself, since important methods on the entity will also need to manipulate the file system?
It is hard to answer this question without knowing what domain you are working on. It does not seem right, though, because this would be similar to injecting something like IDatabase. File systems and databases are persistence technologies, and domain logic should be as persistence-agnostic as possible. So you might want to reevaluate this design if your Ubiquitous Language does not include the concept of a 'file system'. You can simply restate your intent in more domain-centric terms, like ICustomerConstructionInfoProvider, and then inject that interface's implementation similarly to how you would inject repository implementations.
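For example (the interface, its members, and the file layout are all invented for illustration; the Customer type is elided):

    // Domain layer: speaks the ubiquitous language, knows nothing about files.
    public interface ICustomerConstructionInfoProvider
    {
        string GetConstructionInfo(int customerId);
    }

    public class CustomerFactory
    {
        private readonly ICustomerConstructionInfoProvider _infoProvider;

        public CustomerFactory(ICustomerConstructionInfoProvider infoProvider)
        {
            _infoProvider = infoProvider;
        }

        public Customer Create(int customerId)
        {
            return new Customer(customerId, _infoProvider.GetConstructionInfo(customerId));
        }
    }

    // Infrastructure layer: the only place that knows the data lives on disk.
    public class FileConstructionInfoProvider : ICustomerConstructionInfoProvider
    {
        private readonly string _rootPath;

        public FileConstructionInfoProvider(string rootPath)
        {
            _rootPath = rootPath;
        }

        public string GetConstructionInfo(int customerId)
        {
            return System.IO.File.ReadAllText(
                System.IO.Path.Combine(_rootPath, customerId + ".txt"));
        }
    }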