Can a Core project in Clean Architecture depend on a NuGet package?

I have a Core project in which I need to do some cryptographic operations, e.g. verifying a SHA256 hash. What can I do, given that a Core project shouldn't depend on anything? Do I have to write my own cryptographic functions that are resistant to, say, side-channel attacks? That would itself cause security problems.
So what should I do? Can my Core project depend on a NuGet package if I use Clean Architecture?

The guideline regarding dependencies is to keep the core project as simple as possible so that most of its logic is about solving the business problem.
By keeping it simple, it's much easier to express which part of the business domain the classes solve. It's also easy to write focused tests that prove that the code can solve the correct part of the business problem.
To me, preventing attacks is not a part of that. It's something that should be done on inbound API calls before the domain is called. I would put that logic in application services. Those services can, of course, live in the Core project but not in any of the bounded contexts.

In Clean Architecture we try to keep the domain and application logic as independent from external libraries and frameworks as possible so that we do not depend on their future development.
Nevertheless the application logic will have to interact with external libraries, services and other IO which is achieved via "dependency inversion": the application logic defines an interface which is implemented by the outer layers (infrastructure).
This way the application logic remains "clean" and can focus on decision making, while you can still reuse external libraries and services.
You can find a more detailed discussion of this topic here: http://www.plainionist.net/Implementing-Clean-Architecture-Frameworks/
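As a minimal sketch of that inversion (hypothetical names, written in TypeScript with Node's built-in crypto module standing in for the NuGet package; in a real solution these would be separate projects): the core owns the port, and the infrastructure adapter owns the library dependency.

```typescript
// Core: defines only the port. No crypto dependency here.
export interface HashVerifier {
  verifySha256(payload: Buffer, expectedHex: string): boolean;
}

// Application service in Core: depends on the abstraction only.
export class DocumentIntakeService {
  constructor(private readonly verifier: HashVerifier) {}

  accept(payload: Buffer, expectedHex: string): void {
    if (!this.verifier.verifySha256(payload, expectedHex)) {
      throw new Error("checksum mismatch");
    }
    // ...hand the verified payload to the domain...
  }
}

// Infrastructure: the adapter that actually uses the platform library.
import { createHash, timingSafeEqual } from "node:crypto";

export class NodeHashVerifier implements HashVerifier {
  verifySha256(payload: Buffer, expectedHex: string): boolean {
    const actual = createHash("sha256").update(payload).digest();
    const expected = Buffer.from(expectedHex, "hex");
    // timingSafeEqual is a constant-time comparison, but it throws if the
    // lengths differ, so compare lengths first.
    return actual.length === expected.length && timingSafeEqual(actual, expected);
  }
}
```

The side-channel-resistant comparison the question worries about lives entirely in the adapter; the core never sees the library.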

Related

Is a Service provider really necessary in NestJS?

I am trying to understand the purpose of injecting service providers into a NestJS controller. The documentation explains how to use them, so that's not the issue here: https://docs.nestjs.com/providers
What I am trying to understand is this: in most traditional web applications, regardless of platform, a lot of the logic that goes into a NestJS service would normally go right into the controller. Why did NestJS decide to move the provider into its own class/abstraction? What are the design advantages gained here for the developer?
Nest draws inspiration from Angular, which in turn drew inspiration from enterprise application frameworks like .NET and Java Spring Boot. In these frameworks, the biggest concerns are Separation of Concerns (SoC) and the Single Responsibility Principle (SRP), which mean that each class deals with a specific function and, for the most part, can do so without really knowing much about other parts of the application (which leads to loosely coupled designs).
You could, if you wanted, put all of your business logic in a controller and call it a day. After all, that would be the easy thing to do, right? But what about testing? You'd need to send in a full request object for each piece of functionality you want to test. You could then make a request factory that builds these requests for you so it's easier to test, but now you also need to test the factory to make sure it produces requests correctly (so now you're testing your test code). If you break apart the controller and the service, the controller can be tested simply for returning whatever the service returns, and that's that. The service can then take a specific input (like from the @Body() decorator in NestJS) and have a much easier input to work with and test.
By splitting the code up, the developer gains flexibility in maintenance and testing, plus some autonomy if you are on a team: with interfaces set up, you know what you'll be getting from an injected service without needing to know how the service works in the first place. If you still aren't convinced, you can also read up on Module Programming, Coupling, and Inversion of Control.
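A minimal sketch of that split (hypothetical OrdersService and DTO, not taken from the NestJS docs): the controller only adapts HTTP, so the service can be unit-tested with a plain object instead of a full request.

```typescript
import { Body, Controller, Injectable, Post } from "@nestjs/common";

interface CreateOrderDto {
  sku: string;
  quantity: number;
}

// The service holds the business logic and can be tested with a plain object.
@Injectable()
export class OrdersService {
  create(dto: CreateOrderDto) {
    // ...business rules live here, not in the controller...
    return { ...dto, status: "created" };
  }
}

// The controller only translates the HTTP request into a service call.
@Controller("orders")
export class OrdersController {
  constructor(private readonly orders: OrdersService) {}

  @Post()
  create(@Body() dto: CreateOrderDto) {
    return this.orders.create(dto);
  }
}
```

Testing OrdersService now needs no request factory at all: new OrdersService().create({ sku: "A1", quantity: 2 }) is the whole arrangement. (Module wiring, validation, and error handling are omitted for brevity.)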

Integration of bounded contexts locally

In "Implementing Domain-Driven Design", Vernon give detailed examples for integrating bounded context with a messaging or REST based solution, it also mention database integration, but I understand it is not a very clean solution to share database or at least db tables between BC.
But what if the 2 BCs I want to integrate are hosted locally on the same server, is it really a good idea to use a messaging/rest/rpc solution ? (which seems more suitable for a remotely hosted BC to me)
Otherwise, except with DB integration, what are the other alternatives ? Hosting both BC in the same process and calling it directly (still using adapters and translators for clean seperation) ?
Thanks
You could look into using something like 0MQ for inter-process communication on the same server. In the past I've also hosted things in the same process, as you suggest, and used interfaces / in-memory messaging to separate out the contexts.
Everything is about trade-offs in the end, so you just need to decide what level of isolation you are willing to accept. The simplest solution would be to separate inside a solution via folders and interfaces, the other end of the spectrum being completely separate servers.
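If you pick the same-process end of that spectrum, the contexts can still stay behind interfaces. A minimal sketch of in-memory messaging between two contexts (hypothetical topics and payloads, not from the book):

```typescript
type Handler<T> = (event: T) => void;

// A deliberately simple in-memory bus: no durability, no guaranteed delivery.
class InMemoryBus {
  private handlers = new Map<string, Handler<unknown>[]>();

  subscribe<T>(topic: string, handler: Handler<T>): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(handler as Handler<unknown>);
    this.handlers.set(topic, list);
  }

  publish<T>(topic: string, event: T): void {
    for (const handler of this.handlers.get(topic) ?? []) {
      handler(event);
    }
  }
}

// The Ordering context publishes; the Shipping context subscribes through
// its own adapter, so neither context references the other's types.
const bus = new InMemoryBus();
bus.subscribe<{ orderId: string }>("order.placed", (e) => {
  console.log(`shipping: preparing shipment for order ${e.orderId}`);
});
bus.publish("order.placed", { orderId: "42" });
```

Swapping this bus for 0MQ or a durable broker later changes only the bus implementation, which is exactly where the guaranteed-delivery trade-off raised in the next answer comes in.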
I don't think that location should come into play w.r.t. integration between BCs.
There really are other factors to consider such as guaranteed delivery to the recipient in order to ensure that the processing takes place. This should be required whether or not the two BCs are hosted on the same server.
Another reason to ignore location is that when you need to scale, your architecture should be able to handle it from the get-go.
As tomliversidge mentioned, it is possible to use deployment mechanisms such as non-durable messaging to speed things up, but there will definitely be a trade-off, and it has to be a conscious decision.

Common vs Core - difference

Assume we have a couple of libs. What is the difference between a Core and a Common library? How should they be recognized, and how do we organize the responsibilities of both?
+Common
-Class1
+Core
-Class2
+Lib1 has : Common
+Lib2 has : Core, Common
Should Common be truly common (i.e. all libs use it)? Or is Common only for those who need it?
What is good practice when refactoring / creating a project?
I don't really understand the difference between Core and Common.
I think this depends a lot on your particular application. In a single centralized app, there might be a little overlap between the Core and Common folders. But the most important thing is that the split makes sense for your app. Don't feel that you need to have those folders just because you've seen them in other apps...
For me, having Core and Common folders makes a lot of sense in some scenarios - e.g. a web app with an API and a client. You may have your Core folder on the API side, where the core execution (the business logic) takes place, and then have a Common folder with things you need on both the API and the client sides - e.g. HTTP request validation or a JSON converter.
It may also make sense to have a Core and a Common folder in other kinds of apps.
For example, the Core folder would contain those classes that are central for your app - the vast majority of business model classes would be there.
In the Common folder, on the other hand, you can have some other classes that are shared, but not central - e.g., a Logger or a MessageSender could be there...
As for your little draft of the code structure, I think your Core package is the one to revisit - why doesn't Lib1 use Core? If something is core, generally it's because everything else needs it in order to run. If you do not have code that is conceptually central, maybe you can remove your Core package and keep only Common?
As for your other question - I do not think the Common stuff must be shared by all other packages; as soon as 2 or more packages share something, it can be considered common.
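As a hedged illustration of that distinction (hypothetical classes, sketched in TypeScript): Common holds shared-but-not-central utilities, while Core holds the classes the rest of the code exists to serve.

```typescript
// Common: shared but not central. Any lib may use it, but nothing about
// the business model lives here. A logger is a typical resident.
export class Logger {
  info(message: string): void {
    console.log(`[INFO] ${new Date().toISOString()} ${message}`);
  }
}

// Core: a central business model class. If no class is conceptually
// central like this, a separate Core package may not be warranted.
export class Invoice {
  constructor(
    public readonly id: string,
    public readonly totalCents: number,
  ) {}

  isOverdue(dueDate: Date, now: Date = new Date()): boolean {
    return now > dueDate;
  }
}
```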

Transition from RestKit to pure AFNetworking 2.0

I'd been using RestKit for the last two years, but recently I've started thinking about moving away from this monolithic framework, as it seems to be real overkill.
Here are my reasons for moving on:
There is a big need to use NSURLSession for background fetches, and RestKit has only an experimental branch for the transition to AFNetworking 2.0, with no actual date for when the transition will be finished. (Main reason.)
No need for Core Data support in the networking library, as there is no need for fully functional offline data storage.
Headaches with the new concept of response/request descriptors: they don't support different parameters in path patterns (e.g. an access token parameter), and there is no way to create an object request operation in one line with a custom descriptor. Here I am losing the features of the object manager as a facade.
I. The biggest loss for me in leaving RestKit is its object mapping process.
Could you recommend standalone libraries you use that have shown themselves to be flexible and stable?
II. As I said, I need no fully functional storage, but I still need some caching support in places.
I've heard that NSURLCache has become useful in the latest OS release.
Did you use it and what's the strategy?
Does it return cached API responses when the network connection is down?
III. Does anybody face the same problems?
What solutions have you applied?
Maybe someone could give some advice about the architecture he or she uses in multiple apps with pure AFNetworking?
I. In agreement with others who have commented, AFNetworking + Mantle is a simple and effective way to interact with a RESTful API and to replace the RestKit object mapping process that you'll miss.
II. What answers your caching requirements is highly dependent on context. However, for my recent functional requirements I have found that caching a view model for a particular controller's screen, and only caching reference data returned by APIs, lets me keep the application logic relatively simple whilst giving the user some continuity. A simple error notification for connectivity issues can be handled in a cross-cutting manner.
III. One architectural thought relevant here is to ensure that the APIs the app depends on provide data shaped for the app experience. This lets the app focus on what it is good at (a very slick user experience) and moves logic into the APIs, closer to dependencies such as data. A further benefit is reducing the chattiness of the app.

OSGi application design and DDD

After reading "OSGi application design - am I abusing the service framework?" and "What is the best way of grouping OSGi bundles to make a coherent 'application'?", I'm now stuck on the burning question of how to apply the "don't program OSGi" mantra.
Assuming the bounded context is the entire application and not the individual bundles, I should be free to declare some aggregate root entity in an API bundle, and in the DDD style, a repository for working with said entity. Now, the crux of the matter: an explicit repository appears to be antithetical to the OSGi style because a repository is itself a registry (of domain objects), and this design sidesteps the OSGi service registry. But to do away with the repository would require consumers to program OSGi in order to perform lookups of an entity.
It is said that repositories are just a façade--does this imply that I should create a repository implementation which delegates to the service registry? This does not appear to be a coherent approach since consumers would have two entry points into the domain model: the repository, and the service registry itself. To forgo the repository would no longer be DDD because we're back to mixing domain logic with framework code in a spaghetti-like fashion.
So what's the take-away here? Is domain-driven design incompatible with the "OSGi way," or am I missing some critical concept?
The rationale for not depending on OSGi is that OSGi is middleware, and middleware visibility in your code makes the code less cohesive. Domain code and OSGi code should not mix (just as domain code should not mix with JMS, Java EE, or any other API). Less cohesive code is more complicated and therefore error-prone.
However, you always need some bridging code that links the infrastructure to your domain code. Since this bridging code is going to be coupled to some API anyway, leverage that API to the hilt; there is absolutely no use in abstracting yourself from the environment in bridge code.
DS (and Blueprint, iPOJO, etc.) hide the registry from the business logic; they are very convenient ways to provide your domain registry. So with OSGi you hardly have to write any code to get a very powerful repository of domain objects, without the domain objects themselves being aware of OSGi.
Yes, if you move your code from OSGi to somewhere else you will have to rewrite the bridge code, and, coming from an OSGi world, the generic functionality OSGi provides is surprisingly large to rewrite. However, refusing to use the OSGi constructs just because you would have to provide that functionality yourself in the event the app is ported is throwing away money.
Conclusion: domain code should not mix with OSGi; bridge code should leverage OSGi to the hilt.
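OSGi itself is Java, but the shape of that separation is language-independent. A hedged sketch (in TypeScript, with hypothetical names, and a plain map standing in for the OSGi service registry) of domain code that sees only a repository abstraction while a small bridge module does all the registry work:

```typescript
// Domain code: knows only these abstractions, nothing about any framework.
export interface Customer {
  id: string;
  name: string;
}

export interface CustomerRepository {
  findById(id: string): Customer | undefined;
}

// Stand-in for the framework's service registry (in OSGi, DS or Blueprint
// would play this role; here it is just a map for illustration).
const serviceRegistry = new Map<string, unknown>();

// Bridge code: the only place that touches the registry.
class InMemoryCustomerRepository implements CustomerRepository {
  private customers = new Map<string, Customer>([["1", { id: "1", name: "Ada" }]]);

  findById(id: string): Customer | undefined {
    return this.customers.get(id);
  }
}

serviceRegistry.set("CustomerRepository", new InMemoryCustomerRepository());

// Domain consumers receive the repository by injection, never by lookup.
function greet(repo: CustomerRepository, id: string): string {
  const customer = repo.findById(id);
  return customer ? `Hello, ${customer.name}` : "unknown customer";
}

const repo = serviceRegistry.get("CustomerRepository") as CustomerRepository;
console.log(greet(repo, "1"));
```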
I think you should simply register the repository as an OSGi service. The bundles that need the repository should reference the service and should only know the repository interface. Of course, this means you have to use either a ServiceTracker in an Activator or, for example, a Blueprint context. I prefer the Blueprint context, as that way you do not have a real Java-level dependency on OSGi.
Of course this creates a dependency on OSGi in some way, but that is not avoidable. The basic idea is to keep the OSGi-specific code out of your business code and have it in separate classes or in Blueprint.
See one of my tutorials, which shows a simple application with a UI, a model, and a persistence impl. While the application is too small to really do DDD, the approach should be fairly compatible. One thing you will notice is that the whole application does not import a single OSGi interface. A nice side effect is that you can easily reuse the code outside OSGi, though then of course you have to solve the DI differently.
http://www.liquid-reality.de/x/DIBZ
