Common vs Core - difference - naming

Assume we have a couple of libs. What is the difference between a Core and a Common library? How should they be recognized, and how do we organize the responsibilities of both?
+ Common
    - Class1
+ Core
    - Class2
+ Lib1 has: Common
+ Lib2 has: Core, Common
Should Common be truly common (i.e., all libs use it)? Or is Common only for those who need it?
What is good practice when refactoring / creating a project?
I don't really understand the difference between Core and Common.

I think this depends a lot on your particular application. In a single centralized app, I do think there might be a little overlap between the Core and Common folders. But the most important thing is that it makes sense for your app. Don't feel that you need to have those folders just because you've seen them in other apps...
For me, having Core and Common folders makes a lot of sense in some scenarios - e.g., a web app with an API and a client. You may have your Core folder on the API side, where the core execution (the business logic) takes place, and then have a Common folder with things you need on both the API and client sides - e.g., HTTP request validation or a JSON converter.
That said, it may make sense to have a Core and a Common folder in other kinds of apps too.
For example, the Core folder would contain those classes that are central for your app - the vast majority of business model classes would be there.
In the Common folder, on the other hand, you can have some other classes that are shared, but not central - e.g., a Logger or a MessageSender could be there...
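For illustration, a minimal TypeScript sketch of that distinction (the class names are just illustrative):

```typescript
// Core: a class central to the business model (illustrative name).
export class Invoice {
  constructor(
    public readonly id: string,
    public readonly amount: number,
  ) {}

  isPayable(): boolean {
    return this.amount > 0;
  }
}

// Common: shared but not central - e.g. a Logger that any lib may use.
export class Logger {
  log(message: string): void {
    console.log(`[${new Date().toISOString()}] ${message}`);
  }
}
```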
As for your little draft of code structure, I think your Core package is the one to revise - why doesn't Lib1 use Core? If something is core, it's generally because everything else needs it in order to run. If you do not have code that is conceptually central, maybe you can remove your Core package and keep only Common?
As for your other question - I do not think the Common stuff must be shared by all other packages; as soon as 2 or more packages share something, it can be considered common.

Related

How to create shared language resources with i18next in multi-app node/express & react monorepo?

I just started to use i18next in my monorepo (I'm new to it), which has several server-side microservices and a couple of frontends, including several custom library components. Many of the language strings are shared, and some are app-specific. I could not decide on a logical way to approach the problem.
Tooling: Typescript - Node/Express - React/Vite - Electron/React (Desktop)
Firstly, questions:
Where should I keep the language resources during development? In a separate library? In an app in the monorepo? Under each library module?
From where should I serve them? Something like lang.mydomain.com? Re-dividing them under each app during build (e.g. with Vite)?
All the examples/tutorials I could find use a single app and include i18next.js/ts at the app level. I think I need to wrap it into a library module for my purposes. How can I do that without losing access to its capabilities/types/methods, etc.? By dynamically creating instances in a higher-order module (the library is extensive, and I'm nearly lost)? See the sketch after the list below.
My initial thoughts:
Since many translations will be shared, translating each one separately would be illogical; they should be shared.
As there can be many languages, it seems logical to use i18next-http-backend for the web and to embed the resources with i18next-fs-backend for the desktop app.
Dividing the resources into common/graphs/tables/ui, etc. seems logical (though these will be divided across a library module hierarchy).
One logical approach would be to include each module's language resources in the module itself, but that would not help translators; in that respect, the resources should all be in the same place at the top level.
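For illustration, wrapping i18next in a shared library module might look roughly like this (a sketch only; the module layout, namespaces, and URL are hypothetical):

```typescript
// shared-i18n: a hypothetical library module in the monorepo.
import i18next, { type i18n } from 'i18next';
import HttpBackend from 'i18next-http-backend';

// Each app creates its own instance, adding its own namespaces on top
// of the shared ones; the desktop app could swap HttpBackend for
// i18next-fs-backend.
export async function createI18n(appNamespaces: string[]): Promise<i18n> {
  const instance = i18next.createInstance();
  await instance.use(HttpBackend).init({
    fallbackLng: 'en',
    ns: ['common', 'ui', ...appNamespaces],
    defaultNS: 'common',
    backend: {
      // Resources served from a central endpoint, e.g. lang.mydomain.com.
      loadPath: 'https://lang.mydomain.com/{{lng}}/{{ns}}.json',
    },
  });
  return instance;
}
```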
PS: I used to use react-intl-universal; it is really easy, but its release schedule is falling behind.
Thank you in advance...

Can a Core project in Clean Architecture depend on a NuGet package?

I have a Core project where I need to do some cryptographic operations, e.g. verifying a SHA256 hash. What can I do if it's a Core project that shouldn't depend on anything? Do I have to write my own cryptographic functions that are resistant to, e.g., side-channel attacks? That would cause security problems.
So what should I do? Can my Core project depend on a NuGet package if I use Clean Architecture?
The guideline regarding dependencies is to keep the core project as simple as possible so that most of its logic is about solving the business problem.
By keeping it simple, it's much easier to express which part of the business domain the classes solve. It's also easy to write focused tests that prove that the code can solve the correct part of the business problem.
To me, preventing attacks is not a part of that. It's something that should be done on inbound API calls before the domain is called. I would put that logic in application services. Those services can, of course, live in the Core project but not in any of the bounded contexts.
In Clean Architecture we try to keep the domain and application logic as independent from external libraries and frameworks as possible so that we do not depend on their future development.
Nevertheless, the application logic will have to interact with external libraries, services, and other I/O, which is achieved via "dependency inversion": the application logic defines an interface which is implemented by the outer layers (infrastructure).
This way the application logic remains "clean" and can focus on decision making, while you can still reuse external libraries and services.
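For illustration, here is a minimal sketch of that inversion - in TypeScript rather than C#, with Node's built-in crypto module standing in for the NuGet package from the question (all names are illustrative):

```typescript
import { createHash, timingSafeEqual } from 'node:crypto';

// Core/application layer: owns the abstraction it needs.
export interface HashVerifier {
  verifySha256(data: Uint8Array, expectedHex: string): boolean;
}

// A use case in the core depends only on the interface, not on any
// concrete cryptography library.
export class VerifyPayloadUseCase {
  constructor(private readonly verifier: HashVerifier) {}

  execute(payload: Uint8Array, expectedHex: string): boolean {
    return this.verifier.verifySha256(payload, expectedHex);
  }
}

// Infrastructure layer: implements the interface using the external
// library, and is wired into the core from the outside.
export class NodeCryptoHashVerifier implements HashVerifier {
  verifySha256(data: Uint8Array, expectedHex: string): boolean {
    const actual = createHash('sha256').update(data).digest();
    const expected = Buffer.from(expectedHex, 'hex');
    return actual.length === expected.length && timingSafeEqual(actual, expected);
  }
}
```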
You can find a more detailed discussion of this topic here: http://www.plainionist.net/Implementing-Clean-Architecture-Frameworks/

API Architecture - Business logic tightly coupled to routes?

To speed up development for my next Node API, I was looking for a suitable framework. In the past I built my APIs with Express only.
One design pattern I have always found useful is to completely separate the business logic from route handling by putting it in services. Those services accept only the required information (like a user ID or data) and return a promise resolving to the result of the operation.
This way it is easy to reuse these services in other routes, to combine them, to test them, or to call them based on schedules or other events - totally independent of endpoint calls. Routing and middleware take care of access control, error handling, and responding.
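For illustration, a minimal sketch of the pattern with Express (the user service and its data are hypothetical):

```typescript
import express from 'express';

// Service: business logic only, no knowledge of HTTP
// (User and getUserById are illustrative).
interface User {
  id: string;
  name: string;
}

async function getUserById(id: string): Promise<User | null> {
  // ...look the user up in a database or another service...
  return id === '42' ? { id, name: 'Jane Doe' } : null;
}

// Route handler: only translates between HTTP and the service.
const app = express();

app.get('/users/:id', async (req, res) => {
  const user = await getUserById(req.params.id);
  if (user === null) {
    res.status(404).json({ error: 'User not found' });
  } else {
    res.json(user);
  }
});

app.listen(3000);
```

The same getUserById can then be called from a scheduled job or another route without touching any HTTP plumbing.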
Looking at the documentation of those frameworks (Sails.js, KeystoneJS, ...), I mostly see the business logic tightly coupled to individual routes, directly accepting request objects and handling the responses. Only as an afterthought, it seems, is there sometimes a way to extract "often used code" into helper functions.
Am I missing something? How come this pattern seems to be the standard of API design? Is this a best practice for a reason?
It might have to do with Node.js services being smaller in size. If you're coming from an enterprise background, you're well aware that mixing business logic with controller code doesn't fly in the long run. Perhaps small projects can get away with ignoring that, but once the size increases, you can't avoid the laws of physics. It's best to separate concerns and keep the codebase maintainable.
I'd also add that below services, it's good to have a separate layer that handles talking to outside process boundaries. That way, you can test business logic in isolation by providing appropriate test doubles for integrations. Here's a longer explanation of how it would work in a Node project: Organize Node.js API project using 3-layer architecture.
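As a sketch of that separation (all names here are illustrative, not taken from the linked article):

```typescript
// The integration layer is hidden behind an interface...
interface User {
  id: string;
  name: string;
}

interface UserStore {
  findById(id: string): Promise<User | null>;
}

// ...so business logic can be exercised in isolation.
async function getDisplayName(store: UserStore, id: string): Promise<string> {
  const user = await store.findById(id);
  return user ? user.name : 'anonymous';
}

// In tests, a hand-rolled double replaces the real database-backed store.
const fakeStore: UserStore = {
  findById: async (id) => (id === '42' ? { id, name: 'Test User' } : null),
};

// getDisplayName(fakeStore, '42') resolves to 'Test User'.
```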

Better ways of building microservices in Node.js

I'm on a very big project where we have already built some 50 to 70 microservices in Node.js. All these services import some 5 to 10 core common modules. At this stage, if there is even a single line of change in a common core module, we have to update, build, and deploy all the artifacts. Is there a better way to handle this?
Thanks.
I agree with #skjagini: core modules should be stable; they should truly be core. In general I advocate sharing as little as possible between your microservices; they should be independently developed and deployable. They should not require deployment synchronisation, where you need to coordinate deployments of all your microservices lest you break something. If that is the issue you are facing, you have yourself a distributed monolith, not a microservice architecture.
I can't see any easy resolution to the issue you describe. If common code changes, then naturally any deployable unit that uses that common code needs to be rebuilt and redeployed. The only exception would be if the change is not required by a particular deployable, in which case it probably means your modules are doing too much, don't have a clear purpose, or are too large.

Dealing with DLLs in use case diagrams

I've developed a heterogeneous application which takes advantage of service-oriented architecture. It consists of many components which differ in code and run on different platforms (for example: an Android client, a WP8 client, a web server, a desktop client, a website).
Now that I'm documenting it, I have concluded that each component should go in a separate subsystem. But I have come across the question of whether to also put the DLLs in subsystems. This application consists of many DLL files, and I can't really decide on this. I also have another question: since the main applications need to make use of class libraries (DLLs), if I want to show this relationship in the use case diagram (all functions in the main apps rely on the functions in the DLL, and the functions in the DLL files cannot be executed separately), is this "Include" or "Extend"?
For example:
DLL A = Generates Machine ID
Desktop App uses the DLL A to register the Machine
So is this "Extend" or "Include" (I think include is right but wanna double check)
Depicting DLLs at the use case level is not something you do every day. I would forget about the DLLs and simply describe what those specific DLLs do (if somebody from "business" reads your documentation, he or she will not care about DLLs anyway; if this is technical documentation, use a deployment or component diagram for this purpose).
From my understanding, all the DLLs do the same thing but run on different platforms - am I correct? If so, then just draw one use case and use Include.
Why Include and not Extend? Extend is for when a use case comprises additional steps under a specific condition, whereas Include means that a specific use case is reused within different use cases.
