We have a disagreement between our programming teams about how to handle errors raised by the database.
When the database raises an error like this:
“Msg 547, Level 16, State 0, Line 1
The DELETE statement conflicted with the REFERENCE constraint ... ”
The UI must show an understandable message, like this:
“You cannot delete this item. It is in use.”
The database team (MS SQL) returns the raw error message via RAISERROR and expects the back-end team (Node.js) or front-end team (Angular) to convert it into a user-understandable message and show that to the user. But the back-end and front-end teams say this is not optimal, and that database messages should be converted to user-understandable messages in the database.
Are there any standards for this problem?
The responsibility for the content of error messages should not be in the database. For example, the same database might be used from a variety of different applications, with users in different cultures: the database will have no idea what localisation is in place on the particular user interface being used.
Even if your data layer returns a particular error, in different scenarios, this can mean different things. So it is common practice to build a "business" layer on top of the "data" layer. That business layer can make the decision about whether it is a problem or not. This business layer can be re-used by multiple different "user interface" layers (e.g. Web app, Windows app, phone app, etc), or even the business layer of other applications.
But the business layer should only return an indication of what the problem was (including any details which may be relevant). Ultimately, the user interface must be responsible for constructing the error message in a correctly localised way. Most UI layers have a way of storing localised resources for this purpose.
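As a rough TypeScript sketch of that split (the error code, class names and helper below are invented for illustration, not a prescribed API): the business layer translates the database's constraint violation into a neutral error indication, and the UI layer maps that indication onto a localised resource string.

// Business layer: returns a neutral error indication, never a user-facing message.
interface ItemRepository {
  delete(id: number): Promise<void>;
}

type DeleteResult =
  | { ok: true }
  | { ok: false; code: 'ENTITY_IN_USE' | 'NOT_FOUND'; details?: { entity: string; id: number } };

// Assumes the database driver exposes the SQL Server error number (547 = FK violation).
function isForeignKeyViolation(err: unknown): boolean {
  return typeof err === 'object' && err !== null && (err as { number?: number }).number === 547;
}

class ItemService {
  constructor(private readonly repository: ItemRepository) {}

  async deleteItem(id: number): Promise<DeleteResult> {
    try {
      await this.repository.delete(id); // may fail with Msg 547 (REFERENCE constraint)
      return { ok: true };
    } catch (err) {
      if (isForeignKeyViolation(err)) {
        return { ok: false, code: 'ENTITY_IN_USE', details: { entity: 'Item', id } };
      }
      throw err; // unexpected errors still bubble up
    }
  }
}

// UI layer (e.g. Angular): maps the indication onto a localised resource string.
const MESSAGES_EN: Record<string, string> = {
  ENTITY_IN_USE: 'You cannot delete this item because it is still in use.',
  NOT_FOUND: 'The item no longer exists.',
};

function toUserMessage(result: DeleteResult, messages = MESSAGES_EN): string | null {
  return result.ok ? null : messages[result.code] ?? 'An unexpected error occurred.';
}

Swapping the MESSAGES_EN table for another culture's resources changes the wording without touching the business layer or the database.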
There are many excellent resources about the Unit of Work pattern. My understanding is that its main purpose is to provide a way to ensure that the effects of a piece of code will not persist if an error occurs. There are plenty of examples of this usage for databases in most languages.
There are very few resources I can find about using such patterns to query and use external APIs while maintaining some level of data integrity when an error occurs. Repositories are generally about data persistence, but a lot of APIs do concern such things, especially in a microservice architecture. Clean Architecture: Where to make API calls suggests that such a microservice architecture should abstract calls to other microservices using a repository, and there are many public APIs that can be thought of in a similar manner.
In my specific case, I am looking to plug the Todoist API in as the source of Task items for my application, which works with its own version of a Todo entity. I have successfully adapted my TodoRepository for the Todoist API and can see my tasks from Todoist displayed in my UI. I now face the issue that if an error occurs after a call, I may already have added, deleted or updated a Task in the Todoist API, which is not ideal for data integrity reasons.
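A rough sketch of the kind of adapter I mean, in TypeScript (the TodoistClient below is a hypothetical stand-in for whatever HTTP client wraps the real Todoist API):

// Domain-facing port: the application only knows about Todo and TodoRepository.
interface Todo {
  id: string;
  title: string;
  completed: boolean;
}

interface TodoRepository {
  getAll(): Promise<Todo[]>;
  add(todo: Todo): Promise<void>;
  remove(id: string): Promise<void>;
}

// Hypothetical client for the external service; the real Todoist API differs.
interface TodoistClient {
  listTasks(): Promise<Array<{ id: string; content: string; done: boolean }>>;
  createTask(content: string): Promise<void>;
  deleteTask(id: string): Promise<void>;
}

// Infrastructure adapter: maps the external representation onto the domain entity.
class TodoistTodoRepository implements TodoRepository {
  constructor(private readonly client: TodoistClient) {}

  async getAll(): Promise<Todo[]> {
    const tasks = await this.client.listTasks();
    return tasks.map(t => ({ id: t.id, title: t.content, completed: t.done }));
  }

  async add(todo: Todo): Promise<void> {
    // The integrity problem: if something fails after this call,
    // the task already exists in Todoist and cannot simply be rolled back.
    await this.client.createTask(todo.title);
  }

  async remove(id: string): Promise<void> {
    await this.client.deleteTask(id);
  }
}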
There seems to be some distinction between an API that can act as a repository and one that cannot. Seemingly, if the API is able to perform general CRUD on an entity similar to the modelled entity, then it may be a good candidate for a repository adapter; but if it were something like retrieving the weather forecast, determining whether a name matches some celebrity's, or working with the Google Maps API (if your application isn't itself a map), then these are handled differently.
Given that I have not yet established that all API adapters/facades will be implemented in the Infrastructure layer of a project, in what context does the interface that defines the API usage live? If I want to query whether a name is also a celebrity name, would I have an Application or Domain service interface that looks something like
public interface CelebrityService {
    Celebrity LinkNameToCelebrity(string firstName, string lastName);
}
Where Celebrity is a Domain entity. This feels out of place if the Celebrity entity has been made only for this call.
Similarly for a weather API,
public interface WeatherService {
    Weather GetWeatherForDay(DateTime day);
}
I am building an e-commerce website as a portfolio project, and I wanted to know where the calculations for the cart should be done.
Normally I use React and create a model folder, a route folder and a controller folder, but the way I was taught Angular, the services seem to act like the routes, and the actual calls to the database are done in the Node server file (which I am sure I could split out into a separate controller file). My question is: where should the calculations for the cart be done before I send the order to the database? I thought about doing it in the cart component before the order is placed, but should it be done in the services, or in the back end in the controller? I am just trying to figure out what the standard is.
When writing an Angular app, I think it is important to adhere to the following principles:
Components - should have a single responsibility for simple view logic only, shouldn't reach out to the server, and shouldn't do complex calculations and/or logic that is not related to the view.
Services - should have a single responsibility for (reusable/shared) complex logic, for outbound communication with the server, and for acting as data stores (using BehaviorSubjects).
Therefore, if your calculations are needed to update the view of the cart, I would vote that these calculations be made in the component. If your calculations are needed to update the items, or the request to be sent to the server, they need to be made in the service.
Remember, the component "shouldn't know" how the data comes to it, how it is manipulated, or how it is sent to the server; given any data, it should only know how to present it in the view. Similarly, the component shouldn't know how the data is calculated before being sent to the server; that falls within the responsibility of the service that processes the cart data and builds the request to the server.
However, you always have to consider the security of your app, and whether malicious data modification on the client side can affect your cart. If such calculations affect the app's security, they should at least be validated on the server, if not fully delegated to it.
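A minimal sketch of that split in an Angular app (the CartItem shape and names are invented for illustration; assumes standalone components and RxJS 7):

import { Component, Injectable } from '@angular/core';
import { CommonModule } from '@angular/common';
import { BehaviorSubject, map } from 'rxjs';

// Illustrative item shape; your cart model will differ.
export interface CartItem {
  productId: string;
  unitPrice: number;
  quantity: number;
}

@Injectable({ providedIn: 'root' })
export class CartService {
  // The service acts as the data store for the cart.
  private readonly items$ = new BehaviorSubject<CartItem[]>([]);

  readonly cart$ = this.items$.asObservable();

  // The calculation lives in the service, not in the component.
  readonly total$ = this.items$.pipe(
    map(items => items.reduce((sum, i) => sum + i.unitPrice * i.quantity, 0))
  );

  addItem(item: CartItem): void {
    this.items$.next([...this.items$.value, item]);
  }
}

@Component({
  selector: 'app-cart',
  standalone: true,
  imports: [CommonModule],
  template: `
    <ul>
      <li *ngFor="let item of cart.cart$ | async">{{ item.productId }} × {{ item.quantity }}</li>
    </ul>
    <p>Total: {{ cart.total$ | async | currency }}</p>
  `,
})
export class CartComponent {
  // The component only presents the data; it doesn't know how the total was computed.
  constructor(public cart: CartService) {}
}

On checkout the server should still recompute, or at least validate, the total against the authoritative prices, for the security reason mentioned above.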
I don't know exactly which calculation you need, but since it is an e-commerce website I assume it is simple math, such as the total payment amount at checkout.
The main role of the server is communicating with the database. If a task does not involve interacting with stored data, you can do the calculations on the client side. Keeping them on the client side gives you direct access to the details of your formula and reduces the communication time between client and server.
As the documentation for the Microsoft Bot Framework says, it has different types of data bags: dialogData, privateConversationData, conversationData and userData.
By default, it seems that userData is (or should be) the one prepared to handle persistence across nodes, whereas dialogData should be used for temporary data.
As it says here: https://learn.microsoft.com/en-us/bot-framework/nodejs/bot-builder-nodejs-dialog-waterfall
If the bot is distributed across multiple compute nodes, each step of
the waterfall could be processed by a different node, therefore it's
important to store bot data in the appropriate data bag
So, basically, if I have two nodes, how/why should I use dialogData at all, since I cannot guarantee it will be kept across nodes? It seems that if you have more than one node, you should just use userData.
I've asked the docs team to remove the last portion of the sentence: "therefore it's important to store bot data in the appropriate data bag". It is misleading. The Bot Builder is RESTful and stateless. dialogData, privateConversationData, conversationData and userData are all stored in the State Service, so any "compute node" will be able to retrieve the data from any of these objects.
Please note: the default Connector State Service is intended only for prototyping, and should not be used with production bots. Please use the Azure Extensions or implement a custom state client.
This blog post might also be helpful: Saving State data with BotBuilder-Azure in Node.js
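For illustration, a minimal Bot Builder v3 waterfall in Node.js/TypeScript (connector credentials and prompts are placeholders), using dialogData for the in-flight answers of the current dialog and userData for data that should outlive it:

import * as builder from 'botbuilder';

const connector = new builder.ChatConnector({
  appId: process.env.MICROSOFT_APP_ID,
  appPassword: process.env.MICROSOFT_APP_PASSWORD,
});

const bot = new builder.UniversalBot(connector, [
  (session) => {
    builder.Prompts.text(session, 'What city do you want the forecast for?');
  },
  (session, results) => {
    // dialogData: scratch space for this run of the waterfall only.
    session.dialogData.city = results.response;
    builder.Prompts.confirm(session, `Remember ${results.response} as your default city?`);
  },
  (session, results) => {
    if (results.response) {
      // userData: kept in the state store, available to any compute node
      // and to future conversations with this user.
      session.userData.defaultCity = session.dialogData.city;
    }
    session.endDialog(`Looking up the forecast for ${session.dialogData.city}...`);
  },
]);

In production you would also point the bot at your own state store (for example with bot.set('storage', ...) backed by botbuilder-azure), as the linked blog post describes.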
To start: I've tried LoopBack. LoopBack is nice, but it does not allow for relations across multiple REST data services; instead it calls the initial data service and passes query parameters that ask it to perform the joined query.
Before I go reinventing the wheel and writing a massive wrapper around Loopback's loopback-rest-connector, I need to find out if there are any existing libraries or frameworks that already tackle this. My extensive Googling has turned up nothing so far.
In a true microservice environment, there is a service per database.
http://microservices.io/patterns/data/database-per-service.html
From this article:
Implementing queries that join data that is now in multiple databases
is challenging. There are various solutions:
Application-side joins - the application performs the join rather than
the database. For example, a service (or the API gateway) could
retrieve a customer and their orders by first retrieving the customer
from the customer service and then querying the order service to
return the customer’s most recent orders.
Command Query Responsibility Segregation (CQRS) - maintain one or more
materialized views that contain data from multiple services. The views
are kept by services that subscribe to events that each service
publishes when it updates its data. For example, the online store
could implement a query that finds customers in a particular region
and their recent orders by maintaining a view that joins customers and
orders. The view is updated by a service that subscribes to customer
and order events.
EXAMPLE:
I have 2 data microservices:
GET /pets - Returns an object like
{
  "name": "ugly",
  "type": "dog",
  "owner": "chris"
}
and on a completely different microservice....
GET /owners/{OWNER_NAME} - Returns the owner info
{
  "owner": "chris",
  "address": "under a bridge",
  "phone": "123-456-7890"
}
And I have an API-level microservice that is going to call these two data services. This is the microservice where I will be applying this.
I'd like to be able to establish a model for Pet such that, when I query pets, upon a successful response from GET /pets it will "join" with owners (sending a GET /owners/{OWNER_NAME} for each response) and simply return to the user a list of pets that includes their owners' data.
So GET /pets (maybe something like Pets.find()) would return
{
  "name": "ugly",
  "type": "dog",
  "owner": "chris",
  "address": "under a bridge",
  "phone": "123-456-7890"
}
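A rough sketch of that application-side join in the API-level service, in TypeScript (the base URLs and the fetch-based helper are placeholders for whatever HTTP client you use, and GET /pets is assumed to return a list):

// Hypothetical base URLs for the two data microservices.
const PETS_URL = 'http://pets-service/pets';
const OWNERS_URL = 'http://owners-service/owners';

interface Pet { name: string; type: string; owner: string; }
interface Owner { owner: string; address: string; phone: string; }
type PetWithOwner = Pet & Omit<Owner, 'owner'>;

async function getJson<T>(url: string): Promise<T> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`${url} responded with ${res.status}`);
  return res.json() as Promise<T>;
}

// Application-side join: fetch the pets, then fetch each distinct owner and merge.
export async function findPetsWithOwners(): Promise<PetWithOwner[]> {
  const pets = await getJson<Pet[]>(PETS_URL);

  const ownerNames = [...new Set(pets.map(p => p.owner))];
  const owners = await Promise.all(
    ownerNames.map(name => getJson<Owner>(`${OWNERS_URL}/${encodeURIComponent(name)}`))
  );
  const ownersByName = new Map(owners.map(o => [o.owner, o] as const));

  return pets.map(pet => {
    const owner = ownersByName.get(pet.owner);
    return { ...pet, address: owner?.address ?? '', phone: owner?.phone ?? '' };
  });
}

Note that this makes one owner request per distinct owner; if that fan-out becomes a problem, a batch endpoint or the CQRS/materialised-view approach quoted above is the usual next step.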
Applying any model/domain logic in your API gateway is a bad decision and is considered bad practice. The API gateway should only handle your system's CAS (relying on the auth service which holds that logic), convert incoming external requests into internal system requests (different headers / requester payload data), proxy the formatted requests to the services for all other work, receive their responses, take care of encapsulating errors, and present every response in the proper external form.
Another point: if a lot of joins between two models are required for the application's core flow (validation/scoping etc.), then perhaps you should reconsider which business domain your models/services are bound to. If it's the same domain, perhaps they should live together. The principles of Domain-Driven Design helped me understand where the real boundaries between microservices are.
If you work with LoopBack (as we do; we faced the same problem, namely that LoopBack has no proper join implementation), you can have a separate report/combined-data service, which is the only one allowed to access all the service databases, and only for READ purposes, i.e. queries. Provide it with a separately set-up, read-only, wide-access database user: instead of having only one datasource configured (a single database), it should be able to read from all the databases that are in scope for this query-join user.
Such a service should be able to generate the proper joins and the expected output schema from configuration JSON, much like LoopBack models (that's what I did in the same situation). Once that abstraction is in place, it's pretty simple to build or add any query with arbitrarily complex joins. It's clean, it's easy to reason about, and it's DBA-friendly. For me this approach has worked well so far.
I am new to DDD, but I am trying to implement it in my project. I have a service that is set up following DDD principles (Application / Model / Repository). The clients of the service want to get back a DTO class (which also contains an error collection as one of its members). The question is: how do I populate the error collection of the result DTO? Can the error DTO be passed from the Application/Service layer to the Model/Service layer and populated there? Can someone point me to an example of this kind of scenario? Currently I am bubbling all the errors up to the Application Service and populating the DTO there, but as I said, I am struggling.
As a general rule, try not to copy code (classes, methods, interfaces). If you really have to use DTOs, create them as late as possible in the process, so that if you remove them you can still use the system in another way.
I would have something like this:
Model
    Domain classes
    Error class
Model/Service (has reference to Model)
Application/Service (has reference to Model and Model/Service)
    Domain DTOs
    Error DTO
Also, a question: do you really need two service layers? Avoid an Anemic Domain Model.
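A rough sketch of that layering, in TypeScript for brevity (class and member names are invented for illustration): the Model layer has its own Error type, the Model/Service returns domain results that know nothing about DTOs, and the Application/Service maps those results into the DTO, including its error collection, as late as possible.

// Model: domain-level error, independent of any transport concerns.
class DomainError {
  constructor(readonly code: string, readonly message: string) {}
}

// Model/Service: returns domain results, knows nothing about DTOs.
class OrderDomainService {
  placeOrder(quantity: number): { orderId?: number; errors: DomainError[] } {
    const errors: DomainError[] = [];
    if (quantity <= 0) {
      errors.push(new DomainError('INVALID_QUANTITY', 'Quantity must be positive.'));
      return { errors };
    }
    return { orderId: 42, errors }; // illustrative id
  }
}

// Application/Service: the DTOs live here and are populated as late as possible.
interface ErrorDto { code: string; message: string; }
interface PlaceOrderResultDto { orderId?: number; errors: ErrorDto[]; }

class OrderApplicationService {
  constructor(private readonly domain: OrderDomainService) {}

  placeOrder(quantity: number): PlaceOrderResultDto {
    const result = this.domain.placeOrder(quantity);
    return {
      orderId: result.orderId,
      // The mapping from domain errors to the DTO's error collection happens here,
      // so the error DTO never has to be passed down into the Model/Service layer.
      errors: result.errors.map(e => ({ code: e.code, message: e.message })),
    };
  }
}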