I am trying to create a join table for two tables that live in different PostgreSQL databases. Working with TypeORM, I have a problem defining the @ManyToMany(() => 'TableFromAnotherDb') relation in TypeScript. I've created an interface that has the properties needed for the join table, but the interface is of no use when it is passed to the ManyToMany decorator, because it refers to a type and I am trying to use it as a value.
Also, is having two simultaneous database connections necessary here? I am trying to use the interface as a stand-in for the table needed from the second database.
Any recommendation for avoiding this problem while keeping my TypeScript compiler happy?
I highly doubt that TypeORM allows linking between databases like this. The problem is that most relations auto-generate SQL queries that pull in the various database tables they need to operate on. If you have two databases, a single SQL query can't get all the info it needs, so your application needs to be the glue that binds them together.
I think the best you can get is to store the ID in the entity and then manually query each connection.
import { Entity, PrimaryGeneratedColumn, Column } from 'typeorm';

@Entity()
class Thing {
  @PrimaryGeneratedColumn()
  id: number;

  // Only the foreign key is stored; no cross-database relation is declared
  @Column()
  otherThingId: number;
}

// usage: query each connection's repository separately
const thing = await thingRepository.findOneBy({ id: 123 });
const otherThing = await otherThingRepository.findOneBy({ id: thing.otherThingId });
I'm working on a project and am trying to create data models for it. We have a use case where a user can host an event and add members to it.
class Event {
  event_name: string;
}

class User {
  username: string;
}
I wanted to know which of the following ways is better for storing the event members in the Event class.
// v1
class Event {
  event_name: string;
  event_members: Array<string>; // list of usernames
}

// v2
class Event {
  event_name: string;
  event_members: Array<User>; // list of User objects
}
By using v2, I feel I'll be able to move the logic for fetching user information from the database from the client side to my server.
Latency is also something that I'm considering. If I go with v1, then I need to make multiple calls to the server to fetch all the information about the event members, resulting in increased wait time. Whereas with v2, the response payload grows, which might impact our network calls.
I wanted to know which of the two is the better way to store my model, and if there's a different and more efficient way, please let me know.
There is no singular "better" data model here. When modeling data in a NoSQL database, it always depends on your use-cases. As you add more use-cases to your app, you'll expand and modify the data model to fit your needs.
That said, I typically store both directions of a many-to-many relationship, so both v1 and v2. This allows fast lookup of the related items in both directions, at the cost of some extra storage - so a typical time-vs-space trade-off.
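As a minimal sketch of what storing both directions can look like (the document shapes and field names below are illustrative assumptions, not from the question):

// events/{eventId}: v1-style list of usernames for fast "who is in this event?"
interface EventDoc {
  event_name: string;
  member_usernames: string[];
}

// users/{username}: the reverse direction for fast "which events is this user in?"
interface UserDoc {
  username: string;
  event_ids: string[];
}

Whenever a member is added or removed, both documents are updated, which is the extra write/storage cost of the time-vs-space trade-off mentioned above.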
But as said: there is no singular best data model, and when you're just getting started I typically focus on getting a simple model working quickly, and on securing access to that data.
For a good introduction to the topic in general, I recommend reading NoSQL data modeling, and for Firestore specifically watch Todd's excellent Getting to know Cloud Firestore series.
I am completely new to NestJS. I have seen that in NestJS a model is created to specify the shape of the data. For example, when creating a simple task manager and we want to specify what a single task will look like, we specify it in the model (example below):
export interface Task {
id: string;
title: string;
description: string;
status: TaskStatus;
}
export enum TaskStatus {
OPEN = 'OPEN',
IN_PROGRESS = 'IN_PROGRESS',
DONE = 'DONE',
}
However, I later came across DTOs, where once again the shape of data is described. My understanding is that DTOs are used when transferring data, i.e. it describes the kind of data that you will post or get.
My question is: when I am already using DTOs to describe the shape of data, why use models at all?
Also, I read that with DTOs we can have a single source of truth, so if we realise that the structure of the data needs to change, we won't have to specify it separately in the controller and service files. However, doesn't this still mean we will have to update the model?
Over a long period of time your DTOs and your models can and will diverge from each other. What comes in on the HTTP request and what gets sent back can be in a different format than what is kept in your database, so keeping them separate can lead to more flexibility as time goes on. This basically comes down to an argument of DTO (Data Transfer Object) and DAO (Data Access Object) versus DTAO (Data Transfer/Access Object) (at least that's what I call them).
This also deals with the Single Responsibility Principle as each class should deal with one thing and one thing only.
There's also this SO post from a Java thread that talks about what you're thinking of.
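As a minimal sketch of that divergence, reusing the Task model from the question (the CreateTaskDto name and its fields are illustrative assumptions):

import { IsNotEmpty, IsString } from 'class-validator';

// DTO: only what the client is allowed to send when creating a task
export class CreateTaskDto {
  @IsString()
  @IsNotEmpty()
  title: string;

  @IsString()
  description: string;
}

// Model: what the service layer actually stores and returns
export interface Task {
  id: string;          // generated server-side, never sent by the client
  title: string;
  description: string;
  status: TaskStatus;  // defaults to OPEN on creation, not part of the DTO
}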
The NestJS documentation showcases how to add DTOs to use in controllers to validate request objects using the class-validator package. The DTOs described there are TypeScript classes. Now, while controllers deal with DTOs (TS classes), NestJS providers (or services), on the other hand, make use of TypeScript interfaces. These DTOs and interfaces are pretty much the same shape.
Now, I am seeing a duplication of shape definitions here, and wondering whether the interfaces are needed at all.
Can we not make DTOs the source of truth for both the shape and the validations? One of the approaches we were considering (to make the DTO the source of truth) was to have an OpenAPI generator take the DTOs as input and generate an OpenAPI definition, and from there another codegen could generate a set of TypeScript interfaces to be consumed by NestJS itself, which could also be shared with other consumer applications like Angular.
Has anyone come across a similar problem? What do you think of the above? Feedback appreciated.
According to the NestJS docs:
But first (if you use TypeScript), we need to determine the DTO (Data Transfer Object) schema. A DTO is an object that defines how the data will be sent over the network. We could determine the DTO schema by using TypeScript interfaces, or by simple classes. Interestingly, we recommend using classes here. Why? Classes are part of the JavaScript ES6 standard, and therefore they are preserved as real entities in the compiled JavaScript. On the other hand, since TypeScript interfaces are removed during the transpilation, Nest can't refer to them at runtime. This is important because features such as Pipes enable additional possibilities when they have access to the metatype of the variable at runtime.
I'm no expert, but I'm not using DTOs at all. I really couldn't see a use for them. In each module I have a service, module, entity, and controller.
I would like to explain the concept of DTO with the simplest example possible for your better understanding.
DTO stands for Data Transfer Object. DTOs are used to reduce code duplication. A DTO simply defines a schema that is passed in the parameters of functions, making it easy to get the required data from them. Here is an example of a DTO:
export class AuthCredentialsDto {
email: string;
password: string;
}
Now if we make a method to check whether the password is correct or not:
async password_check(userCredentials: AuthCredentialsDto) {
  // Destructuring in JavaScript
  const { email } = userCredentials;
  // ...database logic to find the user by email goes here...
  return user;
}
Now if we hadn't made use of the DTO, the code would have looked like this:
async password_check(email: string, password: string) {
  // ...database logic to find the user by email goes here...
  return user;
}
Also, the point is that this is just one function; in a framework, multiple functions call multiple other functions, which requires passing the parameters again and again. Just consider a function that requires 10 parameters: you would have to pass them around multiple times. Although it is possible to work without a DTO, it is not a development-friendly practice. Once you get used to DTOs you will love using them, as they save a lot of extra code and effort.
Regards
To extend @Victor's answer regarding the DTO concept and its role, I'd like to point out that interfaces allow us to set a contract which represents something meaningful in our app. We can then implement and/or extend this contract in other places where needed, e.g. entity definitions for database objects (DAOs), data transfer objects (DTOs), and notably business model definitions.
Also, interfaces for DTOs can be shared across a back end and a front end so that both projects can avoid code duplication and differences between the objects exchanged, for ease of development and maintainability.
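A minimal sketch of that sharing, under the assumption that the contract lives in a file both projects import (all names below are illustrative):

// shared/user-contract.ts — imported by both the back end and the front end
export interface UserContract {
  email: string;
  name: string;
}

// back end: the DTO implements the shared contract and adds runtime validation
import { IsEmail, IsString } from 'class-validator';

export class CreateUserDto implements UserContract {
  @IsEmail()
  email: string;

  @IsString()
  name: string;
}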
One thing a DTO provides over an interface is that, with a DTO and class-validator, you can perform validation quickly at the request level. With an interface, you cannot attach class-validator decorators. DTOs are classes in general, which means you can do more with them than with an interface.
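As a minimal sketch of what "validation at the request level" looks like in practice (the controller, route, and DTO names are illustrative assumptions):

import { Body, Controller, Post, ValidationPipe } from '@nestjs/common';
import { IsEmail, MinLength } from 'class-validator';

class LoginDto {
  @IsEmail()
  email: string;

  @MinLength(6)
  password: string;
}

@Controller('auth')
class AuthController {
  @Post('login')
  login(@Body(new ValidationPipe()) credentials: LoginDto) {
    // If validation fails, Nest rejects the request with a 400 before this runs
    return credentials;
  }
}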
A DTO has a slightly different mission. It is an additional abstraction for the data-transfer connection over the network between the FE and BE layers of your application, and at the same time a DTO gives a description of your data, as an interface does. But its main mission is to be a distinct place for the data connection, and thanks to this you can implement many helpful things in your DTO layer: transformations of DTO field values, validation, checks with regular expressions, etc. So you have a convenient place for the attendant changes to data right when it is received from, or sent to, the FE side.
TLDR
The answer to your question is yes, you could use them for shape if you wanted to, but it might be unnecessary in some situations.
DTOs are a great solution for when you need to enforce some shape on your data (especially in the NestJS ecosystem, where they interact a lot with class-validator) or transform it somehow. Examples of that would be when you're receiving data from your client or from another service. In these cases the DTOs are the way to go for setting contracts.
However, when you're sending data between two layers of the same application, for instance between your controller and your use case or between your use case and your repository, you might want to use an interface there, since you know your data is coming in the correct format in these scenarios.
One key difference to understand is that the interface serves as a development tool: it keeps you from making mistakes like passing an object lacking a required property between two classes. The DTO, on the other hand, affects the application itself: it's an object that's going to be instantiated at runtime and might be used for validation and data-transformation purposes. That's the idea, but of course it also has the capacities of an interface.
There might be exceptions to this rule of thumb depending on the architecture you're going for. For example, on a project I'm working on it's very common for the contract between the domain and presentation layers to be equal to the contract between the frontend and the API. On this project, to avoid having to create two similar contracts, I've chosen to use a DTO to set the contract between the presentation and domain layers. Now in most cases I just extend the DTO when setting the contracts between the API and clients.
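A small sketch of the internal-layer case described above (the use-case and repository names are made up for illustration):

// Compile-time contract between the use case and the repository layer
interface CreateUserInput {
  email: string;
  name: string;
}

interface UserRepository {
  insert(input: CreateUserInput): Promise<void>;
}

class CreateUserUseCase {
  constructor(private readonly repository: UserRepository) {}

  async execute(input: CreateUserInput): Promise<void> {
    // No runtime validation here: the data was already validated by a DTO at the boundary
    await this.repository.insert(input);
  }
}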
Reasons for using DTOs and interfaces in NestJS
Basically, in a REST API we have two types of operation: input and output, i.e. request and response.
During the response we don't need to validate the return value; we just need to pass data shaped according to an interface.
But in the request we need to validate the body.
For example, say you want to create a user. Then the request body might be something like this:
const body = {
name: "Test Name",
email: "test#gmail.com",
phone: "0393939",
age: 25
}
So during the request we need to validate the email, check that the phone number or password matches a regex, etc.
So in the DTO we can do all of the validation.
Here is one of my DTO examples:
import { IsEmail, IsNotEmpty, IsString, MinLength } from 'class-validator';

export class RegisterUserRequest {
  @IsString()
  @IsNotEmpty()
  name: string;

  @IsEmail()
  @IsNotEmpty()
  email: string;

  @IsNotEmpty()
  @MinLength(6)
  password: string;
}

export class LoginUserRequest {
  @IsEmail()
  @IsNotEmpty()
  email: string;

  @IsNotEmpty()
  @MinLength(6)
  password: string;
}
And here is the interface example:
import { UserRole } from './user.schema';

export interface UserType {
  _id?: string;
  email?: string;
  name?: string;
  role: UserRole;
  createdAt?: Date;
  updatedAt?: Date;
}
Hope you understand.
Thanks
I read through all of the answers, and none really seem to answer your question. I believe that yes, although DTOs and interfaces serve different purposes, I don't see a reason why you need to create an additional interface for typing purposes.
Happy to be proven wrong here, but none of the answers really address the OP's point, which is that the DTO serves a validation purpose, but you also get the typing for free.
In the nestjs-query packages there are two types of DTOs referenced:
Read DTO - the DTO returned from queries and certain mutations; the read DTO does not typically define validation and is used as the basis for querying and filtering.
Input DTOs - the DTOs used when creating or updating records.
Basically, you can validate request input without a DTO. But imagine you have to work with the body payload, route params, query params or even header values. Without a DTO you would have to put validation code inside each controller method that handles a request.
With class-validator and class-transformer, you can use decorators to do that. Your job is to define your DTO classes and add the validation annotations to each property.
You can find the details here: How to validate request input in NestJS and How to use pipes in NestJS.
For example, if the DTO you created for the incoming request needs to check the incoming data, the best way to do this is to create the DTO as a class, because after TypeScript is compiled, classes continue to exist in your JavaScript code. This way, you can add various validations, for example "IsNotEmpty", "IsString", etc. If the data doesn't need to be validated, you can create the DTO as an interface. So here, rather than there being a single correct method, it's about what you need.
BTW, even though DTO is a Java convention, it can't solve the problem of generic (dynamic) fields, e.g.:
@Get(url/${variable})
@Response({
  [$variable: string]: $value
})
Only TS interfaces can solve this issue; you can't describe it in a DTO.
And to show it you would pass some hardcoded example:
class ResponseDto {
  @ApiProperty({
    ...
    example: [...]
  })
  [$variable]: SomeTSInterface[];
}

@Response({ status: 200, type: ResponseDto })
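As a small sketch of how an interface expresses such a dynamic key via an index signature (the names below are made up for illustration):

interface ItemSummary {
  id: string;
  total: number;
}

// An interface can declare arbitrary string keys directly
interface DynamicResponse {
  [variable: string]: ItemSummary[];
}

// usage
const response: DynamicResponse = {
  anyKeyTheServerChooses: [{ id: 'a1', total: 2 }],
};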
DTOs represent the structure of data transferred over the network; a DTO is meant for a specific use case, whereas interfaces are more generalized, and that specificity helps with readability and optimizations. Interfaces don't exist after transpiling, but Nest keeps DTOs useful even after the transpilation phase.
I think the NestJS documentation answered this precisely:
A DTO is an object that defines how the data will be sent over the network. We could determine the DTO schema by using TypeScript interfaces, or by simple classes. Interestingly, we recommend using classes here. Why? Classes are part of the JavaScript ES6 standard, and therefore they are preserved as real entities in the compiled JavaScript. On the other hand, since TypeScript interfaces are removed during the transpilation, Nest can't refer to them at runtime. This is important because features such as Pipes enable additional possibilities when they have access to the metatype of the variable at runtime.
Link to this paragraph: https://docs.nestjs.com/controllers#request-payloads
In my opinion,
DTO = Data Transfer Object. DTOs are like interfaces, but their whole goal is to transfer data and validate it. They are mostly used in routers/controllers.
You can simplify your API body and query validation logic by using them. For instance, you might have an AuthDto which automatically maps the user's email and password onto a DTO object to enforce validations.
Whereas an interface just declares how your response or a particular data model will look.
I'm not an expert, but I do not understand why we use DTOs.
When we can use the schema model, what is the need for DTOs and additional objects?
I am redesigning my NodeJS application because I want to use the Rich Domain Model concept. Currently I am using an Anemic Domain Model, and this is not scaling well; I just see 'ifs' everywhere.
I have read a bunch of blog posts and DDD-related blogs, but there is something that I simply cannot understand: how do we handle persistence properly?
To start, I would like to describe the layers that I have defined and their purpose:
Persistence Model
Defines the Table Models. Defines the Table name, Columns, Keys and Relations
I am using Sequelize as ORM, so the Models defined with Sequelize are considered my Persistence Model
Domain Model
Entities and Behaviors. Objects that correspond to the abstractions created as part of the Business Domain
I have created several classes and the best thing here is that I can benefit from hierarchy to solve all problems (without loads of ifs yay).
Data Access Object (DAO)
Responsible for the Data management and conversion of entries of the Persistence Model to entities of the Domain Model. All persistence related activities belong to this layer
In my case DAOs work on top of the Sequelize models created in the Persistence Model; however, I am serializing the records returned by database interactions into different objects based on their properties. E.g., if I have a table with a column called 'UserType' that contains two values [ADMIN, USER], when I select entries from this table I serialize the result according to the user type, so a user with type ADMIN would be an instance of the AdminUser class, whereas a user with type USER would simply be a DefaultUser.
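A minimal sketch of that mapping (the Sequelize model and method names here are assumptions for illustration, not part of my actual code):

// DAO turning persistence-model rows into domain objects based on UserType
class UserDao {
  async findById(id: number): Promise<DefaultUser> {
    const row = await UserModel.findByPk(id); // Sequelize persistence model
    return row.UserType === 'ADMIN'
      ? new AdminUser(row)   // AdminUser extends DefaultUser
      : new DefaultUser(row);
  }
}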
Service Layer
Responsible for all Generic Business Logic, such as Utilities and other Services that are not part of the behavior of any of the Domain Objects
Client Layer
Any consumer class that plays around with the objects and is responsible for triggering the persistence
Now the confusion starts when I implement the Client Layer...
Let's say I am implementing a new REST API:
POST: .../api/CreateOrderForUser/
{
items: [{
productId: 1,
quantity: 4
},{
productId: 3,
quantity: 2
}]
}
On my handler function I would have something like:
function(oReq){
  var oRequestBody = oReq.body;
  var oCurrentUser = oReq.user; // This is already a Domain Object

  var aOrderItems = oRequestBody.items.map(function(mOrderData){
    return new OrderItem(mOrderData); // Constructor sets the properties internally
  });

  var oOrder = new Order({
    items: aOrderItems
  });

  oCurrentUser.addOrder(oOrder);

  // So far so good... But how do I persist whatever
  // happened above? Should I call each DAO for each entity
  // created? Like, first create the Order, then create the
  // Items, then update the User?
}
One way I found to make it work is to merge the Persistence Model and the Domain Model, which means that oCurrentUser.addOrder(...) would execute the required business logic and would call the OrderDAO at the end to persist the Order along with the Items. The bad thing about this is that now addOrder also has to handle transactions, because I don't want to add the Order without the Items, or update the User without the Order.
So, what I am missing here?
Aggregates.
This is the missing piece on the story.
In your example, there would likely not be a separate table for the order items (and no relations, no foreign keys...). Items here seem to be values (describing an entity, e.g. "45 USD"), and not entities (things that change in time and that we track, e.g. a bank account). So you would not directly persist OrderItems but instead persist only the Order (with the items in it).
The piece of code I would expect to find in place of your comment could look like orderRepository.save(oOrder);. Additionally, I would expect the user to be a weak reference (by id only) in the order, and not orders contained in a user as your oCurrentUser.addOrder(oOrder); code suggests.
Moreover, the layers you describe make sense, but in your example you mix delivery concerns (concepts like request, response...) with domain concepts (adding items to a new order), I would suggest that you take a look at established patterns to keep these concerns decoupled, such as Hexagonal Architecture. This is especially important for unit testing, as your "client code" will likely be the test instead of the handler function. The retrieve/create - do something - save code would normally be a function in an Application Service describing your use case.
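A minimal sketch of such an application service, reusing the Order and OrderItem classes from your example (the service and repository names and signatures are illustrative assumptions):

interface UserRepository {
  findById(id: string): Promise<{ id: string }>;
}

interface OrderRepository {
  save(order: Order): Promise<void>;
}

class PlaceOrderService {
  constructor(
    private readonly userRepository: UserRepository,
    private readonly orderRepository: OrderRepository,
  ) {}

  // Use case: retrieve, run the domain logic, save the aggregate
  async execute(input: { userId: string; items: Array<{ productId: number; quantity: number }> }) {
    const user = await this.userRepository.findById(input.userId);

    const order = new Order({
      userId: user.id, // weak reference to the user, by id only
      items: input.items.map((item) => new OrderItem(item)),
    });

    // The whole aggregate (the order and its items) is persisted in one call
    await this.orderRepository.save(order);
    return order;
  }
}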
Vaughn Vernon's "Implementing Domain-Driven Design" is a good book on DDD that would definitely shed more light on the topic.