I am having a problem writing clean OOP code, say in TypeScript, when some of my objects contain async methods: what ends up happening is that I write static methods, and whichever object or method uses these static methods is 'contaminated' and needs to be converted into a sort of promise itself... I am sure I am doing something wrong - is there some architecture trick that I am missing?
Let's take a concrete example: I am writing a Node.js app with a model object for my MongoDB document. Nothing fancy. However, when I use the object's methods in my app, no matter in which class, every method that uses them has to be async. And then every method that calls one of those methods has to be async as well... etc.
Is there a way to use the MongoDB async operations in such a way as to at least keep up the façade of normal OOP architecture, or is there an entirely new sort of logic I need to learn to write async OOP apps?
Hope I made my question clear,
Async methods don't have to be static, and there really isn't any reason that a program using async operations can't have the same overall structure as one that doesn't.
Async operations are contagious, however. That's known as the red/blue function problem: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/
It's not ideal, but it's the best that can be done without requiring some very special capabilities from the language/system used to implement JS. You can either have threads, which come with their own problems, or you can have some mechanism for copying call stacks around, like Go or Java's upcoming Project Loom.
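To make that concrete, here is a minimal TypeScript sketch (the UserRepository/UserService names are invented for illustration): the async methods are ordinary instance methods, the class structure is plain OOP, and the only thing that spreads up the call chain is the async/await keywords.

```typescript
// A hypothetical repository whose lookup is async -- no static methods needed.
class UserRepository {
  private users = new Map<string, string>([["1", "Alice"]]);

  // Stands in for a real async database call.
  async findName(id: string): Promise<string | undefined> {
    return this.users.get(id);
  }
}

// A service layered on top. Because it awaits the repository, it must be
// async too -- that is the "contagion", but the OOP shape is unchanged.
class UserService {
  constructor(private repo: UserRepository) {}

  async greet(id: string): Promise<string> {
    const name = await this.repo.findName(id);
    return name ? `Hello, ${name}` : "Unknown user";
  }
}

async function main() {
  const service = new UserService(new UserRepository());
  console.log(await service.greet("1")); // prints "Hello, Alice"
}

main();
```

The callers go async all the way up to the entry point, but no class needs to change its responsibilities or its relationships to absorb that.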
I am building an API using Express.js. I have structured it so that each data model (e.g. user, organization, etc.) has its own modules as such:
router - validates and sanitizes the HTTP request at a high level and routes accordingly
controller - includes some business logic and validation that may rely on data in the persistence layer
db - takes parameters from the controller to make queries to the database
model - defines the properties returned when the endpoint(s) for that model are called and which properties may be included in a POST or PATCH.
In summary, the User model has a userRouter.js, userController.js, userDB.js, and userModel.js, while all other data models follow the same structure.
There are, of course, several cases where the models interact with each other. For example, a User can belong to one or more Organizations, and I want to show a list of those organizations in the returned user response.
For the sake of keeping the code as loosely coupled as possible with consistent responses, I would like to call the organizationController's getOrganizations function from the userController. Likewise I have a need to use a similar method in the organizationController to get a list of users that belong to a given organization.
In these cases, they are the functional equivalent of calling the GET endpoint for those models, but there are other scenarios where I need to return data I wouldn't necessarily want to expose to an API route, such as fetching a single value indicating the user's role in an organization from the database.
I began by requiring the organizationController in the userController and vice-versa, but this created a circular dependency and the functions I was trying to access were coming back as undefined.
To solve this, I ultimately created a "helperController" that any controller can call to get a function from another controller:
function getOrganizations(params) {
  return require('./organizationController').getOrganizations(params);
}

function getUserIDByEmail(email) {
  return require('./userController').getUserIDByEmail(email);
}

module.exports = {
  getOrganizations,
  getUserIDByEmail
};
To avoid the circular dependencies, I wrap the require statements inside the functions the respective controllers would call, so that the userController does not end up requiring itself via the helperController.
This works, but it feels a little hacky and like a violation of the DRY principle. I have also read that controllers should not generally call each other, but I am not sure what the alternative here is to achieve this.
Is this a sound way to do this or is there a better way to achieve what I want to do?
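For what it's worth, the alternative usually suggested to controllers calling each other is to move the shared lookups one layer down, into services that both controllers can depend on; the dependency arrows then all point downward and no cycle appears. A rough sketch under that assumption (all class and method names here are made up, not from the code above):

```typescript
// Shared, lower-level services: neither imports the other, so no cycle.
class OrganizationService {
  // In the real app this would delegate to the db layer.
  getOrganizationsForUser(userId: string): string[] {
    return userId === "u1" ? ["Acme", "Globex"] : [];
  }
}

class UserService {
  getUserIDByEmail(email: string): string | undefined {
    return email === "alice@example.com" ? "u1" : undefined;
  }
}

// Controllers depend only on services (downward), never on each other.
class UserController {
  constructor(
    private users: UserService,
    private orgs: OrganizationService,
  ) {}

  getUserWithOrganizations(email: string) {
    const id = this.users.getUserIDByEmail(email);
    return id
      ? { id, organizations: this.orgs.getOrganizationsForUser(id) }
      : null;
  }
}

const controller = new UserController(new UserService(), new OrganizationService());
// Logs the user's id together with the organization names.
console.log(controller.getUserWithOrganizations("alice@example.com"));
```

The userController and organizationController would each import both services, so neither ever needs to require the other.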
The NestJS documentation showcases how to add DTOs to use in controllers to validate request objects by using the class-validator package. The DTOs described there are TypeScript classes. Now, while controllers deal with DTOs (TS classes), NestJS providers (or services), on the other hand, make use of TypeScript interfaces. These DTOs and interfaces are pretty much the same shape.
Now, I am seeing duplication of shape definition here, and wondering if the interfaces are needed at all.
Can we not make DTOs the source of truth for the shape and the validations? One approach we were considering (to make the DTO the source of truth) was to have an OpenAPI generator take the DTOs as input and generate an OpenAPI definition, from which another codegen could produce a set of TypeScript interfaces to be consumed by NestJS itself and shared with other consumer applications, like Angular.
Has anyone come across a similar problem? What do you think of the above? Feedback appreciated.
According to the NestJS docs:

But first (if you use TypeScript), we need to determine the DTO (Data Transfer Object) schema. A DTO is an object that defines how the data will be sent over the network. We could determine the DTO schema by using TypeScript interfaces, or by simple classes. Interestingly, we recommend using classes here. Why? Classes are part of the JavaScript ES6 standard, and therefore they are preserved as real entities in the compiled JavaScript. On the other hand, since TypeScript interfaces are removed during the transpilation, Nest can't refer to them at runtime. This is important because features such as Pipes enable additional possibilities when they have access to the metatype of the variable at runtime.
I'm no expert, but I'm not using DTOs at all. I really couldn't see a use for them. In each module I have a service, module, entity, and controller.
I would like to explain the concept of a DTO with the simplest example possible.
DTO stands for Data Transfer Object. DTOs are used to reduce code duplication: a DTO simply defines a schema for the object passed in the parameters of functions, making it easy to get the required data out of them. Here is an example of a DTO:
export class AuthCredentialsDto {
  email: string;
  password: string;
}
Now, if we make a method to check whether the password is correct:

async password_check(usercredentials: AuthCredentialsDto) {
  // Destructuring in JavaScript
  const { email } = usercredentials;
  // Database logic to find the user by email
  return user;
}
Now, if we hadn't made use of the DTO, the code would have looked like:

async password_check(email: string, password: string) {
  // Database logic to find the user by email
  return user;
}
Also, this is just one function; in a framework, multiple functions call multiple other functions, which requires passing the parameters again and again. Consider a function that requires 10 parameters: you would have to pass them multiple times. It is possible to work without a DTO, but it is not a development-friendly practice. Once you get used to DTOs you will love them, as they save a lot of extra code and effort.
Regards
To extend @Victor's answer regarding the DTO concept and its role, I'd like to point out that interfaces allow us to set a contract which represents something meaningful in our app. We can then implement and/or extend this contract in other places where needed, e.g. entity definitions for database objects (DAOs), data transfer objects (DTOs), and notably business model definitions.
Also, interfaces for DTOs can be shared between a back end and a front end, so that both projects avoid code duplication and differences between the objects exchanged, for ease of development and maintainability.
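As a small sketch of that sharing idea (UserContract/UserDto are hypothetical names): the interface lives in a package shared by both projects, and the backend DTO class implements it, so the two sides cannot drift apart without a compile error.

```typescript
// In practice this interface would live in a package shared by the
// backend and the frontend; it is shown inline here for brevity.
interface UserContract {
  id: string;
  email: string;
}

// Backend DTO class implementing the shared contract. If the contract
// changes, the compiler flags every DTO that no longer matches it.
class UserDto implements UserContract {
  constructor(
    public id: string,
    public email: string,
  ) {}
}

// Any code written against UserContract accepts the DTO unchanged.
function describeUser(u: UserContract): string {
  return `${u.id} <${u.email}>`;
}

console.log(describeUser(new UserDto("u1", "alice@example.com")));
// prints "u1 <alice@example.com>"
```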
One thing a DTO provides beyond an interface: with a DTO and class-validator you can do the validations quickly at the request level, whereas you cannot attach class-validator decorators to an interface. DTOs are classes in general, which means you can do more with them than with an interface.
A DTO has a slightly different mission. It is an additional abstraction for the data-transfer connection over the network between the FE and BE layers of your application, and at the same time a DTO describes your data the way an interface does. But its main mission is to be a distinct place for that data connection, and thanks to this you can implement many helpful things in your DTO layer: transformations of DTO field values, validation, checks with regular expressions, etc. So you have a convenient place for attendant changes to data, right when it is received from or sent to the FE side.
TLDR
The answer to your question is yes, you could use them for shape if you wanted to, but it might be unnecessary in some situations.
DTOs are a great solution for when you need to enforce some shape on your data (especially in the NestJS ecosystem, where it interacts a lot with class-validator) or transform it somehow. Examples would be when you're receiving data from your client or from another service. In these cases, DTOs are the way to go for setting contracts.
However, when you're sending data between two layers of the same application -- for instance between your controller and your use case, or between your use case and your repository -- you might want to use an interface, since you know your data is coming in the correct format in these scenarios.
One key difference to understand is that the interface serves as a development tool: it keeps you from making mistakes like passing an object lacking a required property between two classes. The DTO affects the application itself: it's an object that is instantiated at runtime and might be used for validation and data-transformation purposes. That's the idea, though of course it also has the capacities of an interface.
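That runtime difference is easy to observe in a sketch (SomeShape/SomeDto are made-up names): the compiled JavaScript keeps the class as a real constructor function, which is what lets Nest's pipes inspect a DTO's metatype, while the interface leaves no trace.

```typescript
// An interface exists only at compile time...
interface SomeShape {
  name: string;
}

// ...while a class is preserved as a real constructor function at runtime.
class SomeDto {
  name = "";
}

console.log(typeof SomeDto); // prints "function" -- a real runtime value
// console.log(typeof SomeShape); // would not compile: SomeShape is not a value
```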
There might be exceptions to this rule of thumb depending on the architecture you're going for. For example, on a project I'm working on, it's very common for the contract between the domain and presentation layers to equal the contract between the frontend and the API. On this project, to avoid creating two similar contracts, I've chosen to use a DTO to set the contract between the presentation and domain layers. In most cases I then just extend the DTO when setting the contracts between the API and clients.
Reasons for using DTOs and interfaces in NestJS
Basically, in a REST API we have two types of operation, input and output, i.e. request and response.
During the response we don't need to validate the return value; we just pass data based on an interface.
But in the request we need to validate the body.
For example, say you want to create a user. The request body might be something like this:
const body = {
  name: "Test Name",
  email: "test@gmail.com",
  phone: "0393939",
  age: 25
};
So during the request we need to validate that the email and phone number match a regex, that the password meets requirements, etc. In the DTO we can do all of this validation.
Here is one of my DTO examples:
import { IsEmail, IsNotEmpty, IsString, MinLength } from 'class-validator';

export class RegisterUserRequest {
  @IsString()
  @IsNotEmpty()
  name: string;

  @IsEmail()
  @IsNotEmpty()
  email: string;

  @IsNotEmpty()
  @MinLength(6)
  password: string;
}

export class LoginUserRequest {
  @IsEmail()
  @IsNotEmpty()
  email: string;

  @IsNotEmpty()
  @MinLength(6)
  password: string;
}
And here is the interface example
import { UserRole } from './user.schema';

export interface UserType {
  _id?: string;
  email?: string;
  name?: string;
  role: UserRole;
  createdAt?: Date;
  updatedAt?: Date;
}
Hope you understand.
Thanks
I read through all of the answers, and none really seems to answer your question. I believe that yes, although DTOs and interfaces serve different purposes, I don't see a reason why you need to create an additional interface for typing purposes.
Happy to be proven wrong here, but none of the other answers really address the OP's point, which is that the DTO serves a validation purpose, but you also get the typing for free.
In the nestjs-query packages there are two types of DTOs referenced:
Read DTO - the DTO returned from queries and certain mutations; the read DTO does not typically define validation and is used as the basis for querying and filtering.
Input DTO - the DTO used when creating or updating records.
Basically, you can validate request input without a DTO. But imagine you have to work with a body payload, route params, query params, or even header values: without a DTO you have to put validation code inside each controller method to handle the request.
With class-validator and class-transformer, you can use decorators to do that. Your mission is to define your DTO classes and add the validation annotations to each property.
You can find out the details here How to validate request input in nestjs and How to use pipe in nestjs
For example, if the DTO you created for the incoming request needs to be checked against the incoming data, the best way is to create the DTO as a class, because classes continue to exist in your JavaScript code after TypeScript is compiled. That way you can add various validations, for example IsNotEmpty, IsString, etc. If the data doesn't need to be validated, you can create the DTO as an interface. So rather than there being a single correct method, it's about what you need.
BTW, even though DTO is a Java convention, it can't solve the problem of generic fields, e.g.:
@Get(url/${variable})
@Response({
  [$variable: string]: $value
})
Only TS interfaces can solve this issue; you can't describe it in a DTO.
And to show it, you will pass some hardcoded example:
class ResponseDto {
  @ApiProperty({
    ...
    example: [...]
  })
  [$variable]: SomeTSInterface[]
}

@Response({ status: 200, type: ResponseDto })
DTOs represent the structure of data transferred over the network and are meant for a specific use case, whereas interfaces are more generalized; that specificity helps with readability and optimization. Interfaces don't exist after transpiling, but Nest arranges for DTOs to remain useful even after the transpilation phase.
I think the NestJS documentation answers this precisely:
A DTO is an object that defines how the data will be sent over the network. We could determine the DTO schema by using TypeScript interfaces, or by simple classes. Interestingly, we recommend using classes here. Why? Classes are part of the JavaScript ES6 standard, and therefore they are preserved as real entities in the compiled JavaScript. On the other hand, since TypeScript interfaces are removed during the transpilation, Nest can't refer to them at runtime. This is important because features such as Pipes enable additional possibilities when they have access to the metatype of the variable at runtime.
Link to this paragraph: https://docs.nestjs.com/controllers#request-payloads
In my opinion,
DTO = Data Transfer Object. DTOs are like interfaces, but their whole goal is to transfer data and validate it. They are mostly used in routers/controllers.
You can simplify your API body and query validation logic by using them. For instance, you might have an AuthDto which automatically maps the user email and password to a DTO object to enforce validations.
Whereas an interface just declares how your response or a particular data model will be shaped.
I'm not an expert, but I do not understand why we use DTOs.
When we can use the schema model, what is the need for DTOs and additional objects?
I have a server which stores records representing Objects, and which uses Mongoose to manage these records. I want to be able to query/update/etc. all Objects with a simple API (i.e. a single endpoint). Different types of Objects have some identical attributes and some different attributes, so a single, static Object schema won't do. Instead, I still want to have a single schema, but I want to be able to change it slightly by adding/deleting fields when I create each new Object, with the fields which are/aren't present depending on the type of the Object. I don't want a mixed schema, because I want error validation for each type of Object.

I want a single schema (as opposed to a different schema for each type of Object) so that I can just do
Object = mongoose.model('Object', ObjectSchema);

Object.findOne({ objectType: "type1" }, function(err, model) {
  ...
});
So basically, I want field validation, while still maintaining some flexibility for attributes, and a single point to query/update/etc. my Object records. If I change the schema with each new Object, recompile it into a model, and create a new instance of that model, will all the instances of the different models (compiled from different modified versions of the same schema) still be queryable as above?
Obviously, I'm new to Mongoose. I just talked a lot about the schema here, and I honestly don't know whether I should have used the word "model" in place of "schema" in some places. I just don't know how I can accomplish all of this. Let me know if I make no sense.
We are successfully using the mongoose model inheritance and discriminator functionality for a very similar scenario. See here for an example:
http://www.laplacesdemon.com/2014/02/19/model-inheritance-node-js-mongoose/
You might also be able to use this plugin:
https://www.npmjs.com/package/mongoose-schema-extend
I have a model made up of three objects: a base model object, a specific model object (generalStatus as an example), and a generic XML getter object. The getter object is passed into the model so I can drive test cases without a network. There is a specific controller (genstatusController as an example) for each model, pulling data and updating the view. The low-level generic XML getter uses ASIHttp for its network work; there are run loops and the activity is async. The specific model has a generic XML getter and calls it to update an XML document. There are many upper-level models, all using the same base model and a common XML getter object. When the generic XML getter finishes an async request to update XML data, it posts an NSNotification to the model. The model then parses the XML and posts an NSNotification to the controller, letting it know the data is updated. I have a couple of protocols between the base objects and the specific model. I like this level of enforcement; is there a way to enforce the NSNotifications between the sets of objects?
BTW, the controller invokes the refresh of the data but needs to wait on an async event from the model to tell it the update is done so it can update the view.