Mapping Entity-to-DTO (and vice-versa) in Nest.js

I'm building an API with Nest.js and I've been using a mapper to convert the TypeORM entity to a DTO (and vice-versa).
Until now, I've been doing this manually:
public static async entityToDto(entity: UserEntity): Promise<UserDto> {
  const dto = new UserDto();
  dto.id = entity.id;
  dto.emailAddress = entity.emailAddress;
  dto.firstName = entity.firstName;
  dto.lastName = entity.lastName;
  dto.addressLine1 = entity.addressLine1;
  dto.addressLine2 = entity.addressLine2;
  dto.townCity = entity.townCity;
  // [...]
  return dto;
}
In my opinion, this is a nice (albeit inflexible) approach. It explicitly controls which fields are returned to the user, minimizing the chance of leaking sensitive fields (like a password hash). However, I was under the impression that the purpose of a DTO is to have a single place to modify data about something. If I needed to add a field, I'd have to modify both the DTO and the mapper.
It seems to be the convention to have one mapper per entity. However, if I don't want to return, for example, the accountStatus field, I would have to write a new mapper. So I now have multiple mappers that would each need to be modified.
I had the idea to write a "universal" mapper which looks at the fields in the DTO, and maps them to the fields in the entity.
I'm relatively new to TypeScript and Nest.js, so I was wondering how others manage this.
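To illustrate, here's a rough sketch of the universal mapper I have in mind (mapToDto is a name I made up; it assumes the DTO initializes its fields, since declared-but-unassigned class fields don't exist at runtime):
function mapToDto<T extends object>(DtoClass: new () => T, entity: object): T {
  // Start from a fresh DTO instance so Object.keys() sees its declared fields.
  const dto = new DtoClass();
  for (const key of Object.keys(dto)) {
    // Copy only keys the DTO declares; everything else on the entity is ignored.
    if (key in entity) {
      (dto as any)[key] = (entity as any)[key];
    }
  }
  return dto;
}
// Usage: const dto = mapToDto(UserDto, entity);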

I suggest you try the object spread syntax built into TypeScript. Basically, your entity can be mapped to a DTO based on matching property names, like below:
public static async entityToDto(entity: UserEntity): Promise<UserDto> {
  const dto: UserDto = {
    ...entity,
    additionalProperty: entity.someProperty,
  };
  return dto;
}
Any property that shares the same name between the DTO and the entity will be mapped. It is far cleaner and more flexible.
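One caveat: the spread copies every column on the entity, including sensitive ones like a password hash, so anything you don't want exposed has to be stripped explicitly. A minimal sketch (passwordHash is a hypothetical field name here):
public static async entityToDto(entity: UserEntity): Promise<UserDto> {
  // passwordHash is hypothetical; destructure sensitive fields out,
  // then spread only the remainder into the DTO.
  const { passwordHash, ...safeFields } = entity;
  return { ...safeFields } as UserDto;
}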

Related

What type of data should be passed to domain events?

I've been struggling with this for a few days now, and I'm still not clear on the correct approach. I've seen many examples online, but each one does it differently. The options I see are:
Pass only primitive values
Pass the complete model
Pass new instances of value objects that refer to changes in the domain/model
Create a specific DTO/object for each event with the data.
This is what I am currently doing, but it doesn't fully convince me. The example is in PHP, but I think it's perfectly understandable.
MyModel.php
class MyModel {
    // ...
    private MediaId $id;
    private Thumbnails $thumbnails;
    private File $file;
    // ...

    public function delete(): void
    {
        $this->record(
            new MediaDeleted(
                $this->id->asString(),
                [
                    'name' => $this->file->name(),
                    'thumbnails' => $this->thumbnails->toArray(),
                ]
            )
        );
    }
}
MediaDeleted.php
final class MediaDeleted extends AbstractDomainEvent
{
    public function name(): string
    {
        return $this->payload()['name'];
    }

    /**
     * @return array<ThumbnailArray>
     */
    public function thumbnails(): array
    {
        return $this->payload()['thumbnails'];
    }
}
As you can see, I am passing the ID as a string, the filename as a string, and an array of the Thumbnail value object's properties to the MediaDeleted event.
How do you see it? What type of data is preferable to pass to domain events?
Updated
The answer of @pgorecki has convinced me, so I'll add an example to confirm whether this way is correct, in order not to change too much.
It would now look like this.
public function delete(): void
{
    $this->record(
        new MediaDeleted(
            $this->id,
            new MediaDeletedEventPayload($this->file->copy(), $this->thumbnail->copy())
        )
    );
}
I'll explain a bit:
The ID of the aggregate is still outside the DTO, because MediaDeleted extends an abstract class that needs the ID parameter. The only thing I'm changing is the $payload array, which becomes the MediaDeletedEventPayload DTO. To this DTO I pass copies of the value objects related to the change in the domain; this way I pass objects reliably and avoid strange behaviour caused by sharing the same instance.
What do you think about it?
A domain event is simply a data-holding structure or class (a DTO), with all the information related to what just happened in the domain, and no logic. So I'd say "Create a specific DTO/object for each event with the data" is the best choice. Why not start with a less-is-more approach: think about the consumers of the event, and what data they might need.
Also, being able to serialize and deserialize the event objects is good practice, since you may want to send them via a message broker (although this relates more to integration events than domain events).
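For example, a sketch of such an event-specific DTO (written in TypeScript rather than the question's PHP; the field names are assumptions for illustration):
// Immutable data, no domain logic, trivially serializable.
class MediaDeletedEvent {
  constructor(
    public readonly mediaId: string,
    public readonly fileName: string,
    public readonly thumbnails: ReadonlyArray<{ url: string; width: number; height: number }>,
    public readonly occurredOn: string = new Date().toISOString(),
  ) {}
}
// Sending it through a message broker is then just:
// const wire = JSON.stringify(event);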

Avoiding storage concerns in entities, with a complex database

The project I'm working on deals with quite complex business rules, so I'm trying to apply DDD. Unfortunately, I have to work with a legacy database I cannot get rid of, and I'm having trouble keeping a clean Domain Design.
Let's say some Entity has some ValueType as its primary key, which is required. This could be designed in DDD like the following:
public class Entity
{
    public Entity(ValueType key)
    {
        Key = key;
    }

    public ValueType Key { get; }
}
Now, let's say this key is actually stored as a string representation, which can be parsed to construct the ValueType. I could do something like this to make it work with Entity Framework:
public class Entity
{
    private Entity()
    {
        // Private empty ctor for EF
    }

    public Entity(ValueType key)
    {
        StoredKey = key.ToString();
    }

    public ValueType Key => ValueType.Parse(StoredKey);

    // DB representation of the key, setter for EF
    private string StoredKey { get; set; }
}
This way, I feel I'm polluting my domain design with storage concerns. As far as the domain is concerned, the entity could just as well be persisted in memory, so this internal string representation feels weird.
This is a very simple scenario chosen to show an example, but things can actually get much worse. I would like to know if there is any way to achieve persistence ignorance in the model with this simple example, so I can start thinking later about how to design more complex scenarios.
The domain model doesn't need to follow the Entity Framework structure. What you can do is create two types of models: pure domain models, and Entity Framework models used only for persistence. When passing a domain model to the repository to persist it, transform it into an Entity Framework model; when fetching, do the inverse transformation.
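A minimal sketch of that idea (in TypeScript for brevity; all names here are placeholders, not part of the question's codebase):
// Pure domain side: a value type and an entity with no storage concerns.
class ValueType {
  private constructor(private readonly value: string) {}
  static parse(raw: string): ValueType { return new ValueType(raw); }
  toString(): string { return this.value; }
}

class DomainEntity {
  constructor(public readonly key: ValueType) {}
}

// Persistence side: a row shape mirroring the table.
interface EntityRow { key: string }

// The repository translates at the boundary, in both directions.
class EntityRepository {
  toRow(entity: DomainEntity): EntityRow {
    return { key: entity.key.toString() };
  }
  fromRow(row: EntityRow): DomainEntity {
    return new DomainEntity(ValueType.parse(row.key));
  }
}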
You can achieve persistence ignorance in this instance. Your instincts are right: get rid of all persistence concerns from your domain model and move them entirely within your DAL, where they belong.
DB.sql:
create table entity (
    id nvarchar(50) not null primary key,
    fields nvarchar(max) /* Look mum, NoSql inside sql! (please don't do this) */
);
Domain.dll:
class Entity {
    /* optional - you are going to need some way of 'restoring' a persisted domain entity - how you do this is up to your own conventions */
    public Entity(ValueType key, ValueObjects.EntityAttributes attributes) { Key = key; Attributes = attributes; }

    public ValueType Key { get; }
    public ValueObjects.EntityAttributes Attributes { get; }

    /* domain functions below */
}

interface IEntityRepository {
    void Update(Domain.Entity entity);
    Domain.Entity Fetch(ValueType key);
}
Now ALL persistence work can go in your DAL, including the translation. I haven't done EF in a while, so treat the below as pseudocode only.
DAL (EF):
/* this class lives in your DAL, and can be private, no other project needs to know about this class */
class Entity {
    public string EntityId { get; set; }
    public string Fields { get; set; }
}

class EntityRepository : BaseRepository, Domain.IEntityRepository {
    public EntityRepository(DbContext context) {
        base.Context = context;
    }

    public Domain.Entity Fetch(ValueType key) {
        string id = key.ToString();
        var efEntity = base.Context.Entities.SingleOrDefault(e => e.EntityId == id);
        return MapToDomain(efEntity);
    }

    /* Note: Handle mapping as you want, this is for example only */
    private Domain.Entity MapToDomain(EF.Entity efEntity) {
        if (efEntity == null) return null;
        return new Domain.Entity(
            ValueType.Parse(efEntity.EntityId),
            SomeSerializer.Deserialize<ValueObjects.EntityAttributes>(efEntity.Fields) /* every time you do this, a puppy hurts its paw */
        );
    }

    public void Update(Domain.Entity domainEntity) {
        var efEntity = MapToEf(domainEntity);
        base.Context.Entities.Attach(efEntity);
        base.Context.Entry(efEntity).State = EntityState.Modified;
        base.Context.SaveChanges();
    }

    private EF.Entity MapToEf(Domain.Entity domainEntity) {
        return new EF.Entity {
            EntityId = domainEntity.Key.ToString(),
            Fields = SomeSerializer.Serialize(domainEntity.Attributes) /* stahp! */
        };
    }
}
The takeaway here is that you are going to need to do mapping of some sort. This is all but unavoidable unless your domain is really simple and your ORM is super fancy, but even then I would recommend keeping your ORM models separate from your domain models, because they solve two different problems (ORMs provide a code version of your database model; DDD provides a code version of your business model). If you are compromising your domain model (i.e., making property setters public) to cater for your DAL, then step back and re-evaluate. Obviously compromise where appropriate, but realise this means you are introducing (implied) dependencies across your application layers.
Your next question, in relation to performance ("but mapping is so slow"), was answered by Constantin Galbenu: have separate 'read' models and repositories for lists and searches. Do you really need to pull back thousands of business models just to populate a search result list (and then have the temptation to add properties of no concern to the business model because "the search page needs this one bit of data for the finance people")? You should only be pulling out your domain model when you are doing some sort of business action; otherwise some nice anemic read-only views are your friend.
As many suggested in the comments, CQRS is a good choice for complex business rules. It has the great advantage that you can have different models for each side (write/command and read/query); in this way you separate the concerns. This is also very good because the business logic for the write side differs from the read side's, but enough about the advantages of CQRS.
...Unfortunately, I have to work with a legacy database I cannot get rid of...
Your new write model, the Aggregate, will be responsible for handling commands. This means that the legacy model will be relieved of this responsibility; it will be used only for queries. And to keep it up to date you can create a LegacyReadModelUpdater that is subscribed to all domain events generated by the new Aggregate and projects them onto the old model in an eventually consistent manner.
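A rough sketch of that updater (in TypeScript, framework-agnostic; the event shapes and the legacy entity table from the answer above are assumptions for illustration):
// Domain events emitted by the new aggregate.
type DomainEvent =
  | { type: 'EntityCreated'; id: string; fields: string }
  | { type: 'EntityDeleted'; id: string };

// Subscribed to all domain events; projects them onto the legacy schema
// in an eventually consistent manner.
class LegacyReadModelUpdater {
  constructor(private readonly db: { run(sql: string, params: unknown[]): Promise<void> }) {}

  async handle(event: DomainEvent): Promise<void> {
    switch (event.type) {
      case 'EntityCreated':
        await this.db.run('insert into entity (id, fields) values (?, ?)', [event.id, event.fields]);
        break;
      case 'EntityDeleted':
        await this.db.run('delete from entity where id = ?', [event.id]);
        break;
    }
  }
}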

ServiceStack AutoQuery into custom DTO

So, I'm working with ServiceStack, and know my way around a bit with it. I've used AutoQuery and find it indispensable when calling for straight 'GET' messages. I'm having an issue though, and I have been looking at this for a couple of hours. I hope it's just something I'm overlooking.
I have a simple class set up for my AutoQuery message:
public class QueryCamera : QueryDb<db_camera>
{
}
I have an OrmLite connection that is used to retrieve db_camera entries from the database. This all works just fine. I don't want to return a database model as a result, though; I'd like to return a DTO, which I have defined as another class. So, using the QueryDb<From, Into> version, my request message is now this:
public class QueryCamera : QueryDb<db_camera, Camera>
{
}
Where the Camera class is my DTO. The call still executes, but I get no results. I have a mapper extension method ToDto() set up on the db_camera class to return a Camera instance.
Maybe I'm just used to ServiceStack making things so easy... but how do I get the AutoQuery request above to perform the mapping for my request? Is the data retrieval now a manual operation for me since I'm specifying the conversion I want? Where's the value in this type being offered then? Is it now my responsibility to query the database, then call .ToDto() on my data model records to return DTO objects?
EDIT: Something else I just observed: I'm still getting the row count from the returned dataset in AutoQueryViewer, but the field names are those of the data model class db_camera and not Camera.
The QueryDb<From, Into> isn't able to use your custom DTO extension method; it's used to select a curated set of columns from the executed AutoQuery, which can also be used to reference columns on joined tables.
If you want different names on the DTO than on your data model, you can use an [Alias] attribute to map back to your DB column name, which lets you name your DTO property anything you like. On the other side, you can change what name the DTO property is serialized as, e.g.:
[DataContract]
public class Camera
{
    [DataMember(Name = "Id")]                // serialized as `Id`
    public int camera_id { get; set; }       // populated with db_camera.camera_id (type assumed)

    [DataMember]
    [Alias("model")]                         // populated with db_camera.model
    public string CameraModel { get; set; }  // serialized as `CameraModel` (type assumed)
}

Is there a way to ignore some entity properties when calling EdmxWriter.WriteEdmx

I am specifically using breezejs, and the server code for breezejs converts the DbContext into a form usable on the client side using EdmxWriter.WriteEdmx. There are many properties to which I have added JsonIgnore attributes so that they don't get passed to the client side. However, the metadata that is generated (and passed to the client side) by EdmxWriter.WriteEdmx still has those properties. Is there any additional attribute that I can add to those properties so that they are ignored by EdmxWriter.WriteEdmx? Or would I need to make a separate method so as not to have any other unintended side effects?
You can sub-class your DbContext with a more restrictive variant that you use solely for metadata generation. You can continue to use your base context for persistence purposes.
The DocCode sample illustrates this technique with its NorthwindMetadataContext which hides the UserSessionId property from the metadata.
It's just a few extra lines of code that do the trick.
public class NorthwindMetadataContext : NorthwindContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);

        // Hide from clients
        modelBuilder.Entity<Customer>().Ignore(t => t.CustomerID_OLD);

        // Ignore UserSessionId in metadata (but keep it in base DbContext)
        modelBuilder.Entity<Customer>().Ignore(t => t.UserSessionId);
        modelBuilder.Entity<Employee>().Ignore(t => t.UserSessionId);
        modelBuilder.Entity<Order>().Ignore(t => t.UserSessionId);

        // ... more of the same ...
    }
}
The Web API controller delegates to the NorthwindRepository where you'll see that the Metadata property gets metadata from the NorthwindMetadataContext while the other repository members reference an EFContextProvider for the full NorthwindContext.
public class NorthwindRepository
{
    // Backing context provider (declared here so the excerpt compiles).
    private readonly EFContextProvider<NorthwindContext> _contextProvider;

    public NorthwindRepository()
    {
        _contextProvider = new EFContextProvider<NorthwindContext>();
    }

    public string Metadata
    {
        get
        {
            // Returns metadata from a dedicated DbContext that is different from
            // the DbContext used for other operations.
            // See NorthwindMetadataContext for more about the scenario behind this.
            var metaContextProvider = new EFContextProvider<NorthwindMetadataContext>();
            return metaContextProvider.Metadata();
        }
    }

    public SaveResult SaveChanges(JObject saveBundle)
    {
        PrepareSaveGuard();
        return _contextProvider.SaveChanges(saveBundle);
    }

    public IQueryable<Category> Categories
    {
        get { return Context.Categories; }
    }

    // Convenience accessor for the provider's underlying context.
    private NorthwindContext Context
    {
        get { return _contextProvider.Context; }
    }

    // ... more members ...
}
Pretty clever, eh?
Just remember that the UserSessionId is still on the server-side class model and could be set by a rogue client's saveChanges requests. DocCode guards against that risk in its SaveChanges validation processing.
If you use the [NotMapped] attribute on a property, then it should be ignored by the EDMX process.

ektorp / CouchDB mix HashMap and Annotations

In jcouchdb I used to extend BaseDocument and then, in a transparent manner, mix annotated and undeclared fields.
Example:
import org.jcouchdb.document.BaseDocument;

public class SiteDocument extends BaseDocument {
    private String site;

    @org.svenson.JSONProperty(value = "site", ignoreIfNull = true)
    public String getSite() {
        return site;
    }

    public void setSite(String name) {
        site = name;
    }
}
and then use it:
// Create a SiteDocument
SiteDocument site2 = new SiteDocument();
site2.setProperty("site", "http://www.starckoverflow.com/index.html");

// Set value using setSite
site2.setSite("www.stackoverflow.com");

// and using setProperty
site2.setProperty("description", "Questions & Answers");

db.createOrUpdateDocument(site2);
Here I use both a document field (site) that is defined via an annotation and a property field (description) that is not declared; both get serialized when I save the document.
This is convenient for me since I can work with semi-structured documents.
When I try to do the same with Ektorp, I have documents using annotations and documents using a HashMap, BUT I couldn't find an easy way to get a mix of both (I've tried using my own serializers, but this seems like too much work for something I get for free in jcouchdb). I also tried annotating a HashMap field, but then it is serialized as an object: the fields get saved automatically, but inside an object with the name of the HashMap field.
Is it possible to do (easily/for free) using Ektorp?
It is definitely possible. You have two options:
Base your class on org.ektorp.support.OpenCouchDbDocument
Annotate your class with @JsonAnySetter and @JsonAnyGetter. Read more here: http://wiki.fasterxml.com/JacksonFeatureAnyGetter
