ServiceStack AutoQuery into custom DTO

So, I'm working with ServiceStack and know my way around it a bit. I've used AutoQuery and find it indispensable for straightforward GET requests. I'm having an issue, though, and I've been looking at this for a couple of hours. I hope it's just something I'm overlooking.
I have a simple class set up for my AutoQuery message:
public class QueryCamera : QueryDb<db_camera>
{
}
I have an OrmLite connection that is used to retrieve db_camera entries from the database. This all works just fine. I don't want to return a database model as the result, though; I'd like to return a DTO, which I have defined as another class. So, using the QueryDb<From, Into> version, my request message is now this:
public class QueryCamera : QueryDb<db_camera, Camera>
{
}
Where the Camera class is my DTO. The call still executes, but I get no results. I have a mapper extension method ToDto() set up on the db_camera class to return a Camera instance.
Maybe I'm just used to ServiceStack making things so easy... but how do I get the AutoQuery request above to perform the mapping for my request? Is the data retrieval now a manual operation for me since I'm specifying the conversion I want? Where's the value in this type being offered then? Is it now my responsibility to query the database, then call .ToDto() on my data model records to return DTO objects?
EDIT: Something else I just observed: I'm still getting the row count from the returned dataset in AutoQueryViewer, but the field names are those of the data model class db_camera, not Camera.

QueryDb<From, Into> isn't able to use your custom DTO extension method; it's used to select a curated set of columns from the executed AutoQuery, and it can also reference columns on joined tables.
If you want different names on the DTO than on your data model, you can use an [Alias] attribute to map back to the DB column name, which lets you name your DTO property anything you like. In the other direction, you can change the name a DTO property is serialized as, e.g.:
[DataContract]
public class Camera
{
    [DataMember(Name = "Id")]                // serialized as `Id`
    public int camera_id { get; set; }       // populated with db_camera.camera_id

    [DataMember]
    [Alias("model")]                         // populated with db_camera.model
    public string CameraModel { get; set; }  // serialized as `CameraModel`
}
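If you do still want to run your ToDto() mapping or any other custom projection, the usual escape hatch is to take over the service with a custom AutoQuery implementation. A minimal sketch of that pattern (the exact CreateQuery/Execute overloads vary a little between ServiceStack versions, so treat this as an outline rather than the definitive API):

public class CameraQueryServices : Service
{
    // Injected by the AutoQueryFeature plugin
    public IAutoQueryDb AutoQuery { get; set; }

    public object Any(QueryCamera query)
    {
        var q = AutoQuery.CreateQuery(query, base.Request);
        var response = AutoQuery.Execute(query, q);
        // response.Results is available here for manual post-processing,
        // e.g. applying your own ToDto() mapping.
        return response;
    }
}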

Related

What type of data should be passed to domain events?

I've been struggling with this for a few days now, and I'm still not clear on the correct approach. I've seen many examples online, but each one does it differently. The options I see are:
Pass only primitive values
Pass the complete model
Pass new instances of value objects that refer to changes in the domain/model
Create a specific DTO/object for each event with the data.
This is what I am currently doing, but it doesn't convince me. The example is in PHP, but I think it's perfectly understandable.
MyModel.php
class MyModel {
    // ...
    private MediaId $id;
    private Thumbnails $thumbnails;
    private File $file;
    // ...

    public function delete(): void
    {
        $this->record(
            new MediaDeleted(
                $this->id->asString(),
                [
                    'name' => $this->file->name(),
                    'thumbnails' => $this->thumbnails->toArray(),
                ]
            )
        );
    }
}
MediaDeleted.php
final class MediaDeleted extends AbstractDomainEvent
{
    public function name(): string
    {
        return $this->payload()['name'];
    }

    /**
     * @return array<ThumbnailArray>
     */
    public function thumbnails(): array
    {
        return $this->payload()['thumbnails'];
    }
}
As you can see, I am passing the ID as a string, the filename as a string, and an array of the Thumbnail value object's properties to the MediaDeleted event.
How do you see it? What type of data is preferable to pass to domain events?
Updated
The answer from @pgorecki has convinced me, so I will put up an example to confirm whether this way is correct, in order not to change too much.
It would now look like this.
public function delete(): void
{
    $this->record(
        new MediaDeleted(
            $this->id,
            new MediaDeletedEventPayload($this->file->copy(), $this->thumbnail->copy())
        )
    );
}
I'll explain a bit:
The ID of the aggregate is still outside the DTO, because MediaDeleted extends an abstract class that needs the ID parameter. So the only thing I'm changing is the $payload array, which becomes the MediaDeletedEventPayload DTO. To this DTO I'm passing copies of the value objects related to the change in the domain; this way I'm passing objects reliably and avoiding the strange behaviour I could get from passing around the same instance.
What do you think about it?
A domain event is simply a data-holding structure or class (DTO), with all the information related to what just happened in the domain, and no logic. So I'd say "Create a specific DTO/object for each event with the data" is the best choice. Why don't you start with a less-is-more approach: think about the consumers of the event, and what data they might need.
Also, being able to serialize and deserialize the event objects is good practice, since you may want to send them via a message broker (although this relates more to integration events than domain events).

Mapping Entity-to-DTO (and vice-versa) in Nest.js

I'm building an API with Nest.js and I've been using a mapper to convert the TypeORM entity to a DTO (and vice-versa).
Until now, I've been doing this manually:
public static async entityToDto(entity: UserEntity): Promise<UserDto> {
    const dto = new UserDto();
    dto.id = entity.id;
    dto.emailAddress = entity.emailAddress;
    dto.firstName = entity.firstName;
    dto.lastName = entity.lastName;
    dto.addressLine1 = entity.addressLine1;
    dto.addressLine2 = entity.addressLine2;
    dto.townCity = entity.townCity;
    // [...]
    return dto;
}
In my opinion, this is a nice (albeit inflexible) approach. It explicitly controls which fields are returned to the user, minimizing the chance of leaking sensitive fields (like password hash). However, I was under the impression that the purpose of a DTO is to have a single place to modify data about something. If I needed to add a field, I'd have to modify both the DTO and the mapper.
It seems to be the convention to have one mapper per entity. However, if I don't want to return, for example, the accountStatus field, I would have to write a new mapper. So I have now multiple mappers which would need to be modified.
I had the idea to write a "universal" mapper which looks at the fields in the DTO, and maps them to the fields in the entity.
I'm relatively new to TypeScript and Nest.js, so I was wondering how others manage this.
I suggest you try the object spread syntax built into TypeScript. Basically, your entity can be mapped to the DTO based on matching property names, like below:
public static async entityToDto(entity: UserEntity): Promise<UserDto> {
    const dto: UserDto = {
        ...entity,
        additionalProperty: entity.someProperty,
    };
    return dto;
}
Any property sharing the same name between the DTO and the entity will be mapped. It is far cleaner and more flexible. Keep in mind, though, that the spread copies every property of the entity, so sensitive fields (like the password hash mentioned above) still have to be excluded explicitly.

EF 5.0 new object: assigning the navigation property does not set the foreign key ID or add to the collection

EF 5.0, using code-first on existing database workflow.
The database has your basic SalesOrder and SalesOrderLine tables, with a required foreign key on SalesOrderLine, as follows:
public class SalesOrder
{
    public SalesOrder()
    {
        this.SalesOrderLines = new List<SalesOrderLine>();
    }

    public int SalesOrderID { get; set; }
    public int CustomerID { get; set; }
    public virtual Customer Customer { get; set; }
    public virtual ICollection<SalesOrderLine> SalesOrderLines { get; set; }
}

public class SalesOrderLine
{
    public SalesOrderLine()
    {
    }

    public int SalesOrderLineID { get; set; }
    public int SalesOrderID { get; set; }
    public virtual SalesOrder SalesOrder { get; set; }
}
public SalesOrderLineMap()
{
    // Primary Key
    this.HasKey(t => t.SalesOrderLineID);

    // Table & Column Mappings
    this.ToTable("SalesOrderLine");
    this.Property(t => t.SalesOrderLineID).HasColumnName("SalesOrderLineID");
    this.Property(t => t.SalesOrderID).HasColumnName("SalesOrderID");

    // Relationships
    this.HasRequired(t => t.SalesOrder)
        .WithMany(t => t.SalesOrderLines)
        .HasForeignKey(d => d.SalesOrderID);
}
Now according to this page:
http://msdn.microsoft.com/en-us/data/jj713564
...we are told that:
The following code removes a relationship by setting the foreign key to null. Note that the foreign key property must be nullable.
course.DepartmentID = null;
Note: If the reference is in the added state (in this example, the course object), the reference navigation property will not be synchronized with the key values of a new object until SaveChanges is called. Synchronization does not occur because the object context does not contain permanent keys for added objects until they are saved. If you must have new objects fully synchronized as soon as you set the relationship, use one of the following methods.
By assigning a new object to a navigation property. The following code creates a relationship between a course and a department. If the objects are attached to the context, the course is also added to the department.Courses collection, and the corresponding foreign key property on the course object is set to the key property value of the department.
course.Department = department;
...sounds good to me!
Now my problem:
I have the following code, and yet both of the Asserts fail - why?
using (MyContext db = new MyContext())
{
    SalesOrder so = db.SalesOrders.First();
    SalesOrderLine sol = db.SalesOrderLines.Create();
    sol.SalesOrder = so;
    Trace.Assert(sol.SalesOrderID == so.SalesOrderID);
    Trace.Assert(so.SalesOrderLines.Contains(sol));
}
Both objects are attached to the context - are they not? Do I need to do a SaveChanges() before this will work? If so, that seems a little goofy and it's rather annoying that I need to set all of the references on the objects by hand when a new object is added to a foreign-key collection.
-- UPDATE --
I should mark Gert's answer as correct, but I'm not very happy about it, so I'll wait a day or two. ...and here's why:
The following code does not work either:
SalesOrder so = db.SalesOrders.First();
SalesOrderLine sol = db.SalesOrderLines.Create();
db.SalesOrderLines.Add(sol);
sol.SalesOrder = so;
Trace.Assert(so.SalesOrderLines.Contains(sol));
The only code that does work is this:
SalesOrder so = db.SalesOrders.First();
SalesOrderLine sol = db.SalesOrderLines.Create();
sol.SalesOrder = so;
db.SalesOrderLines.Add(sol);
Trace.Assert(so.SalesOrderLines.Contains(sol));
...in other words, you have to set all of your foreign-key relationships first, and then call TYPE.Add(newObjectOfTYPE); none of the relationships or foreign-key fields are wired up until that Add happens.
This means that from the time the Create() is done until the time you do the Add(), the object is basically in a half-baked state. I had (mistakenly) thought that since I used Create(), and since Create() returns a sub-classed dynamic object (as opposed to using "new", which returns a POCO object), the relationship wire-ups would be handled for me. It's also odd to me that you can call Add() on an object created with the new operator and it will work, even though the object is not a sub-classed type...
In other words, this will work:
SalesOrder so = db.SalesOrders.First();
SalesOrderLine sol = new SalesOrderLine();
sol.SalesOrder = so;
db.SalesOrderLines.Add(sol);
Trace.Assert(sol.SalesOrderID == so.SalesOrderID);
Trace.Assert(so.SalesOrderLines.Contains(sol));
...I mean, that's cool and all, but it makes me wonder; what's the point of using "Create()" instead of new, if you're always going to have to Add() the object in either case if you want it properly attached?
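For what it's worth, the difference Create() does make is visible at runtime (a quick sketch; the proxy type name is generated on the fly, so the one shown here is only an example):

SalesOrderLine proxy = db.SalesOrderLines.Create();
Console.WriteLine(proxy.GetType().Name);                 // EF-generated proxy subclass, e.g. "SalesOrderLine_5E43..."
Console.WriteLine(new SalesOrderLine().GetType().Name);  // plain "SalesOrderLine" POCO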
Most annoying to me is that the following fails:
SalesOrder so = db.SalesOrders.OrderBy(p => p.SalesOrderID).First();
SalesOrderLine sol = db.SalesOrderLines.Create();
sol.SalesOrder = so;
db.SalesOrderLines.Add(sol);
// NOTE: at this point in time, the SalesOrderId field has indeed been set to the SalesOrderId of the SalesOrder, and the Asserts will pass...
Trace.Assert(sol.SalesOrderID == so.SalesOrderID);
Trace.Assert(so.SalesOrderLines.Contains(sol));
sol.SalesOrder = db.SalesOrders.OrderBy(p => p.SalesOrderID).Skip(5).First();
// NOTE: at this point in time, the SalesOrderId field is ***STILL*** set to the SalesOrderId of the original SO, so the relationships are not being maintained!
// The Exception will be thrown!
if (so.SalesOrderID == sol.SalesOrderID)
throw new Exception("salesorderid not changed");
...that seems like total crap to me, and makes me feel like the EntityFramework, even in version 5, is like a minefield on a rice-paper bridge. Why would the above code not be able to sync the SalesOrderId on the second assignment of the SalesOrder property? What essential trick am I missing here?
I've found what I was looking for! (and learned quite a bit along the way)
What I thought EF was generating with its dynamic proxies were "change-tracking proxies". These proxy classes behave more like the old EntityObject-derived partial classes from the ADO.NET Entity Data Model.
By doing some reflection on the dynamically generated proxy classes (thanks to the information I found in this post: http://davedewinter.com/2010/04/08/viewing-generated-proxy-code-in-the-entity-framework/ ), I saw that the "get" of my relationship properties was being overridden to do lazy loading, but the "set" was not being overridden at all, so of course nothing was happening until DetectChanges was called, and DetectChanges was using the "compare to snapshot" method of detecting changes.
Further digging ultimately lead me to this pair of very informative posts, and I recommend them for anyone using EF:
http://blog.oneunicorn.com/2011/12/05/entity-types-supported-by-the-entity-framework/
http://blog.oneunicorn.com/2011/12/05/should-you-use-entity-framework-change-tracking-proxies/
Unfortunately, in order for EF to generate Change-Tracking Proxies, the following must occur (quoted from the above):
The rules that your classes must follow to enable change-tracking proxies are quite strict and restrictive. This limits how you can define your entities and prevents the use of things like private properties or even private setters. The rules are:
The class must be public and not sealed.
All properties must have public/protected virtual getters and setters.
Collection navigation properties must be declared as ICollection<T>. They cannot be IList<T>, List<T>, HashSet<T>, and so on.
Because the rules are so restrictive it's easy to get something wrong, and the result is you won't get a change-tracking proxy. For example, missing a virtual, or making a setter internal.
...he goes on to mention other things about Change-Tracking proxies and why they may show better or worse performance.
In my opinion, the change-tracking proxy classes would be nice since I'm coming from the ADO.NET Entity Data Model world and I'm used to things working that way, but I've also got some rather rich classes and I'm not sure I'll be able to meet all of the criteria. Additionally, that second rule makes me rather nervous (although I suppose I could just create a unit test that loops through all of my entities, does a Create() on each, and then tests the resulting object for the IEntityWithChangeTracker interface).
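That test idea might look roughly like this (a sketch; in EF 5 IEntityWithChangeTracker lives in System.Data.Objects.DataClasses, and MyContext is the context from the examples above):

using (var db = new MyContext())
{
    // Create() each entity and verify EF handed back a change-tracking proxy
    // (IEntityWithChangeTracker) rather than a lazy-loading-only proxy.
    Trace.Assert(db.SalesOrders.Create() is IEntityWithChangeTracker,
        "SalesOrder breaks the change-tracking proxy rules");
    Trace.Assert(db.SalesOrderLines.Create() is IEntityWithChangeTracker,
        "SalesOrderLine breaks the change-tracking proxy rules");
}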
By setting all of my properties to virtual in my original example I did indeed get IEntityWithChangeTracker typed proxy classes, but I felt a little ... I don't know... "dirty" ...for using them, so I think I will just have to suck it up and remember to always set both sides of my relationships when doing assignments.
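For reference, this is roughly what the earlier classes have to look like to satisfy those rules: every property public and virtual, and collection navigation properties typed as ICollection<T>:

public class SalesOrder
{
    public virtual int SalesOrderID { get; set; }
    public virtual int CustomerID { get; set; }
    public virtual Customer Customer { get; set; }
    // Must be declared as ICollection<T> for a change-tracking proxy
    public virtual ICollection<SalesOrderLine> SalesOrderLines { get; set; }
}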
Anyway, thanks for the help!
Cheers,
Chris
No, SalesOrderLine sol is not attached to the context (although it is created by a DbSet). You must do
db.SalesOrderLines.Add(sol);
to have it attached to the context in a way that the ChangeTracker executes DetectChanges() (DbSet.Add() is one of the methods that trigger this) and, thus, also executes relationship fixup, which sets sol.SalesOrderID and ensures that so.SalesOrderLines contains the new object.
So, no, you don't need to execute SaveChanges(), but the object must be added to the context and relationship fixup must have been triggered.
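This also explains the failing re-assignment in the update above: with snapshot change tracking, relationship fixup only runs when something triggers DetectChanges(), and you can trigger it yourself. A sketch:

// Re-point the navigation property on the already-added entity, then force
// snapshot change detection so relationship fixup updates the FK column.
sol.SalesOrder = db.SalesOrders.OrderBy(p => p.SalesOrderID).Skip(5).First();
db.ChangeTracker.DetectChanges();  // sol.SalesOrderID is synchronized here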

ektorp / CouchDB mix HashMap and Annotations

In jcouchdb I used to extend BaseDocument and then, in a transparent manner, mix annotations and undeclared fields.
Example:
import org.jcouchdb.document.BaseDocument;

public class SiteDocument extends BaseDocument {

    private String site;

    @org.svenson.JSONProperty(value = "site", ignoreIfNull = true)
    public String getSite() {
        return site;
    }

    public void setSite(String name) {
        site = name;
    }
}
and then use it:
// Create a SiteDocument
SiteDocument site2 = new SiteDocument();
site2.setProperty("site", "http://www.starckoverflow.com/index.html");
// Set value using setSite
site2.setSite("www.stackoverflow.com");
// and using setProperty
site2.setProperty("description", "Questions & Answers");
db.createOrUpdateDocument(site2);
Here I use both a document field (site) that is defined via an annotation and a property field (description) that is not declared; both get serialized when I save the document.
This is convenient for me since I can work with semi-structured documents.
When I try to do the same with Ektorp, I have documents using annotations and documents using HashMap, BUT I couldn't find an easy way of mixing both (I've tried writing my own serializers, but this seems too much work for something I get for free in jcouchdb). I also tried annotating a HashMap field, but then it is serialized as an object: the fields are saved automatically, BUT nested inside an object with the name of the HashMap field.
Is it possible to do (easily/for free) using Ektorp?
It is definitely possible. You have two options:
Base your class on org.ektorp.support.OpenCouchDbDocument
Annotate your class with @JsonAnySetter and @JsonAnyGetter. Read more here: http://wiki.fasterxml.com/JacksonFeatureAnyGetter

XmlSerializer, XmlArray with dynamic content... how?

To start: this is also for REST deserialization, so a custom XmlSerializer is out of the question.
I have a hierarchy of classes that need to be serializable and deserializable from an "Envelope". It has an array element named "Items" that can contain subclasses of the abstract "Item".
[XmlArray("Items")]
public Item [] Items { get; set; }
Now I need to add XmlArrayItem, but the set of types is not fixed. So far we use reflection with a KnownTypeProvider to find all subclasses, so it is easy to extend the assembly with new subtypes. I don't really want to hardcode all the item types here.
The class is defined accordingly:
[XmlRoot]
[KnownType("GetKnownTypes")]
public class Envelope {
but it does not help.
Changing Items to:
[XmlArray("Items")]
[XmlArrayItem(typeof(Item))]
public Item [] Items { get; set; }
results in:
{"The type xxx.Adjustment was not expected. Use the XmlInclude or SoapInclude attribute to specify types that are not known statically."}
when trying to serialize.
Does anyone have an idea how I can point XmlInclude at a known type provider?
The KnownTypeAttribute does not work with XmlSerializer; it's only used by DataContractSerializer. I'm quite sure you can exchange the serializer in WCF, because I have done that for the DataContractSerializer. But if that's not an option, you have to implement IXmlSerializable yourself and handle the type lookup there.
Before dismissing this solution: you only have to implement IXmlSerializable for one special class that replaces Item[]. Everything else can be handled by the default serializer.
According to: http://social.msdn.microsoft.com/Forums/en-US/asmxandxml/thread/83181d16-a048-44e5-b675-a0e8ef82f5b7/
you can use a different XmlSerializer constructor:
new XmlSerializer(typeof(Base), new Type[] { typeof(Derived1), ..});
Instead of enumerating all derived classes in the base definition like this:
[System.Xml.Serialization.XmlInclude(typeof(Derived1))]
[System.Xml.Serialization.XmlInclude(typeof(Derived2))]
[System.Xml.Serialization.XmlInclude(typeof(DerivedN))]
I think you should be able to use your KnownTypeProvider to fill the array in the XmlSerializer's constructor.
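A minimal sketch of that idea (GetKnownTypes() here stands in for however your KnownTypeProvider exposes the reflected subtypes):

// Pass the reflected subtypes as extra types instead of hardcoding XmlInclude attributes.
Type[] extraTypes = KnownTypeProvider.GetKnownTypes().ToArray();
XmlSerializer serializer = new XmlSerializer(typeof(Envelope), extraTypes);

One caveat worth knowing: unlike the plain XmlSerializer(Type) constructor, the overloads that take extra types don't cache the generated serialization assembly, so create this serializer once and reuse it.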