SubSonic SimpleRepository - Foreign Objects

SubSonic SimpleRepository doesn't seem to have a lot of support for foreign relations. How can I have foreign relationships in my code models that persist and load from the database naturally?

FKs are a DB concern; the Simple Repo is there to work as simply as possible, so if you have a collection of child objects, you load them as needed:
public IEnumerable<Kid> Kids {
    get {
        // lazy load the children via the repository (_repo is your SimpleRepository instance)
        return _repo.All<Kid>().Where(x => x.ParentID == this.ID);
    }
}
You'd have to roll this by hand. If you want to "eager" load it, do so on a case-by-case basis.

Related

Can we create a join table in TypeORM for tables from different databases?

I am trying to create a join table for two tables that are from different PostgreSQL databases. Working with TypeORM, I have a problem defining the @ManyToMany(() => 'TableFromAnotherDb') in TypeScript. I've created an interface that has the needed property for the join table, but the interface doesn't help when it's assigned in the @ManyToMany part, because it refers to a type and I am trying to use it as a value.
Also, is having two simultaneous database connections necessary here? I ask because I am trying to mask the interface for the table needed from the second database.
Any recommendations for avoiding this problem while keeping my TypeScript compiler happy?
I highly doubt that TypeORM allows for linking between databases like this. The problem is that most of the relations auto-generate SQL queries that pull in the various database tables they need to operate on. If you have two databases, a single SQL query can't get all the info it needs, so your application needs to be the glue that binds them together.
I think the best you can get is to store the ID in the entity and then manually query each connection.
@Entity()
class Thing {
    @Column()
    otherThingId: number
}

// usage (TypeORM 0.3+ repository API); each repository is bound to its own connection
const thing = await thingRepository.findOneBy({ id: 123 })
const otherThing = await otherThingRepository.findOneBy({ id: thing.otherThingId })
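For completeness, here's a rough sketch of where those two repositories could come from, using two separate TypeORM DataSources; the connection options and the OtherThing entity are placeholders, not something from the original question:

import { DataSource } from 'typeorm';

// one DataSource per database; the options below are placeholders
const firstDb = new DataSource({ type: 'postgres', database: 'first_db', entities: [Thing] });
const secondDb = new DataSource({ type: 'postgres', database: 'second_db', entities: [OtherThing] });

await firstDb.initialize();
await secondDb.initialize();

// each repository is bound to its own connection, so cross-database JOINs never come up
const thingRepository = firstDb.getRepository(Thing);
const otherThingRepository = secondDb.getRepository(OtherThing);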

Sequelize: what's the point of models?

I'm using Sequelize as my ORM, and just wondering what the point of having a model is.
It looks like the main thing that matters is the table definitions in your migrations, and models are just a static snapshot of what your tables look like. When you perform a migration, nothing changes in your models; they don't get updated, created, or deleted based on your migration.
It looks like you have to keep your models up to date manually.
So is there any point in having models, or making the effort to keep them updated?
The models are the definition of your database schema so that it can be mapped into the ORM that Sequelize provides. For me this is the most important feature of Sequelize, not the migrations.
Migrations are used for changing the database schema.
Models are used to map the database schema to your code.
Using models gives you lots of built-in helper methods, and associations let you build references between tables to generate complex JOINs, etc.
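As a rough sketch of what that buys you (the User/Post models and connection string below are purely illustrative):

import { Sequelize, DataTypes } from 'sequelize';

const sequelize = new Sequelize('postgres://localhost:5432/mydb'); // placeholder connection string

// the models mirror the tables that the migrations created
const User = sequelize.define('User', { name: DataTypes.STRING });
const Post = sequelize.define('Post', { title: DataTypes.STRING });

// the association gives you foreign-key handling and JOIN helpers for free
User.hasMany(Post);
Post.belongsTo(User);

// built-in helper: generates the SELECT ... JOIN for you, no hand-written SQL
// (inside an async context)
const usersWithPosts = await User.findAll({ include: Post });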

Preferred way of handling "Event Sourcing" in NestJS CQRS recipe

I have been trying to figure out the preferred way of doing "Event Sourcing" while using the NestJS CQRS recipe (https://docs.nestjs.com/recipes/cqrs).
I've been looking at the NestJS framework during the last couple of weeks and love every aspect of it. Except for the docs, which are pretty thin in some areas.
Either NestJS doesn't really have an opinion on how to implement "Event Sourcing", or I'm missing something obvious.
My main question is: What's the easiest way to persist the events themselves?
Right now, my events look pretty basic:
import { IEvent } from '@nestjs/cqrs';

export class BookingChangedTitleEvent implements IEvent {
    constructor(
        public readonly bookingId: string,
        public readonly title: string) {}
}
My initial idea was to use TypeORM (https://docs.nestjs.com/recipes/sql-typeorm) and make each of my events not only implement IEvent, but also make each of them a TypeORM @Entity().
But that would have one table (SQL) or collection (NoSQL) for each of the events, making it impossible to read all events that happened to a single aggregate. Am I missing something?
Another approach would be to dump each event to JSON, which sounds pretty easy. But how would I load the IEvent objects back from the db then? (It sounds like I'd be implementing my own ORM.)
So I'm doing something similar and using Postgres, which does support JSON ('simple-json' in TypeORM vernacular). For better or worse, my event entity looks like:
@Entity()
export class MyEvent {
    @PrimaryGeneratedColumn('uuid')
    id: string;

    @Column()
    name: string;

    @Column('simple-json')
    data: object;

    @CreateDateColumn({ type: 'timestamp' })
    created_at: Date;
}
It's important to note that I'm only using my persisted events for an audit trail and for the flexibility of potential projections I'm not already building. You can absolutely query the JSON in Postgres using TypeORM, e.g. .where('my_event.data ::jsonb #> :data', {data: {someDataField: 2}}), but my understanding is that querying your events to get current state is kind of missing the point of CQRS. You're better off building up aggregates in new projection tables or updating one huge projection.
I'm fine with how I'm currently persisting my events, but it's certainly not DRY. I would think extending a base class with a common saveEvent method, or using an EventHandlerFactory class that takes the repository in its constructor, would be a bit cleaner than injecting the repository into every handler.
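For what it's worth, here's a rough, untested sketch of the base-class idea, reusing the MyEvent entity above and the BookingChangedTitleEvent from the question (all other names are made up for illustration):

import { EventsHandler, IEvent, IEventHandler } from '@nestjs/cqrs';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';

// shared persistence logic lives in the base class instead of every handler
export abstract class PersistingEventHandler<T extends IEvent> implements IEventHandler<T> {
    constructor(protected readonly events: Repository<MyEvent>) {}

    protected async saveEvent(event: T): Promise<void> {
        await this.events.save(this.events.create({
            name: event.constructor.name, // stored event type, used later for deserialization
            data: event as object,        // serialized by the 'simple-json' column
        }));
    }

    abstract handle(event: T): void;
}

@EventsHandler(BookingChangedTitleEvent)
export class BookingChangedTitleHandler extends PersistingEventHandler<BookingChangedTitleEvent> {
    constructor(@InjectRepository(MyEvent) events: Repository<MyEvent>) {
        super(events);
    }

    async handle(event: BookingChangedTitleEvent) {
        await this.saveEvent(event);
        // ...update read models / projections here
    }
}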
Maybe someone out there has some good thoughts?
First of all, your initial hunch was correct: the NestJS CQRS module has no opinion on how you implement Event Sourcing. The reason is that CQRS is something different from ES. While you can combine them, this is entirely optional, and if you do decide to go with ES, there are again a ton of ways to implement it.
It seems you would like to persist your events in a relational database, which can be a good choice to avoid the additional complexity of having a second NoSQL database (you can switch to a dedicated db later, e.g. Eventstore, and benefit from specialized ES features).
With regards to your SQL model, it is best practice to have a single table for storing your events. A very nice article that demonstrates this is Event Storage in Postgres by Kasey Speakman.
The Events table layout chosen here looks as follows:
CREATE TABLE IF NOT EXISTS Event
(
    SequenceNum bigserial NOT NULL,
    StreamId uuid NOT NULL,
    Version int NOT NULL,
    Data jsonb NOT NULL,
    Type text NOT NULL,
    Meta jsonb NOT NULL,
    LogDate timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (SequenceNum),
    UNIQUE (StreamId, Version),
    FOREIGN KEY (StreamId)
        REFERENCES Stream (StreamId)
);
The article provides a clear description of the rationale for each of the columns, but in short: you can build your aggregates using a query based on StreamId + Version. The Meta column can hold metadata, such as userId and correlationId (here's more info on correlation), etc. The article also mentions how you can create Snapshots, which in some cases may be handy (but avoid them until needed).
Note the Type column, which stores the event type and can be used for deserialization purposes (so no need to create your own ORM ;)
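To make that concrete, here is a rough sketch of replaying one stream's events back into typed event objects, reusing BookingChangedTitleEvent from the question; the row shape and the hand-maintained type registry are assumptions on top of the table above, not part of the article:

// one row of the Event table above, selected with something like:
// SELECT Type, Data, Version FROM Event WHERE StreamId = $1 ORDER BY Version
interface EventRow {
    type: string;
    data: Record<string, unknown>;
    version: number;
}

// map the stored Type back to the event class (registered by hand)
const eventTypes: Record<string, new (...args: any[]) => object> = {
    BookingChangedTitleEvent,
};

function deserialize(row: EventRow): object {
    const ctor = eventTypes[row.type];
    // rehydrate the plain JSON payload into an instance of the event class
    return Object.assign(Object.create(ctor.prototype), row.data);
}

// replay in Version order to rebuild the aggregate's history
function replay(rows: EventRow[]): object[] {
    return rows.map(deserialize);
}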
Other projects that show how to implement event storage are PostgreSQL Event Sourcing and the more complete solution message-db.
I'm sure this is going to be pretty different depending on the persistence layer. When using MongoDB, I have loose event schemas with Mongoose, with some required properties for aggregate events.
The events themselves are just plain classes, like:
class FooHappened {
    constructor(
        readonly root: string,
        readonly bar: string,
    ) {}
}
I've been using a root property for the aggregate root ObjectId to build read models and that has been working well so far.
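For reference, a loose Mongoose event schema along those lines could look roughly like this (everything except the root property is an assumption for illustration):

import { Schema, model, Types } from 'mongoose';

// loose schema: only the fields every aggregate event must carry are required
const eventSchema = new Schema(
    {
        name: { type: String, required: true },                 // event class name, e.g. 'FooHappened'
        root: { type: Schema.Types.ObjectId, required: true },  // aggregate root id, used to build read models
        data: { type: Schema.Types.Mixed },                     // free-form event payload
    },
    { timestamps: true, strict: false },
);

export const EventModel = model('Event', eventSchema);

// usage: persist a FooHappened event (inside an async context)
await EventModel.create({
    name: 'FooHappened',
    root: new Types.ObjectId(),
    data: { bar: 'baz' },
});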

Handling entity updates from a mapped object

I have my code-first SQL data models (using EF Core 1.1) that are used to model my schema/tables. However, I also have domain objects which are partial or fully mapped versions of these SQL data models; in essence they sort of have the same shape as the SQL data models.
Now I would like to know the best way to handle cascading updates when you have complex objects being altered outside of their tracked context, considering that all my domain operations take place on the domain object, not on the tracked entity.
In short, this is what I am trying to achieve:
1) Read entity from database.
2) Map entity to domain object.
3) Apply updates to domain object.
4) Map domain object back to entity.
5) Apply the database update on the mapped entity, which results in the entity and its associated related entities being updated.
By the way the entities and domain object have the typical many to one relationships that one might run into. What is the best way to go about doing this?
What is the best way to go about doing this?
I think the best way to go about this is to avoid the problem in the first place: use a framework that is flexible enough to map the domain objects directly to the database without too many compromises, so that you don't have to model an explicit persistence model in code.
in essence they sort of have the same shape as the SQL data models
If you think about it, that means you would have the same impedance mismatch between your domain model (object model) and the relational DB model as between your domain model and the explicit persistence model.
Still, there is an elegant way to perform the mapping, which Vaughn Vernon describes in Modeling Aggregates with DDD and Entity Framework. Basically, it boils down to storing state in explicit state objects, which are bags of getters/setters that are encapsulated and maintained by real domain objects. These state objects are then mapped with EF.
E.g., taken from the above-linked article:
public class Product {
    public Product(
        TenantId tenantId,
        ProductId productId,
        ProductOwnerId productOwnerId,
        string name,
        string description) {

        State = new ProductState();
        State.ProductKey = tenantId.Id + ":" + productId.Id;
        State.ProductOwnerId = productOwnerId;
        State.Name = name;
        State.Description = description;
        State.BacklogItems = new List<ProductBacklogItemState>();
    }

    internal Product(ProductState state) {
        State = state;
    }

    ...
}

What's the difference between ORM and ORP?

What is the difference between Object Relational Mapping (ORM) and Object Relational Persistence (ORP)?
From what I know, ORM is a framework for mapping relational tables to application domain objects and the relationships between them. So with ORM you would already have a persistent data structure?
ORP consists of:
Entities
Database connection
Database
Mapping (ORM)
Etc..
We could almost say they are the same thing, since ORM is a term that is used in the same way as ORP; both are used to indicate the same thing.
