I was wondering: how can we prevent a user/developer from adding unwanted nodes/relationships/properties?
What I read was that we should impose those schemas at the application level. So, how can we do that in Node.js?
Is there an example of this somewhere, or can someone post some code here?
It depends on how your application works. You can implement your own validation in the application layer, but the right approach depends on the type of application you are building.
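For example, here is a minimal sketch of application-level validation in Node.js (TypeScript); the allowed-labels map and the Person/Book shapes are just illustrative assumptions:

type PropertySpec = { required: string[]; optional: string[] };

// Whitelist of node labels and the properties allowed on each.
const allowedNodes: Record<string, PropertySpec> = {
  Person: { required: ["name"], optional: ["age"] },
  Book: { required: ["isbn"], optional: ["title"] },
};

// Throws if the label is unknown, a required property is missing,
// or an unlisted property is present. Call this before writing to Neo4j.
function validateNode(label: string, props: Record<string, unknown>): void {
  const spec = allowedNodes[label];
  if (!spec) throw new Error(`Unknown label: ${label}`);
  for (const key of spec.required) {
    if (!(key in props)) throw new Error(`${label} is missing required property: ${key}`);
  }
  for (const key of Object.keys(props)) {
    if (!spec.required.includes(key) && !spec.optional.includes(key)) {
      throw new Error(`Property not allowed on ${label}: ${key}`);
    }
  }
}

validateNode("Person", { name: "Alice", nickname: "Al" }); // throws: nickname not allowed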
A better option is to create your own unmanaged extension for Neo4j, using the Transaction Event API: http://graphaware.com/neo4j/transactions/2014/07/11/neo4j-transaction-event-api.html
GraphAware provides paid extensions for schema enforcement: http://graphaware.com/enterprise/
Neo4j supports some limited schema enforcement. Specifically:
Uniqueness constraints. These specify a node property that acts as a unique id for nodes with a given label. Transactions that attempt to violate the constraint will be rolled back. Uniqueness constraints are created in Cypher using this syntax:
CREATE CONSTRAINT ON (p:Person) ASSERT p.name IS UNIQUE
Property existence constraints. This constraint ensures that all nodes with a given label contain the specified property. Any create statement that does not specify the given property will be rolled back. Property existence constraints can be created with this syntax:
CREATE CONSTRAINT ON (book:Book) ASSERT exists(book.isbn)
Note that property existence constraints are new in Neo4j 2.3 and are only available in Neo4j Enterprise.
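On the Node.js side, a constraint violation surfaces as a driver error you can catch. A minimal sketch using the official neo4j-driver package (connection details and the error-handling shape are placeholders to verify against your driver version):

import neo4j from "neo4j-driver";

const driver = neo4j.driver("bolt://localhost:7687", neo4j.auth.basic("neo4j", "password"));

async function createPerson(name: string): Promise<void> {
  const session = driver.session();
  try {
    await session.run("CREATE (p:Person {name: $name})", { name });
  } catch (err: any) {
    // Violating a uniqueness constraint rolls the transaction back and
    // reports a schema error code on the driver side.
    if (err.code === "Neo.ClientError.Schema.ConstraintValidationFailed") {
      console.error(`A Person named "${name}" already exists`);
    } else {
      throw err;
    }
  } finally {
    await session.close();
  }
}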
I would like to know how to create a foreign key with NOT NULL in a KeystoneJS 6 schema file.
I use PostgreSQL and the Prisma ORM.
I can't create a relationship field with isRequired = true, which would mean NOT NULL.
Can someone explain how to add NOT NULL for a relationship field in a KeystoneJS 6 schema file? Or is it perhaps impossible?
Yeah, relationship fields currently don't support the validation.isRequired or db.isNullable options. This is true even when the list being configured holds the foreign key (i.e. a many-to-one relationship, or one-to-one with db.foreignKey: true).
There are plans to support these options, but the work isn't trivial. For example, these constraints can affect the order in which nested creates need to be performed. Keystone will also need to validate that the config makes sense and doesn't, for example, have isNullable: false on both sides of a one-to-one relationship (which would make inserting records impossible).
If you want to emulate similar functionality right now, it's possible using hooks (see the sketch after this list). I think you'd need...
A validate-input hook on the list with the foreign key, to ensure an item was linked on create (and not removed on update)
validate-input and validate-delete hooks on the other list, to ensure links aren't broken when updating or deleting items from that side.
Since this solution is implemented in the app layer, it doesn't give you as strong a guarantee as a proper database constraint, but it's a start.
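Here's a hedged sketch of that hook approach, for a hypothetical Post.author relationship where Post holds the foreign key (the field names, and the exact shape resolvedData takes for relationship fields, are assumptions to verify against your Keystone version):

import { list } from "@keystone-6/core";
import { allowAll } from "@keystone-6/core/access";
import { text, relationship } from "@keystone-6/core/fields";

export const Post = list({
  access: allowAll,
  fields: {
    title: text(),
    author: relationship({ ref: "Author.posts" }),
  },
  hooks: {
    validateInput: async ({ operation, resolvedData, addValidationError }) => {
      // Emulate NOT NULL: require a linked author on create...
      if (operation === "create" && !resolvedData.author) {
        addValidationError("A post must be linked to an author");
      }
      // ...and forbid unlinking it on update.
      if (operation === "update" && resolvedData.author?.disconnect) {
        addValidationError("A post cannot be unlinked from its author");
      }
    },
  },
});

You'd still need the corresponding validate-input/validate-delete hooks on the Author list to guard the other side, as described above.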
I've read the official documentation on creating 1:N and M:N relations, and there's one particular aspect that isn't covered: support for importing and exporting the relations. Since the relation is defined implicitly using the primary keys (auto-incrementing integers), won't that be a problem when exporting the data for import in another environment (as in a backup/restore scenario)? For instance, the order of items would matter during the import. Also, the internal id values won't necessarily be the same after an import into a fresh Orchard installation (since they are auto-incrementing).
What is the preferred way of implementing relations that support importing and exporting?
This is solved by using the identity feature that is provided as part of the import/export API. Instead of referring to a primary key value that is pretty much guaranteed not to be valid on the target instance, it generates a deterministic and unique id that enables proper transfer of items, including in cases of relationships. There are two identity providers out of the box: one uses the alias of the item (when one exists), and the other stores a GUID (that's the Identity part, used by widgets for example).
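Conceptually (this is a generic sketch of the idea in TypeScript, not the actual Orchard API), the export carries a stable identity per item, and relations refer to identities rather than primary keys:

// Each exported item carries a stable identity (an alias or a GUID)
// instead of its auto-incrementing primary key.
interface ExportedItem {
  identity: string; // e.g. an alias like "blog/welcome", or a GUID
  data: Record<string, unknown>;
  relatedIdentities: string[]; // relations expressed by identity, not by PK
}

// On import, new primary keys are assigned, and relations are rebuilt
// by looking up each identity in the target instance.
function resolveRelations(
  items: ExportedItem[],
  idByIdentity: Map<string, number>
): Array<{ id: number; relatedIds: number[] }> {
  return items.map((item) => ({
    id: idByIdentity.get(item.identity)!,
    relatedIds: item.relatedIdentities.map((r) => idByIdentity.get(r)!),
  }));
}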
Quotes are from DDD: Tackling Complexity in the Heart of Software (p. 150).
a)
global search access to a VALUE is often meaningless, because finding a VALUE by its properties would be equivalent to creating a new instance with those properties. There are exceptions. For example, when I am planning travel online, I sometimes save a few prospective itineraries and return later to select one to book. Those itineraries are VALUES (if there were two made up of the same flights, I would not care which was which), but they have been associated with my user name and retrieved for me intact.
I don't understand the author's reasoning as to why it would be more appropriate to make the Itinerary Value Object globally accessible, instead of having clients globally search for the Customer root entity and then traverse from it to the Itinerary object.
b)
A subset of persistent objects must be globally accessible through a search based on object attributes ... They are usually ENTITIES, sometimes VALUE OBJECTS with complex internal structure ...
Why is it more common for Value Objects with complex internal structure to be globally accessible, rather than simpler Value Objects?
c) Anyway, are there some general guidelines on how to determine whether a particular Value Object should be made globally accessible?
UPDATE:
a)
There is no domain reason to make an itinerary traverse-able through the customer entity. Why load the customer entity if it isn't needed for any behavior? Queries are usually best handled without complicating the behavioral domain.
I'm probably wrong about this, but isn't it common that when a user (i.e. the Customer root entity) logs in, the domain model retrieves that user's Customer Aggregate?
And if users have the option to book flights, then it would also be common for them to check, from time to time, the Itineraries they have selected or booked (though English isn't my first language, so the term Itinerary may actually mean something a bit different from what I think it means).
And since the Customer Aggregate has already been retrieved from the DB, why issue another global search for an Itinerary (which will probably hit the DB again) when it was already retrieved together with the Customer Aggregate?
c)
The rule is quite simple IMO - if there is a need for it. It doesn't depend on the structure of the VO itself but on whether an instance of a particular VO is needed for a use case.
But this VO instance has to be related to some entity (i.e. an Itinerary is related to a particular Customer); otherwise, as the author pointed out, instead of searching for a VO by its properties, we could simply create a new VO instance with those properties?
SECOND UPDATE:
a) From your link:
Another method for expressing relationships is with a repository.
When a relationship is expressed via a repository, do you implement a SalesOrder.LineItems property (which I doubt, since you advise against entities calling repositories directly) that in turn calls a repository, or do you implement something like SalesOrder.MyLineItems(IOrderRepository repo)? If the latter, then I assume there is no need for a SalesOrder.LineItems property?
b)
The important thing to remember is that aggregates aren't meant to be used for displaying data.
True, the domain model doesn't care what the upper layers will do with the data, but if we're not using DTOs between the Application and UI layers, then I'd assume the UI will extract the data to display from an aggregate (assuming we send the whole aggregate to the UI and not just some entity residing within it)?
Thank you
a) There is no domain reason to make an itinerary traverse-able through the customer entity. Why load the customer entity if it isn't needed for any behavior? Queries are usually best handled without complicating the behavioral domain.
b) I assume his reasoning is that complex value objects are the ones you want to query, since you can't easily recreate them. This issue, and all query-related issues, can be addressed with the read-model pattern.
c) The rule is quite simple IMO - if there is a need for it. It doesn't depend on the structure of the VO itself but on whether an instance of a particular VO is needed for a use case.
UPDATE
a) It is unlikely that a customer aggregate would have references to the customer's itineraries. The reason is that I don't see how an itinerary would be related to behaviors that would exist in the customer aggregate. It is also unnecessary to load the customer aggregate at all if all that is needed is some data to display. However, if you do load the aggregate and it does contain reference data that you need, you may as well display it. The important thing to remember is that aggregates aren't meant to be used for displaying data.
c) The relationship between customer and itinerary could be expressed by a shared ID - each itinerary would have a customerId. This would allow lookup as required (as sketched below). However, just because these two things are related, it does not mean that you need to traverse the customer to get to the related entities or value objects for viewing purposes. More generally, associations can be implemented either as direct references or via repository search. There are trade-offs either way.
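A tiny sketch of the shared-ID association (TypeScript used only for illustration; all names are made up):

interface Itinerary {
  customerId: string; // association to Customer by identifier, not object reference
  flights: string[];
}

interface ItineraryRepository {
  // Lookup as required, without traversing the Customer aggregate.
  findByCustomerId(customerId: string): Promise<Itinerary[]>;
}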
UPDATE 2
a) If implemented with a repository, there is no LineItems property - no direct references. Instead, to obtain a list of line items, a repository is called.
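For example (an illustrative sketch; SalesOrder, LineItem and OrderRepository are made-up names):

interface LineItem {
  productId: string;
  quantity: number;
}

// The aggregate holds no LineItems property and no repository reference.
class SalesOrder {
  constructor(public readonly id: string) {}
}

interface OrderRepository {
  lineItemsFor(orderId: string): Promise<LineItem[]>;
}

// A caller that needs the items asks the repository, not the aggregate.
async function printOrder(order: SalesOrder, repo: OrderRepository): Promise<void> {
  const items = await repo.lineItemsFor(order.id);
  for (const item of items) console.log(item.productId, item.quantity);
}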
b) Or you can create a DTO-like object, a read-model, which is returned directly from the repository. The repository can in turn execute a simple SQL query to get all the required data. This allows you to get at data that isn't part of the aggregate but is related to it. If an aggregate does have all the data needed for a view, then use that aggregate. But as soon as you need more data that doesn't concern the aggregate, switch to a read-model.
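As a sketch of that read-model shape (again, the names are illustrative):

// A flat DTO returned directly from a query-side repository, typically
// populated by a single SQL query joining whatever tables the view needs.
interface OrderSummaryReadModel {
  orderId: string;
  customerName: string; // data that lives outside the SalesOrder aggregate
  totalAmount: number;
}

interface OrderReadRepository {
  summaryFor(orderId: string): Promise<OrderSummaryReadModel>;
}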
I have a problem upgrading an old data model to a current model. There are a couple of layers which could be causing the problem, and I'm struggling to determine where the issue lies.
I have an abstract Client entity which contains generic relationships to phone numbers, email addresses, etc. In my old model there was a relationship where a client could own one Property (but a Property could have many owners), or a client could be a tenant in a Lease (but a Lease could have many tenants). I've now updated the model so that a Client can own many Properties and be part of many Leases.
The concrete Client entities basically add different naming information to the abstract so there are Individual, Business, Government and Import (imported from other systems) subclasses.
My expectation was that the one-to-many relationship established in the old data model would be carried over as the first entry in the new many-to-many relationship. Unfortunately, the upgraded data store doesn't appear to contain any relationships to Properties or Leases for the new concrete clients.
The old model:
Client{
Property<<-->Property.Owners
Tenancy<<-->Lease.Tenants
}
ImportClient:Client{
name:string
}
The new model:
Client{
Properties<<-->>Property.Owners
Tenancies<<-->>Lease.Tenants
}
ImportClient:Client{
name:string
}
So now for the possible problems I can see. Firstly, the relationship names have changed in the Client entity, from Property to Properties and from Tenancy to Tenancies, so I've added a mapping model. The mapping model didn't automatically include an Entity Mapping for ClientToClient (only for the concrete classes), so I've tried adding one myself. I'm not sure, however, how to set up the Value Expression, so at the moment it's:
FUNCTION($manager,"destinationInstancesForEntityMappingName:sourceInstances:","PropertyToProperty","$source.Property")
If I try to add the mapping to the concrete classes (so ImportClientToImportClient), it seems to be absolutely impossible to set the relationship values correctly (basically, the editor denies it).
So my suspicion is that it's either failing to transfer the relationships because a fetch run against the Client entity returns nothing (whenever I've tried it, this has been the case), or that I'm just not getting the Value Expression right.
Help would be greatly appreciated because at the moment this is the only issue blocking the release of my major upgrade to the app.
So here is the solution as I've found it (for those who stumble across this later...), in general terms, as I don't have time at present to detail all the code here.
Step 1: See if you can open your data store without any migration options. If that succeeds, continue on; otherwise go to Step 2.
Step 2: Retrieve the store metadata with [NSPersistentStoreCoordinator metadataForPersistentStore...]
Step 3: Load your older models one at a time, from most recent to oldest, and use [NSManagedObjectModel isConfiguration:compatibleWithStoreMetadata:] until you find a model that works
Step 4: Create a persistent store with the model that works, and then create a managed object context from the data store and that persistent store
Step 5: Cache the failing relationships in a dictionary (I used pairs of UUIDs to identify the objects)
Step 6: Perform a lightweight migration of the data store using your current managed object model
Step 7: Go through the dictionaries, fetching the pairs of objects and associating them again
Yes, you're going to have to use your own coding skills to implement this (it's about 250 lines of code in my case), but hopefully this is the seed you need to get it working...
I have an MS SQL Server database with a growing number of stored procedures and user-defined functions, and I see some need to organize the code better. My idea was to split the SPs and functions over several schemas. The default schema would hold the SPs called from the outside - the API of the database, in other words. A second schema would hold internal code that should not be called from the outside. I would probably do the same for tables: some contain "raw" data, some hold precalculated data for optimizations, ...
As I have never used schemas, I have several questions:
Does this make sense at all?
Are there any implications that I'm not aware of? For example, performance issues when an SP in schema A is using a table in schema X?
Is it possible to restrict the "outside world" to using only SPs in a certain schema? For example: user A is only allowed to call objects in schema A, but SPs in schema A are still allowed to use tables in schema B?
As this question is somewhat subjective, I have marked it as "community wiki". Hope that is ok.
yes, it makes sense
no difference in performance if all schemas have the same owner (ownership chaining)
yes, you can set permissions per schema explicitly for each client, or add some internal check
We use schemas to separate data, internal SPs, internal functions, and then SPs per client.
One advantage is that we GRANT permissions on the schema, not on individual objects - which is what I personally needed to clarify in my question before we started using them.
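As an illustration of the schema-plus-GRANT setup, here's a minimal sketch run from Node.js with the mssql package (the api/internal schema names and the app_user principal are made up; note that CREATE SCHEMA must be the only statement in its batch):

import sql from "mssql";

async function setUpSchemas(connectionString: string): Promise<void> {
  const pool = await sql.connect(connectionString);

  // Two schemas with the same owner (dbo), so ownership chaining applies:
  // procedures in "api" may use tables in "internal" without extra grants.
  await pool.request().batch(`CREATE SCHEMA api AUTHORIZATION dbo;`);
  await pool.request().batch(`CREATE SCHEMA internal AUTHORIZATION dbo;`);

  // Callers only get EXECUTE on the API schema, nothing on the internals.
  await pool.request().batch(`GRANT EXECUTE ON SCHEMA::api TO app_user;`);
  await pool.request().batch(`DENY EXECUTE ON SCHEMA::internal TO app_user;`);

  await pool.close();
}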