I am a beginner with SubSonic.
How do I use TransactionScope after generating the object classes?
Can anyone help me understand how to insert into the DB, perform conditional checks in queries, and use transactions?
When I try to run a query to read all entities of a kind in a transaction with Google Datastore, it gives me this error:
{ Error: Only ancestor queries are allowed inside transactions.
at /root/src/node_modules/grpc/src/client.js:554:15
code: 3,
metadata: Metadata { _internal_repr: {} },
So I need to use an ancestor query. How do I create an ancestor query? It appears to depend on how you structured the hierarchy in Datastore. So my next question is: given that every entity I have created in Datastore has been saved like so (the identifier is unique to the entityData saved):
const entityKey = datastore.key({ namespace: ns, path: [kind, identifier] });
datastore.save({ key: entityKey, method: 'upsert', data: entityData });
How do I read from the DB within a transaction? I think I could do it if I knew the identifiers, but the identifiers are constructed from the entityData I saved in the kind, and I need to read the entities of the kind to figure out what I have in the DB (a chicken-and-egg problem). I am hoping I am missing something.
More context
The domain of my problem involves sponsoring people. I have stored a kind people in Datastore where each entity is a person consisting of a unique identifier, name and grade. I have another kind called relationships where each entity is a relationship containing two people's identifiers, the sponsor and the sponsee (linking two people together). So I have structured it like an RDB. If I want to get a person's sponsor, I get all the relationships from the DB, loop over them to find the relationships where the person is the sponsee, then query the DB for the sponsor of each such relationship.
How do I structure it the 'Datastore' way, with entity groups/ancestors, given that I have to model people and their links/relationships?
Let's assume an RDB is out of the question.
Example scenario
Two people have to be deleted from the app/DB (say they left the company on the same day). When I delete someone, I also want to remove their relationships. The two people I delete share a relationship (one is sponsoring the other). Assume the first transaction succeeds, i.e. I delete one person and their relationship. In the next transaction, I delete the other person, then search the relationships for relevant ones and find one that has already been deleted, because of eventual consistency. I try to find the person for that relationship and they don't exist. It blows up.
Note: each transaction wraps the deletion of a person and their relationships. Multiple people means multiple transactions.
Scalability is not a concern for my application.
Your understanding is correct:
you can't use an ancestor query since your entities are not in an ancestry relationship (i.e. not in the same entity group).
you can't perform non-ancestor queries inside transactions. Note that you also can't read more than 25 of your entities inside a single transaction (each entity is in a separate entity group). From Restrictions on queries:
Queries inside transactions must be ancestor queries
Cloud Datastore transactions operate on entities belonging to up to 25 entity groups, but queries inside transactions must be ancestor queries. All queries performed within a transaction must specify an ancestor. For more information, refer to Datastore Transactions.
The typical approach in a context similar to yours is to perform queries outside transactions, often keys-only queries, to obtain the entity keys, then read the corresponding entities (up to 25 at a time) by key lookup inside transactions. And use transactions only when absolutely needed; see, for example, this related discussion: Ancestor relation in datastore.
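A minimal sketch of that pattern using the Node.js @google-cloud/datastore client; the people kind and the update logic are illustrative assumptions:

const { Datastore } = require('@google-cloud/datastore');
const datastore = new Datastore();

async function updatePeopleBatch() {
  // Keys-only query, outside any transaction: cheap, returns no entity data.
  const query = datastore.createQuery('people').select('__key__');
  const [rows] = await datastore.runQuery(query);
  const keys = rows.map((row) => row[datastore.KEY]);

  // Read and write by key inside a transaction: no query needed.
  // Note the 25-entity-group limit per transaction.
  const transaction = datastore.transaction();
  await transaction.run();
  try {
    const [people] = await transaction.get(keys.slice(0, 25));
    people.forEach((person) => {
      // ...apply whatever change is needed to each entity...
      transaction.save({ key: person[datastore.KEY], data: person });
    });
    await transaction.commit();
  } catch (err) {
    await transaction.rollback();
    throw err;
  }
}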
Your question apparently suggests you're approaching the datastore with a relational DB mindset. If your app fundamentally needs relational data (you didn't describe what you're trying to do), the datastore might not be the best product for it. See Choosing a storage option. I'm not saying that you can't use the datastore with relational data; it can still be done in many cases, but with a bit more careful design - those restrictions drive towards scalable datastore-based apps (IMHO potentially much more scalable than you can achieve with relational DBs).
There is a difference between structuring the data RDB style (which is OK with the datastore) and using it in RDB style (which is not that good).
In the particular usage scenario you mentioned you do not need to query for the sponsor of a relationship: you already have the sponsor's key in the relationship entity; all you need to do is look it up by key, which can be done in a transaction.
Getting all relationship entities for a person needs a query, filtered by the person being the sponsor or the sponsee. But does it really have to be done in a transaction? Or is it acceptable if the result list is missing a relationship created just seconds ago, or still contains one that was recently deleted? It will eventually (dis)appear in the list if you repeat the query a bit later (see Eventual Consistency on Reading an Index). If that's acceptable (IMHO it is: relationships don't change that often, and the chances of querying right after a change are rather slim), then you don't need to make the query inside a transaction, and thus you don't need an ancestry relationship between the people and relationship entities. Great for scalability.
Another consideration: looping through the list of relationship entities also doesn't necessarily have to be done in a transaction. And if the number of relationships is large, the loop can hit the request deadline. A more scalable approach is to use query cursors and split the work across multiple tasks/requests, each handling a subset of the list. See a Python example of such an approach: How to delete all the entries from google datastore?
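For illustration, one such cursor-based batch in Node.js could look like this (the kind, the sponsor property, and the batch size are assumptions; the task-queue wiring that passes the cursor along is left out, and the sponsee side would need a second query):

const { Datastore } = require('@google-cloud/datastore');

async function deleteRelationshipBatch(datastore, personId, startCursor) {
  // Keys-only query for one batch of the person's relationships.
  let query = datastore
    .createQuery('relationships')
    .filter('sponsor', '=', personId)
    .select('__key__')
    .limit(100);
  if (startCursor) {
    query = query.start(startCursor);
  }

  const [rows, info] = await datastore.runQuery(query);
  await datastore.delete(rows.map((row) => row[datastore.KEY]));

  // Hand the cursor to the next task/request instead of looping here.
  return info.moreResults !== Datastore.NO_MORE_RESULTS ? info.endCursor : null;
}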
For each person deletion case (a sketch of the first step follows this list):
add something like a being_deleted property (in a transaction) to that person to flag the deletion and prevent any use while it is in progress, such as creating a new relationship for that person. Add checks for this flag wherever needed in the app's logic (also in transactions).
get the list of all relationship keys for that person and delete them, using the looping technique mentioned above
in the last loop iteration, when there are no relationships left, enqueue another task, generously delayed, to re-check for any recent relationships that might have been missed in the previous loop execution due to eventual consistency. If any show up, re-run the loop; otherwise just delete the person.
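A sketch of the first step; the being_deleted property name comes from the suggestion above, and error handling is kept minimal:

async function markBeingDeleted(datastore, personKey) {
  const transaction = datastore.transaction();
  await transaction.run();
  try {
    const [person] = await transaction.get(personKey);
    if (!person) {
      throw new Error('person not found');
    }
    person.being_deleted = true; // checked by the rest of the app's logic
    transaction.save({ key: personKey, data: person });
    await transaction.commit();
  } catch (err) {
    await transaction.rollback();
    throw err;
  }
}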
If scalability is not a concern, you can also re-design your data structures to use ancestry between all your entities (placing them in the same entity group) and then you could do what you want. See, for example, What would be the purpose of putting all datastore entities in a single group?. But there are many potential risks to be aware of, for example:
max rate of 1 write/sec across the entire entity group (up to 500 entities each), see Datastore: Multiple writes against an entity group inside a transaction exceeds write limit?
large transactions taking too long and hitting the request deadlines, see Dealing with DeadlineExceededErrors
higher risk of contention, see Contention problems in Google App Engine
I have recently started using Cosmos DB for a project and I am running into a few design issues. Coming from a SQL background, I understand that related data should be nested within documents in a NoSQL DB. This does mean that documents can become quite large though.
Since partial updates are not supported, what is the best design pattern to implement when you want to update a single property on a document?
Should I read the entire document server side, update the value, and write the document back immediately in order to perform an update? This seems problematic if the documents are large, which they inevitably would be if all your data is nested.
If I take the approach of making many smaller documents and inferring relationships based on IDs, I think this would solve the read/update/write concern, but it feels like I am going against the concept of NoSQL and in essence building a relational DB.
Thanks
Locking and latching. That's what needs to happen if partial updates become possible. It's a difficult engineering problem to keep a <15ms write latency SLA with locking.
This seems problematic if the documents are large which they inevitably would be if all your data is nested.
Define your fear: burnt Request Units, app host memory, ingress/egress network traffic? You believe this is a problem but you're not stating concrete results. I'm not saying you're wrong or doubting the efficiency of the partial update approach, I'm just saying the argument is thin.
Usually you want to JOIN nothing in NoSQL, so I'm totally with you on the last paragraph.
Whenever you are trying to create a document, consider this:
Does that part of the document need separate access? If yes, create a referenced document; if no, create an embedded document.
If you want to know how to choose, take a look at this question; it's for MongoDB but it will help you: Embedded vs Referenced Document
Embed or reference is the most common problem I face while designing document structure in the NoSQL world.
In an embedded relationship, child entities are embedded in the parent document. In a referenced relationship, child entities live in separate documents from their parent, so you basically have two (or more) types of documents.
There is no one relationship pattern that fits all. The approach you should take depends on the retrievals and updates to be done on the data being designed.
1. Do you need to retrieve all the child entities along with the parent entity? If yes, use embedded relationships.
2. Does your use case allow entities to be retrieved individually? In that case, use the referenced pattern.
In the majority of the use cases I have worked on, I used the referenced pattern. For example: social graphs (profiles with relationship trees), proximity points (GeoJSON-based proximity search), classified listings, etc.
The referenced pattern is also easier to update and maintain, as the entities are stored in individual documents.
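To make the two patterns concrete, here are illustrative document shapes (all names and fields are made up for the sketch):

// Embedded: child entities live inside the parent document.
const orderEmbedded = {
  id: 'order-1',
  customer: { id: 'cust-1', name: 'Ada' },
  items: [{ sku: 'A1', qty: 2 }],
};

// Referenced: children are separate documents linked by id,
// so each can be retrieved and updated individually.
const order = { id: 'order-1', customerId: 'cust-1', itemIds: ['item-1'] };
const customer = { id: 'cust-1', name: 'Ada' };
const item = { id: 'item-1', sku: 'A1', qty: 2 };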
Partial Updates are now supported by Cosmos DB:
Azure Cosmos DB Partial Document Update feature (also known as Patch API) provides a convenient way to modify a document in a container. Currently, to update a document the client needs to read it, execute Optimistic Concurrency Control checks (if necessary), update the document locally and then send it over the wire as a whole document Replace API call.
Partial document update feature improves this experience significantly. The client can only send the modified properties/fields in a document without doing a full document replace operation.
Read more here: https://learn.microsoft.com/en-us/azure/cosmos-db/partial-document-update
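For example, with the @azure/cosmos Node.js SDK a patch call looks roughly like this (the database, container, id, partition key, and patched property are placeholders):

const { CosmosClient } = require('@azure/cosmos');

const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING);
const container = client.database('mydb').container('documents');

async function updateGrade(id, partitionKey, grade) {
  // Send only the changed property instead of replacing the whole document.
  await container.item(id, partitionKey).patch([
    { op: 'replace', path: '/grade', value: grade },
  ]);
}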
I am trying to get to grips with the ideas behind DDD and apply them to a pet project of ours, and I have some questions that I hope someone here can answer.
The project is a document management system. The particular problem we have concerns two notions that our system handles: that of a Document and that of a DocumentStatus.
A Document has a number of properties (such as title, author, etc.). Users can change any of the Document's properties throughout its lifetime.
A Document may, at any time, be in a particular state such as NEW, UNDER_REVISION, REVISED, APPROVED, etc. For each state we need to know who made the change to that state.
We need to be able to query the system based on a document status. An example query would be "Get me all documents that are in the REVISED state".
"Get me all documents whose status has been changed by user X"
The only time that a Document and a DocumentStatus need to be changed in the same transaction is when the Document is created (create the document and at the same time assign it a status of NEW).
For all other times, the UI allows updating either one but not both: you may change a document's properties (such as the author) but not its state, or you may change its state (from NEW to UNDER_REVISION) but not its properties.
I think we are safe to consider that a Document is an Entity and an Aggregate Root.
We are baffled about what DocumentStatus is. One option is to make it a Value Object that is part of the Document's aggregate.
The other option is to make it an Entity and be the root of its own aggregate.
We would also like to mention that we considered CQRS as described in various DDD documents, but we think it is too much of a hassle, especially given that we need to perform queries on the DocumentStatus.
Any pointers or ideas would be welcomed.
Domain
You say you need to be able to see past status changes, so the status history becomes a domain concept. A simple solution would then be the following:
Define a StatusHistory within the Document entity.
The StatusHistory is a list of StatusUpdate value objects.
The first element in the StatusHistory always reflects the current state - make sure you add the initial state as a StatusUpdate value object when creating Document entities.
Depending on how much additional logic you need for the status history, consider creating a dedicated value object (or even entity) for the history itself.
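A minimal sketch of that shape in plain JavaScript (the class, method, and property names are assumptions, not taken from any framework):

class StatusUpdate {
  constructor(status, changedBy, changedAt = new Date()) {
    this.status = status;       // e.g. 'NEW', 'UNDER_REVISION', 'REVISED'
    this.changedBy = changedBy; // who made the change to this state
    this.changedAt = changedAt;
    Object.freeze(this);        // value object: immutable
  }
}

class Document {
  constructor(id, title, author, createdBy) {
    this.id = id;
    this.title = title;
    this.author = author;
    // The first element always reflects the current state.
    this.statusHistory = [new StatusUpdate('NEW', createdBy)];
  }

  get currentStatus() {
    return this.statusHistory[0];
  }

  changeStatus(status, changedBy) {
    this.statusHistory.unshift(new StatusUpdate(status, changedBy));
  }
}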
Persistence
You don't really say what your persistence layer looks like, but I think creating queries against the first element of the StatusHistory list should be possible with every persistence mechanism. With a map-reduce data store, for example, create a view that is indexed by Document.StatusHistory[0] and use that view to realize the queries you need.
If you were only to record the current status, then that could well be a value object.
Since you're composing more qualifying - if not identifying - data into it, which you also intend to query on, that sounds to me as if no DocumentStatus is like another, so a value object doesn't make much sense, does it?
It is identified by
the document
the author
the time it occurred
Furthermore, it makes even more sense in the context of the previous DocumentStatus (if you consider more states than just NEW and UNDER_REVISION).
To me, this clearly rules out modeling DocumentStatus as a value object.
In terms of the state as a property of DocumentStatus, and following the notion that everything is an object (I am currently reading David West's Object Thinking), that could of course be modeled as a value object.
See also: How to model an entity's current status in DDD.
I was wondering: how can we prevent users/developers from adding unwanted nodes/relationships/properties?
What I have read is that we should impose those schemas at the application level. So how can we do that in Node.js?
Is there any example on this? Or can someone post some code here?
It depends on how your application works.
You can create your own validation in your application; what that looks like depends on the type of your application (see the sketch below).
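As an illustration, application-level validation in Node.js can be as simple as checking properties before they reach the database. The schema shape, connection details, and label are assumptions for this sketch, which uses the official neo4j-driver:

const neo4j = require('neo4j-driver');

// Connection details are placeholders.
const driver = neo4j.driver('bolt://localhost:7687', neo4j.auth.basic('neo4j', 'password'));

const PERSON_SCHEMA = { allowed: ['name', 'grade'], required: ['name'] };

function validatePerson(props) {
  const unknown = Object.keys(props).filter((k) => !PERSON_SCHEMA.allowed.includes(k));
  const missing = PERSON_SCHEMA.required.filter((k) => !(k in props));
  if (unknown.length > 0 || missing.length > 0) {
    throw new Error(`Invalid Person: unknown [${unknown}], missing [${missing}]`);
  }
}

async function createPerson(props) {
  validatePerson(props); // reject unwanted properties before they reach Neo4j
  const session = driver.session();
  try {
    await session.run('CREATE (p:Person $props)', { props });
  } finally {
    await session.close();
  }
}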
A better option is to create your own unmanaged extension for Neo4j. You can use the Transaction Event API for that - http://graphaware.com/neo4j/transactions/2014/07/11/neo4j-transaction-event-api.html
GraphAware provides paid extensions for schema enforcement - http://graphaware.com/enterprise/
Neo4j supports some limited schema enforcement. Specifically:
Uniqueness constraints. Specify a node property that acts as a unique id for Nodes with a given label. Transactions that attempt to violate the constraint will be rolled back. Uniqueness constraints are created in Cypher using this syntax:
CREATE CONSTRAINT ON (p:Person) ASSERT p.name IS UNIQUE
Property existence constraints. This constraint ensures that all Nodes of a given label contain the specified property. Any create statement that does not specify the given property will be rolled back. Property existence constraints can be created with this syntax:
CREATE CONSTRAINT ON (book:Book) ASSERT exists(book.isbn)
Note that property existence constraints are new in Neo4j 2.3 and are only available in Neo4j Enterprise.
I am working on a legacy project and trying to introduce CQRS in some places where it's appropriate. In order to integrate with all of the legacy which is relational I would like to project my aggregate (or part of it) into a table in the relational database.
I would also like the aggregate ID to be the auto-incremented value on that projected table. I know this seems like going against the grain, since it mixes the read model with the write model. However, I don't want to pollute the legacy schema with GUID foreign keys.
Would this be a complete no-no, and if so what would you suggest?
Edit: Maybe I could just store the GUID in the projected table; that way, when the events get projected, I can identify the row to update, but still have an auto-incremented column for joining on?
There is nothing wrong with using an id created by the infrastructure layer for your entities. This pattern is commonly used in Vaughn Vernon's 'Implementing DDD' book:
Get the next available ID from the repository.
Create an entity.
Save the entity in the repository.
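A sketch of that sequence (nextIdentity() follows Vernon's naming; the repository implementation and the aggregate class are assumed):

async function createAggregate(repository, props) {
  // 1. Get the next available id from the repository (minted by the infrastructure layer).
  const id = await repository.nextIdentity();

  // 2. Create the entity with that id. MyAggregate is a placeholder for your aggregate root.
  const aggregate = new MyAggregate(id, props);

  // 3. Save the entity in the repository.
  await repository.save(aggregate);
  return aggregate;
}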
Your problem is that you want to use an id created in another Bounded Context. That is a huge and complete no-no, not the fact that the id is created by the Infrastructure Layer.
You should create the id in your Bounded Context and use it to reference the aggregate from other Contexts (just as you wrote when you edited your question).