Change internal state of entity inside an aggregate root - domain-driven-design

I'm developing a bank system that uses event sourcing.
I have an aggregate root named BankAccount:
class BankAccount {
    private var balance: BigInteger = BigInteger.ZERO
    private val transactions = mutableListOf<Transaction>()
    private var revision: Long = 0
    // ...
}
The Transaction entity:
class Transaction(val amount: BigDecimal) {
    val id: UUID = UUID.randomUUID()
    var status: TransactionStatus = TransactionStatus.REGISTERED
    fun changeStatus(newStatus: TransactionStatus) { status = newStatus }
}
Registering a new transaction in the BankAccount aggregate root is very simple, but I also have to confirm or cancel the transaction after some asynchronous validation.
The problem starts when I need to fetch the registered transaction in order to change its state:
fun completeTransaction(transactionId: UUID, newStatus: TransactionStatus) {
    val transaction = transactions.first { it.id == transactionId }
    transaction.changeStatus(newStatus)
}
First problem: searching for the transaction could impact performance, given the number of transactions in the list.
To solve that I could implement a snapshot solution, but how do I guarantee that the snapshot's transaction list will contain the REGISTERED transaction whose status needs to change in order to complete it? I can't pass the entire list to every snapshot; that isn't performant.
I'm thinking of turning the Transaction entity into a Transaction aggregate root instead. Then my stream would be very short: use the BankAccount AR to validate the balance and a Transaction AR to validate the transaction state.
SOLVED
I changed the implementation of the completeTransaction function: every time a transaction is completed I remove it from the list. Now I can always save the transaction list in the snapshot, because it will only ever contain uncompleted transactions.

Related

Proper Sequelize flow to avoid duplicate rows?

I am using Sequelize in my Node.js server. I am ending up with validation errors because my code tries to write the record twice, instead of creating it once and then updating it since it's already in the DB (PostgreSQL).
This is the flow I use when the request runs:
const latitude = req.body.latitude;
var metrics = await models.user_car_metrics.findOne({ where: { user_id: userId, car_id: carId } })
if (metrics) {
    metrics.latitude = latitude;
    .....
} else {
    metrics = models.user_car_metrics.build({
        user_id: userId,
        car_id: carId,
        latitude: latitude
        ....
    });
}
var savedMetrics = await metrics.save();
return res.status(201).json(savedMetrics);
At times, if the client calls the endpoint twice or more in quick succession, the code above tries to save two new rows in user_car_metrics with the same user_id and car_id, both FKs on the user and car tables.
I have a constraint:
ALTER TABLE user_car_metrics DROP CONSTRAINT IF EXISTS user_id_car_id_unique, ADD CONSTRAINT user_id_car_id_unique UNIQUE (car_id, user_id);
Point is, there can only be one entry for a given user_id and car_id pair.
Because of that, I started seeing validation issues. After looking into it and adding logs, I realized the code above adds duplicates to the table (without the constraint). With the constraint in place, I get validation errors when the code tries to insert the duplicate record.
The question is, how do I avoid this problem? How do I structure the code so that it won't try to create duplicate records? Is there a way to serialize this?
If you have a unique constraint, then you can use upsert to either insert or update the record, depending on whether a record already exists with the same primary key value or with column values covered by the unique constraint.
await models.user_car_metrics.upsert({
    user_id: userId,
    car_id: carId,
    latitude: latitude
    ....
})
See the upsert documentation:
PostgreSQL - Implemented with ON CONFLICT DO UPDATE. If update data contains PK field, then PK is selected as the default conflict key. Otherwise, first unique constraint/index will be selected, which can satisfy conflict key requirements.
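Put together, the whole endpoint body can collapse into a single call. A minimal sketch, assuming an Express-style handler, the models object from the question, that userId and carId arrive in the request body, and Sequelize v6, where upsert resolves to a [record, created] pair:

import { Request, Response } from "express";

declare const models: any; // the question's Sequelize models, assumed in scope

async function saveMetrics(req: Request, res: Response) {
    const { userId, carId, latitude } = req.body;
    // On PostgreSQL this runs as one INSERT ... ON CONFLICT DO UPDATE, so two
    // concurrent requests cannot both insert a row for the same (user_id, car_id).
    const [savedMetrics] = await models.user_car_metrics.upsert({
        user_id: userId,
        car_id: carId,
        latitude: latitude,
    });
    return res.status(201).json(savedMetrics);
}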

Dealing with race conditions and starvation when generating unique IDs using MongoDB + NodeJS

I am using MongoDB to generate unique IDs of this format:
{ID TYPE}{ZONE}{ALPHABET}{YY}{XXXXX}
Here ID TYPE will be a letter from {U, E, V} depending on the input, ZONE will be from the set {N, S, E, W}, YY will be the last two digits of the current year, and XXXXX will be a 5-digit number beginning from 0 (padded with 0s to make it 5 digits long). When XXXXX reaches 99999, the ALPHABET part will be incremented to the next letter (starting from A).
I will receive ID TYPE and ZONE as input and will have to give the generated unique ID as output. Every time I have to generate a new ID, I will read the last one generated for the given ID TYPE and ZONE, increment the number part by 1 (XXXXX + 1), then save the newly generated ID in MongoDB and return it to the user.
This code will run on a single Node.js server, and there can be multiple clients calling this method.
Is there a possibility of a race condition like the one described below if I am only running a single server instance?
First client reads last generated ID as USA2100000
Second client reads last generated ID as USA2100000
First client generates the new ID and saves it as USA2100001
Second client generates the new ID and saves it as USA2100001
Since two clients have generated IDs, the DB should finally have USA2100002.
To overcome this, I am using MongoDB transactions. My code in Typescript using Mongoose as ODM is something like this:
const session = await startSession();
session.startTransaction();
const lastIdDoc = await GeneratedId.findOne({ key: idKeyStr }, "value").session(session);
const lastId = createNextId(lastIdDoc.value);
const newIdObj: any = {
    key: `Type:${idPrefix}_Zone:${zone_letter}`,
    value: lastId,
};
await GeneratedId.findOneAndUpdate({ key: idKeyStr }, newIdObj, {
    upsert: true,
    new: true,
    session,
});
await session.commitTransaction();
session.endSession();
I want to know what exactly will happen when the situation I described above happens with this code.
Will the second client's transaction throw an exception that I have to abort or retry in my code, or will the retry be handled automatically?
How do MongoDB or other DBs handle transactions? Does MongoDB lock the documents involved in a transaction? Are the locks exclusive (won't they even allow other clients to read)?
If the same client keeps failing to commit its transaction, that client would be starved. How do I deal with this starvation?
You are using MongoDB to store the ID; that's state. Generation of the ID is a function. MongoDB would be generating the ID only if the mongod process took the function's arguments and returned the generated ID. That's not what you are doing: you are generating the ID in Node.js.
The number of threads, or rather event loops, is critical, as it defines the architecture, but either way you don't need transactions. Transactions in MongoDB are called "multi-document transactions" precisely to highlight that they are intended for consistent updates of several documents at once. The very first paragraph of https://docs.mongodb.com/manual/core/transactions/ warns you that if you update a single document there is no room for transactions.
A single-threaded application does not require any synchronisation. You can reliably read the latest generated ID on start and guarantee that the ID is unique within the Node.js process. If you exclude MongoDB and other I/O from the generation function, you make it synchronous, so you can maintain the state of the ID within the Node.js process and guarantee its uniqueness. Once generated, you can persist it in the db asynchronously. In the worst-case scenario you may have a gap in the sequential numbers, but no duplicates.
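For illustration, a rough sketch of that single-process approach, assuming the question's GeneratedId model and createNextId helper (declared here as stand-ins):

import { Model } from "mongoose";

declare const GeneratedId: Model<any>;               // the question's model
declare function createNextId(last: string): string; // the question's helper

let lastId: string; // loaded once at startup, e.g. from GeneratedId

function nextId(idKeyStr: string): string {
    // Synchronous: there is no await between reading and updating lastId,
    // so two concurrent requests on the same event loop can never see the
    // same value.
    lastId = createNextId(lastId);
    // Persist asynchronously; a crash here can leave a gap in the sequence,
    // but never a duplicate.
    GeneratedId.updateOne({ key: idKeyStr }, { value: lastId }).catch(console.error);
    return lastId;
}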
If there is the slightest chance that you may need to scale up to more than one Node.js process to handle more simultaneous requests, or to add another host for redundancy in the future, you will need to synchronise generation of the ID, and you can employ MongoDB unique indexes for that. The function itself doesn't change much: you still generate the ID as in the single-threaded architecture, but you add an extra step to save the ID to Mongo. The document should have a unique index on the ID field, so in the case of concurrent updates one of the queries will successfully add the document and the other will fail with "E11000 duplicate key error". You catch such errors on the Node.js side and run the function again, picking the next number.
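A minimal sketch of that retry loop, assuming each issued ID is saved as its own document with a unique index on its value field, and reusing the question's createNextId helper as a stand-in:

import { Model } from "mongoose";

declare const IssuedId: Model<any>;                         // documents like { key, value }, unique index on value
declare function createNextId(last: string | null): string; // the question's helper

async function generateUniqueId(idKeyStr: string): Promise<string> {
    for (;;) {
        // Read the highest ID issued so far for this type/zone.
        const last = await IssuedId.findOne({ key: idKeyStr }).sort({ value: -1 });
        const candidate = createNextId(last ? last.value : null);
        try {
            // The unique index guarantees that only one concurrent insert
            // of the same candidate can succeed.
            await IssuedId.create({ key: idKeyStr, value: candidate });
            return candidate;
        } catch (err: any) {
            if (err.code !== 11000) throw err; // rethrow anything that isn't a duplicate-key error
            // E11000 duplicate key: another process won the race; read again and retry.
        }
    }
}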
This is what you can try. You need to store only one document in the GeneratedId collection. This document will have the last generated id's value. The document must have a known _id field; for example, let's say it is an integer with the value 1. So, the document can be like this:
{ _id: 1, lastGeneratedId: "<some value>" }
In your application, you can use the findOneAndUpdate() method with the filter { _id: 1 }, which means you are targeting a single document update. This update will be an atomic operation; as per the MongoDB documentation, "All write operations in MongoDB are atomic on the level of a single document." Do you need a transaction in this case? No. The update operation is atomic and performs better than using a transaction. See Update Documents - Atomicity.
Then, how do you generate the new id and retrieve it?
I will receive ID TYPE and ZONE...
Using the above input values and the existing lastGeneratedId value, you can arrive at the new value and update the document with it. The new value can be calculated / formatted within the Aggregation Pipeline of the update operation - you can use Updates with Aggregation Pipeline (available with MongoDB v4.2 or higher).
Note the findOneAndUpdate() method returns the updated (or modified) document when you use the update option new: true. This returned document will have the newly generated lastGeneratedId value.
The update method can look like this (using NodeJS driver or even Mongoose):
const filter = { _id: 1 }
const update = [
    { $set: { lastGeneratedId: { /* your calculation of the new value goes here */ } } }
]
const options = { new: true, projection: { _id: 0, lastGeneratedId: 1 } }
const updatedDoc = await GeneratedId.findOneAndUpdate(filter, update, options)
const newId = updatedDoc['lastGeneratedId']
Note about the JavaScript function:
With MongoDB v4.4 you can use JavaScript functions within an Aggregation Pipeline, and this is applicable to Updates with Aggregation Pipeline. For details, see the $function aggregation pipeline operator.
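For instance, a hedged sketch of an update pipeline that bumps only the numeric XXXXX part with $function; the alphabet rollover from the question is omitted for brevity. This array would replace the update in the snippet above:

const update = [
    {
        $set: {
            lastGeneratedId: {
                $function: {
                    lang: "js",
                    args: ["$lastGeneratedId"],
                    // Runs server-side (MongoDB v4.4+): split off the trailing
                    // five digits, increment them, and pad back to five characters.
                    body: `function (last) {
                        const prefix = last.slice(0, -5);
                        const next = (parseInt(last.slice(-5), 10) + 1) % 100000;
                        return prefix + String(next).padStart(5, "0");
                    }`,
                },
            },
        },
    },
];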

Google Datastore can't update an entity

I'm having issues retrieving an entity from Google Datastore. Here's my code:
async function pushTaskIdToCurrentSession(taskId) {
    console.log(`Attempting to add ${taskId} to current Session: ${cloudDataStoreCurrentSession}`);
    const transaction = datastore.transaction();
    const taskKey = datastore.key(['Session', cloudDataStoreCurrentSession]);
    try {
        await transaction.run();
        const [task] = await transaction.get(taskKey);
        let sessionTasks = task.session_tasks;
        sessionTasks.push(taskId);
        task.session_tasks = sessionTasks;
        transaction.save({
            key: taskKey,
            data: task,
        });
        await transaction.commit();
        console.log(`Task ${taskId} added to current Session successfully.`);
    } catch (err) {
        console.error('ERROR:', err);
        await transaction.rollback();
    }
}
taskId is a string id of another entity that I want to store in an array of a property called session_tasks.
But it doesn't get that far. After this line:
const [task] = await transaction.get(taskKey);
The error is that task is undefined:
ERROR: TypeError: Cannot read property 'session_tasks' of undefined
at pushTaskIdToCurrentSession
Anything immediately obvious from this code?
UPDATE:
Using this instead:
const task = await transaction.get(taskKey).catch(console.error);
Gets me a task object, but it seems to be creating a new entity in Datastore:
I also get this error:
(node:19936) UnhandledPromiseRejectionWarning: Error: Unsupported field value, undefined, was provided.
at Object.encodeValue (/Users/.../node_modules/@google-cloud/datastore/build/src/entity.js:387:15)
This suggests the array is unsupported?
The issue here is that Datastore supports two kinds of IDs.
IDs that start with name= are custom IDs, and they are treated as strings.
IDs that start with id= are numeric auto-generated IDs and are treated as integers.
When you tried to update the value in Datastore, cloudDataStoreCurrentSession was treated as a string. Since Datastore couldn't find an existing entity key with that custom name, it created the entity and added name= to indicate that it is a custom name. So you have to pass cloudDataStoreCurrentSession as an integer to save the data properly.
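For example (a one-line sketch, assuming cloudDataStoreCurrentSession holds the numeric auto-generated ID), datastore.int() makes the value be treated as an integer ID rather than a custom name:

const taskKey = datastore.key(['Session', datastore.int(cloudDataStoreCurrentSession)]);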
If I understand correctly, you are trying to load an array list of strings from Datastore, using a specific entity kind and entity key. Then you add one more task and update the value in Datastore for that specific entity kind and entity key.
I have created the same scenario as yours and done a little bit of coding myself. In this GitHub code you will find my example that does the following:
Goes to Datastore Entity Kind Session.
Retrieves all the data from Entity Key id=5639456635748352 (e.g.).
Gets the array list from key: session_tasks.
Adds the new task that passed from the function's arguments.
Performs the transaction to Datastore and updates the values.
All steps are logged in the code and there are a lot of comments explaining exactly how the code works. There are also two examples of currentSessionID: one for custom names and one for automatically generated IDs. You can test the code to understand how it is used and modify it according to your needs; a condensed sketch of the same flow is shown below.
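A hedged sketch of those steps, adapted from the question's function and assuming a numeric auto-generated session ID and a session_tasks array that may still be missing on the entity:

import { Datastore } from "@google-cloud/datastore";

const datastore = new Datastore();
declare const cloudDataStoreCurrentSession: number; // numeric ID, as discussed above

async function pushTaskIdToCurrentSession(taskId: string) {
    const transaction = datastore.transaction();
    // datastore.int() targets the auto-generated numeric ID instead of a custom name.
    const taskKey = datastore.key(['Session', datastore.int(cloudDataStoreCurrentSession)]);
    try {
        await transaction.run();
        const [session] = await transaction.get(taskKey);  // read the entity
        const sessionTasks = session.session_tasks || [];  // get the array, defaulting when missing
        sessionTasks.push(taskId);                         // add the new task
        session.session_tasks = sessionTasks;
        transaction.save({ key: taskKey, data: session }); // queue the update
        await transaction.commit();                        // apply it atomically
    } catch (err) {
        await transaction.rollback();
        throw err;
    }
}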

How to get asset history in Hyperledger Composer using the asset id?

I have an asset that I have created in Hyperledger Composer, and I want to get the transaction history of the asset using the asset id.
One of the workarounds suggested in a similar question is to emit an event from every transaction that makes changes to the asset, and then query the historian records based on the events emitted.
This is what I mean:
// transaction that is going to make changes to the asset
transaction ModifyAsset {
    o String assetId
}
// event
event ModifyAssetEvent {
    o Asset asset
    o String assetId
}
// queries the historian record
query searchProductHistory {
    description: "search product by serial number"
    statement:
        SELECT org.hyperledger.composer.system.HistorianRecord
            WHERE (eventsEmitted[0].assetId == $assetId)
}
This would have been ideal, but unfortunately Hyperledger Composer does not allow such a query.
Any other solution that I can use to achieve my goal will be highly appreciated.
Thanks in advance.
Maybe a query that returns only the transactions with the specific asset id would work.
Something like:
let q1 = businessNetworkConnection.buildQuery(
    'SELECT org.hyperledger.composer.system.HistorianRecord '
    + 'WHERE (transactionId == assetId)'
);
My bad, but maybe something like
transaction.transactionInvoked.eventsEmitted.asset == assetName
would work.
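If the query language won't accept either form, a client-side filter over the historian records is a possible fallback. A rough sketch, assuming an existing businessNetworkConnection and the assetId field emitted by ModifyAssetEvent above:

// Fetch all historian records and keep those whose emitted events
// reference the asset; eventsEmitted may be empty, hence the guard.
const historian = await businessNetworkConnection.getHistorian();
const records = await historian.getAll();
const assetHistory = records.filter(r =>
    (r.eventsEmitted || []).some(e => e.assetId === assetId));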

How do I find the history of transactions for an Asset in a blockchain implemented using hyperledger-composer?

I'm working on the latest rev of hyperledger-composer (v0.13) and have built a network with multiple roles, each of which can invoke selected transactions within the blockchain. I would now like to query the blockchain (Historian?) for all transactions which have been executed against a specific Order (a defined type of asset).
I've used two different approaches to pulling Historian data: one through direct API access, historian.getAll(), and the other through a defined query:
query getHistorianRecords {
description: "get all Historian records"
statement: SELECT org.hyperledger.composer.system.HistorianRecord
}
Both approaches succeed in that they return all transactions within the system, for example:
ValidatedResource {
'$modelManager': ModelManager { modelFiles: [Object] },
'$namespace': 'org.hyperledger.composer.system',
'$type': 'HistorianRecord',
'$identifier': '0c3274475fed3703692bb7344453ab0910686905451b41d5d08ff1b032732aa1',
'$validator': ResourceValidator { options: {} },
transactionId: '0c3274475fed3703692bb7344453ab0910686905451b41d5d08ff1b032732aa1',
transactionType: 'org.acme.Z2BTestNetwork.CreateOrder',
transactionInvoked:
Relationship {
'$modelManager': [Object],
'$namespace': 'org.acme.Z2BTestNetwork',
'$type': 'CreateOrder',
'$identifier': '0c3274475fed3703692bb7344453ab0910686905451b41d5d08ff1b032732aa1',
'$class': 'Relationship' },
eventsEmitted: [],
transactionTimestamp: 2017-09-22T19:32:48.182Z }
What I can't find, and need, is a way to query the history of transactions against a single Order. An Order is defined (partial listing) as follows:
asset Order identified by orderNumber {
    o String orderNumber
    o String[] items
    o String status
    ...
    o String approved
    o String paid
    --> Provider provider
    --> Shipper shipper
    --> Buyer buyer
    --> Seller seller
    --> FinanceCo financeCo
What I'm looking for is a mechanism that will allow me to query the blockchain to get every record pertaining to an Order with orderNumber = '009'. I can, and have, easily found the current state of Order #009, but am now looking for the history of transactions against that order. How do I tell Historian, or another service in the hyperledger-composer system, to give me that information?
What you are trying to do makes total sense; however, the Historian doesn't yet support it. This requirement is being tracked here:
https://github.com/hyperledger/composer/issues/991
The plan is to add metadata to the HistorianRecord to capture the IDs of the assets and participants that were impacted by the transaction, along with the operation performed (delete, update, create, read?).
Once that is in place you will be able to query for HistorianRecords that reference a given asset/participant id.
My workaround is to have the transaction emit the results as an event, and to have a query that then fetches the transaction by id from the HistorianRecords, which will contain the results previously emitted as the event.
I've posted the detailed answer here:
https://github.com/hyperledger/composer/issues/2458#issuecomment-383377837
A good workaround would be to write a script function that updates the asset and emits the results as an event, then filter for it from the REST endpoint in the transactions tab.
One workaround is to emit an event from your transaction, with the asset as a field of the event. Please experiment with the following code.
let historian = await businessNetworkConnection.getHistorian();
let historianRecords = await historian.getAll();
for (let i = 0; i < historianRecords.length; i++) {
    console.log(historianRecords[i].transactionId + " -----> " + historianRecords[i].eventsEmitted[0].YOUR_ASSET_NAME);
}
