Check and process expiring assets using chaincode - hyperledger-fabric

I created assets with an expiry date as one of the attributes. If an asset is expiring 1 month from now, chaincode must check certain criteria to determine whether to extend the expiry date or not. A criterion could be the financial value of the asset. If the value is more than 1M, chaincode will extend the expiry date to 1 year from now. If it is less than 1M, chaincode must notify the client app.
Is it possible to have the chaincode check every day for all assets that are expiring 1 month from now and perform the actions I mentioned above? Or do I need to use some library to make it work?

Smart contracts don't have any kind of scheduled job capability built in. They just respond to transaction proposals. You could:
Wait until an asset is accessed by a transaction function before making the decision on what its expiry status should be.
Have a client application run on a schedule (perhaps as a cron job), which invokes a house-keeping transaction function on the smart contract to make changes based on the expiry date and value of assets (sketched below).
If querying for assets that may need managing is impractical in a house-keeping transaction function on the smart contract, you could maintain an off-chain data store of assets that can be more easily queried, and have your scheduled house-keeping application use that information to decide which assets need updating by calling appropriate smart contract transaction functions.
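If you go with the scheduled house-keeping option, the transaction function might look roughly like the sketch below (TypeScript with fabric-contract-api). The asset shape, the key range, and the name processExpiringAssets are assumptions for illustration; the >1M / one-year rule comes from the question. Note that the cron client passes in the reference date, because calling new Date() inside chaincode would give different results on different endorsers.

```typescript
import { Context, Contract } from 'fabric-contract-api';

// Shape of the assets from the question; field names here are assumptions.
interface Asset {
    id: string;
    value: number;      // financial value of the asset
    expiryDate: string; // ISO-8601 date string
}

export class AssetContract extends Contract {

    // House-keeping transaction, invoked daily by a scheduled client application.
    public async processExpiringAssets(ctx: Context, todayIso: string): Promise<string> {
        const today = new Date(todayIso);
        const oneMonthAhead = new Date(today);
        oneMonthAhead.setMonth(oneMonthAhead.getMonth() + 1);

        const needsNotification: string[] = [];

        // Iterate over all assets; a real contract would use a narrower key range
        // or a rich query instead of scanning the whole namespace.
        for await (const kv of ctx.stub.getStateByRange('', '')) {
            const asset: Asset = JSON.parse(Buffer.from(kv.value).toString('utf8'));
            if (new Date(asset.expiryDate) > oneMonthAhead) {
                continue; // not expiring within a month
            }
            if (asset.value > 1_000_000) {
                // Extend the expiry date to one year from "now".
                const oneYearAhead = new Date(today);
                oneYearAhead.setFullYear(oneYearAhead.getFullYear() + 1);
                asset.expiryDate = oneYearAhead.toISOString();
                await ctx.stub.putState(asset.id, Buffer.from(JSON.stringify(asset)));
            } else {
                needsNotification.push(asset.id);
            }
        }

        // Chaincode event lets listening client applications notify users.
        ctx.stub.setEvent('AssetsExpiringSoon', Buffer.from(JSON.stringify(needsNotification)));
        return JSON.stringify(needsNotification);
    }
}
```

The scheduled client would submit processExpiringAssets once a day and listen for the AssetsExpiringSoon event to notify users about assets that were not extended.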

Related

Handling Race condition in CQRS/ES with read-side

I am building an app for managing a health clinic.
We found a race condition when an appointment is scheduled, and so far none of the team members has reached a solution.
When an appointment is scheduled, some business rules need to be verified:
cannot be scheduled to the same time as another with the same doctor or same patient
doctors can only attend N appointments in the month
in a week, doctors can only attend N appointments
The first approach we thought of was to create an aggregate that holds all appointments and is responsible for scheduling them, but this aggregate would be huge and is technically not acceptable.
The second approach, and the current one, is to make Appointment an Aggregate Root and then validate it using a domain service (interface in the domain layer, implementation in the infra layer), which queries the read side.
Today it looks like this:
Inside the command handler, instantiate a new Appointment, passing a domain service to its constructor.
The Appointment calls the domain service, which queries the read side and validates the rules. However, race conditions can occur here: two appointments being scheduled at the same time do not see each other, so both will be created.
If the domain service validates the rules, the Appointment is created, but with status PENDING, and a domain event AppointmentRequested is fired.
On the read side, this event is subscribed to and a projection is inserted into the read db (status = PENDING). In the same transaction, a CompleteAppointmentSchedule command is inserted into my outbox and is soon sent and received asynchronously by the write side.
The write side handles the command by calling appointment.CompleteSchedule(domainService). The same domain service passed when instantiating a new appointment is passed to the appointment again. But now the appointment will already be in the read db, so it will be possible to check the business rules.
Is it correct to use the read side this way? We cannot think of another way to check these rules without using the read side. A team member suggested that we could create a private read side for our write side and use it instead of the read side in these cases, but, as we use EventStoreDB, we would have to create another database like the one we use on the read side (pgsql) to do it that way with this private read side.
I am building an app for managing a health clinic.
Reserve an office, get the entire team together, and watch Trench Talk: Evolving a Model. Yves Reynhout has been doing (and talking about) domain driven design, and his domain is appointment scheduling for healthcare.
When an appointment is scheduled, some business rules need to be verified:
cannot be scheduled to the same time as another with the same doctor or same patient
doctors can only attend N appointments in the month
in a week, doctors can only attend N appointments
One of the things you are going to need to discuss with your domain experts: do you need to prevent scheduling conflicts, or do you need to identify scheduling conflicts and resolve them?
Recommended reading:
Race Conditions Don't Exist - Udi Dahan, 2010
Memories, Guesses, and Apologies - Pat Helland, 2007
That said, you are really close to a common answer.
You make your checks against a cached copy of the calendar, to avoid the most common collisions (note that there is still a race condition, when you are checking the schedule at the same time somebody else is trying to cancel the conflicting appointment). You then put an appointment request message onto a queue.
Subscribing to the queue is a Service-as-in-SOA, which is the technical authority for all information related to scheduling. That service has its own database, and checks its own authoritative copy of everything before committing a change.
The critical difference here is that the service works directly with locked instances of the data. That might be because the event handler in the service is the only process that has write permission on the authoritative data (and itself handles only one message at a time), or it might be because the event handler locks all of the data necessary to ensure that the result of the write is still consistent with the business rules (conflicting writes compete for the same lock, which ensures that data changes are controlled).
In effect, all attempts to change the authoritative calendar data are (logically) serialized, to ensure that the writes cannot conflict with each other.
In the language of CQRS, all of this locking happens in the write model of the calendar service. Everybody else works from unlocked copies of the data, which are provided by the read model (with some modest plumbing involved in copying data changes from the write model to the read model).
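A minimal sketch of that single-writer/locking idea, assuming an invented CalendarStore abstraction (every name here is illustrative, not a prescribed design); the confirm/reject calls would emit the events that feed the read model:

```typescript
interface AppointmentRequest {
    appointmentId: string;
    doctorId: string;
    patientId: string;
    start: Date;
    end: Date;
}

interface CalendarStore {
    // Runs fn while holding a lock (or a serializable transaction) on this doctor's calendar.
    withDoctorLock<T>(doctorId: string, fn: () => Promise<T>): Promise<T>;
    countOverlapping(req: AppointmentRequest): Promise<number>;            // same doctor or same patient
    countForDoctorInMonth(doctorId: string, month: Date): Promise<number>;
    confirm(req: AppointmentRequest): Promise<void>;                        // emits AppointmentConfirmed
    reject(appointmentId: string, reason: string): Promise<void>;           // emits AppointmentRejected
}

const MAX_PER_MONTH = 100; // the "N" from the business rule; the value is an assumption

// Queue consumer inside the scheduling service. Because it is the only writer
// and checks the authoritative store under a lock, conflicting requests
// cannot both succeed.
async function onAppointmentRequested(req: AppointmentRequest, store: CalendarStore): Promise<void> {
    await store.withDoctorLock(req.doctorId, async () => {
        if (await store.countOverlapping(req) > 0) {
            return store.reject(req.appointmentId, 'slot already taken');
        }
        if (await store.countForDoctorInMonth(req.doctorId, req.start) >= MAX_PER_MONTH) {
            return store.reject(req.appointmentId, 'monthly quota reached');
        }
        return store.confirm(req);
    });
}
```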

How to model Betting/Accounting BoundedContexts when betting relies heavily on account balance?

Let's say you have an application where you can create a bet on a coin toss. Your account has a balance that was funded with your credit card.
The sequence of events is the following:
POST /coin_toss_bets { amount: 5 USD }
Start transaction/acquire locks inside the Bet subdomain useCase
Does the user have enough balance? (check the accounting aggregate's balance projection of the user's deposits)
Debit the user's account for the 5 USD amount
Create bet/flip the coin to get a result
Payout the user if they bet on the correct side
Commit transaction
UI layer is given the bet and displays an animation
My question is how this can be modeled with 2 separate BoundedContexts (betting/accounting). It's said that database transactions should not cross a BoundedContext since they can be located on different machines/microservices, but in this scenario, the use case of creating a bet relies heavily on a non-dirty read of the user's projected account balance (strong consistency).
There is also no way to perform a compensating action if the account is over-debited, since the UI layer requires that the bet be created atomically.
Is there any way to do this with CQRS/Event Sourcing that doesn't require asking for the user's account balance inside the betting subdomain? Or would you always have to ensure that the balance projection is correct inside this transaction (i.e. they must be deployed together)?
Ensuring that the account has sufficient balance for a transaction seems to be an invariant business rule in your case. So let us assume that it cannot be violated.
Then the question is simply about how to handle "transactions" that span across boundary contexts.
DDD does say that transactions (invariant boundaries) should not cross a Bounded Context (BC). The rule applies even at the level of aggregates. But the correct way to read it is as a transaction within a single request.
The best way to deal with this scenario is to simply accept the request from UI to place a bet and return a "202 Accepted" status message, along with a unique job tracker ID. The only database interaction during request processing should be to persist the data into a "Jobs" table and probably trigger a "BET_PLACED" domain event.
You would then process the Bet asynchronously. Yes, the processing would still involve calling the Accounts bounded context, but through its published API. Since you are not in the context of a request anymore, the processing time need not fit into usual constraints.
Once the processing is completed, either the UI refreshes the page at regular intervals and updates the user, or you can send a push notification to the browser.
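A rough sketch of that flow, with an assumed jobs table and outbox (all names here are invented; the Accounting BC is only reached through its published API, asynchronously):

```typescript
import { randomUUID } from 'crypto';

interface BetRequest { userId: string; amountUsd: number; side: 'heads' | 'tails'; }

// Minimal interfaces so the sketch type-checks; real implementations would differ.
interface JobsDb {
    insertJob(j: { jobId: string; payload: BetRequest; state: string }): Promise<void>;
    insertOutbox(e: { type: string; jobId: string; payload: BetRequest }): Promise<void>;
    getJob(jobId: string): Promise<{ payload: BetRequest }>;
    updateJob(jobId: string, patch: object): Promise<void>;
}
interface AccountsApi { tryDebit(userId: string, amountUsd: number): Promise<{ ok: boolean }>; }
interface BettingService { flipCoinAndSettle(req: BetRequest): Promise<object>; }

// 1. Request handler: persist a job, raise BET_PLACED, return 202 with a tracker id.
async function postCoinTossBet(req: BetRequest, db: JobsDb): Promise<{ status: 202; jobId: string }> {
    const jobId = randomUUID();
    await db.insertJob({ jobId, payload: req, state: 'PENDING' });      // single local write
    await db.insertOutbox({ type: 'BET_PLACED', jobId, payload: req }); // same transaction in practice
    return { status: 202, jobId };
}

// 2. Asynchronous processor: talks to the Accounting BC through its published API.
async function processBetPlaced(jobId: string, db: JobsDb, accounts: AccountsApi, bets: BettingService) {
    const job = await db.getJob(jobId);
    // Accounting enforces the "sufficient balance" invariant inside its own boundary.
    const debit = await accounts.tryDebit(job.payload.userId, job.payload.amountUsd);
    if (!debit.ok) {
        await db.updateJob(jobId, { state: 'REJECTED', reason: 'insufficient funds' });
        return;
    }
    const result = await bets.flipCoinAndSettle(job.payload); // payout via another accounts call if the bet wins
    await db.updateJob(jobId, { state: 'COMPLETED', result });
}
```

The UI would then poll a status endpoint for that job id (or receive a push notification) and play the animation once the job reaches COMPLETED.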

What's the strategy for keeping in sync with plaid transactions?

I need to keep in sync with transactions on accounts for a set of items. To me that means:
1. Do an initial download of all historical transactions.
2. Get new transactions when available.
3. Make sure no transactions are dropped on the floor.
It's not clear from the documentation and the API how this can be accomplished reliably.
The create API has a webhook parameter, so it seems I should have a webhook set up immediately on getting transactions. If I don't, have I missed out on all the transactions forever?
Can I pull all transactions via the API alone? I noticed the options have an offset. Is that for a cursor? Can I ask for transactions from way back in the past to trigger a redownload of transactions?
What if a webhook drops a batch of transactions? How can I tell? How can I redownload the missing transactions?
And I remember reading somewhere in the docs that account IDs and transaction IDs are associated with an ACCESS_TOKEN. Does this mean that the account IDs and transaction IDs can't be used to identify data uniquely across tokens?
Plaid states that they can fetch transaction data for up to the past two years. However, the amount of historic transactions provided by banks varies from bank to bank. I've seen some banks provide data for the past three months, whereas some return data for the last two years. I've also seen some banks not support returning any transaction data.
As for the webhook, please note that the amount of time it takes to retrieve historic data after connecting an account varies. That’s where a webhook is useful as you can be notified when data is available for fetching.
Plaid returns only 500 transactions per call (I think). So, you are responsible for pagination while retrieving historic data.
You can always retrieve historic data, but you will only be able to get the past two years at most. It's a moving window: with every day that passes, you lose access to the oldest day of that window. I've generally cached data on our side, as you will not be able to access data older than two years.
If I recall correctly, each institution that is connected has a unique access token. You can use the account ID to uniquely identify transactions, but you might have to store the relations in your database, as the returned data doesn't include that.
Hope that helps.
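A hedged sketch of that pagination loop with the plaid Node client (exact request/response field names can vary between client versions, and Plaid's newer /transactions/sync cursor-based endpoint is an alternative to offset paging):

```typescript
import { PlaidApi } from 'plaid';

async function downloadAllTransactions(client: PlaidApi, accessToken: string) {
    const endDate = new Date();
    const startDate = new Date(endDate);
    startDate.setFullYear(startDate.getFullYear() - 2); // oldest data Plaid can return (the moving window)

    const all: unknown[] = [];
    const count = 500;   // max page size mentioned above
    let offset = 0;
    let total = Infinity;

    while (offset < total) {
        const resp = await client.transactionsGet({
            access_token: accessToken,
            start_date: startDate.toISOString().slice(0, 10),
            end_date: endDate.toISOString().slice(0, 10),
            options: { count, offset },
        });
        total = resp.data.total_transactions;
        all.push(...resp.data.transactions);
        offset += resp.data.transactions.length;
    }
    // Cache these on your side; data older than the two-year window can't be re-fetched later.
    return all;
}
```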

Stripe - single-use cards and delayed charge

Our site currently asks customers to provide their CC data every time they place an order. A few days later we charge the card, very often with a different amount than the one shown during checkout, because of merging orders, changing the shipment method, and so on.
We'd like to migrate to Stripe, but we'd like to keep the current behaviour. A few questions arose while we read the Stripe docs:
1) Stripe.js allows creating a Source without a customer. This seems ideal for us, but the following text from the Stripe docs caught our attention:
A card source must be used within a few minutes of its creation as CVC information is only available for a short amount of time. Card sources do not expire, but using them after a delay can result in a charge request that is performed without CVC information. The consequences of this can be higher decline rates and increased risk of fraud.
So we shouldn't charge the Source a few days later? If so, isn't it also a problem for Sources attached to customers, because their CVCs are forgotten as well?
2) To keep things simple, we'd prefer to avoid creating customers and managing their sources in Stripe. Stripe has the concept of single-use sources that can be charged without attaching them to customers, but the docs say that the amount is required when they are created. We don't know the exact amount during checkout, so could we put in a bigger number as an upper limit of what we'll charge in the future and provide the exact amount when charging the source?
So you're correct, the CVC is lost anyway, but this is not the point that Stripe is trying to make. When you attach the Source to a customer, Stripe will run a $0 or $1 authorization on the card with the bank to make sure it's valid. This lets them catch expired or invalid cards immediately. It also lets them send the CVC to the bank, since they have it at that point, to catch an invalid CVC, which is important to combat fraud.
If you were to create a Source and not charge it for a few days, that would technically work. The risk is that when you do charge it, the bank won't see the CVC and won't be able to tell you if it was correct or not, increasing the risk of fraud.
To avoid this, Stripe recommends that you save the Source on a Customer. This is just one extra API request which lets you catch invalid cards immediately. Then, days later, when you create the charge on that customer/source, the bank might still decline it (insufficient funds for example) but you would decrease the risk of charging a bad card and catch issues immediately on tokenization. It's better for you as a business but also better for the customer in case they mistyped their CVC or expiration date, etc.
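A sketch of that recommended flow with the Node library, using the Sources/Charges APIs the question refers to (newer Stripe integrations would use PaymentMethods with SetupIntents/PaymentIntents instead; treat this as illustrative rather than exact):

```typescript
import Stripe from 'stripe';

// Older library versions also require an apiVersion option in this constructor.
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

// At checkout: attach the source to a customer right away, so the $0/$1
// verification runs while the CVC is still available.
async function saveCardAtCheckout(sourceId: string, email: string): Promise<string> {
    const customer = await stripe.customers.create({ email, source: sourceId });
    return customer.id; // store this alongside the order
}

// Days later, once the final amount is known: charge the saved customer's default source.
async function chargeOrder(customerId: string, amountCents: number) {
    return stripe.charges.create({
        amount: amountCents, // the exact amount is only needed now, at charge time
        currency: 'usd',
        customer: customerId,
    });
}
```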

How to prevent multiple charge when using Authorize.net CIM running on cron

I'm looking for the programming approaches required to charge payments wisely using a cron job.
The website requires recurring payments to be collected using Authorize.net. To remain PCI DSS compliant, credit card info is not stored locally; the site uses the Authorize.net CIM integration, where customers enter their CC info on the hosted page. Therefore, ARB subscriptions can't be used, since they require CC info to be sent from the website.
As far as I know there is no way to create subscriptions using CIM, so cron jobs are required to charge the customer when needed.
I've already integrated CIM however following are my concerns when charging customers using cron job:
Prevent multiple charges so that the customer is not billed for the same subscription twice. (My approach: I have a table which holds payments to be collected on a daily basis and a status field [varchar: paid/unpaid]. The status changes to paid if the charge is successful.)
Server breakdown: if the server remains down, no payments will be collected. (My approach: the cron job runs for all payments from the past 3 days with unpaid status.)
Security measures: what sort of security measures are required?
Any precautionary measures which I might be missing.
ARB through CIM is a work in progress, so you would have to initiate the charge manually.
As for payments, there are hundreds of ways to implement them and thousands of ways things can go wrong. PCI DSS 3.0 has over 200 checklist items to take care of. Basic tasks to note could be:
Use a proper lock mechanism to prevent race conditions
Log everything you are allowed to log
Maintain all the logs and other data on a server not accessible through the web, and review the read/write requests on it
Put request rate limits and raise warnings/block depending on activity
Patch all your servers' OS and software, even the ones not being used or used for something else
Close unnecessary open ports, and uninstall unnecessary software
Limit access to very few people and access through secured systems only
Follow rigorous code review and testing procedures
Check your SSL certificate and use strong crypto algorithms
Check against OWASP top 10
Properly hash and salt user passwords, and put limits on reset requests, shipping address changes, etc.
Keep an eye on updates to the software you are using
Monitor the logs for penetration requests and improve security as required
There are many more depending on your setup and requirements.
On your side, you can have a field (which you could pass as the order number) of the form subscriptionNumber.dueDate or customerNumber.dueDate (meaning if your subscriptionNumber is 123 and the due date is 5/24/2015, it would be "123.05-24-2015").
Before sending to AuthNet, make sure this field is unique. If AuthNet processes the charge but you don't get the response, you can still have an issue.
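A sketch of that idempotency idea combined with the locking item from the list above. The database helpers and the chargeCustomerProfile() stand-in for the actual CIM charge call are assumptions; the key points are the unique order number and doing the status transition under a lock:

```typescript
interface DuePayment { subscriptionId: number; dueDate: string; amountCents: number; profileId: string; }

interface Db { transaction<T>(fn: (tx: Tx) => Promise<T>): Promise<T>; }
interface Tx {
    // SELECT ... FOR UPDATE on the unique orderNumber; returns null if already paid or locked elsewhere.
    lockUnpaidPayment(orderNumber: string): Promise<object | null>;
    markCharging(orderNumber: string): Promise<void>;
    markResult(orderNumber: string, status: 'paid' | 'failed', txnId: string): Promise<void>;
}

// Stand-in for the real Authorize.net CIM charge request.
declare function chargeCustomerProfile(profileId: string, amountCents: number, orderNumber: string):
    Promise<{ approved: boolean; transactionId: string }>;

async function runBillingCron(db: Db, payments: DuePayment[]): Promise<void> {
    for (const p of payments) {
        const orderNumber = `${p.subscriptionId}.${p.dueDate}`; // e.g. "123.05-24-2015", unique per due date

        await db.transaction(async (tx) => {
            const row = await tx.lockUnpaidPayment(orderNumber);
            if (!row) return; // already paid, or another worker is handling it

            // Record the attempt before calling AuthNet, so a lost response can be reconciled later.
            await tx.markCharging(orderNumber);
            const result = await chargeCustomerProfile(p.profileId, p.amountCents, orderNumber);
            await tx.markResult(orderNumber, result.approved ? 'paid' : 'failed', result.transactionId);
        });
    }
}
```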
