How to check when/whether an offer on the XRP Ledger's DEX has been accepted/fully consumed? [XRP] [XRPL] - ripple

I've been testing out interacting with the XRP Ledger, coding in Python. I started writing some basic queries about the state of the XRP Ledger from scratch, based on the XRPL documentation (available here), but then discovered XRPL-PY (I've reviewed the GitHub repo here and the documentation here) and have since been predominantly using XRPL-PY to interact with the XRP Ledger, given its general ease of use. I've been able to accomplish most of the basic interactions with the XRP Ledger one might want, including creating a wallet and submitting an offer to exchange one currency for another on the XRP Ledger (what I'll call an "Offer").

However, one thing I have been unable to figure out, whether by using XRPL-PY or by interacting with the XRP Ledger directly, is how to determine when/whether a previously submitted Offer has been fully consumed (i.e., one or more other transactions were submitted to the XRP Ledger that accepted my Offer, so that it is no longer outstanding and the offered currencies were exchanged at the offered rate). This seems like a basic query that most people interacting with the XRP Ledger would want to make, but I haven't seen anything in the XRPL or XRPL-PY documentation explaining how to do it.

My preference would be to subscribe to updates from the XRP Ledger so that it notifies me once my Offer has been partially or fully consumed, but if that's not possible I would at least like to be able to repeatedly query the status of my Offer and know what will change in the response once it has been partially or fully consumed. Any suggestions would be greatly appreciated.

It looks like you had your question answered on another channel, so I will include the answer here in case people find it useful later. Cheers!
You have three options:
1. Poll account_offers. You can use the offer's sequence number (the "seq" field of each entry) to identify it.
2. Poll book_offers. You can use the "Account" and "Sequence" fields to identify the offer.
3. Subscribe over WebSocket to the "accounts" stream for the account you want to track. This is the most "complex" option because you have to parse the metadata of every validated transaction that involves that account, but it is the best one because it is asynchronous and pushed to you over the WebSocket connection.
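For reference, here is a rough Python sketch of options 1 and 3 using xrpl-py, assuming a Testnet endpoint and that you recorded the Sequence of your OfferCreate transaction when you submitted it. The endpoints, address, and sequence number are placeholders, and the response handling is illustrative; check the xrpl-py docs for the current API.

```python
from xrpl.clients import JsonRpcClient, WebsocketClient
from xrpl.models.requests import AccountOffers, Subscribe

JSON_RPC_URL = "https://s.altnet.rippletest.net:51234"  # Testnet; swap in your own endpoint
WS_URL = "wss://s.altnet.rippletest.net:51233"

MY_ADDRESS = "rEXAMPLEaddressxxxxxxxxxxxxxxxxxxx"       # your account (placeholder)
OFFER_SEQUENCE = 12345                                  # Sequence of your OfferCreate (placeholder)

# Option 1: poll account_offers and look for your offer's sequence number.
def offer_still_open(client: JsonRpcClient, account: str, offer_sequence: int) -> bool:
    """True while the offer is still (at least partially) on the books."""
    response = client.request(AccountOffers(account=account))
    offers = response.result.get("offers", [])
    # Each entry's "seq" is the Sequence of the OfferCreate that placed it.
    # If the entry disappears, the offer was fully consumed or cancelled; if its
    # taker_gets/taker_pays amounts shrink, it has been partially consumed.
    return any(offer.get("seq") == offer_sequence for offer in offers)

# Option 3: subscribe to the account over WebSocket and inspect transaction metadata.
def watch_account(address: str) -> None:
    with WebsocketClient(WS_URL) as ws:
        ws.send(Subscribe(accounts=[address]))
        for message in ws:
            # Every validated transaction touching the account arrives here.
            # ModifiedNode/DeletedNode entries of LedgerEntryType "Offer" in the
            # metadata tell you when an offer was partially or fully consumed.
            for node in message.get("meta", {}).get("AffectedNodes", []):
                if "DeletedNode" in node and node["DeletedNode"].get("LedgerEntryType") == "Offer":
                    print("An Offer ledger object was removed:", node["DeletedNode"])

if __name__ == "__main__":
    client = JsonRpcClient(JSON_RPC_URL)
    print("Offer still open?", offer_still_open(client, MY_ADDRESS, OFFER_SEQUENCE))
```

Option 2 works the same way from the order-book side: request book_offers for your currency pair and match entries on their "Account" and "Sequence" fields.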

Related

Is it possible to fine-tune access control in Hyperledger Fabric private data collections further than restricting entire organizations?

In the Hyperledger Fabric docs, while reading about private data collections, I came across this sentence regarding memberOnlyRead:
Utilize a value of false if you would like to encode more granular access control within individual chaincode functions.
If I understand this correctly, this allows me to write specifications into the smart contract that limit access to, e.g., specific clients of one organization instead of all peers of the member organizations.
If that is so, I am curious as to how this can be done in the contract. Is there a specific way to handle access control or is it at my own discretion to write code that will enforce it? If you can provide me with any examples it would be very helpful.
To clarify what I mean: I come from Ethereum, and what I am essentially asking is whether there is something like the require function in Solidity, or whether I would just use a simple if.
Thanks for any help. If you close this question as being on the wrong site, please point me to the right place, as I have not been able to find anywhere more relevant.
You didn't understand it correctly.
Setting this value (memberOnlyRead) to true means that if a client sends a proposal to a peer, and the client is not in the collection, then even if the peer is in the collection and has access to the data, it will refuse with an error automatically, no matter what the smart contract says.
If it's false, the peer won't enforce that check, and you then have more freedom to code whatever access control logic you want for the clients.

DDD/Event sourcing, getting data from another microservice?

I wonder if you can help. I am writing an order system and currently have implemented an order microservice which takes care of placing an order. I am using DDD with event sourcing and CQRS.
The order service itself takes in commands that produce events, and the order service listens to its own events to create a read model (the idea here is to use CQRS, so commands for writes and queries for reads).
After implementing the above, I ran into a problem, and it's probably just that I am not fully understanding the correct way of doing this.
An order actually has dependents, meaning an order needs a customer and one or more products. So I will have two additional microservices, for customers and for products.
To keep things simple, I would like to concentrate on the customer (although I have exactly the same issue with products, my thinking is that if I fix the customer issue then the product issue is automatically fixed as well).
So back to the problem at hand. To create an order, the order needs a customer (and products). I currently have the customerId on the client, so when sending a command down to the order service I can pass in the customerId.
I would like to save the name and address of the customer with the order. How do I get the name and address for that customerId from the Customer service in the Order service?
To summarize: when one service needs data from another service, how am I able to get that data?
Would it be a case of the order service raising an event for receiving a customer record? That is going to introduce a lot of complexity (more events) in the system.
The microservices are NOT coupled, so the order service can't just call into the read model of the customer service.
Is anybody able to help me with this?
If you are using DDD, first of all, please read about bounded contexts. Forget microservices; they are just an implementation strategy.
Now back to your problem. Publish these events from the Customer aggregate (in your case, the Customer microservice): CustomerRegistered, CustomerInfoUpdated, CustomerAccountRemoved, CustomerAddressChanged, etc. Then subscribe your Order service (again, in your case, the application service inside the Order microservice) to listen to those events. Okay, not all of them, just the ones the Order service needs.
Now you may have a question: what if some or even most of my customers never place orders? My Order service will be full of unnecessary data. Is this a good approach?
Well, the answer may vary. I would say disk space is cheaper than memory, and from a performance perspective a database query is faster than a network call. If your database host (or your server) is that limited, then you should not go with microservices in the first place. Moreover, you could even build business ideas on this "unused" customer data, e.g. list all customers who never ordered anything and send them offers to grow the business. Just kidding. Don't feel bothered by unused data in microservices.
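Below is a minimal, framework-free Python sketch of this pattern: the Order service keeps its own denormalized copy of customer data by handling the Customer service's events, so placing an order never has to call across to the Customer service. The event and function names (CustomerRegistered, handle_customer_event, place_order) are illustrative, not taken from any particular library.

```python
from dataclasses import dataclass

# Events published by the Customer service (names are illustrative).
@dataclass
class CustomerRegistered:
    customer_id: str
    name: str
    address: str

@dataclass
class CustomerAddressChanged:
    customer_id: str
    address: str

# Local, denormalized copy owned by the Order service.
customer_lookup: dict[str, dict] = {}

def handle_customer_event(event) -> None:
    """Subscribe this handler to the Customer service's event stream."""
    if isinstance(event, CustomerRegistered):
        customer_lookup[event.customer_id] = {"name": event.name, "address": event.address}
    elif isinstance(event, CustomerAddressChanged):
        customer_lookup[event.customer_id]["address"] = event.address

def place_order(customer_id: str, product_ids: list[str]) -> dict:
    """No network call to the Customer service: the data was already copied here."""
    customer = customer_lookup[customer_id]
    return {
        "customer_id": customer_id,
        "customer_name": customer["name"],
        "shipping_address": customer["address"],
        "products": product_ids,
    }

# Usage: the handler runs whenever the broker delivers a customer event.
handle_customer_event(CustomerRegistered("c-1", "Ada Lovelace", "12 Analytical Way"))
print(place_order("c-1", ["p-99"]))
```

The same approach covers the product data: subscribe to the Product service's events and keep whatever denormalized slice (id, description, price) the order line needs.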
My suggestion would be to gather the required data on the front-end and pass it along. The relevant customer details that you want to denormalize into the order would be a value object. The same goes for the product data (e.g. id, description) related to the order line.
It isn't impossible to have the systems interact to retrieve data, but that couples them at a lower level than seems necessary.
When data from one service needs data from another service, how am I able to get this data?
You copy it.
So somewhere in your design there needs to be a message that carries the data from where it is to where it needs to be.
That could mean that the order service is subscribing to events that are published by the customer service, and storing a copy of the information that it needs. Or it could be that the order service queries some API that has direct access to the data stored by the customer service.
Queries for the additional data that you need could be synchronous or asynchronous - maybe the work can be deferred until you have all of the data you need.
Another possibility is that you redesign your system so that the business capability you need is with the data, either moving the capability or moving the data. Why does ordering need customer data? Can the customer service do the work instead? Should ordering own the data?
There's a certain amount of complexity that is inherent in your decision to distribute the work across multiple services. The decision to distribute your system involves weighing various trade-offs.

TP in lib/script.js vs. assetRegistry from composer-client to update an asset?

I would like to know: in order to update an asset,
when should I write a Transaction in lib/script.js
vs.
when should I use composer-client code with bizNetworkConnection.getAssetRegistry?
I see that I can't use event emission in the latter case. Is there any other reason why I should prefer one over the other?
Please help me understand.
The important thing about writing a Transaction is that it becomes part of the agreed smart contract: the creation of one or more assets or participants in the same transaction, together with the associated logic, is what has been agreed. The Transaction is a class and can have a specific ACL rule associated with it (also in the smart contract), whereas if you use composer-client you would add individual assets or participants using a generic system transaction such as AddAsset or AddParticipant.
So writing your code in a Transaction gives you a 'better' blockchain app, with a stronger smart contract and improved security.

Can SagePay's callback be validated to prevent hacking?

SagePay's form callback can be hacked by re-using the success URL that the user is directed to upon a successful transaction. This can create all sorts of problems with duplicate transactions, fake transactions etc.
You can check for a duplicate VPSTxId, but these can be generated anew by hacking around the crypt parameter of the callback URL.
The crypt parameter can also be manipulated to generate a different "Amount" field.
I have not tested what other field values can be changed by hacking the callback URL crypt parameter.
Is there any way (as per PayPal's IPN validation) of doing a double-check callback to SagePay to ensure that the transaction is new and unique?
Thanks for your post. In general we encourage clients to use Server integration where they can. We also constantly monitor transactions for suspicious behaviour and proactively contact our customers if we suspect any malicious activity.
We recommend customers make sure that they’re using the latest version of our integration protocol which is currently v3. Get the latest integration documents.
As Dan suggests you could use the Reporting and Admin API to validate that a transaction does indeed exist on the Sage Pay side but having an additional validation mechanism (like PayPal's IPN) is something we will actively explore.
If you'd like us to update you on this, then please get in contact with our customer services team at support#sagepay.com or 0845 111 44 55.
Sage Pay Support
You should always redirect the user away from the success URL once it has done its work.
I personally use a fulfil page (the success URL) and a thank-you page. On the fulfil page, you should obviously only ever process a transaction once (based on the transaction id), and you can store the crypt sent with each transaction. The crypt has to be valid, and it can only be produced by someone who has the encryption key.
So hacking it would be extremely difficult unless you are being very lax about security; the attacker would have to know your encryption key to even begin trying.
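As a rough illustration of the "only process a transaction once" point, here is a minimal Python sketch that records each VPSTxId (together with the crypt it arrived with) and refuses to fulfil the same transaction twice. The table name and helper are illustrative and would need adapting to your own storage layer.

```python
import sqlite3

db = sqlite3.connect("payments.db")
db.execute("CREATE TABLE IF NOT EXISTS fulfilled (vps_tx_id TEXT PRIMARY KEY, crypt TEXT)")

def fulfil_once(vps_tx_id: str, crypt: str) -> bool:
    """Return True only the first time this VPSTxId is seen."""
    try:
        with db:  # the PRIMARY KEY makes the insert fail on a replayed callback
            db.execute(
                "INSERT INTO fulfilled (vps_tx_id, crypt) VALUES (?, ?)",
                (vps_tx_id, crypt),
            )
    except sqlite3.IntegrityError:
        return False  # duplicate/replayed callback: do not ship the goods again
    # ...mark the order as paid, send the confirmation email, etc...
    return True
```

Call fulfil_once() on the fulfil page before doing any order processing, and redirect straight to the thank-you page when it returns False.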
Alternatively, you should use the server integration, so that the communications are server-server, not client-server. There is little difference between form and server.
10 immutable laws of security
http://technet.microsoft.com/library/cc722487.aspx

Domain queries in CQRS

We are trying out CQRS. We have a validation situation where a CustomerService (domain service) needs to know whether or not a Customer exists. Customers are unique by their email address. Our Customer repository (a generic repository) only has Get(id) and Add(customer). How should the CustomerService find out if the Customer exists?
Take a look at this blog post: Set based validation in the CQRS Architecture.
It addresses this very issue. It's a complex issue to deal with in CQRS. What Bjarte is suggesting is to query the reporting database for existing Customer e-mail addresses and issue a Compensating Command (such as CustomerEmailAddressIsNotUniqueCompensatingCommand) back to the Domain Model if an e-mail address was found. You can then fire off appropriate events, which may include an UndoCustomerCreationEvent.
Read through the comments on the above blog post for alternative ideas.
Adam D. suggests in a comment that validation is a domain concern. As a result, you can store ReservedEmailAddresses in a service that facilitates customer creation and is hydrated by events in your event store.
I'm not sure there's an easy solution to this problem that feels completely clean. Let me know what you come up with!
Good luck!
This issue does not have to be that complex:
Check your reporting store for customer uniqueness before submitting the UpdateCustomer command.
Add a constraint to your DB for uniqueness on the email address. When executing the command, handle the exception and send a notification to the user over a reply channel (hence never firing CustomerUpdated events to the reporting store).
Use the database for what it's good for and don't get hung up on ORM limitations.
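Here is a minimal Python sketch of the second suggestion, assuming a relational store with a unique index on the email column. The publish/notify helpers are stand-ins for whatever event bus and reply channel you actually use.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id TEXT PRIMARY KEY, email TEXT UNIQUE NOT NULL)")

def publish(event: dict) -> None:        # stand-in for your event bus
    print("published:", event)

def notify_user(message: dict) -> None:  # stand-in for the reply channel
    print("reply:", message)

def handle_register_customer(customer_id: str, email: str) -> None:
    try:
        with db:
            db.execute(
                "INSERT INTO customers (id, email) VALUES (?, ?)",
                (customer_id, email),
            )
    except sqlite3.IntegrityError:
        # Uniqueness violated: reply to the caller, never publish the event.
        notify_user({"error": "email_not_unique", "email": email})
        return
    publish({"type": "CustomerRegistered", "id": customer_id, "email": email})

handle_register_customer("c1", "a@example.com")
handle_register_customer("c2", "a@example.com")  # takes the reply-channel path
```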
This post by Udi Dahan http://www.udidahan.com/2009/12/09/clarified-cqrs/ contains the following paragraph:
"Also, we shouldn’t need to access the query store to process commands – any state that is needed should be managed by the autonomous component – that’s part of the meaning of autonomy."
I believe Udi suggested simply adding a unique constraint to the database.
But if you don't feel like doing that, then based on the statement above I would suggest just adding a "ByEmail" method to the repository and being done with it - but then again, Udi would probably have a better suggestion.
I hope I am not too late... but we faced a similar situation in our project. We intercept the command executor and attach to it the set of rules created for that command, and those rules in turn use queries to fetch the data they need.
So in this case we can have a class named CustomerEmailMustBeUniqueRule, which is fetched by the RuleEngine when the RegisterCustomerCommand is about to be executed by the RegisterCustomerCommandExecutor. This rule class has the responsibility of querying the database to find out whether the email address already exists, and it stops the execution by raising an invalid flag.
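Here is a small Python sketch of that pattern. The class names mirror the ones in the answer (CustomerEmailMustBeUniqueRule, RuleEngine, RegisterCustomerCommandExecutor), but the implementation details are illustrative rather than taken from any specific framework.

```python
from dataclasses import dataclass

@dataclass
class RegisterCustomerCommand:
    customer_id: str
    email: str

class CustomerEmailMustBeUniqueRule:
    def __init__(self, query_existing_emails):
        self._query = query_existing_emails  # read-side query, injected

    def is_valid(self, command: RegisterCustomerCommand) -> bool:
        return command.email.lower() not in self._query()

class RuleEngine:
    def __init__(self, rules_by_command: dict):
        self._rules = rules_by_command

    def check(self, command) -> bool:
        return all(rule.is_valid(command) for rule in self._rules.get(type(command), []))

class RegisterCustomerCommandExecutor:
    def __init__(self, rule_engine: RuleEngine):
        self._rule_engine = rule_engine

    def execute(self, command: RegisterCustomerCommand) -> None:
        if not self._rule_engine.check(command):
            raise ValueError("email address is already registered")
        # ...apply the command to the Customer aggregate and persist its events...

# Wiring it up with a stand-in query against the read store:
existing_emails = {"a@example.com"}
engine = RuleEngine({RegisterCustomerCommand: [CustomerEmailMustBeUniqueRule(lambda: existing_emails)]})
RegisterCustomerCommandExecutor(engine).execute(RegisterCustomerCommand("c1", "b@example.com"))
```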
