How to Best Set Up an Escrow Wallet/Account in NEAR Protocol? - rust

Objective: We are building a dual flow system on NEAR. The flow is something like:
Client -> Escrow Wallet -----true---> Beneficiary
Client -> Escrow Wallet -----false--> Client
I was wondering if there is a standard procedure for this, because hard-coding a wallet address to act as the escrow wallet does not sound very reasonable or safe. So please let me know what might be a better way to do this.

You can implement your desired custodian/escrow behavior in a smart contract's logic. I'm not sure what you mean by hard-coding an account, but this escrow contract's logic would remain unchanged once you have deployed it to the network; as such, you can rely on it as much as you can rely on the network for your application's logic.
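To make the shape of that contract logic concrete, here is a minimal sketch in plain Rust. NEAR contracts are Rust compiled to Wasm, but the near-sdk attributes and the actual token-transfer promises are omitted here; the names Escrow, resolve, EscrowState, and the .near account IDs are illustrative, not a NEAR API.

```rust
// Simplified escrow state machine. In a real NEAR contract this struct
// would be the contract state (annotated with near-sdk macros) and the
// payouts would be actual cross-contract transfer calls.
#[derive(Debug, PartialEq)]
pub enum EscrowState {
    Funded,   // client has deposited, waiting for the outcome
    PaidOut,  // condition was true: funds went to the beneficiary
    Refunded, // condition was false: funds went back to the client
}

pub struct Escrow {
    pub client: String,
    pub beneficiary: String,
    pub amount: u128,
    pub state: EscrowState,
}

impl Escrow {
    pub fn new(client: &str, beneficiary: &str, amount: u128) -> Self {
        Escrow {
            client: client.to_string(),
            beneficiary: beneficiary.to_string(),
            amount,
            state: EscrowState::Funded,
        }
    }

    /// Resolve the escrow: `true` pays the beneficiary, `false` refunds
    /// the client. Returns the account that receives the funds.
    pub fn resolve(&mut self, condition_met: bool) -> &str {
        assert_eq!(self.state, EscrowState::Funded, "already resolved");
        if condition_met {
            self.state = EscrowState::PaidOut;
            &self.beneficiary
        } else {
            self.state = EscrowState::Refunded;
            &self.client
        }
    }
}
```

The point is that the routing decision (beneficiary vs. client) lives in immutable, deployed contract code, not in an address someone could swap out.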

There are many ways.
One is creating a wallet for the specific escrow service and then deleting it at the end of the transaction:
function one - Bob asks for an escrow service; the contract is deployed and the tokens are sent in the same transaction
function two - Alice sends her part; the contract sends the tokens from Bob to Alice and from Alice to Bob, closes the contract, and sends any remaining funds to the master contract
The second and easier way:
function one - send 5 N from Bob to the smart contract address
function two - accept from Alice AND, in the same function, send the other part
Of course this needs more logic and information than your question provides, and a task may be something that cannot be represented on the blockchain, so it will always require honesty from one party.
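The two-function flow above can be sketched as a plain-Rust state machine (this is not the NEAR SDK; Swap, deposit, and accept are illustrative names, and the actual token transfers are elided):

```rust
// Two-function escrow swap: Bob deposits first, then Alice's accept()
// completes both legs in a single call, so neither side can back out
// halfway through the exchange.
pub struct Swap {
    pub bob: String,
    pub bob_deposit: u128,
    pub completed: bool,
}

impl Swap {
    /// function one: Bob sends his tokens to the contract.
    pub fn deposit(bob: &str, amount: u128) -> Self {
        Swap {
            bob: bob.to_string(),
            bob_deposit: amount,
            completed: false,
        }
    }

    /// function two: Alice sends her part and, in the same call, both
    /// transfers happen. Returns the (to Alice, to Bob) payout amounts.
    pub fn accept(&mut self, alice_amount: u128) -> (u128, u128) {
        assert!(!self.completed, "swap already completed");
        self.completed = true;
        (self.bob_deposit, alice_amount)
    }
}
```

Doing both legs inside one function call is what makes the exchange atomic: either the whole swap happens or none of it does.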

Related

solidity re-entrancy attack explanation

Hey, I have a question about re-entrancy. I get the logic, but how does the attacker manage to withdraw again before the balance is set to 0, taking into account transaction time and the fact that the two statements in A and B are on consecutive lines?
re-entrancy
Let's say we have two contracts: a target contract, which holds some ether belonging to the attacker contract, and the attacker contract, which wants to withdraw its money.
The attacker contract has two functions, fallback and attack:
// this gets triggered whenever ether is received
fallback() external payable {
    A.withdraw();
}

function attack() external {
    A.withdraw();
}
Let's say the attacker contract calls its attack function. This calls withdraw inside the target contract:
target contract:
function withdraw() external {
    require(balances[msg.sender] > 0);
    // sending the ether hands control to the attacker's contract,
    // whose fallback triggers another withdraw before the balance
    // is updated on the line below
    msg.sender.call{value: balances[msg.sender]}("");
    balances[msg.sender] = 0;
}
Now the target contract sends the ether to the attacker's contract. When it does, the fallback function inside the attacker's contract is triggered; fallback gets executed every time ether is received. I explained fallback functions in detail.
Now the attacker's contract has received ether and immediately calls withdraw inside the target contract again, so the target contract's withdraw function loops until the target contract's balance reaches 0.
If a contract uses call, send, or transfer, control flow may pass to the attacker contract, because call forwards enough gas for the fallback function to run (send and transfer forward only a small 2300-gas stipend, which usually makes re-entering impractical). Once control has passed to the attacker's contract, the target contract's state update is still incomplete: the target contract has lost control.
To prevent re-entrancy attacks, update state before making any external call (the checks-effects-interactions pattern), or use a re-entrancy guard.
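The ordering fix can be modeled in plain Rust (this is not Solidity; Bank, withdraw, and the send callback are illustrative). The balance is zeroed before the external call, so even a receiver that re-enters withdraw finds nothing left to take:

```rust
use std::cell::RefCell;
use std::collections::HashMap;

// Checks-effects-interactions, modeled in plain Rust. `send` stands in
// for the ether transfer and is a callback, so a caller can simulate a
// malicious re-entering receiver.
pub struct Bank {
    balances: RefCell<HashMap<String, u64>>,
}

impl Bank {
    pub fn new() -> Self {
        Bank { balances: RefCell::new(HashMap::new()) }
    }

    pub fn deposit(&self, who: &str, amount: u64) {
        *self.balances.borrow_mut().entry(who.to_string()).or_insert(0) += amount;
    }

    pub fn balance(&self, who: &str) -> u64 {
        *self.balances.borrow().get(who).unwrap_or(&0)
    }

    /// Safe withdraw: check, then update state, then interact.
    pub fn withdraw(&self, who: &str, send: &dyn Fn(&Bank, u64)) -> u64 {
        let amount = self.balance(who);                         // check
        if amount == 0 {
            return 0;
        }
        self.balances.borrow_mut().insert(who.to_string(), 0);  // effect
        send(self, amount);                                     // interaction last
        amount
    }
}
```

Even when the send callback calls withdraw again, the re-entrant call hits the zero-balance check and returns 0, so the attacker gets paid exactly once.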

How to send messages between instances of a smart contract in Hyperledger Fabric?

In a Hyperledger Fabric network, I would like the different instances of a smart contract on different peers to be able to communicate through messages (say, for example, a message containing a text string).
Then I would like that the instances of the smart contract receiving the message to be able to invoke a smart contract method based on the message content (like in a switch/case control flow statement) or send its own message to the network.
Example:
We have a network made of several organizations. Each organization has a copy of the distributed ledger and an instance of a smart contract up and running.
Let's say that a smart contract can read the ledger at a specific index and trigger an event message when it reads the ledger. The event message could, for example, contain the name of the reader and the time of the read.
Then when another instance of the smart contract receives the message, it could either send another message to all the peers or invoke a smart contract method.
I would appreciate it if anyone has a solution for this use case, but any ideas, thoughts, or pointers would also be highly appreciated!
This feature has already been proposed in the past, and I implemented a prototype of it here.
From a high-level point of view, it works like this: a smart contract can send a message to the same smart contract running the same transaction on another peer. It does so by handing the message to its own peer and asking it to route it to a specific peer. That peer sends the message through the native Fabric communication infrastructure (the same one used for disseminating blocks); the remote peer then forwards the message to the chaincode, and inside the chaincode it is routed to the right transaction.
If you want, you can roll your own fork of Fabric and cherry-pick the commits, or just use this one, but note that this fork is from two years ago, so none of the bug fixes and security fixes from those two years exist there.

Testing Stripe transfer API webhook with real data

I am using Stripe with the separate charges and transfers flow. The way this works is that my platform receives the full payment minus the Stripe fees, and then I make a Transfer to the seller's connected account, which is then paid out to their bank. I set up a webhook to run on the "transfer.paid" event, so I can update some book-keeping records in my platform's database when the money is transferred to the connected account.

I wish to test this endpoint so that I can see whether my event handler behaves as expected. However, it seems that the webhook testing available through the Stripe Dashboard sends only dummy data, or only populates a few items of the request body with data from the last transaction made in the account. It seems the only way to receive real data is to allow the event to trigger by itself. In my case, though, the transfers are taking up to seven days to complete, which means I have to send one and wait a whole week to see the result, which is really slowing down my development time.

This seems really inefficient, unless there is something fundamental that I am not understanding about webhooks. Does anyone have any idea how I can test my webhook endpoints with real data without having to wait so long? Any info will be greatly appreciated.
Unfortunately, the only way to test events with 'real' payloads for some things, like Subscription-based events and Payouts, is to wait for the event to occur in test mode.

How to use two blockchains for the same application?

Can we use two blockchains for some application, such that one blockchain is permissioned and the other is permissionless, so that one blockchain holds private data and the other stores public data for verification?
I am not asking about interoperability.
For example, in the process of buying a car, the buyer actually buys it from a dealer, and the buyer does not want his details plus the car details (buyer id, VIN, engine number, car make, ...) to be put onto a public blockchain, except for the fact that a particular car model was sold to some blockchain address. So, using a private chain between the RTO and the dealer: if the dealer uploads the details (buyer id, VIN, engine number, car make, ...) onto the private chain, then the RTO can verify the owner of the car.
Yes, you can use any number of blockchains within an application.
A blockchain is just used for the following operations:
Query the contents of a ledger.
Submit transactions to a ledger.
Perform computations using a smart contract.
You just need to take care of the endpoints at the application level, or in the smart contract, to perform write and query operations.
Interchain communication is a different problem altogether.
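One common way to connect the two chains for your car example is to put the full record on the private chain and anchor only a commitment (a hash) of it on the public chain. Here is a dependency-free Rust sketch; commitment and verify are illustrative names, and DefaultHasher stands in for a real cryptographic hash such as SHA-256, which a production system would use instead:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// The full record (buyer id, VIN, engine number, ...) stays on the
// permissioned chain; only this commitment goes on the public chain.
// NOTE: DefaultHasher is used only to keep the sketch dependency-free;
// a real system needs a cryptographic hash like SHA-256.
pub fn commitment(record: &str) -> u64 {
    let mut h = DefaultHasher::new();
    record.hash(&mut h);
    h.finish()
}

/// What the verifier (e.g. the RTO) does: recompute the hash of the
/// privately shared record and compare it with the publicly anchored value.
pub fn verify(record: &str, public_anchor: u64) -> bool {
    commitment(record) == public_anchor
}
```

The public chain then proves the private record existed and was not altered, without exposing any of its contents.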

Azure Service Bus and transactions

I'm new to Azure Service Bus and I'm trying to establish a transactional strategy for queuing messages. Since SQL Azure doesn't support MSDTC, I can't make use of a distributed TransactionScope. So I have a Unit of Work that handles my database transactions manually.
The problem is that I can only find people using TransactionScope to handle both database and Azure Service Bus operations. Is there any other magical way to achieve transactions on Service Bus without using TransactionScope?
Thanks.
If you are going to use cloud hosting or a service like Azure Service Bus, you should consider giving up on two-phase commit (2PC) and distributed transactions (DTC).
Instead, use per-resource transactions (i.e., a transaction for a SQL command or a transaction for a Service Bus operation) carefully, and avoid transactions that cross a resource boundary.
You can then knit those resources' operations together using reliable messaging and patterns like sagas for workflow management and error compensation, and scale out from there.
2PC in the cloud is hard for all sorts of reasons (but not impossible; you can still use IaaS).
2PC, as implemented by DTC, effectively depends on the coordinator, its log, and connectivity to the coordinator being very highly available. It also depends on all parties cooperating on a positive outcome in an expedient fashion. To that end, you need to run DTC in a failover cluster, because it is the Achilles heel of the whole system and every transaction depends on DTC clearing it.
I'll quote a great example here of how to think of a 2PC transaction as a series of messages/actions and compensations:
The grand canonical example for 2PC transactions is a bank account transfer. You debit one account and credit another.
These two operations need to succeed or fail together because otherwise you are either creating or destroying money (which is illegal, by the way). So that’s the example that’s very commonly used to illustrate 2PC transactions.
The catch is – that’s not how it really works, at all. Getting money from one bank account to another bank account is a fairly complicated affair that touches a ton of other accounts. More importantly, it’s not a synchronous fail-together/success-together scenario.
Instead, principles of accounting apply (surprise!). When a transfer is initiated, let’s say in online banking, the transfer is recorded in form of a message for submission into the accounting system and the debit is recorded in the account as a ‘pending’ transaction that affects the displayed balance.
From the user’s perspective, the transaction is ’done’, but factually nothing has happened, yet. Eventually, the accounting system will get the message and start performing the transfer, which often causes a cascade of operations, many of them yielding further messages, including booking into clearing accounts and notifying the other bank of the transfer.
The principle here is that all progress is forward. If an operation doesn’t work for some technical reason it can be retried once the technical reason is resolved.
If an operation fails for a business reason, it can be aborted – not by annihilating previous work, but by doing the inverse of the previous work. If an account was credited, that credit is annulled with a debit of the same amount.
For some types of failed transactions, the ‘inverse’ operation may not be fully symmetric but may result in extra actions like imposing penalty fees.
In fact, in accounting, annihilating any work is illegal – ‘delete’ and ‘update’ are a great way to end up in prison.
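The compensation idea in the quote can be sketched as an append-only ledger in plain Rust (Ledger, post, and compensate are illustrative names): an aborted transfer is never deleted or updated, only reversed with new entries.

```rust
// Append-only ledger in the spirit of the quoted example: a failed
// transfer is not deleted, it is compensated with inverse entries.
#[derive(Debug)]
pub struct Entry {
    pub account: String,
    pub delta: i64, // positive = credit, negative = debit
}

#[derive(Default)]
pub struct Ledger {
    entries: Vec<Entry>,
}

impl Ledger {
    pub fn post(&mut self, account: &str, delta: i64) {
        self.entries.push(Entry { account: account.to_string(), delta });
    }

    /// Abort by doing the inverse of previous work: append reversing
    /// entries for everything since `from_index`, deleting nothing.
    pub fn compensate(&mut self, from_index: usize) {
        let reversals: Vec<Entry> = self.entries[from_index..]
            .iter()
            .map(|e| Entry { account: e.account.clone(), delta: -e.delta })
            .collect();
        self.entries.extend(reversals);
    }

    pub fn balance(&self, account: &str) -> i64 {
        self.entries
            .iter()
            .filter(|e| e.account == account)
            .map(|e| e.delta)
            .sum()
    }

    pub fn len(&self) -> usize {
        self.entries.len()
    }
}
```

After a compensation, the balances are back where they started, but the history shows four entries, not zero: all progress is forward.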
You can use TransactionScope, and it works pretty well: even if you have already sent a message, when something else (such as a database update) fails, the message will not be published to the queue.
using var scope = new TransactionScope(TransactionScopeOption.RequiresNew, TransactionScopeAsyncFlowOption.Enabled);
try
{
    var messageBody = "This is my message";
    var message = new Message(Encoding.UTF8.GetBytes(messageBody));
    await _queueClient.SendAsync(message);
    await _myOtherService.FailPotentially(); // if this call throws, the message send is rolled back
    scope.Complete();
}
catch (Exception exception)
{
    // the using declaration disposes the scope, rolling back the transaction,
    // because Complete() was never reached
    _logger.LogError(" ... ", exception);
    throw;
}
Based on the Service Bus documentation, you must use the Standard pricing tier instead of Basic.
