Hyperledger Fabric: ENDORSEMENT_MISMATCH on asset query

It seems like I misunderstand how Hyperledger Fabric processes a query. I'm currently using the fabric-go-sdk to query an asset from the ledger like this:
asset, err := client.Query(channel.Request{ChaincodeID: someCCname, Fcn: "query", Args: [][]byte{[]byte(someID)}})
When my system is under load (many new transactions that are unrelated to the query) I sometimes get the following error message:
endorsement validation failed: Endorser Client Status Code: (3)
ENDORSEMENT_MISMATCH. Description: ProposalResponsePayloads do not
match.
Why is an endorsement involved if data is only queried? To me the error message seems to indicate that multiple peers answered the query differently. Does that mean that some peers have the asset already committed to the ledger while others do not? It is noteworthy that the query runs very shortly after the asset is created, and the error does not happen consistently.
The query chaincode is very straight-forward and minimal:
func (c *TestChaincode) query(stub shim.ChaincodeStubInterface, args []string) pb.Response {
	data, err := stub.GetState(args[0])
	if err != nil {
		return shim.Error(err.Error())
	}
	if data == nil {
		return shim.Error("asset not found: " + args[0])
	}
	return shim.Success(data)
}
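For illustration of the reasoning above: the channel client normally collects endorsements from several peers and compares their proposal responses, so one way to experiment is to pin the query to a single peer. A rough sketch with fabric-sdk-go's channel package, assuming the same client as above (the peer endpoint name is a placeholder):
// queryOnePeer sends the query proposal to a single named peer, so only one
// endorsement is collected and the proposal responses cannot differ.
func queryOnePeer(client *channel.Client, ccName, id string) ([]byte, error) {
	resp, err := client.Query(
		channel.Request{ChaincodeID: ccName, Fcn: "query", Args: [][]byte{[]byte(id)}},
		channel.WithTargetEndpoints("peer0.org1.example.com"), // placeholder peer name
	)
	if err != nil {
		return nil, err
	}
	return resp.Payload, nil
}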

Related

Can different hyperledger fabric chaincodes view all key/value pairs in world state?

I have two smart contract definitions currently committed on the same channel. Both are written in Go and are somewhat basic, only doing basic CRUD operations. However, I've noticed that key/value pairs written with one chaincode are unavailable to the other.
So, with go-audit I created a record under the key ping.
But then, when I tried to perform a get operation on key ping with the go-asset chaincode, it was not found, and the chaincode returned the following error:
bad request: failed to invoke go-asset: Error: No valid responses from any peers. Errors: **someurl***, status=500, message=the asset ping does not exist.
This is the transaction that reads:
func (c *GoAssetContract) ReadGoAsset(ctx contractapi.TransactionContextInterface, goAssetID string) (*GoAsset, error) {
	exists, err := c.GoAssetExists(ctx, goAssetID)
	if err != nil {
		return nil, fmt.Errorf("could not read from world state. %s", err)
	} else if !exists {
		return nil, fmt.Errorf("the asset %s does not exist", goAssetID)
	}
	bytes, _ := ctx.GetStub().GetState(goAssetID)
	goAsset := new(GoAsset)
	err = json.Unmarshal(bytes, goAsset)
	if err != nil {
		return nil, fmt.Errorf("could not unmarshal world state data to type GoAsset")
	}
	return goAsset, nil
}
and the GoAsset struct:
// GoAsset stores a value
type GoAsset struct {
	Value string `json:"value"`
}
Shouldn't world state be available to all chaincodes approved/committed on a channel?
Chaincodes deployed to the same channel are namespaced, so their keys remain specific to the chaincode that wrote them. What you see with 2 chaincodes deployed to the same channel is therefore working as designed: they cannot see each other's keys.
However, a chaincode can consist of multiple distinct contracts, and in that case the contracts do have access to each other's keys, because they are still part of the same chaincode deployment.
You can have one chaincode invoke/query another chaincode on the same channel using the InvokeChaincode() API. The called chaincode can return keys/values to the caller chaincode. With this approach it is possible to embed all access control logic in the called chaincode.
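As a rough sketch of that InvokeChaincode() approach in the contractapi style used above (the target function name "ReadGoAudit" and the channel name "mychannel" are placeholders for your own deployment):
// ReadFromAudit queries the go-audit chaincode (deployed on the same channel)
// from inside the go-asset chaincode and returns the called chaincode's payload.
func (c *GoAssetContract) ReadFromAudit(ctx contractapi.TransactionContextInterface, key string) (string, error) {
	args := [][]byte{[]byte("ReadGoAudit"), []byte(key)}
	resp := ctx.GetStub().InvokeChaincode("go-audit", args, "mychannel")
	if resp.Status != 200 {
		return "", fmt.Errorf("cross-chaincode query failed: %s", resp.Message)
	}
	return string(resp.Payload), nil
}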

Hyperledger Fabric - Contracts are required to have at least 1 (non-ignored) public method

I'm using the fabric tools provided for Composer to deploy the Fabric network; it deploys 1 peer, 1 orderer, 1 CouchDB, and 1 fabric-ca. I am able to install the chaincode on the peer, but instantiation fails with the following error. I am running the command on the fabric peer.
error in simulation: failed to execute transaction
2037ca1d4ec2682ad17499156de49aeb28053ad5b6943f1fe3520c407bac570e:
could not launch chaincode
product_1.1.1:e2901eb986174a4ac9bb963b06db851ea347ed6b48930de813c3dbc38df94a82:
chaincode registration failed: container exited with 2
When I checked the logs of the docker container, it returned this error:
2021/07/29 08:41:29 Error create network chaincode chaincode: Contracts are required to have at least 1 (non-ignored) public method.
Contract PRODUCTChainCode has none. Method names that have been
ignored: GetAfterTransaction, GetBeforeTransaction, GetInfo, GetName,
GetTransactionContextHandler, GetUnknownTransaction,
GetIgnoredFunctions and GetEvaluateTransactions panic: Error create
network chaincode chaincode: Contracts are required to have at least
1 (non-ignored) public method. Contract PRODUCTChainCode has none.
Method names that have been ignored: GetAfterTransaction,
GetBeforeTransaction, GetInfo, GetName, GetTransactionContextHandler,
GetUnknownTransaction, GetIgnoredFunctions and GetEvaluateTransactions
goroutine 1 [running]:
log.Panicf(0xa40a03, 0x2e, 0xc00059ff68, 0x1, 0x1)
	/usr/local/go/src/log/log.go:358 +0xc5
main.main()
	/chaincode/input/src/main.go:18 +0x1b0
Here is my main.go file:
package main

import (
	"log"
	"product-chaincode/core/messages"
	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// PRODUCTChainCode implementation
type PRODUCTChainCode struct {
	contractapi.Contract
}

func main() {
	PRODUCTChainCode, err := contractapi.NewChaincode(&PRODUCTChainCode{})
	if err != nil {
		log.Panicf(messages.ChaincodeCreateError, err.Error())
	}
	if err := PRODUCTChainCode.Start(); err != nil {
		log.Panicf(messages.ChaincodeStartError, err.Error())
	}
}
The error message "Contracts are required to have at least 1 (non-ignored) public method." tells us that the chaincode you wrote does not have a public method.
In Go, functions (and variables) whose names begin with an uppercase letter are exported (public), and those whose names begin with a lowercase letter are unexported (private).
Currently there is only the main function in the chaincode you created, so the contract cannot be used because it has no public function.
Try adding at least one public method whose name begins with an uppercase letter.
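For illustration, a minimal public method on the contract might look like this (the method name, parameters and key prefix are only placeholders):
// CreateProduct is exported (uppercase first letter), so the contract API
// registers it as a callable transaction function.
func (c *PRODUCTChainCode) CreateProduct(ctx contractapi.TransactionContextInterface, id string, value string) error {
	// placeholder body: store the value under a prefixed key
	return ctx.GetStub().PutState("product_"+id, []byte(value))
}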

Is retry handled by the Go client library for Azure Service Bus?

I'm referring to the client library at https://github.com/Azure/azure-service-bus-go. I was able to write a simple client which listens to a subscription and reads the messages. If I drop the network, after a few seconds I see the receive loop exiting with the error message "context canceled". I was hoping the client library would provide some kind of retry mechanism and handle connection issues, server timeouts, etc. Do we need to handle this ourselves? Also, do we get any errors that we could identify and use for such a retry mechanism? Any sample code would be highly appreciated.
Below is the sample code I tried (only including the vital parts).
err = subscription.Receive(ctx, servicebus.HandlerFunc(func(ctx context.Context, message *servicebus.Message) error {
	fmt.Println(string(message.Data))
	return message.Complete(ctx)
}))
if err != nil {
	fmt.Println("FATAL: ", err)
	return
}
Unfortunately, the older library doesn't do a good job of retrying, so you'll need to wrap the code in your own retry loop.
func demoReceiveWithRecovery(ns *servicebus.Namespace) {
	parentCtx := context.TODO()

	for {
		q, err := ns.NewQueue(queueName)
		if err != nil {
			panic(err)
		}

		handler := servicebus.HandlerFunc(func(c context.Context, m *servicebus.Message) error {
			log.Printf("Received message")
			return m.Complete(c)
		})

		err = q.Receive(parentCtx, handler)

		// check for a potentially recoverable situation
		if err != nil && errors.Is(err, context.Canceled) {
			// This workaround distinguishes between the handler cancelling because the
			// library has disconnected vs the parent context (passed in by you) being cancelled.
			if parentCtx.Err() != nil {
				// our parent context was cancelled, which cancelled the entire operation
				log.Printf("Cancelled by parent context")
				_ = q.Close(parentCtx)
				return
			}
			// cancelled internally due to an error, we can restart the queue
			log.Printf("Error occurred, restarting")
			_ = q.Close(parentCtx)
			continue
		}

		// any other outcome: close the queue client and restart the loop
		if err != nil {
			log.Printf("Other error, closing client and restarting: %s", err.Error())
		}
		_ = q.Close(parentCtx)
	}
}
NOTE: This library was deprecated recently, although that happened after you posted your question. The new package is azservicebus, and there is a migration guide (migrationguide.md) to make the switch easier.
The new package should recover properly in this scenario for receiving; a rough sketch of the receive loop with it follows below. There is an open bug in the new package that I'm working on so that the sending side also does recovery (#16695).
If you want to ask further questions or have feature requests, you can submit them in the GitHub issues for https://github.com/Azure/azure-sdk-for-go.
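A rough sketch of what receiving might look like with the newer azservicebus package (the connection string, topic and subscription names are placeholders, and exact signatures may vary between package versions):
// Sketch only: receive from a subscription with the newer azservicebus package.
client, err := azservicebus.NewClientFromConnectionString(connectionString, nil)
if err != nil {
	panic(err)
}
receiver, err := client.NewReceiverForSubscription("exampleTopic", "exampleSubscription", nil)
if err != nil {
	panic(err)
}
defer receiver.Close(context.TODO())

for {
	// ReceiveMessages blocks until messages arrive or the context is cancelled;
	// transient connection problems are retried internally by the library.
	messages, err := receiver.ReceiveMessages(context.TODO(), 10, nil)
	if err != nil {
		panic(err)
	}
	for _, message := range messages {
		fmt.Println(string(message.Body))
		if err := receiver.CompleteMessage(context.TODO(), message, nil); err != nil {
			panic(err)
		}
	}
}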

"operation was canceled" error message upon reading a single document

I use golang's driver (github.com/arangodb/go-driver) to talk to ArangoDB, and this issue seems to happen sporadically, but often enough to be concerned about.
A request to read a single document fails with the error "operation was canceled". Not much to add here, except that it might be a timeout.
var contributor model.Contributor
_, err = col.ReadDocument(ctx, id, &contributor)
if err != nil {
	log.Errorf("cannot read contributor %s document: %v", id, err)
	return nil, err
}
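One way to test the timeout hypothesis above is to give the read its own deadline and log the state of that context when the call fails. A small sketch reusing ctx, col, id and log from the snippet above (the 5-second timeout is just a placeholder):
// Sketch: give the read its own deadline so we can tell whether our timeout
// fired or a shorter-lived parent context was cancelled.
readCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()

var contributor model.Contributor
_, err = col.ReadDocument(readCtx, id, &contributor)
if err != nil {
	// readCtx.Err() is context.DeadlineExceeded or context.Canceled when the
	// cancellation originated from this context chain, independent of how the
	// driver wraps the error it returns.
	log.Errorf("cannot read contributor %s document: %v (ctx err: %v)", id, err, readCtx.Err())
	return nil, err
}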

Hyperledger Fabric -- if two updates happen at nearly the same time, the second is dropped

I am just trying to learn Hyperledger Fabric and I made a little test:
type Valami struct {
	ObjectType string `json:"docType" binding:"required"`
	Value      string `json:"value" binding:"required"`
	ID         string `json:"id" binding:"required"`
}
func (t *SimpleChaincode) test(stub shim.ChaincodeStubInterface) pb.Response {
	id := "104"
	asbytes, err := stub.GetState(id) // get the marble from chaincode state
	obj := &Valami{}
	if err != nil {
		return shim.Error("Failed to get state")
	} else if asbytes == nil {
		fmt.Println("not found")
		objtype := "test"
		obj = &Valami{objtype, "", id}
	} else {
		fmt.Println("found")
		err = json.Unmarshal(asbytes, obj)
		if err != nil {
			return shim.Error("Can not process to a JSON type!")
		}
	}
	now := time.Now()
	value := now.String()
	fmt.Println("value: " + value)
	obj.Value = value
	// update
	JSONasBytes, err := json.Marshal(obj)
	if err != nil {
		return shim.Error("Can not update the " + obj.ID + ". Reason: " + err.Error())
	}
	// save in state
	err = stub.PutState(obj.ID, JSONasBytes)
	if err != nil {
		return shim.Error("Can not save " + obj.ID + ". Reason: " + err.Error())
	}
	return shim.Success([]byte("value: " + obj.Value))
}
Then I invoke this twice, quickly one after the other:
docker exec -e CORE_PEER_LOCALMSPID=Org1MSP -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp cli peer chaincode invoke -o orderer.example.com:7050 -C mychannel -n mur2 -c '{"Args":["test" ]}'
docker exec -e CORE_PEER_LOCALMSPID=Org1MSP -e CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp cli peer chaincode invoke -o orderer.example.com:7050 -C mychannel -n mur2 -c '{"Args":["test" ]}'
The return:
2019-03-13 09:33:05.297 UTC [chaincodeCmd] chaincodeInvokeOrQuery -> INFO 001 Chaincode invoke successful. result: status:200 payload:"value: 2019-03-13 09:33:05.292254505 +0000 UTC m=+391.210396576"
2019-03-13 09:33:05.776 UTC [chaincodeCmd] chaincodeInvokeOrQuery -> INFO 001 Chaincode invoke successful. result: status:200 payload:"value: 2019-03-13 09:33:05.770792084 +0000 UTC m=+391.688934322"
So it looks like everything is fine. However, when I check the value:
"{\"docType\":\"test\",\"id\":\"104\",\"value\":\"2019-03-13 09:33:05.292254505 +0000 UTC m=+391.210396576\"}"
So actually the second update does not come across. If I put a sleep between the two invocations, they both work. So I guess the first one is not finished before the second starts, and for some reason the second is dropped. I did not expect this, because it could happen at any time on a network. Could somebody explain to me what is happening in the background and how we can handle this kind of situation?
Simple explanation: you can't update the same key several times inside the same block. If you send several transactions updating the same key and they all get processed in the same block, only one of them (I think the first one) will be committed and the other transactions will be rejected. That's why, in your case, when you send transactions very close together in time, only one is processed, and if you add a sleep between calls, both get processed correctly (the sleep must be equal to or longer than your block time). There are several ways to handle this situation: one could be the use of queues, and of course you can design your internal architecture in a way that minimizes this kind of issue.
Update:
Is it not possible to set the block size to a maximum of 1 transaction?
I can't answer with confidence without further reading/investigation, and I'm not sure about the implications in terms of stability and performance of the network with such a configuration. There's an interesting paper about performance and optimization of HLF, written a year ago (May 2018), at https://arxiv.org/pdf/1805.11390.pdf which may be of help. Maybe this weekend I can get some time to run my own tests. Let me know if you find something else about this topic, because it seems interesting to me, though I suspect it's not going to work well, because the network has an inherent latency itself, so you can't reach consensus in near-zero time.
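For reference, block cutting is controlled by the orderer's BatchTimeout and BatchSize settings in configtx.yaml; a sketch of what capping a block at one transaction might look like (the values are only illustrative, and the stability and performance caveats above still apply):
Orderer: &OrdererDefaults
    BatchTimeout: 2s
    BatchSize:
        MaxMessageCount: 1        # cut a block as soon as a single transaction arrives
        AbsoluteMaxBytes: 99 MB
        PreferredMaxBytes: 512 KB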
Is it the same with Sawtooth?
I don't have experience with that platform, but I think the same idea applies: a blockchain is a network that needs time to reach consensus about a fact, so trying to reach that consensus in less time than the inherent latency of the network plus the time to execute the consensus algorithm won't work in any case.
Is it the same with Hyperledger Sawtooth?
No, this restriction does not apply to Hyperledger Sawtooth. With Sawtooth you can update the same state variable several times in a block, in several consecutive transactions, or even in the same transaction. However, when you do this, the Sawtooth validator cannot process the transactions in parallel: the conflicting transactions (which operate on the same state) will be processed serially.
