Hyperledger Fabric Go SDK: How to parse blocks

I'm using the Hyperledger Go SDK to implement a client that works with the ledger. My application relies on events, but I want to use BlockEvents rather than chaincode events, so that I can be sure the given data has already been written to the ledger. Unfortunately, the documentation on this type of event is very limited. I registered for block events using func (c *Client) RegisterBlockEvent()... and get BlockEvent responses, each referencing a Block struct. The Block struct looks like this:
type Block struct {
    Header   *BlockHeader   `protobuf:"bytes,1,opt,name=header,proto3" json:"header,omitempty"`
    Data     *BlockData     `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"`
    Metadata *BlockMetadata `protobuf:"bytes,3,opt,name=metadata,proto3" json:"metadata,omitempty"`
    XXX_NoUnkeyedLiteral struct{} `json:"-"`
    XXX_unrecognized     []byte   `json:"-"`
    XXX_sizecache        int32    `json:"-"`
}
I can navigate to BlockData:
type BlockData struct {
    Data                 [][]byte `protobuf:"bytes,1,rep,name=data,proto3" json:"data,omitempty"`
    XXX_NoUnkeyedLiteral struct{} `json:"-"`
    XXX_unrecognized     []byte   `json:"-"`
    XXX_sizecache        int32    `json:"-"`
}
However, at this point I am lost, having only a raw array of byte-arrays as data. I want to act upon a specific asset-creation event and need to parse the block data to find it. What struct or structure is used for this data? I assume every array entry represents a transaction, but without a struct to map onto it, parsing is extremely difficult.

You can write a ParseBlock function using protolator:
// import (
//     "log"
//     "os"
//
//     "github.com/hyperledger/fabric-sdk-go/pkg/util/protolator"
// )
func ParseBlock(block *common.Block) {
    if err := protolator.DeepMarshalJSON(os.Stdout, block); err != nil {
        log.Fatalln("DeepMarshalJSON err:", err)
    }
}

Related

Solidity mapping does not return an array in a struct

Solidity's auto-generated mapping getter does not return the array inside a struct: when calling mapping_data(), the data variable is undefined.
I can only read it from the read() function.
Does anyone know the reason?
struct structPackage
{
    uint256 ui;
    string[2] data;
}
// the mapping_data(address) getter does not include the data variable; it is undefined.
mapping(address => structPackage) public mapping_data;
constructor()
{
    structPackage storage data_package = mapping_data[msg.sender];
    data_package.data[0] = "test1";
    data_package.data[1] = "test2";
}
// This function shows data as expected, [ 'test1', 'test2' ].
function read() external view
    returns (structPackage memory)
{
    structPackage storage data_package = mapping_data[msg.sender];
    return data_package;
}
Tested on Remix; mapping_data doesn't return the array inside the struct.
When a struct is used as the value type of a mapping, the mapping's auto-generated getter omits any mappings or arrays inside the struct, because there is no easy way to supply the keys or indexes needed to fetch a specific element. This behaviour exists to avoid the high gas cost of returning an entire array or mapping.
Hence, if you want to access a complete array or mapping inside a struct that is used as the value of a public mapping, write separate getters as required (similar to your read method).
Read Getter Functions docs
Read a similar GitHub issue

Complex datatypes in HyperLedger Fabric

I read through the documentation on Hyperledger. However, I cannot find any information on storing complex datatypes, or whether it is even possible. For instance, let's say we have two objects: an author and a book. Is it possible to create a smart contract that looks like this? (example in TypeScript):
export class Book {
    public ISBN: string;
    public Title: string;
}
export class Author {
    public firstName: string;
    public lastName: string;
    public publishedBooks: Array<Book>;
}
And if so, how would querying look in such an instance? On the other hand, if it is not possible, how would one model such data relations in Hyperledger?
Yes, you can do this.
Implement it in the smart contract and use the Hyperledger APIs to query the ledger.
For example, in Go you can use the shim's PutState and GetState to store and retrieve an entity given an ID.
If you use a state database like CouchDB, you can even run more complex, rich queries against your data.
[EDIT1] Answer improvement with example:
This is how I improved this in my Go Chaincode
type V struct {
    Attribute string `json:"Attribute"`
    Function  string `json:"Function"`
    Value     string `json:"Value"`
}
type AV struct {
    Vs  []V               `json:"Vs"`
    CFs map[string]string `json:"CFs"`
}
As you can see, I am using the V struct for an array of Vs.
This makes my dataset more complex, and it all lives inside the chaincode.
[EDIT 2] Answer improvement with query and put:
Adding a new entity is very easy; my examples are in Go.
Send a JSON payload to the chaincode (via the SDK), then unmarshal it:
var newEntity Entity
json.Unmarshal([]byte(args[0]), &newEntity)
Now use the PutState function to store the new entity under its ID (in my case, the id field of the JSON):
entityAsBytes, _ := json.Marshal(newEntity)
err := APIstub.PutState(newEntity.Id, entityAsBytes)
And you are done. If you now want to query the ledger for that id, you can do:
entityAsByte, err := APIstub.GetState(id)
return shim.Success(entityAsByte)

GraphQL string concatenation or interpolation

I'm using GitHub API v 4 to learn GraphQL. Here is a broken query to fetch blobs (files) and their text content for a given branch:
query GetTree($branch: String = "master") {
  repository(name: "blog-content", owner: "lzrski") {
    branch: ref(qualifiedName: "refs/heads/${branch}") {
      name
      target {
        ... on Commit {
          tree {
            entries {
              name
              object {
                ... on Blob {
                  isBinary
                  text
                }
              }
            }
          }
        }
      }
    }
  }
}
As you can see on line 3, that is my attempt at guessing an interpolation syntax; it does not work, but I leave it as an illustration of my intent.
I could provide a fully qualified name for a revision, but that doesn't seem particularly elegant. Is there any GraphQL native way of manipulating strings?
I don't think there's anything in the GraphQL specification that specifically outlines any methods for manipulating string values within a query.
However, when using GraphQL queries in an actual application, you will supply most arguments through variables passed alongside the query in the request. So rather than happening inside the query, most of your string manipulation will be done in your client code when composing the JSON that carries those variables.

golang threading model comparison

I have a piece of data
type data struct {
// all good data here
...
}
This data is owned by a manager and used by other threads for reading only. The manager needs to periodically update the data. How do I design the threading model for this? I can think of two options:
1.
type manager struct {
    // acquire read lock when other threads read the data;
    // acquire write lock when the manager wants to update.
    lock sync.RWMutex
    // a pointer to the data
    p *data
}
2.
type manager struct {
    // copy the pointer when other threads want to use the data.
    // When the manager updates, just change p to point to the new data.
    p *data
}
Does the second approach work? It seems I don't need any lock. If other threads hold a pointer to the old data, it would be fine for the manager to update the original pointer. As Go is garbage-collected, the old data will be released automatically once all other threads have finished reading it. Am I correct?
Your first option is fine and perhaps the simplest. However, with many readers it could perform poorly, as the writer may struggle to obtain the write lock.
As the comments on your question have stated, your second option (as-is) can cause a race condition and lead to unpredictable behaviour.
You could implement your second option by using atomic.Value. This would allow you to store the pointer to some data struct and atomically update this for the next readers to use. For example:
// import "sync/atomic"

// Data shared with readers
type data struct {
    // all the fields
}

// Manager
type manager struct {
    v atomic.Value
}

// Method used by readers to obtain a fresh copy of data to
// work with, e.g. inside a loop
func (m *manager) Data() *data {
    return m.v.Load().(*data)
}

// Internal method called to set new data for readers
func (m *manager) update() {
    d := &data{
        // ... set values here
    }
    m.v.Store(d)
}

Implementing "move" thread semantics

I want to write a function to be called like this:
send("message","address");
Where some other thread that is doing
let k = recv("address");
println!("{}",k);
sees message.
In particular, the message may be large, and so I'd like "move" or "zero-copy" semantics for sending the message.
In C, the solution is something like:
1. Allocate messages on the heap
2. Have a global, threadsafe hashmap that maps "address" to some memory location
3. Write pointers into the memory location on send, and wake up the receiver using a semaphore
4. Read pointers out of the memory location on receive, and wait on a semaphore to process new messages
But according to another SO question, step #2 "sounds like a bad idea". So I'd like to see a more Rust-idiomatic way to approach this problem.
You get these move semantics automatically, and you can achieve light-weight moves by placing large values into a Box (i.e. allocating them on the heap). Using type ConcurrentHashMap<K, V> = Mutex<HashMap<K, V>>; as the threadsafe hashmap (there are various ways this could be improved), one might have:
use std::collections::{HashMap, VecDeque};
use std::sync::Mutex;

type ConcurrentHashMap<K, V> = Mutex<HashMap<K, V>>;

lazy_static! {
    pub static ref MAP: ConcurrentHashMap<String, VecDeque<String>> =
        Mutex::new(HashMap::new());
}

fn send(message: String, address: String) {
    MAP.lock()
        .unwrap()
        // find the place this message goes,
        // creating a new queue if this address was empty
        .entry(address)
        .or_insert_with(VecDeque::new)
        // add the message on the back
        .push_back(message)
}

fn recv(address: &str) -> Option<String> {
    MAP.lock()
        .unwrap()
        .get_mut(address)
        // pull the message off the front
        .and_then(|buf| buf.pop_front())
}
That code uses the lazy_static! macro to achieve a global hashmap (it may be better to use a local object that wraps an Arc<ConcurrentHashMap<...>>, fwiw, since global state can make reasoning about program behaviour hard). It also uses VecDeque as a queue, so that messages bank up for a given address. If you only wish to support one message at a time, the type could be ConcurrentHashMap<String, String>, send could become MAP.lock().unwrap().insert(address, message), and recv just MAP.lock().unwrap().remove(address).
(NB. I haven't compiled this, so the types may not match up precisely.)
