Identifying peers in a cluster using a hash - p2p

Services like Tor and IPFS use hashes to identify peers, but how do they keep track of which keys point to which peers? Do they keep a record of which key points to which site (which would be a disaster for Tor), or do they use some algorithm to locate peers?
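Neither system keeps a global key-to-peer table. IPFS, for example, uses a Kademlia-style distributed hash table: each node derives an ID from a hash, and a key is stored on the peers whose IDs are "closest" to the key's hash under an XOR distance metric, so any node can route toward the responsible peers without a central index. (Tor uses a different scheme based on hidden service directories on a hash ring.) A minimal sketch of the XOR-distance idea, with made-up peer names:

```python
import hashlib

def node_id(name: str) -> int:
    """Derive a 160-bit ID by hashing, as Kademlia-style DHTs do."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    """Kademlia's distance metric: the bitwise XOR of two IDs."""
    return a ^ b

def closest_nodes(key_hash: int, peers: list[int], k: int = 3) -> list[int]:
    """The k peers 'closest' to the key's hash are responsible for it."""
    return sorted(peers, key=lambda p: xor_distance(key_hash, p))[:k]

# Each peer only needs to know which keys land near its own ID;
# nobody maintains a global key -> peer mapping.
peers = [node_id(f"peer-{i}") for i in range(10)]
content = node_id("some-content-hash")
print(closest_nodes(content, peers, k=2))
```

In a real DHT each node also keeps routing buckets of peers at various distances, so lookups converge in O(log n) hops; the sketch above only shows the distance metric that makes that possible.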


Azure best practice

What is the best practice with Windows Azure Key Vault? Is it good practice to retrieve keys from the vault within the application every time we use them, or is it better to set them in an OS environment variable?
It depends on which key. You should not be extracting the master key, only the associated data keys used to encrypt and decrypt.
There is no hard-and-fast rule here, but you should consider key rotation. If the keys need to be rotated often, you should consider pulling from Key Vault every time.
There are other times when it makes more sense to pull once and cache locally, for example when network latency is an issue and it is more cost-effective to pull once and cache. In that scenario, you will need some mechanism in place to force a new pull if/when the key is rotated.
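The pull-once-and-cache approach with a rotation escape hatch can be sketched as a small TTL cache. Here `fetch` is a hypothetical callable standing in for the real Key Vault call (in the Azure Python SDK that would be a wrapper around `SecretClient.get_secret`); the cache logic itself is generic:

```python
import time

class CachingSecretProvider:
    """Cache a secret locally, but re-fetch after a TTL so a rotated
    key is picked up without restarting the application."""

    def __init__(self, fetch, ttl_seconds=300, clock=time.monotonic):
        self._fetch = fetch          # e.g. lambda: client.get_secret("db-key").value
        self._ttl = ttl_seconds
        self._clock = clock
        self._value = None
        self._fetched_at = None

    def get(self):
        now = self._clock()
        if self._fetched_at is None or now - self._fetched_at > self._ttl:
            self._value = self._fetch()   # pull from Key Vault
            self._fetched_at = now
        return self._value                # otherwise serve the cached copy

    def invalidate(self):
        """Force a new pull on the next get(), e.g. after a rotation event."""
        self._fetched_at = None
```

Tuning `ttl_seconds` trades vault round-trips against how quickly a rotation is noticed; `invalidate()` covers the case where your app is notified of the rotation out of band.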

Is it possible to use the ordering service of Hyperledger Fabric for exchanging other messages?

I'd like to know whether the ordering service of Hyperledger Fabric can be used for exchanging messages among these kinds of nodes. For instance, if the nodes have to compute a result together but each one holds only part of the input, is it possible to let them exchange their partial inputs, compute the result, and then send it to an application client? I know that the ordering service is used to order transactions and to broadcast validated blocks to the peers, but I'd like to know whether this kind of customisation is possible on this platform.
No. Orderer nodes do not "compute results"; only endorsing peer nodes do.

Updating Azure Service Fabric certificate. Why primary and secondary certificate?

We are in the process of updating an expired Service Fabric cluster primary certificate. We have read most of the documentation and searched the web, but some things are still unclear.
What's the idea behind having a primary and secondary certificate to begin with?
The recommended way to update the certificate seems to be by adding a secondary certificate to the cluster (Add-AzServiceFabricClusterCertificate).
Will the cluster automatically make use of the new (secondary) certificate with the furthest into the future expiry date? I think that's what the documentation is saying...
If so, will the secondary certificate become the primary certificate? Otherwise I think we would be left with an expired primary certificate forever - that doesn't make much sense?!
Hope someone can shed some light on this.
/Chris
There are two certificate slots so that you always have at least one valid, working certificate. If you had only one certificate and it expired or was deleted, the cluster would stop working.
Yes, when more than one valid certificate is installed on the host, the Service Fabric cluster will automatically use the declared certificate whose expiration date is furthest in the future. This does not make it the primary certificate; the same rule applies to the secondary certificate. If you select a certificate by its common name, multiple matches may be found, and this is the way one of them is selected.
The process is like this:
upload a new secondary certificate
enable auto-rollover
delete the primary certificate after the auto rollover has completed
more info here.
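The "furthest expiry wins" selection rule described above can be sketched in a few lines (thumbprints and dates here are made up for illustration):

```python
from datetime import datetime

def select_certificate(certs):
    """Given declared certificates as (thumbprint, not_after) pairs,
    pick the one whose expiration is furthest in the future -- the
    rule applied when several valid certificates matching the
    declaration are installed on the host."""
    return max(certs, key=lambda c: c[1])

installed = [
    ("AA11", datetime(2024, 1, 1)),   # expired primary
    ("BB22", datetime(2026, 6, 30)),  # newly added secondary
]
print(select_certificate(installed)[0])  # prints BB22: the secondary wins
```

This is why adding a secondary with a later expiry is enough for the cluster to switch over, and why the expired primary can then be safely removed.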

Hyperledger Fabric: Encrypt ledger data in a single channel

I have a multi-org fabric network where all the orgs are on a single channel.
I understand that using the composer acl file we can hide data from the users based on their roles and other conditions.
However, the data will be visible when we get into the peer container of any org and issue a peer channel fetch.
So, my question is: is there a way to encrypt this ledger data when the orgs share the same channel? Here, they mention encrypting the data. Is there any example/reference that can get me started on that?
Currently, I'm not planning to use different channels between different orgs.
Yes, there are a few ways to protect the ledger data. As you mentioned in your question, the Hyperledger Fabric FAQ officially gives five different ways to achieve security and access control.
The newest version of Fabric, tagged v1.2.0, provides a new feature called private data. I prefer this method for building access control in my apps.
Since I am using the Fabric Node SDK to deploy and control the Fabric network, it provides a convenient way to embed this into existing projects.
Using a configuration file, you define who can persist data, how many peers the data is distributed to, how many peers are required to disseminate the private data, and how long the private data persists in the private database. All the upgrading you need to do is add some parameters when you install and instantiate the chaincode, modify some functions to read and write the private data, and write some code to handle the configuration file and user access control.
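As a rough illustration, a private data collection definition is a JSON file passed at instantiation time; the collection name and the organization MSP IDs below are placeholders you would replace with your own:

```json
[
  {
    "name": "collectionPrivateDetails",
    "policy": "OR('Org1MSP.member', 'Org2MSP.member')",
    "requiredPeerCount": 1,
    "maxPeerCount": 2,
    "blockToLive": 1000000
  }
]
```

`policy` controls which orgs may persist the data, `requiredPeerCount`/`maxPeerCount` control dissemination, and `blockToLive` controls how long the data is kept in the private database; peers outside the policy only ever see a hash of the data on the channel ledger.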
The documentation gives some examples of this new feature:
Chaincode example
SDK example

Hyperledger communication between multiple machine

I have created a network composed of two nodes using this tutorial: Multiple Machine.
On the node where the orderer and CA are installed, I can use composer-playground to interact with the blockchain. On the second node, however, analysing the Docker logs I can see the communication between the nodes, but I am not able to access the data.
How can I access data on the second machine?
It is a simple node connected to the first node (where the orderer and the CA are installed).
Thanks,
What do you mean by accessing the data?
In Hyperledger Fabric the ledger data is composed of two components i.e. World State and Transaction History Log (the blockchain).
Here, the World State refers to the most recent (current) state of your assets, and the Transaction History Log refers to the transactions executed on those assets. Using CouchDB as the World State lets you store key-value pairs whose values are JSON documents.
The World State is stored by default in LevelDB or CouchDB. In a Docker containerized network, the LevelDB World State is stored inside the peer container, while CouchDB runs in its own container associated with each peer. Each peer's CouchDB can be accessed from the host machine at http://couchdbIp:port
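Since CouchDB exposes a plain HTTP API, you can read a world-state entry directly once you know the peer's CouchDB address. A small sketch; the host, port, database name, and key below are illustrative (in Fabric the database name typically follows a `<channel>_<chaincode>` pattern, which may vary by version):

```python
def couchdb_doc_url(host: str, port: int, db: str, key: str) -> str:
    """Build the CouchDB HTTP API URL for a single document
    (GET /{db}/{docid}). For Fabric, `db` is usually named after
    the channel and chaincode, e.g. 'mychannel_marbles'."""
    return f"http://{host}:{port}/{db}/{key}"

# With `requests` (or curl) you could then read the world-state entry:
#   requests.get(couchdb_doc_url("localhost", 5984, "mychannel_marbles", "marble1"))
print(couchdb_doc_url("localhost", 5984, "mychannel_marbles", "marble1"))
```

Note this reads the world state directly, bypassing chaincode and any access control, so it is a debugging aid, not something an application should rely on.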
The transaction log gets stored in the underlying file system as block files, somewhere under /var/hyperledger/ledgerdata or a similar location in the peer container.
The Orderer you mention is another component, like the peer: a Docker container assigned the role of making sure that transactions are properly ordered and that their endorsements are valid. This gets more complicated once you move to multiple ordering service nodes, which requires a Kafka implementation rather than the default Solo implementation. You can read about each of these implementations in the official Hyperledger Fabric documentation.
Also, a CA is associated with each organization and is responsible for establishing the chain of trust; it is another component of Hyperledger Fabric that signs certificates for network components such as organization peers, clients, and participants, following PKI.
The Playground connects to the Fabric network based on the connection profile (connection.json) of the Business Network Cards you have. If you want to connect specifically to the second node, you could modify a card.
But remember that Playground is a development and test tool, not a production tool, so you shouldn't worry too much about hitting different containers with it - particularly as the data will be the same, replicated across peers.