Circular Distributed Hash Table overlay P2P network

I think I'm missing something here, or perhaps confusing terms.
What happens to the key:value pairs stored at a peer in the overlay DHT when that peer leaves the P2P network? Are they moved to the new appropriate nearest successor? Is there a standard mechanism for this if that is the case?
My understanding is that the successor and predecessor information of adjacent peers has to be updated as expected when a peer leaves, but I can't seem to find information on what happens to the actual data stored at that peer. How is the data kept complete in the DHT as peer churn occurs?
Thank you.

This usually is not part of the abstract routing algorithm that's at the core of a DHT but implementation-specific behavior instead.
Usually you will want to store the data on multiple nodes neighboring the target key; that way you get some redundancy to handle node failures.
To keep the data alive you can either have the originating node republish it at regular intervals or have the storage nodes replicate it among each other. The latter causes a bit less traffic if done properly, but is more complicated to implement.
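To make the first option concrete, here is a minimal sketch of originator-driven republishing, assuming a hypothetical DHT interface with FindClosestNodes and Store operations (the type and method names are illustrative, not from any particular library):

    package republish

    import (
        "log"
        "time"
    )

    // Node and DHT are stand-ins for whatever your implementation provides.
    type Node struct{ Addr string }

    type DHT interface {
        // FindClosestNodes returns the k nodes currently closest to the key.
        FindClosestNodes(key []byte, k int) []Node
        // Store asks a single node to store the key:value pair.
        Store(n Node, key, value []byte) error
    }

    // Republish periodically re-stores the pair on the k closest nodes, so that
    // nodes that joined or took over that keyspace region also receive a copy
    // and the value survives churn.
    func Republish(d DHT, key, value []byte, k int, every time.Duration) {
        ticker := time.NewTicker(every)
        defer ticker.Stop()
        for range ticker.C {
            for _, n := range d.FindClosestNodes(key, k) {
                if err := d.Store(n, key, value); err != nil {
                    log.Printf("store on %s failed: %v", n.Addr, err)
                }
            }
        }
    }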

Related

Is there a way to leverage the Bittorrent DHT for small data

I have a situation where I have a series of mostly connected nodes that need to sync a pooled dataset. The files are 200-1500K and update at irregular intervals of between 30 minutes and 6 hours depending on the environment. Right now the number of nodes is in the hundreds, but ideally that will grow.
Currently I am using libtorrent to keep a series of files in sync between a cluster of nodes. I do a dump every few hours and create a new torrent based on the prior one. I then associate it using the strategy of BEP 38. The infohash is then posted to a known entry in the DHT, where the other nodes poll to pick it up.
I am wondering if there is a better way to do this. The reason I originally chose BitTorrent was for firmware updates: I do not need to worry about nodes with less-than-awesome connectivity, and with the DHT it can self-assemble reasonably well. It was then extended to sync these pooled files.
I am currently trying to see if I can make an extension that would allow me to have each node do an announce_peer for each new record. Then, in theory, interested parties would be able to listen for that. That brings up two big issues:
How do I let the interested nodes know that there is new data?
If I have a thousand or more nodes adding new infohashes every few minutes what will that do to the DHT?
I will admit it feels like I am trying to drive a square peg into a round hole, but I really would like to keep as few protocols in play as possible.
How do I let the interested nodes know that there is new data?
You can use BEP46 to notify clients of the most recent version of a torrent.
If I have a thousand or more nodes adding new infohashes every few minutes what will that do to the DHT?
It's hard to give a general answer here. Is each node adding a distinct dataset, or are those thousands of nodes going to participate in the same pooled data and thus more or less share one infohash? The latter should be fairly efficient, since not all of them even need to announce themselves: they could just do a read-only lookup, try to connect to the swarm, and only announce when there are not enough reachable peers. This would be similar to the put optimisation for mutable items.
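For reference, the lookup target of such a BEP44/BEP46 mutable item is the SHA-1 of the ed25519 public key concatenated with the optional salt; a minimal sketch (the key is freshly generated here and the salt value is just an example):

    package bep46target

    import (
        "crypto/ed25519"
        "crypto/rand"
        "crypto/sha1"
        "fmt"
    )

    // mutableTarget computes the DHT target ID of a BEP44/BEP46 mutable item:
    // SHA-1 of the 32-byte ed25519 public key concatenated with the (optional) salt.
    func mutableTarget(pub ed25519.PublicKey, salt []byte) [20]byte {
        return sha1.Sum(append(append([]byte{}, pub...), salt...))
    }

    func Example() {
        pub, _, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            panic(err)
        }
        target := mutableTarget(pub, []byte("pooled-dataset")) // salt is illustrative
        fmt.Printf("lookup target: %x\n", target)
    }

Subscribers then do a get on that fixed target at their polling interval, and only the signed value (which carries the latest infohash) changes over time.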

Can old block data be deleted from a blockchain?

Just a general question: if I'm building a blockchain for a business, I want to store 3 years of transactions, but anything older than that I won't need and don't want actively in the working database. Is there a way to back up and purge a blockchain, or delete items older than some time frame? I'm more interested in the event logic than the forever-memory aspect.
I'm not aware of any blockchain technology capable of this yet, but Hyperledger Fabric in particular is planning to support data archiving (checkpointing). Simply put, participants need to agree on a block height so that older blocks can be discarded. This new block then becomes the source of trust, similar to the original genesis block. A snapshot also needs to be taken and agreed upon, which captures the current state.
From a serviceability point of view it's slightly more complicated, i.e. you may have nodes that are down while snapshotting, etc.
If you just want to purge the data after a while, Fabric Private Data has an option which could satisfy your requirement.
blockToLive: Represents how long the data should live on the private database in terms of blocks. The data will live for this specified number of blocks on the private database and after that it will get purged, making this data obsolete from the network so that it cannot be queried from chaincode, and cannot be made available to requesting peers.
You can read more here.
Personally, I don't think there is a way to remove a block from the chain; it would break the immutability property of the blockchain.
There are two concepts that help you achieve your goals.
The first one was already mentioned: Private Data. Private data gives you the ability to 'label' data with a time to live. Only the private data hashes are stored on the chain (so the transaction can still be verified), while the data itself is stored in so-called SideDBs and eventually gets fully pruned (except for the hashes on the chain, of course). This is the basis for using Fabric without workarounds and achieving GDPR compliance.
The other concept, which has not been mentioned yet and is very helpful for this part of the question:
Is there a way to backup and purge a blockchain or delete items older than some time frame?
Every peer stores only the 'current state' of the ledger in its StateDB. The current state can be described as the data that is labeled 'active' and probably soon to be used again. You can think of the StateDB as a cache. Data comes into this cache by creating or updating a key (invoking). To remove a key from the cache you can use 'DelState'. The key is then labeled 'deleted' and is no longer in the cache, BUT it is still on the ledger, and you can retrieve the history and data for that key.
Conclusion: for 'real' deletion of data you have to use the concept of Private Data, and for managing data in your StateDB (think of the cache analogy) you can simply use the built-in functions.
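A minimal sketch of both ideas in Go chaincode, assuming the standard fabric-chaincode-go shim; the collection name 'expiringCollection' (whose blockToLive would be set in the collection config) and the function names are illustrative:

    package main

    import (
        "github.com/hyperledger/fabric-chaincode-go/shim"
        pb "github.com/hyperledger/fabric-protos-go/peer"
    )

    type DemoChaincode struct{}

    func (c *DemoChaincode) Init(stub shim.ChaincodeStubInterface) pb.Response {
        return shim.Success(nil)
    }

    func (c *DemoChaincode) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
        fn, args := stub.GetFunctionAndParameters()
        switch fn {
        case "putPrivate":
            // Stored in the SideDB of the collection; purged automatically
            // after the blockToLive configured for that collection.
            if err := stub.PutPrivateData("expiringCollection", args[0], []byte(args[1])); err != nil {
                return shim.Error(err.Error())
            }
        case "deleteFromState":
            // Removes the key from the current state (the "cache"), but its
            // history remains on the ledger.
            if err := stub.DelState(args[0]); err != nil {
                return shim.Error(err.Error())
            }
        default:
            return shim.Error("unknown function: " + fn)
        }
        return shim.Success(nil)
    }

    func main() {
        if err := shim.Start(new(DemoChaincode)); err != nil {
            panic(err)
        }
    }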

How can I increase the number of peers in my routing table associated with a given infohash

I'm working on a side project and trying to monitor peers on popular torrents, but I can't see how I can get a hold of the full dataset.
If the theoretical limit on routing table size is 1,280 (from 160 buckets * bucket size k = 8), then I'm never going to be able to hold the full number of peers on a popular torrent (~9,000 on a current top-100 torrent).
My concern with simulating multiple nodes is low efficiency due to overlapping values. I would assume that their bootstrapping paths being similar would result in similar routing tables.
Your approach is wrong, since it would violate the reliability goals of the DHT: you would essentially be performing an attack on that keyspace region, other nodes may detect and blacklist you, and it would also simply be bad-mannered.
If you want to monitor specific swarms, don't collect data passively from the DHT. Instead:
if the torrents have trackers, just contact them to get peer lists
connect to the swarm and get peer lists via PEX, which provides far more accurate information than the DHT
if you really want to use the DHT, perform active lookups (get_peers) at regular intervals
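For the last option, a minimal polling sketch, assuming a hypothetical lookup helper backed by whatever DHT implementation you use (the helper type and the interval are illustrative):

    package swarmmonitor

    import (
        "log"
        "time"
    )

    // getPeersFunc is a stand-in for an iterative get_peers lookup performed by
    // your DHT implementation; it returns the peer addresses found for the swarm.
    type getPeersFunc func(infohash [20]byte) ([]string, error)

    // MonitorSwarm polls the DHT at a regular interval and accumulates every
    // peer address ever observed for the given infohash.
    func MonitorSwarm(infohash [20]byte, lookup getPeersFunc, every time.Duration) {
        seen := make(map[string]struct{})
        ticker := time.NewTicker(every)
        defer ticker.Stop()
        for range ticker.C {
            peers, err := lookup(infohash)
            if err != nil {
                log.Printf("get_peers lookup failed: %v", err)
                continue
            }
            for _, p := range peers {
                if _, ok := seen[p]; !ok {
                    seen[p] = struct{}{}
                    log.Printf("new peer %s (total %d)", p, len(seen))
                }
            }
        }
    }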

kademlia closest good nodes won't intersect enough between two requests

Working on a BEP44 implementation, I use the defined Kademlia algorithm to find the closest good nodes for a given hash ID.
Using my program I do go run main.go -put "Hello World!" -kname mykey -salt foobar2 -b public and get the value stored on over a hundred nodes (good).
Now, when I run it multiple consecutive times, the sets of IPs written to by the put requests intersect poorly.
This is a problem, as when I try to do a get request the set of IPs queried does not intersect with the put set, so the value is not found.
In my tests I use the public DHT bootstrap nodes
"router.utorrent.com:6881",
"router.bittorrent.com:6881",
"dht.transmissionbt.com:6881",
When I query the nodes, I select the 8 closest nodes (nodes := s.ClosestGoodNodes(8, msg.InfoHash())), which usually ends up in a list of ~1K queries after a recursive traversal.
In my understanding, storing addresses for the info hash in the DHT table is deterministic given the state of the table. As I'm doing consecutive queries I expect the table to change, indeed, but not that much.
How does it happen that the sets of storage nodes do not intersect?
Since BEP44 is an extension it is only supported by a subset of the DHT nodes, which means the iterative lookup mechanism needs to take support into account when determining whether the set of closest nodes is stable and the lookup can be terminated.
If a node returns a token, v or seq field in a get response then it is eligible for the closest-set of a read-only get.
If a node returns a token then it is eligible for the closest-set of a get that will be followed by a put operation.
So your lookup may home in on a set of nodes in the keyspace that is closest to the target ID but not eligible for the operations in question. As long as you have candidates that are closer than the best known eligible contacts you have to continue searching. I call this perimeter widening, as it conceptually broadens the search area around the target.
Additionally you also need to take error responses or the absence of a response into account when performing put requests. You can either retry the node or try the next eligible node instead.
I have written down some additional constraints that one might want to put on the closest set in lookups for robustness and security reasons in the documentation of my own DHT implementation.
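As an illustration of those two eligibility rules (the type and field names here are hypothetical, not from any particular implementation):

    package lookup

    // GetResponse models the fields of a BEP44 get response that matter for
    // deciding whether the responding node counts toward the closest-set.
    type GetResponse struct {
        Token []byte // write token, required for a subsequent put
        V     []byte // stored value, if the node has one
        Seq   *int64 // sequence number for mutable items, if present
    }

    // eligibleForGet: the node returned a token, a value or a seq,
    // so it counts toward the closest-set of a read-only get.
    func eligibleForGet(r GetResponse) bool {
        return len(r.Token) > 0 || len(r.V) > 0 || r.Seq != nil
    }

    // eligibleForPut: only nodes that handed out a write token can be
    // targets of the follow-up put, so only they count toward its closest-set.
    func eligibleForPut(r GetResponse) bool {
        return len(r.Token) > 0
    }

The lookup then keeps searching while any candidate is closer to the target than the best known eligible contact, which is the perimeter widening described above.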
which usually end up in a list of ~1K queries after a recursive traversal.
This suggests something is wrong with your lookup algorithm. In my experience a lookup should only take somewhere between 60 and 200 UDP requests to find its target if you're doing a lookup with concurrent requests, maybe even fewer when it is sequential.
Verbose logging of the terminal sets, to eyeball how the lookups make progress and how much junk I am getting from peers, has served me well.
In my tests I use the public DHT bootstrap nodes
You should write your routing table to disk and reload it from there, and only perform bootstrapping when none of the persisted nodes in your routing table are reachable. Otherwise you are wasting the bootstrap nodes' resources and also wasting time by having to re-populate your routing table before performing any lookups.
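A minimal persistence sketch, assuming the routing table can be dumped to a list of "host:port" strings (the file handling is illustrative; a real implementation would also ping the reloaded nodes before trusting them):

    package rtpersist

    import (
        "encoding/json"
        "os"
    )

    // saveNodes writes the known node addresses ("host:port") to disk so they
    // survive a restart.
    func saveNodes(path string, addrs []string) error {
        data, err := json.Marshal(addrs)
        if err != nil {
            return err
        }
        return os.WriteFile(path, data, 0o644)
    }

    // loadNodes reloads the persisted addresses; it returns an empty list on
    // the very first run, when the file does not exist yet.
    func loadNodes(path string) ([]string, error) {
        data, err := os.ReadFile(path)
        if os.IsNotExist(err) {
            return nil, nil
        }
        if err != nil {
            return nil, err
        }
        var addrs []string
        return addrs, json.Unmarshal(data, &addrs)
    }

    // bootstrapTargets prefers the persisted nodes and falls back to the
    // public bootstrap nodes only when nothing usable was persisted (or, in a
    // real implementation, when none of the persisted nodes answer a ping).
    func bootstrapTargets(path string, fallback []string) []string {
        if addrs, err := loadNodes(path); err == nil && len(addrs) > 0 {
            return addrs
        }
        return fallback
    }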

Hazelcast: Questions regarding multi-node consistency

(I could not find a good source explaining this, so if it is available elsewhere, you could just point me to it)
Hazelcast replicates data across all nodes in a cluster. So, if data is changed on one of the nodes, does that node update its own copy and then propagate it to the other nodes?
I read somewhere that each piece of data is owned by a node; how does Hazelcast determine the owner? Is the owner determined per data structure or per key in the data structure?
Does Hazelcast follow the "eventually consistent" principle? (When the data is being propagated across the nodes, there could be a small window during which the data might be inconsistent between nodes.)
How are conflicts handled? (Two nodes update the same key-value simultaneously)
Hazelcast does not replicate (with the exception of the ReplicatedMap, obviously ;-)) but partitions data. That means you have one node that owns a given key. All updates to that key go to the owner, which applies them and notifies the backups of the update.
The owner is determined by consistent hashing using the following formula:
partitionId = hash(serialize(key)) % partitionCount
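Conceptually that looks like the sketch below; note it only illustrates the formula, since Hazelcast itself uses its own serialization format and hash function (with 271 partitions by default), so JSON and FNV-1a here are stand-ins:

    package partition

    import (
        "encoding/json"
        "hash/fnv"
    )

    // partitionID illustrates partitionId = hash(serialize(key)) % partitionCount.
    func partitionID(key any, partitionCount uint32) (uint32, error) {
        data, err := json.Marshal(key) // "serialize(key)"
        if err != nil {
            return 0, err
        }
        h := fnv.New32a() // "hash(...)"
        h.Write(data)
        return h.Sum32() % partitionCount, nil
    }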
Since there is only one owner per key, it is not eventually consistent but consistent as soon as the mutating operation returns: all subsequent read operations will see the new value, under normal operational circumstances. When any kind of failure happens (network, host, ...), Hazelcast chooses availability over consistency, and it might happen that a not-yet-updated backup is promoted (especially if you use async backups).
Conflicts can happen after a split-brain situation when the split clusters re-merge. For this case you have to configure a MergePolicy (or use the default one) to define how conflicting entries are merged together or which of the two wins.
