I've built a decentralized node network using WebSockets in Node.js. I would like to visualize this network as a graph. To visualize the whole network I need to know about all nodes connected to each other, but there is a problem: in a decentralized network there is no central point. How can I get all connected nodes starting from any single node? Let's say we have a connection:
A <-> B <-> C
picture of the network
As you can see, you can visualize the network from B, but what about visualizing it from A or C?
How does A know about C while drawing the graph? Should I also attach the peers of connected nodes (all peers of B), and then all peers of the next nodes, such as C? What would be the best way to get all peers connected to each other? Thanks for any tips.
Graph search algorithms could be of use :D Common ones are BFS and DFS. The general gist of the approach is to start with any node and visit its neighbours while keeping track of which ones you have already visited (so you don't have to go over them again).
In your case I think it would be sensible to start with any random node, and do the process iteratively. I found a fantastic explanation here, please check it out.
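If it helps, here is a rough sketch of that idea for your setup: a breadth-first crawl starting from any node. The getPeers helper is hypothetical and stands in for whatever "send me your peer list" message your WebSocket protocol would need; it is not part of any existing library.

```typescript
// Breadth-first crawl of the peer graph, starting from any single node.
// `getPeers(addr)` is a hypothetical helper: it should ask the node at `addr`
// (over your WebSocket protocol) for the addresses of its directly connected peers.
async function getPeers(addr: string): Promise<string[]> {
  // ... send a "list your peers" request and await the reply
  return [];
}

// Returns an adjacency list: node address -> addresses of its direct peers.
async function crawlNetwork(start: string): Promise<Map<string, string[]>> {
  const graph = new Map<string, string[]>();
  const visited = new Set<string>([start]);
  const queue: string[] = [start];

  while (queue.length > 0) {
    const current = queue.shift()!;
    const peers = await getPeers(current);
    graph.set(current, peers);

    for (const peer of peers) {
      if (!visited.has(peer)) {
        visited.add(peer); // remember it so we only ever visit each node once
        queue.push(peer);
      }
    }
  }
  return graph; // starting from A, this ends up holding A, B and C, ready to draw
}
```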
This is a fun exercise actually
For my new weekend project I decided to write a BitTorrent client from scratch, with no ready-to-use libraries at all. After two days looking for documentation I'm already about to give up :smile:. I know there are the BEPs, but they are far from enough to understand the whole specification. After reading a lot more, I think the tracker and peer protocols seem to be old and easy to understand/implement (yes, I know, writing good code with balance, peer selection, and optimizations is not as easy as I just said, but all I want is to do the basics to learn, not to compete with the tens of good clients out there).
So, I have decided to start with the DHT, which seems to be the more complex part and also the least documented. When you stop searching for BitTorrent DHT or mainline DHT and start searching for Kademlia DHT you find a lot more information, but it is not so obvious how to put it all together.
Here is what I understand so far (and there are gaps which I hope to fill in):
1) I start with my DHT tree empty
2) use find_node on my bootstrap node
3) add the received nodes to my own tree, so I can then select the ones closer to my own ID
4) start issuing find_node to the selected ones and add their responses to my tree
5) go back to 3 until I stop receiving unknown/new nodes (see the sketch after this list)
6) if I receive an announce_peer with an info_hash, then I should save its information in a local DB (the info_hash and IP/port of the sender)
7) if a node uses get_peers with an info_hash I have in my DB, then I send the information; otherwise I should send a list of the closest nodes I have in my own tree (closest to that info_hash)
8) when I use get_peers on other nodes I will receive peers or nodes; in the latter case I think the nodes are closer to the info_hash and not to my own node ID, so should I add these nodes to my tree or start a new tree based on them?
9) when I want to announce that I am interested in an info_hash, should I use announce_peer everywhere or just on the nodes whose IDs are closer to the target info_hash? How close is close enough?
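To make steps 1-5 concrete, here is a minimal sketch of that bootstrap loop. sendFindNode and the shape of the contacts are made up for illustration; a real client would use KRPC over UDP and a proper bucket-based routing table.

```typescript
// Minimal sketch of the bootstrap loop in steps 1-5. `sendFindNode` and the
// routing table structure are hypothetical placeholders, not part of any library.
interface Contact { id: bigint; host: string; port: number; }

async function sendFindNode(contact: Contact, target: bigint): Promise<Contact[]> {
  // ... send a find_node query for `target` and return the contacts in the reply
  return [];
}

async function bootstrap(myId: bigint, bootstrapNode: Contact): Promise<Map<bigint, Contact>> {
  const known = new Map<bigint, Contact>();   // stands in for "my tree"
  const queried = new Set<bigint>();
  known.set(bootstrapNode.id, bootstrapNode);

  let learnedSomethingNew = true;
  while (learnedSomethingNew) {               // step 5: repeat until nothing new shows up
    learnedSomethingNew = false;

    // step 4: query the closest known-but-unqueried contacts to my own ID
    const next = [...known.values()]
      .filter(c => !queried.has(c.id))
      .sort((a, b) => ((a.id ^ myId) < (b.id ^ myId) ? -1 : 1))
      .slice(0, 3);

    for (const contact of next) {
      queried.add(contact.id);
      for (const reply of await sendFindNode(contact, myId)) { // steps 2-3
        if (!known.has(reply.id)) {
          known.set(reply.id, reply);
          learnedSomethingNew = true;
        }
      }
    }
  }
  return known;
}
```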
At this point I have a lot of nodes whose IDs are close to my own ID, and information about info_hashes I am not really interested in.
I am afraid that I have a giant, stupid question: why did I do all that?
I mean: my selfish reason for doing all this work is to locate peers for the info_hash I'm interested in. I understand that the information for a given info_hash is likely to be saved on a node whose ID is close to that info_hash. So my chances of finding its information are bigger if I build a tree of nodes close to the info_hash rather than close to my own ID (at this point, if you know the subject, you have already noticed how lost I am).
Should I create multiple trees? One for me (to store the information about info_hashes close to my node ID that people send me), and another tree close to each of my target info_hashes so I can retrieve their information?
Should I create a single tree close to my node ID and hope for the best when querying it for the info_hashes I need?
Should I give up, since I have completely misunderstood the idea behind the DHT?
Well, any real documentation, flowcharts, anything will be welcome!
So, I have decided to start with the DHT, which seems to be the more complex part and also the least documented.
The original Kademlia paper, "Kademlia: A Peer-to-peer Information System Based on the XOR Metric" by Peter Maymounkov and David Mazieres, is required reading. It is referenced fairly early in BEP-5.
if I receive an announce_peer with an info_hash, then I should save its information in a local DB (the info_hash and IP/port of the sender)
You only accept announces when they contain a token previously handed out via get_peers.
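For reference, a small sketch of the token scheme BEP-5 suggests (SHA1 of the requester's IP plus a secret that rotates every few minutes); the helper names here are invented for illustration:

```typescript
import { createHash, randomBytes } from "crypto";

// Sketch of the write-token scheme suggested in BEP-5: the token handed out in a
// get_peers response is a hash of the requester's IP and a secret that rotates
// every few minutes; announce_peer is only accepted if it echoes a valid token.
let currentSecret = randomBytes(8);
let previousSecret = currentSecret;

setInterval(() => {          // rotate the secret every 5 minutes
  previousSecret = currentSecret;
  currentSecret = randomBytes(8);
}, 5 * 60 * 1000);

function makeToken(ip: string, secret: Buffer): Buffer {
  return createHash("sha1").update(ip).update(secret).digest();
}

// Called when answering get_peers.
function issueToken(requesterIp: string): Buffer {
  return makeToken(requesterIp, currentSecret);
}

// Called when handling announce_peer: accept tokens made with the current or the
// previous secret so recently issued tokens stay valid across a rotation.
function tokenIsValid(requesterIp: string, token: Buffer): boolean {
  return token.equals(makeToken(requesterIp, currentSecret)) ||
         token.equals(makeToken(requesterIp, previousSecret));
}
```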
when I use get_peers on other nodes I will receive peers or nodes; in the latter case I think the nodes are closer to the info_hash and not to my own node ID, so should I add these nodes to my tree or start a new tree based on them?
You use a temporary tree - or a list ordered by contact ID distance relative to the target ID - for iterative lookups, since those contacts are not organized around your own node ID the way your routing table is.
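A rough sketch of such a target-ordered lookup list, assuming a hypothetical sendFindNode helper that queries one contact and returns the contacts from its reply:

```typescript
// Iterative lookup kept in a temporary list sorted by XOR distance to the target
// (not by distance to our own node ID).
interface Contact { id: bigint; host: string; port: number; }

const distance = (a: bigint, b: bigint): bigint => a ^ b; // Kademlia XOR metric

async function sendFindNode(contact: Contact, target: bigint): Promise<Contact[]> {
  // ... send find_node/get_peers for `target` and parse the reply
  return [];
}

async function lookup(target: bigint, seeds: Contact[], k = 8): Promise<Contact[]> {
  const shortlist = new Map<bigint, Contact>();
  for (const c of seeds) shortlist.set(c.id, c);
  const queried = new Set<bigint>();

  for (;;) {
    // Only the k contacts closest to the *target* matter; the list is ordered by
    // distance to the target, not to our own node ID.
    const closest = [...shortlist.values()]
      .sort((a, b) => (distance(a.id, target) < distance(b.id, target) ? -1 : 1))
      .slice(0, k);

    const next = closest.filter(c => !queried.has(c.id)).slice(0, 3); // alpha = 3
    if (next.length === 0) return closest; // converged: the k closest are all queried

    for (const contact of next) {
      queried.add(contact.id);
      for (const reply of await sendFindNode(contact, target)) {
        if (!shortlist.has(reply.id)) shortlist.set(reply.id, reply);
      }
    }
  }
}
```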
when I want to announce that I am interested in an info_hash, should I use announce_peer everywhere or just on the nodes whose IDs are closer to the target info_hash? How close is close enough?
You perform a get_peers lookup, and when it is done you announce to the K closest nodes that returned a write token, verifying the responses to make sure you actually reach K of them. In the case of BitTorrent, K = 8.
my selfish reason for doing all this work is to locate peers for the info_hash I'm interested in. I understand that the information for a given info_hash is likely to be saved on a node whose ID is close to that info_hash. So my chances of finding its information are bigger if I build a tree of nodes close to the info_hash rather than close to my own ID (at this point, if you know the subject, you have already noticed how lost I am).
When doing lookups you do not just visit nodes in your routing table, you also visit nodes included in the responses. This makes them iterative. The bias of each node's routing table towards their own ID ensures that the responses include neighbors closer and closer towards the target.
So the deal is that you are responsible for information close to your node ID and other nodes will provide information close to their node IDs that you are interested in. So your routing table layout serves others, their routing table layout serves you.
Note that all the information contained in this answer can be found in the BEP or Kademlia paper.
I'm working on a Mesos framework to run some jobs, and it seems like a great opportunity to learn about building a highly available system. To that end, I'm doing some reading on distributed systems, and I made the mistake of visiting Wikipedia.
The passage in question is talking about a principle of HA engineering:
Reliable crossover. In multithreaded systems, the crossover point itself tends to become a single point of failure. High availability engineering must provide for reliable crossover.
My google-fu teaches me three things:
1) audio crossover devices split a single input into multiple outputs
2) genetic algorithms use crossover to combine solutions
3) buzzwordy white papers all copied from this Wikipedia article :/
My question: what does a 'crossover point' mean in this context, and why is it a single point of failure?
Reliable crossover in this context means:
The ability to switch from a node X (which is broken somehow) to a node Y without losing data.
Non-reliable HA-database example:
Copy the database every 5 minutes to a passive node. => Here you can lose up to 5 minutes of data.
=> Here the copy action is the single point of failure.
Reliable HA-database example:
Setting up data replication where (for example) your insert statement only returns "executed OK" once the transaction has been copied to the secondary server.
(yes: data replication is more complex than this, this is a simplified example in the context of the question)
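To illustrate the difference between the two examples, here is a deliberately simplified sketch; applyLocally and replicateToSecondary are stand-ins for whatever storage engine and replication transport is actually in use:

```typescript
// Hypothetical helpers standing in for the primary's storage engine and the
// replication channel to the secondary.
async function applyLocally(statement: string): Promise<void> { /* write to primary */ }
async function replicateToSecondary(statement: string): Promise<void> { /* ship to replica */ }

// Non-reliable: acknowledge immediately, replicate in the background.
// Anything not yet shipped is lost if the primary dies before the copy happens.
async function insertAsync(statement: string): Promise<"executed OK"> {
  await applyLocally(statement);
  void replicateToSecondary(statement); // fire and forget
  return "executed OK";
}

// Reliable crossover: only acknowledge once the secondary also has the data,
// so a failover to the secondary cannot lose an acknowledged write.
async function insertSync(statement: string): Promise<"executed OK"> {
  await applyLocally(statement);
  await replicateToSecondary(statement); // block until the replica confirms
  return "executed OK";
}
```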
I was referring to the paper Virtual Node Algorithm for Changing Mesh Topology During Simulation.
I have one very basic question regarding it. The term "a node's one ring" is used in this paper, and I am confused about what it refers to.
The context in which it is used is:
For each distinct scoop cut out of a node's one ring, a virtual copy of the central node is created and donated to the nodes within the given scoop to give the mesh the degrees of freedom needed to break apart.
Please help me understand this.
A node's one ring is the set of triangles that are incident on the node. I read that paper recently, so if you have any questions you can post them in the comments.
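As a tiny illustration (assuming the mesh is stored as index triples into a vertex array, which is an assumption about representation, not something taken from the paper):

```typescript
// A node's one ring in a triangle mesh: all triangles that have the given
// vertex as one of their corners. Triangles are index triples into a vertex array.
type Triangle = [number, number, number];

function oneRing(vertex: number, triangles: Triangle[]): Triangle[] {
  return triangles.filter(tri => tri.includes(vertex));
}

// Example: a small fan of triangles around vertex 0.
const mesh: Triangle[] = [[0, 1, 2], [0, 2, 3], [0, 3, 4], [1, 5, 6]];
console.log(oneRing(0, mesh)); // [[0,1,2], [0,2,3], [0,3,4]] - the one ring of vertex 0
```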
I'm getting familiar with DHT and I mostly understand how it works. However, I don't quite understand what happens if you want to have separate DHTs with different entry types in each. Is this possible?
If I use a popular DHT library, does that mean I put and get entries using the same DHT as every user of said library? Or is DHT universal for everyone? How do you define an owner of a DHT, or how do you define a separate, contained DHT?
Yes, you can make separate DHTs. However, you need to make the 'on the wire' protocols slightly different so they can't speak to each other and get mixed up.
You can actually have unlimited numbers of DHTs using the same protocol as long as peers don't know each other.
This is the important part when you set up a network: you have to know a second peer to initially create the network. The next peer would have to know one of the two initial nodes, the next one needs to know one of the three above, and so forth.
You are also able to be connected to multiple DHTs on the same host without the two interfering (at least in terms of data exchange, not in terms of local resources). And you are also able to join those two DHTs by telling one of them about the peers you are connected to in the other DHT, though that might not be as easy as it sounds.
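One simple way to keep the wire protocols apart, sketched below, is to tag every message with a network identifier and drop anything tagged for another network; the network field and message shape here are invented for illustration, not part of any existing DHT protocol:

```typescript
// Keep two DHTs separate even though they share a wire format: tag every message
// with a network identifier and ignore anything addressed to a different network.
interface DhtMessage {
  network: string;          // e.g. "my-app-dht-v1"
  type: "ping" | "find_node" | "get" | "put";
  payload: unknown;
}

const MY_NETWORK = "my-app-dht-v1";

function handleIncoming(raw: string): DhtMessage | null {
  const msg = JSON.parse(raw) as DhtMessage;
  if (msg.network !== MY_NETWORK) {
    return null;            // belongs to some other DHT: drop it entirely
  }
  return msg;
}
```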
Recently I've been reading a document about the Kademlia protocol. I tried to understand it, but I still have some questions:
Why must a node find another node when it knows that node's ID but not its IP or port?
Why does it have the ID while not knowing the IP or port? Where did it get the ID from?
I think the "distance" between two different nodes is not a routing distance or real distance; it's only a virtual distance that the algorithm can use to find the node quickly. Is that right?
Maybe my English is not very clear because English is not my mother tongue, but I'll try to express myself more clearly if you need.
Thanks very much!
As cHao said, the distributed nature of the network means that nodes need to publish their IDs and their contact details to other nodes they talk to. There is no central place where IDs are mapped to contact info, so each node must keep this mapping for a subset of the nodes on the network in its own routing table.
Kademlia routing tables are structured so that nodes will have detailed knowledge of the network close to them, and exponentially decreasing knowledge further away.
The use of bitwise XOR as a measure of notional distance between IDs has the advantage that for a given target ID, no two IDs can have the same distance to the target.
Imagine a simple example where the IDs are in the range 00 to 63. If Kademlia used e.g. pure mathematical difference as a measure of distance, 15 and 35 would be the same distance to 25 - both would have a distance of 10. Using XOR, the distance between 15 and 25 is 22, and between 25 and 35 it's 58.
In this way, the group of k closest IDs to a target ID can be calculated unambiguously.
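To make that concrete, here is a tiny sketch that reproduces the numbers from the example and hints at how the same XOR distance drives the routing table shape mentioned above (the bucketIndex helper is only an illustration of the usual highest-set-bit bucketing, not a full routing table):

```typescript
// With plain difference, 15 and 35 tie at distance 10 from 25; with XOR they differ.
console.log(Math.abs(25 - 15), Math.abs(25 - 35)); // 10 10 -> ambiguous
console.log(25 ^ 15, 25 ^ 35);                     // 22 58 -> unambiguous

// The bucket a contact falls into can be taken from the highest set bit of the
// XOR distance, so nearby IDs spread over many buckets while distant ranges
// share one - "detailed knowledge close by, coarse knowledge far away".
const bucketIndex = (self: number, other: number): number =>
  31 - Math.clz32(self ^ other);
console.log(bucketIndex(25, 24), bucketIndex(25, 60)); // 0 and 5
```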
The constant k has a couple of uses in Kademlia, but it's primarily the replication factor. In other words, a piece of data is stored on the k closest nodes to the data's ID.
The lookup process is designed to return either a group of k nodes (before storing data on each of them) or return a single piece of data (from the first node holding it during the lookup iterations).
Because of this, pure Kademlia isn't best suited to finding just a single node, so I'm not sure that part of your question is too relevant. If you did want to use Kademlia to find a single node, it would probably be worth modifying the lookup process to finish early as soon as any node returns the target node's contact details (in the same way that the lookup finishes early if a target value is found during the process).
Since the network is distributed, by definition there's no one master table of ID->address mappings. Nodes don't have to (and usually don't) know about all the other nodes. The process of "finding" a node is basically to ask the known nodes "closest" to the target not so much about the target node directly, but about which nodes are closer to the target. The result of that query gives you the next group of nodes to query, and the process repeats -- and because a node will return results that are closer than it is, each iteration tends to find nodes closer and closer to the target until you finally reach a node that can say "Oh, node X? He's right over there."
At least that's my understanding of it.