TCP-IP Join in Hazelcast not working in ServiceMix - hazelcast

Following the Hazelcast documentation at http://docs.hazelcast.org/docs/2.4/manual/html/ch12s02.html, I added the hostname of another PC to the hazelcast.xml that is generated in SERVICEMIX_HOME/etc, as below:
<tcp-ip enabled="true">
    <hostname>FABLRDT061:5702</hostname>
    <interface>127.0.0.1</interface>
</tcp-ip>
When I start ServiceMix, it is not able to connect to the hostname I specified; the connection is refused. The log message on the other PC is as below:
[172.16.25.64]:5702 [cellar] 5702 is accepting socket connection from /172.16.25.71:60770
[172.16.25.64]:5702 [cellar] 5702 accepted socket connection from /172.16.25.71:60770
[172.16.25.64]:5702 [cellar] Wrong bind request from Address[127.0.0.1]:5701! This node is not requested endpoint: Address[FABLRDT061]:5702
[172.16.25.64]:5702 [cellar] Connection [/172.16.25.71:60770] lost. Reason: Explicit close
What could be the reason? Can someone help me out?

hazelcast.xml is the configuration file in which the discovery of nodes can be configured.
Even though the tutorials explain the following points, this is what I understood from the hands-on work I did:
Multicast is for auto-discovery of Cellar nodes on the same system.
If Cellar nodes are present on different systems across the network, we use the tcp-ip configuration.
For multicast we don't need to change anything unless we are using different multicast groups.
For discovering nodes using tcp-ip we need to specify IP addresses (as explained by many tutorials, but not exactly how):
Under the tcp-ip tag, create a tag called hostname in which the hostname (or IP address) of the other system should be mentioned. In the interface tag, specify the current system's IP address.
The same should be done on the other nodes; a sketch is shown below.
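As a sketch, assuming the element names from the Hazelcast 2.4 manual linked above (the hostnames and addresses here are placeholders), node A's hazelcast.xml could contain:
<tcp-ip enabled="true">
    <hostname>NODE-B:5701</hostname>
    <interface>192.168.1.10</interface>
</tcp-ip>
Node B would mirror this, listing NODE-A under hostname and its own address (say 192.168.1.20) under interface.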

I would stay away from using hostnames and replace them with IP addresses.

Related

libp2p - How to discover initial peers?

In the Bitcoin Core P2P client, the initial peers are found as stated below:
When started for the first time, programs don’t know the IP addresses
of any active full nodes. In order to discover some IP addresses, they
query one or more DNS names (called DNS seeds) hardcoded into Bitcoin
Core and BitcoinJ. The response to the lookup should include one or
more DNS A records with the IP addresses of full nodes that may accept
new incoming connections. For example, using the Unix dig command (https://en.wikipedia.org/wiki/Dig_%28Unix_command%29):
source: https://developer.bitcoin.org/devguide/p2p_network.html
Is the same approach required for libp2p for initial peer discovery? I was not able to find any tutorial that covers this. I was hoping libp2p would handle this problem. Does libp2p provide guidance or facilities for this?

Regarding mDNS observations on Wireshark

I am a beginner here and would like to clarify a few things.
I have a server (OPC UA) running on my system, and the specification says it announces itself on the local link using DNS SRV records. After some research, I figured out that mDNS is used along with DNS-SD for service discovery in local networks. The mDNS specification defines the terms probing and announcement. I don't really understand how these show up in the Wireshark log when I run the server.
Does "resource record" mean the different types of records (A, SRV, TXT, PTR) for the service (_opcua-tcp._tcp.local)?
Since it should be an announcement on the network, shouldn't it just be an SRV record announcing the service name along with the port and host information?
When there is a single host with a single service on it, what are the implications of multicast probe and announce?
Please let me know.
Attached is the screenshot.

OPC UA Multicast Discovery

I am a beginner in OPC UA, exploring the discovery mechanisms mentioned in part 12 of the specification. I have a couple of queries.
In multicast extension discovery, the server registers with its local discovery server (LDS-ME), and when the client registers with its own LDS-ME, the client-side LDS-ME issues a multicast probe to which the server-side LDS-ME responds with an announcement, thus allowing the client to learn the list of servers in the network.
My question is: why is the process referred to as multicast probe and multicast announcement? As per the mDNS specification, probe and announcement are used initially to secure unique ownership of a resource record. Could anybody tell me why it is referred to as probe and announce here?
In the open62541 stack, with the discovery examples, when running server_lds.c I get a log message saying "Multicast DNS: outbound interface 0.0.0.0, it means that the first OS interface is used (you can explicitly set the interface by using 'discovery.mdnsInterfaceIP' config parameter)".
Now, theory says the multicast DNS address should be 224.0.0.251:5353.
Why is it being set to 0.0.0.0? Could anyone please let me know?
Regards,
Rakshan
There is no relation to the words "probe" and "announce" as used in the mDNS spec. Here, probe just means look-up or query, and announce means something like "these are the results related to your probe request".
0.0.0.0 here means every IPv4 interface is used (bound), so every capable interface in your system will be configured for mDNS. The multicast group itself should be the one you mentioned.
"0.0.0.0" => have a look here https://en.wikipedia.org/wiki/0.0.0.0

What is the difference between broadcast_address and broadcast_rpc_address in cassandra.yaml?

GOAL: I am trying to understand the best way to configure my Cassandra cluster so that several different drivers across several different networking scenarios can communicate with it properly.
PROBLEM/QUESTION: After reading the documentation, it is not entirely clear to me what the difference is between these two settings, broadcast_address and broadcast_rpc_address, as it pertains to the way a driver connects to and interacts with the cluster. Which one, or which combination, of these settings should I set to my node's accessible network endpoint (the DNS record reachable by the clients/drivers)?
Here is the documentation for broadcast_address from datastax:
(Default: listen_address) The IP address a node tells other nodes in the cluster to contact it by. It allows public and private address to be different. For example, use the broadcast_address parameter in topologies where not all nodes have access to other nodes by their private IP addresses.
If your Cassandra cluster is deployed across multiple Amazon EC2 regions and you use the EC2MultiRegionSnitch, set the broadcast_address to public IP address of the node and the listen_address to the private IP.
Here is the documentation for broadcast_rpc_address from datastax:
(Default: unset) RPC address to broadcast to drivers and other Cassandra nodes. This cannot be set to 0.0.0.0. If blank, it is set to the value of the rpc_address or rpc_interface. If rpc_address or rpc_interface is set to 0.0.0.0, this property must be set.
EDIT: This question pertains to Cassandra version 2.1, and may not be relevant in the future.
One of the users of #cassandra on freenode was kind enough to provide an answer to this question:
The rpc family of settings pertains to drivers that use the Thrift protocol to communicate with Cassandra. For drivers that use the native transport, the broadcast_address will be reported and used.
My test case confirms this.
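As an illustration only (the addresses are placeholders), a cassandra.yaml fragment for a node with a private and a public address, following the documentation quoted above, might look like:

listen_address: 10.0.0.5             # private IP the node binds for cluster traffic
broadcast_address: 203.0.113.5       # public IP this node tells other nodes to contact it by
rpc_address: 0.0.0.0                 # listen for client connections on all interfaces
broadcast_rpc_address: 203.0.113.5   # required because rpc_address is 0.0.0.0; advertised to clients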

How does the PEX protocol (magnet links) find its first IP?

I'm trying to understand how a magnet link can work. I've read that they use DHT and PEX to get the peers, but if I'm a new node in the network, how can I find peers with only the hash of the file? Doesn't it always require a link to a known host?
Thanks
The bittorrent DHT can be bootstrapped in many ways. It just needs the IP and port of any other reachable DHT node out there.
Current clients generally use several of the following strategies:
bootstrap from a cache of long-lived nodes from a previous session
use a DNS A/AAAA record mapping to a known node (e.g. router.bittorrent.com or dht.transmissionbt.com) with a known port
use a node embedded in a .torrent file
retrieve the DHT port from a bittorrent client over a bittorrent connection established through other means, e.g. a conventional tracker.
If a peer is embedded in a magnet link, one can also piggyback a DHT bootstrap on that through the port message.
multicast neighbor discovery via LSD
cross-chatter from the IPv4 to the IPv6 DHTs and vice versa (if needed)
Other approaches work as well, such as user-configurable bootstrap lists, DNS SRV records round-robin-mapped to live nodes, or, should everything else fail, manually adding the IP of your friend(s).
Once a node has joined the network, the first strategy mentioned above will kick in, and it is unlikely that it will have to bootstrap again.
So while most implementations rely on a single/few points of entry into the network for convenience, the protocol itself is flexible enough to decentralize the points of entry too.
Just for emphasis: Any node in the DHT can be used to join the network. Dedicated bootstrap nodes are an implementation detail, not part of the protocol, and could be replaced by other discovery mechanisms if necessary.
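For example, the DNS strategy above is just an ordinary A/AAAA lookup; here is a minimal Java sketch (the port 6881 is an assumption for these well-known bootstrap hosts):

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.List;

public class DhtBootstrapSketch {
    public static void main(String[] args) throws Exception {
        String[] seeds = { "router.bittorrent.com", "dht.transmissionbt.com" };
        int port = 6881; // assumed DHT port for these hosts
        List<InetSocketAddress> bootstrapNodes = new ArrayList<>();
        for (String host : seeds) {
            // Each hostname may resolve to several A/AAAA records.
            for (InetAddress addr : InetAddress.getAllByName(host)) {
                bootstrapNodes.add(new InetSocketAddress(addr, port));
            }
        }
        // A real client would now send DHT ping/find_node queries to these
        // endpoints to start populating its routing table.
        bootstrapNodes.forEach(System.out::println);
    }
}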
