GridGain node disconnect

We have GridGain set up across 5 nodes, and very occasionally we see a network interruption across the nodes for 3-5 seconds. During this period the nodes get disconnected. The network comes back online after that, but since the grid nodes got disconnected they can no longer communicate. Is there any way to configure the grid nodes' ping or heartbeat interval to be greater than 5 seconds, so that they are never disconnected by these network interruptions?

Right now GridGain will consider a node that has been out for 5 seconds failed. I have filed an internal ticket to allow a node to stay disconnected for a short while before it is considered failed. It should be fixed in a future release.
Having said that, it does seem rather unusual to have such breakdowns in your network.
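For what it's worth, later GridGain / Apache Ignite releases did expose such a knob; here is a minimal sketch assuming the Apache Ignite API (IgniteConfiguration.setFailureDetectionTimeout), which is not available in the release discussed above, with an illustrative timeout value:

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();
// Tolerate network outages of up to 30 seconds (illustrative value)
// before the rest of the cluster considers this node failed.
cfg.setFailureDetectionTimeout(30_000);
Ignition.start(cfg);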

Related

Why does a Cassandra node get picked as coordinator even when the driver keeps throwing OperationTimedOutException?

I set up a Cassandra cluster with several coordinator nodes.
Sometimes one of the coordinator nodes becomes unavailable... my code handles this with a retry policy which moves to the next node, and the problem is solved.
However, it seems that the problematic node still receives traffic even though the driver keeps throwing OperationTimedOutException... this is time-consuming, since requests to that node are useless.
Further details:
Cassandra Driver -
I'm using Cassandra driver version 3.11.0 (cassandra-driver-core-3.11.0.jar)
Load balancing policy -
I didn't set any load balancing policy - thus, the default is used.
Retry Policy -
I implemented my own retry policy -
In case of a read/write timeout or unavailable error, I retry while reducing the consistency level to ONE. In case of a request error, I try a different host.
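In rough outline it looks like this (simplified; the class name and the single-retry limit are illustrative, not my exact code):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.WriteType;
import com.datastax.driver.core.exceptions.DriverException;
import com.datastax.driver.core.policies.RetryPolicy;

public class DowngradingRetryPolicy implements RetryPolicy {
    @Override
    public RetryDecision onReadTimeout(Statement stmt, ConsistencyLevel cl,
            int required, int received, boolean dataRetrieved, int nbRetry) {
        // Retry once at CL ONE, then give up
        return nbRetry == 0 ? RetryDecision.retry(ConsistencyLevel.ONE) : RetryDecision.rethrow();
    }

    @Override
    public RetryDecision onWriteTimeout(Statement stmt, ConsistencyLevel cl,
            WriteType writeType, int requiredAcks, int receivedAcks, int nbRetry) {
        return nbRetry == 0 ? RetryDecision.retry(ConsistencyLevel.ONE) : RetryDecision.rethrow();
    }

    @Override
    public RetryDecision onUnavailable(Statement stmt, ConsistencyLevel cl,
            int requiredReplica, int aliveReplica, int nbRetry) {
        return nbRetry == 0 ? RetryDecision.retry(ConsistencyLevel.ONE) : RetryDecision.rethrow();
    }

    @Override
    public RetryDecision onRequestError(Statement stmt, ConsistencyLevel cl,
            DriverException e, int nbRetry) {
        // On request errors, move on to another coordinator
        return RetryDecision.tryNextHost(cl);
    }

    @Override public void init(Cluster cluster) {}
    @Override public void close() {}
}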
Is there any way to configure things so that, if the driver keeps throwing OperationTimedOutException while sending queries to a specific coordinator node, that node will not be contacted for some period of time?
The Cassandra client connection caches the coordinator node, so it will continue sending queries to the same node. Tune your application-layer socket configuration via the client connection timeouts:
import com.datastax.driver.core.SocketOptions;

SocketOptions options = new SocketOptions();
options.setConnectTimeoutMillis(30000); // time allowed to establish a connection to a node
options.setReadTimeoutMillis(30000);    // per-request time the driver waits for a response
options.setTcpNoDelay(true);
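Note that these options only take effect once attached to the Cluster; a minimal sketch (the contact point is just a placeholder):

import com.datastax.driver.core.Cluster;

Cluster cluster = Cluster.builder()
    .addContactPoint("10.0.0.1")   // placeholder contact point
    .withSocketOptions(options)
    .build();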
There are a few misconceptions in your question so let me begin by correcting them.
Misconception #1
I set up a Cassandra cluster with several coordinator nodes.
All nodes in a Cassandra cluster are the same. This is one of the attributes that makes Cassandra awesome. Any node in the cluster can be picked as a coordinator. You can NOT configure/nominate/set up a node to be a coordinator while others aren't.
Misconception #2
... if a coordinator node keeps throwing OperationTimedOutException ...
Cassandra nodes are not capable of throwing OperationTimedOutException. OperationTimedOutException is a client-side exception which gets thrown by the driver when it doesn't get a response from a coordinator within the configured client timeout period.
It is a different exception from read or write timeout exceptions which are thrown when the coordinator sends a response back to the driver when a read or write request timed out on the server-side.
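To make the distinction concrete, here is a sketch of where each exception surfaces on the client (the session variable and the query are placeholders):

import com.datastax.driver.core.exceptions.OperationTimedOutException;
import com.datastax.driver.core.exceptions.ReadTimeoutException;

try {
    session.execute("SELECT * FROM ks.tbl LIMIT 1");  // placeholder query
} catch (OperationTimedOutException e) {
    // client-side: the driver never got any response from the coordinator in time
} catch (ReadTimeoutException e) {
    // server-side: the coordinator responded, reporting that replicas timed out
}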
Picking nodes
OperationTimedOutException exists in Java driver v3.x but not in v4.x (it was replaced with DriverTimeoutException, which makes it clearer that the exception is client-side), which matches the v3.11 driver you said you're using (the latest in the v3 series).
The load balancing policy (LBP) matters here. If the latency-aware LBP LatencyAwarePolicy is in the chain, the likely scenario is that the problematic node has the lowest latency, so it is listed as the "preferred node" by the policy.
Handling misbehaving nodes is a very tough thing for drivers, particularly when the nodes are unresponsive, because a driver can't know what is really going on if a node doesn't respond at all. Drivers can't be too aggressive at marking nodes as "down" either: if a node is just temporarily unavailable (for example, due to a GC pause), marking it down means it won't get picked as a coordinator again for some time even after it recovers.
Sometimes, the latency "signal" from a problematic node takes a while to bubble up for a driver to effectively route around it because of the algorithm used by the driver to average out the reported latencies over a period of one or two minutes, scaled such that older latencies are weighted less than newer latencies. In the case of an unresponsive node, the driver can only base the average/scaling on the last time the node reported its latency.
For this reason, LatencyAwarePolicy was dropped in Java driver v4 in favor of the new DefaultLoadBalancingPolicy, which has a much better detection algorithm for slow replicas.
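For illustration, here is how the v3 LatencyAwarePolicy exposes the averaging behaviour described above; the child policy and all values are arbitrary examples, not recommendations:

import java.util.concurrent.TimeUnit;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.LatencyAwarePolicy;
import com.datastax.driver.core.policies.LoadBalancingPolicy;

LoadBalancingPolicy lbp = LatencyAwarePolicy
    .builder(DCAwareRoundRobinPolicy.builder().build())
    .withExclusionThreshold(2.0)            // penalise hosts slower than 2x the fastest host
    .withScale(100, TimeUnit.MILLISECONDS)  // how quickly older latency samples lose weight
    .withRetryPeriod(10, TimeUnit.SECONDS)  // how long an excluded host is penalised
    .build();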
Your workaround using tryNextHost() is a bit clunky because you have to effectively wait for the retry policy to kick in. What you really need to focus on is the fact that your nodes become unresponsive. If your cluster is getting overloaded, you should consider increasing the capacity by adding more nodes.
Trying to come up with a software solution for what is an infrastructure capacity issue is never going to be successful in the long run. Cheers!

Understand Cassandra pooling options (setCoreConnectionsPerHost and setMaxConnectionsPerHost)?

I recently started working with Cassandra and I was reading more about connection pooling here. I was confused about pool size and couldn't understand what this means here:
poolingOptions
    .setCoreConnectionsPerHost(HostDistance.LOCAL, 4)
    .setMaxConnectionsPerHost(HostDistance.LOCAL, 10)
    .setCoreConnectionsPerHost(HostDistance.REMOTE, 2)
    .setMaxConnectionsPerHost(HostDistance.REMOTE, 4)
    .setMaxRequestsPerConnection(2000);
Below is what I want to understand in detail:
I would like to know what setCoreConnectionsPerHost, setMaxConnectionsPerHost and setMaxRequestsPerConnection mean.
What do LOCAL and REMOTE mean here?
If someone can explain with an example then it will really help me understand better.
We have a 6 nodes cluster all in one dc with RF as 3 and we read/write as local quorum.
The Cassandra protocol allows multiple queries to be submitted for execution over the same network connection in parallel, without waiting for answers. setMaxRequestsPerConnection sets how many in-flight queries a single connection can carry simultaneously. The maximum depends on the protocol version; since protocol v3 it's 32k, but in practice you should keep it around 1000-2000: if you need more, it's a sign that the server is not keeping up with your queries.
Drivers open connections to every node in the cluster, and these connections are marked either as LOCAL, if they go to nodes in the data center that is local to the application (either set explicitly in the load balancing policy, or inferred from the first contact point), or as REMOTE, if they go to nodes in other data centers.
Also, the driver can open several connections per node, and there are two values that control their number: core, the minimal number of connections, and max, the upper limit. The driver will open new connections if you submit new requests that don't fit within the existing limit.
So in your example:
poolingOptions
    .setCoreConnectionsPerHost(HostDistance.LOCAL, 4)
    .setMaxConnectionsPerHost(HostDistance.LOCAL, 10)
    .setCoreConnectionsPerHost(HostDistance.REMOTE, 2)
    .setMaxConnectionsPerHost(HostDistance.REMOTE, 4)
    .setMaxRequestsPerConnection(2000);
for the local data center, it will open 4 connections per node initially, and this may grow up to 10 connections
for other data centers, it will open 2 connections per node, which may grow up to 4 connections
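Putting it together: the pooling options only take effect once attached to the Cluster; a minimal sketch (the contact point is a placeholder, and only the local settings are repeated here):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.HostDistance;
import com.datastax.driver.core.PoolingOptions;

PoolingOptions poolingOptions = new PoolingOptions()
    .setCoreConnectionsPerHost(HostDistance.LOCAL, 4)
    .setMaxConnectionsPerHost(HostDistance.LOCAL, 10);

Cluster cluster = Cluster.builder()
    .addContactPoint("10.0.0.1")          // placeholder contact point
    .withPoolingOptions(poolingOptions)
    .build();

To size things, multiply the pieces: with your 6 local nodes, up to 10 connections per host and 2000 requests per connection, the driver could in principle have 6 x 10 x 2000 = 120,000 requests in flight to the local data center.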

Excessive open connections to Mongos instances

We're moving from a single replica set to shards and are experiencing some issues. We have 3 mongos instances, 3 config servers, and 15 data nodes (5 shards with 3 replicas). We're seeing really poor query performance and looking at the mongos instances I'm seeing something like 25k open connections per instance!
For example, I'm seeing log lines like
[listener] connection accepted from 10.10.36.122:35098 #521622 (23858 connections now open)
and
[conn498875] end connection 10.10.36.122:41520 (23695 connections now open)
For reference, we have another nearly identical environment that we have not yet moved to sharding which is showing ~250 total open connections.
The application code is using the nodejs driver and is using a connection url that looks something like
mongodb://mongos0.some.internal.domain:27017,mongos1.some.internal.domain:27017,mongos2.some.internal.domain:27017
I'm at a bit of a loss for how to track this issue down. Is this not the correct way to connect to mongos?
EDIT (7/7/18)
After some experimenting, I found that we were using a connectTimeoutMS of 180000 (3 minutes). Removing this value resolved the issue. However, it's still not clear why this configuration works with a standalone replica set, but causes issues when sharding. Can anyone explain what's going on here?
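For reference, the option can be set either in the driver's options object or as a URI parameter; ours was roughly equivalent to:
mongodb://mongos0.some.internal.domain:27017,mongos1.some.internal.domain:27017,mongos2.some.internal.domain:27017/?connectTimeoutMS=180000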

Cassandra nodejs driver times out after a node moves

We use vnodes on our cluster.
I noticed that when the token space of a node changes (automatically with vnodes, during a repair or a cleanup after adding new nodes), the datastax nodejs driver gets a lot of "Operation timed out - received only X responses" errors for a few minutes.
I tried using ONE and LOCAL_QUORUM consistencies.
I suppose this is due to the coordinator not hitting the right node just after the move. This seems to be a logical behavior (data was moved) but we really want to address this particular issue.
What do you suggest we do to avoid this? Having a custom retry policy? Caching? Changing the consistency?
Example of behavior
When we see this:
4/7/2016, 10:43am Info Host 172.31.34.155 moved from '8185241953623605265' to '-1108852503760494577'
We see a spike of these:
{
  "message": "Operation timed out - received only 0 responses.",
  "info": "Represents an error message from the server",
  "code": 4608,
  "consistencies": 1,
  "received": 0,
  "blockFor": 1,
  "isDataPresent": 0,
  "coordinator": "172.31.34.155:9042",
  "query": "SELECT foo FROM foo_bar LIMIT 10"
}
In fact, when adding a new node there will be token range movement, but Cassandra can still serve read requests using the old token ranges until the scale-out has finished completely. So the behavior you're facing is very suspicious.
If you can reproduce this error, please activate query tracing to narrow down the issue.
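For example, tracing can be turned on ad hoc from cqlsh, and its output will show which replicas each query touched (the query here is the one from your error log):
TRACING ON;
SELECT foo FROM foo_bar LIMIT 10;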
The error can also be related to a node being under heavy load and not replying fast enough.

What are some good settings for seeding a ton of torrents? (>10000)

I'm running into a lot of trouble when trying to seed a lot of torrents (>10k) with libtorrent.
They include:
Choking my network connection
Tracker requests timing out (libtorrent tracker error)
When using auto-manage, they go from checking to seeding very slowly, even when my active_seeds is set to unlimited.
I used to let them be automanaged, but I'd find that it makes nearly all of them unavailable.
Here are my current settings:
sessionSettings.setActiveDownloads(5);     // at most 5 active downloads
sessionSettings.setActiveLimit(-1);        // no overall limit on active torrents
sessionSettings.setActiveSeeds(-1);        // no limit on actively seeding torrents
sessionSettings.setActiveDHTLimit(5);      // at most 5 torrents announcing to the DHT at once
sessionSettings.setPeerConnectTimeout(25); // seconds to wait when connecting to a peer
sessionSettings.announceDoubleNAT(true);
sessionSettings.setUploadRateLimit(0);     // 0 = unlimited upload
sessionSettings.setDownloadRateLimit(0);   // 0 = unlimited download
sessionSettings.setHalfOpenLimit(5);       // limit concurrently connecting (half-open) TCP connections
sessionSettings.useReadCache(false);
sessionSettings.setMaxPeerlistSize(500);   // cap the per-torrent peer list
My current method is to loop over all my 10k+ torrents and call torrent.resume(). When using automanage, this only starts about 50 of the torrents, and the others start at a rate of roughly 1 torrent per 10 minutes, which won't work. When not using automanage, it chokes my connection.
BUT, when I only do 30 of them, they all seem to seed correctly, so my next plan is to resume() them in groups, either with a time delay or after they've received a tracker_reply.
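To sketch the grouping idea (the batch size and delay are untuned placeholders; TorrentHandle is assumed from the jlibtorrent bindings I'm using):

import java.util.List;
import com.frostwire.jlibtorrent.TorrentHandle;

// Resume torrents in small batches so tracker announces and peer
// connections don't all fire at once; batch size and delay are guesses.
static void resumeInBatches(List<TorrentHandle> torrents, int batchSize, long delayMillis)
        throws InterruptedException {
    for (int i = 0; i < torrents.size(); i += batchSize) {
        int end = Math.min(i + batchSize, torrents.size());
        for (TorrentHandle t : torrents.subList(i, end)) {
            t.resume();
        }
        Thread.sleep(delayMillis); // give this batch time to announce before the next one
    }
}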
I tried to garner what I could from this, but don't know what my settings should be specifically:
http://blog.libtorrent.org/2012/01/seeding-a-million-torrents/
I'd really appreciate someone sharing their settings for seeding thousands of torrents.
Since you say it can run either on a hosted server or a domestic internet connection, you will not have much of a choice but to throttle torrent startups. Domestic internet connections are generally behind consumer-grade routers, possibly with CGNAT; both have fairly small NAT tables that will eventually choke on concurrently established TCP connections (peer-to-peer connections, tracker announces) or UDP pseudo-connections (UDP trackers, µTP, DHT).
So to run many torrents at once you will have to limit all active maintenance traffic of that kind, so that the torrents are started only to listen passively for incoming connections.
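Concretely, that means replacing the unlimited values in your settings with modest caps, so that auto-manage rotates through the torrents instead of letting them all maintain tracker and peer traffic at once (values below are illustrative, using only the setters from your snippet):

sessionSettings.setActiveSeeds(50);      // bounded number of torrents actively seeding/announcing
sessionSettings.setActiveLimit(60);      // overall cap on active torrents
sessionSettings.setMaxPeerlistSize(100); // smaller peer lists further reduce connection churn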
