Here is the sender verticle. I have enabled multicast and set the cluster public host to my machine's IP address:
VertxOptions options = new VertxOptions()
.setClusterManager(ClusterManagerConfig.getClusterManager());
EventBusOptions eventBusOptions = new EventBusOptions()
.setClustered(true)
.setClusterPublicHost("10.10.1.160");
options.setEventBusOptions(eventBusOptions);
Vertx.clusteredVertx(options, res -> {
if (res.succeeded()) {
Vertx vertx = res.result();
vertx.deployVerticle(new requestHandler());
vertx.deployVerticle(new requestSender());
EventBus eventBus = vertx.eventBus();
eventBus.send("some.address","hello",reply -> {
System.out.println(reply.toString());
});
} else {
LOGGER.info("Failed: " + res.cause());
}
});
}
Here's the receiver verticle:
VertxOptions options = new VertxOptions().setClusterManager(mgr);
options.setEventBusOptions(new EventBusOptions()
.setClustered(true)
.setClusterPublicHost("10.10.1.174") );
Vertx.clusteredVertx(options, res -> {
if (res.succeeded()) {
Vertx vertx1 = res.result();
System.out.println("Success");
EventBus eb = vertx1.eventBus();
System.out.println("ready");
eb.consumer("some.address", message -> {
message.reply("hello hello");
});
} else {
System.out.println("Failed");
}
});
I get this result when I run both main verticles, so the two nodes are detected by Hazelcast and a connection is established:
INFO: [10.10.1.160]:33001 [dev] [3.10.5] Established socket connection between /10.10.1.160:33001 and /10.10.1.174:35725
Jan 11, 2021 11:45:10 AM com.hazelcast.internal.cluster.ClusterService
INFO: [10.10.1.160]:33001 [dev] [3.10.5]
Members {size:2, ver:2} [
Member [10.10.1.160]:33001 - 51b8c249-6b3c-4ca8-a238-c651845629d8 this
Member [10.10.1.174]:33001 - 1cba1680-025e-469f-bad6-884111313672
]
Jan 11, 2021 11:45:10 AM com.hazelcast.internal.partition.impl.MigrationManager
INFO: [10.10.1.160]:33001 [dev] [3.10.5] Re-partitioning cluster data... Migration queue size: 271
Jan 11, 2021 11:45:11 AM com.hazelcast.nio.tcp.TcpIpAcceptor
But when the event bus tries to send a message to the given address, I encounter the error below. Is this a problem with the event-bus configuration?
Jan 11, 2021 11:59:57 AM io.vertx.core.eventbus.impl.clustered.ConnectionHolder
WARNING: Connecting to server 10.10.1.174:39561 failed
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /10.10.1.174:39561
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:327)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:665)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
... 11 more
In Vert.x 3, the cluster host and cluster public host default to localhost.
If you only change the cluster public host in VertxOptions, Vert.x will bind EventBus transport servers to localhost while telling other nodes to connect to the public host.
This kind of configuration is needed when running Vert.x on some cloud providers, but in most cases you only need to set the cluster host (and then the public host will default to its value):
EventBusOptions eventBusOptions = new EventBusOptions()
.setClustered(true)
.setHost("10.10.1.160");
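Applied to the receiver node from the question, a minimal sketch (reusing the question's ClusterManagerConfig helper; only the host setting changes) could look like this:
// Receiver node: bind the clustered event bus to this machine's own address.
// The cluster public host is not set; it defaults to the host value.
VertxOptions options = new VertxOptions()
    .setClusterManager(ClusterManagerConfig.getClusterManager())
    .setEventBusOptions(new EventBusOptions()
        .setClustered(true)
        .setHost("10.10.1.174"));

Vertx.clusteredVertx(options, res -> {
    if (res.succeeded()) {
        res.result().eventBus().consumer("some.address", message -> message.reply("hello hello"));
    } else {
        System.out.println("Failed: " + res.cause());
    }
});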
I'm having a serious problem connecting to MariaDB through Consul Connect.
I'm using Nomad to create the services with the proxies, with the following job definition:
job "wordpress" {
type = "service"
datacenters = ["dc1"]
group "server" {
network {
mode = "bridge"
port "http" {
static = 8080
to = 80
}
}
task "server" {
driver = "docker"
config {
image = "wordpress"
}
env {
WORDPRESS_DB_HOST = "${NOMAD_UPSTREAM_ADDR_database}"
WORDPRESS_DB_USER = "exampleuser"
WORDPRESS_DB_PASSWORD = "examplepass"
WORDPRESS_DB_NAME = "exampledb"
}
resources {
cpu = 100
memory = 64
network {
mbits = 10
}
}
}
service {
name = "wordpress"
tags = ["production", "wordpress"]
port = "http"
connect {
sidecar_service {
proxy {
upstreams {
destination_name = "database"
local_bind_port = 3306
}
}
}
}
}
}
group "database" {
network {
mode = "bridge"
port "db" {
to = 3306
}
}
task "database" {
driver = "docker"
config {
image = "mariadb"
}
env {
MYSQL_RANDOM_ROOT_PASSWORD = "yes"
MYSQL_INITDB_SKIP_TZINFO = "yes"
MYSQL_DATABASE = "exampledb"
MYSQL_USER = "exampleuser"
MYSQL_PASSWORD = "examplepass"
}
resources {
cpu = 100
memory = 128
network {
mbits = 10
}
}
}
service {
name = "database"
tags = ["production", "mariadb"]
port = "db"
connect {
sidecar_service {}
}
}
}
}
However, it seems that the server can't reach the database.
MySQL Connection Error: (2006) MySQL server has gone away
[25-Aug-2020 10:46:53 UTC] PHP Warning: mysqli::__construct(): Error while reading greeting packet. PID=187 in Standard input code on line 22
[25-Aug-2020 10:46:53 UTC] PHP Warning: mysqli::__construct(): (HY000/2006): MySQL server has gone away in Standard input code on line 22
MySQL Connection Error: (2006) MySQL server has gone away
WARNING: unable to establish a database connection to '127.0.0.1:3306'
continuing anyways (which might have unexpected results)
And the logs of the server and database proxies show that some sort of TLS issue is happening, but I've got no clue how to solve this problem.
Server Proxy Logs
[2020-08-25 12:20:35.841][18][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:344] [C1229] Creating connection to cluster database.default.dc1.internal.0198bec5-d0b4-332c-973e-372808379192.consul
[2020-08-25 12:20:35.841][18][debug][pool] [source/common/tcp/conn_pool.cc:82] creating a new connection
[2020-08-25 12:20:35.841][18][debug][pool] [source/common/tcp/conn_pool.cc:362] [C1230] connecting
[2020-08-25 12:20:35.841][18][debug][connection] [source/common/network/connection_impl.cc:704] [C1230] connecting to 172.29.168.233:29307
[2020-08-25 12:20:35.841][18][debug][connection] [source/common/network/connection_impl.cc:713] [C1230] connection in progress
[2020-08-25 12:20:35.841][18][debug][pool] [source/common/tcp/conn_pool.cc:108] queueing request due to no available connections
[2020-08-25 12:20:35.841][18][debug][main] [source/server/connection_handler_impl.cc:280] [C1229] new connection
[2020-08-25 12:20:35.841][18][trace][connection] [source/common/network/connection_impl.cc:458] [C1229] socket event: 2
[2020-08-25 12:20:35.841][18][trace][connection] [source/common/network/connection_impl.cc:543] [C1229] write ready
[2020-08-25 12:20:35.841][18][trace][connection] [source/common/network/connection_impl.cc:458] [C1230] socket event: 2
[2020-08-25 12:20:35.841][18][trace][connection] [source/common/network/connection_impl.cc:543] [C1230] write ready
[2020-08-25 12:20:35.841][18][debug][connection] [source/common/network/connection_impl.cc:552] [C1230] connected
[2020-08-25 12:20:35.841][18][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:168] [C1230] handshake error: 2
[2020-08-25 12:20:35.842][18][trace][connection] [source/common/network/connection_impl.cc:458] [C1230] socket event: 3
[2020-08-25 12:20:35.842][18][trace][connection] [source/common/network/connection_impl.cc:543] [C1230] write ready
[2020-08-25 12:20:35.842][18][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:168] [C1230] handshake error: 5
[2020-08-25 12:20:35.842][18][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:201] [C1230]
[2020-08-25 12:20:35.842][18][debug][connection] [source/common/network/connection_impl.cc:190] [C1230] closing socket: 0
[2020-08-25 12:20:35.842][18][debug][pool] [source/common/tcp/conn_pool.cc:123] [C1230] client disconnected
Database Proxy Logs
[2020-08-25 12:26:07.093][15][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:201] [C927] new tcp proxy session
[2020-08-25 12:26:07.093][15][trace][connection] [source/common/network/connection_impl.cc:290] [C927] readDisable: enabled=true disable=true
[2020-08-25 12:26:07.093][15][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:344] [C927] Creating connection to cluster local_app
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:82] creating a new connection
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:362] [C928] connecting
[2020-08-25 12:26:07.093][15][debug][connection] [source/common/network/connection_impl.cc:704] [C928] connecting to 127.0.0.1:26344
[2020-08-25 12:26:07.093][15][debug][connection] [source/common/network/connection_impl.cc:713] [C928] connection in progress
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:108] queueing request due to no available connections
[2020-08-25 12:26:07.093][15][debug][main] [source/server/connection_handler_impl.cc:280] [C927] new connection
[2020-08-25 12:26:07.093][15][trace][connection] [source/common/network/connection_impl.cc:458] [C927] socket event: 2
[2020-08-25 12:26:07.093][15][trace][connection] [source/common/network/connection_impl.cc:543] [C927] write ready
[2020-08-25 12:26:07.093][15][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:168] [C927] handshake error: 5
[2020-08-25 12:26:07.093][15][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:201] [C927]
[2020-08-25 12:26:07.093][15][debug][connection] [source/common/network/connection_impl.cc:190] [C927] closing socket: 0
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:204] canceling pending request
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:212] canceling pending connection
[2020-08-25 12:26:07.093][15][debug][connection] [source/common/network/connection_impl.cc:101] [C928] closing data_to_write=0 type=1
[2020-08-25 12:26:07.093][15][debug][connection] [source/common/network/connection_impl.cc:190] [C928] closing socket: 1
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:123] [C928] client disconnected
[2020-08-25 12:26:07.093][15][trace][main] [source/common/event/dispatcher_impl.cc:158] item added to deferred deletion list (size=1)
[2020-08-25 12:26:07.093][15][debug][main] [source/server/connection_handler_impl.cc:80] [C927] adding to cleanup list
[2020-08-25 12:26:07.093][15][trace][main] [source/common/event/dispatcher_impl.cc:158] item added to deferred deletion list (size=2)
[2020-08-25 12:26:07.093][15][trace][main] [source/common/event/dispatcher_impl.cc:76] clearing deferred deletion list (size=2)
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:236] [C928] connection destroyed
I am trying to connect a Spark application to HBase. Below is the configuration I am using:
val conf = HBaseConfiguration.create()
conf.set("hbase.master", "localhost:16010")
conf.setInt("timeout", 120000)
conf.set("hbase.zookeeper.quorum", "2181")
val connection = ConnectionFactory.createConnection(conf)
and below is the 'jps' output:
5808 ResourceManager
8150 HMaster
8280 HRegionServer
5131 NameNode
8076 HQuorumPeer
5582 SecondaryNameNode
2798 org.eclipse.equinox.launcher_1.4.0.v20161219-1356.jar
8623 Jps
5951 NodeManager
5279 DataNode
I have also tried with the HBase master on port 16010.
I am getting the error below:
19/09/12 21:49:00 WARN ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.SocketException: Invalid argument
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:277)
at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:287)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1024)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060)
19/09/12 21:49:00 WARN ReadOnlyZKClient: 0x1e3ff233 to 2181:2181 failed for get of /hbase/hbaseid, code = CONNECTIONLOSS, retries = 4
19/09/12 21:49:01 INFO ClientCnxn: Opening socket connection to server 2181/0.0.8.133:2181. Will not attempt to authenticate using SASL (unknown error)
19/09/12 21:49:01 ERROR ClientCnxnSocketNIO: Unable to open socket to 2181/0.0.8.133:2181
It looks like there is a problem joining ZooKeeper.
First, check that ZooKeeper is started on your local host on port 2181:
netstat -tunelp | grep 2181 | grep -i LISTEN
tcp6 0 0 :::2181 :::* LISTEN
In your conf, the hbase.zookeeper.quorum property must contain the IP (or hostname) of your ZooKeeper node, not the port (the port goes in hbase.zookeeper.property.clientPort).
My HBase connector is built with:
val conf = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "10.80.188.65")
conf.set("hbase.master", "10.80.188.64:60000")
conf.set("hbase.zookeeper.property.clientPort", "2181")
conf.set("zookeeper.znode.parent", "/hbase-unsecure")
val connection = ConnectionFactory.createConnection(conf)
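For the single-node setup from the question (everything running on localhost, as the jps output shows), the same idea in Java would be roughly the sketch below; the class name is mine, and hbase.master is omitted because the client normally discovers the master through ZooKeeper:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseConnect {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Host (not port) of the ZooKeeper quorum; HQuorumPeer runs locally here.
        conf.set("hbase.zookeeper.quorum", "localhost");
        // The ZooKeeper client port goes in its own property.
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        Connection connection = ConnectionFactory.createConnection(conf);
        System.out.println("Connected: " + !connection.isClosed());
        connection.close();
    }
}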
I am facing an issue while integrating Hazelcast with WebLogic 12c. Do I have to change any configuration?
com.hazelcast.instance.NodeExtension
com.hazelcast.instance.DefaultNodeExtension
Step 1: In setDomainEnv.cmd, I have added the Hazelcast JAR to the classpath:
SET CLASSPATH=%CLASSPATH%E:lib\hazelcast-all-3.7.1.jar;
Step 2: I have written a sample startup class for the WebLogic server:
public class HCServer {
public HCServer() {
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
}
public static void main(String[] args) {
try {
Class.forName("server.HCServer").newInstance();
Set<HazelcastInstance> set = Hazelcast.getAllHazelcastInstances();
for (HazelcastInstance hcInstance : set) {
IMap<String, HCTask> iMap = hcInstance.getMap("data");
for (int i = 0; i < 5000; i++) {
iMap.put(String.valueOf(i), new HCTask(i));
}
}
} catch (Exception ex) {
ex.printStackTrace();
}
}
}
The following error occurs on starting the server:
<Nov 22, 2016, 4:07:44,922 PM PKT>
Nov 22, 2016 4:07:45 PM com.hazelcast.config.XmlConfigLocator
INFO: Loading 'hazelcast-default.xml' from classpath.
Nov 22, 2016 4:07:46 PM com.hazelcast.instance.DefaultAddressPicker
INFO: [LOCAL] [dev] [3.7.1] Prefer IPv4 stack is true.
Nov 22, 2016 4:07:46 PM com.hazelcast.instance.DefaultAddressPicker
INFO: [LOCAL] [dev] [3.7.1] Picked [192.168.0.37]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
Nov 22, 2016 4:07:47 PM com.hazelcast.instance.NodeExtensionFactory
WARNING: DefaultNodeExtension class has been loaded by two different class-loaders. Are you running Hazelcast in an OSGi environment? If so, set the bundle class-loader in the Config using the setClassloader() method
<Nov 22, 2016, 4:07:47,256 PM PKT> <com.hazelcast.instance.NodeExtensionFactory> <DefaultNodeExtension class has been loaded by two different class-loaders. Are you running Hazelcast in an OSGi environment? If so, set the bundle class-loader in the Config using the setClassloader() method>
com.hazelcast.core.HazelcastException: java.lang.NoSuchMethodException: com.hazelcast.instance.DefaultNodeExtension.<init>(com.hazelcast.instance.Node)
at com.hazelcast.util.ExceptionUtil.peel(ExceptionUtil.java:73)
at com.hazelcast.util.ExceptionUtil.peel(ExceptionUtil.java:52)
at com.hazelcast.util.ExceptionUtil.rethrow(ExceptionUtil.java:83)
at com.hazelcast.instance.NodeExtensionFactory.create(NodeExtensionFactory.java:54)
at com.hazelcast.instance.DefaultNodeContext.createNodeExtension(DefaultNodeContext.java:35)
at com.hazelcast.instance.Node.createNodeExtension(Node.java:290)
at com.hazelcast.instance.Node.<init>(Node.java:177)
at com.hazelcast.instance.HazelcastInstanceImpl.createNode(HazelcastInstanceImpl.java:155)
at com.hazelcast.instance.HazelcastInstanceImpl.<init>(HazelcastInstanceImpl.java:126)
at com.hazelcast.instance.HazelcastInstanceFactory.constructHazelcastInstance(HazelcastInstanceFactory.java:218)
at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:176)
at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:126)
at com.hazelcast.core.Hazelcast.newHazelcastInstance(Hazelcast.java:87)
at server.HCServer.<init>(HCServer.java:13)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at java.lang.Class.newInstance(Class.java:442)
at server.HCServer.main(HCServer.java:18)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
Could you try fixing the constructor of HCServer like this?
public HCServer() {
    // Build the config explicitly and pin the class loader, so Hazelcast does not
    // load DefaultNodeExtension through two different class loaders (this is the
    // warning WebLogic prints on startup).
    Config config = new XmlConfigBuilder().build();
    config.setClassLoader(this.getClass().getClassLoader());
    HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
}
Let me know if it works!
I have the following code on the Hazelcast instance:
ITopic<String> topic;
HazelcastInstance hz = Hazelcast.newHazelcastInstance();
topic = hz.getTopic("data");
Data is repeatedly published to the topic:
String event = "abc123";
topic.publish(event);
Then, on another machine on the LAN, I run the client like this:
public class Listener implements MessageListener<String> {

    public static void main(String[] args) {
        ClientConfig clientConfig = new ClientConfig();
        clientConfig.getNetworkConfig().addAddress("192.168.21.89:5701");
        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
        ITopic<String> topic = client.getTopic("data");
        topic.addMessageListener(new Listener());
    }

    public void onMessage(Message<String> m) {
        System.out.println("onMessage");
    }
}
The client finds the Hazelcast node and starts, but doesn't see any messages on the ITopic. I thought Hazelcast instances and clients would auto-discover each other by default? Do I have to configure Hazelcast for the network somehow?
The client log shows:
2015-09-01 15:43:05,518 INFO Listener [main] In main()
In main()
Sep 01, 2015 3:43:05 PM com.hazelcast.core.LifecycleService
INFO: HazelcastClient[hz.client_0_dev][3.5] is STARTING
Sep 01, 2015 3:43:05 PM com.hazelcast.core.LifecycleService
INFO: HazelcastClient[hz.client_0_dev][3.5] is STARTED
Sep 01, 2015 3:43:06 PM com.hazelcast.core.LifecycleService
INFO: HazelcastClient[hz.client_0_dev][3.5] is CLIENT_CONNECTED
Sep 01, 2015 3:43:06 PM com.hazelcast.client.spi.impl.ClientMembershipListener
INFO:
Members [1] {
Member [192.168.21.89]:5701
}
This was my own problem because I was not actually publishing the data I wanted to listen to. Aaargh!
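For anyone hitting the same symptom, here is a minimal member-side publisher sketch (topic name and payload taken from the question, class name mine) that the listener above would actually receive messages from:
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ITopic;

public class Publisher {
    public static void main(String[] args) throws InterruptedException {
        // Member node; clients on the LAN connect to it (default port 5701).
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ITopic<String> topic = hz.getTopic("data");
        while (true) {
            topic.publish("abc123");   // the publish step that was missing
            Thread.sleep(1000);
        }
    }
}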
I have the following code:
StringSerializer ss = StringSerializer.get();
String cf = "TEST";
CassandraHostConfigurator conf = new CassandraHostConfigurator("localhost:9160");
conf.setCassandraThriftSocketTimeout(40000);
conf.setExhaustedPolicy(ExhaustedPolicy.WHEN_EXHAUSTED_BLOCK);
conf.setRetryDownedHostsDelayInSeconds(5);
conf.setRetryDownedHostsQueueSize(128);
conf.setRetryDownedHosts(true);
conf.setLoadBalancingPolicy(new LeastActiveBalancingPolicy());
String key = Long.toString(System.currentTimeMillis());
Cluster cluster = HFactory.getOrCreateCluster("TestCluster", conf);
Keyspace keyspace = HFactory.createKeyspace("TestCluster", cluster);
Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());
int count = 0;
while (!"q".equals(new Scanner( System.in).next())) {
try{
mutator.insert(key, cf, HFactory.createColumn("column_" + count, "v_" + count, ss, ss));
count++;
} catch (Exception e) {
e.printStackTrace();
}
}
and I can write some values using it, but when I restart Cassandra, it fails. Here is the log:
[15:11:07] INFO [CassandraHostRetryService ] Downed Host Retry service started with queue size 128 and retry delay 5s
[15:11:07] INFO [JmxMonitor ] Registering JMX me.prettyprint.cassandra.service_ASG:ServiceType=hector,MonitorType=hector
[15:11:17] ERROR [HThriftClient ] Could not flush transport (to be expected if the pool is shutting down) in close for client: CassandraClient
org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156)
at me.prettyprint.cassandra.connection.client.HThriftClient.close(HThriftClient.java:98)
at me.prettyprint.cassandra.connection.client.HThriftClient.close(HThriftClient.java:26)
at me.prettyprint.cassandra.connection.HConnectionManager.closeClient(HConnectionManager.java:308)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:257)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:97)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
at me.prettyprint.cassandra.model.MutatorImpl.insert(MutatorImpl.java:69)
at com.app.App.main(App.java:40)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)
... 9 more
[15:11:17] ERROR [HConnectionManager ] MARK HOST AS DOWN TRIGGERED for host localhost(127.0.0.1):9160
[15:11:17] ERROR [HConnectionManager ] Pool state on shutdown: :{localhost(127.0.0.1):9160}; IsActive?: true; Active: 1; Blocked: 0; Idle: 15; NumBeforeExhausted: 49
[15:11:17] INFO [ConcurrentHClientPool ] Shutdown triggered on :{localhost(127.0.0.1):9160}
[15:11:17] INFO [ConcurrentHClientPool ] Shutdown complete on :{localhost(127.0.0.1):9160}
[15:11:17] INFO [CassandraHostRetryService ] Host detected as down was added to retry queue: localhost(127.0.0.1):9160
[15:11:17] WARN [HConnectionManager ] Could not fullfill request on this host CassandraClient
[15:11:17] WARN [HConnectionManager ] Exception:
me.prettyprint.hector.api.exceptions.HectorTransportException: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at me.prettyprint.cassandra.connection.client.HThriftClient.getCassandra(HThriftClient.java:82)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:236)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:97)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
at me.prettyprint.cassandra.model.MutatorImpl.insert(MutatorImpl.java:69)
at com.app.App.main(App.java:40)
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:157)
at org.apache.cassandra.thrift.Cassandra$Client.send_set_keyspace(Cassandra.java:466)
at org.apache.cassandra.thrift.Cassandra$Client.set_keyspace(Cassandra.java:455)
at me.prettyprint.cassandra.connection.client.HThriftClient.getCassandra(HThriftClient.java:78)
... 5 more
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)
... 9 more
[15:11:17] INFO [HConnectionManager ] Client CassandraClient released to inactive or dead pool. Closing.
[15:11:17] INFO [HConnectionManager ] Client CassandraClient released to inactive or dead pool. Closing.
[15:11:17] INFO [HConnectionManager ] Added host localhost(127.0.0.1):9160 to pool
You have set:
conf.setRetryDownedHostsDelayInSeconds(5);
Try waiting more than 5 seconds after the restart before writing again.
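For illustration only, a rough sketch of that wait-and-retry approach around the insert from the question (the class/method names and the 6-second pause are my own assumptions, not part of the Hector API):
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.exceptions.HectorException;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class RetryingWriter {
    // Retry one insert until the downed-host retry service (5 s delay in the
    // question's configuration) has re-added the node to the pool.
    static void insertWithRetry(Mutator<String> mutator, String key, String cf,
                                String name, String value) throws InterruptedException {
        StringSerializer ss = StringSerializer.get();
        while (true) {
            try {
                mutator.insert(key, cf, HFactory.createColumn(name, value, ss, ss));
                return;
            } catch (HectorException e) {
                // Host still marked down; wait longer than the 5 s retry delay.
                Thread.sleep(6000);
            }
        }
    }
}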
Also, you may need to upgrade.
What value have you set for thrift_max_message_length_in_mb?
Kind regards.