How can I connect a WordPress service to a MariaDB service through Consul Connect?

I'm having a serious problem with connecting to MariaDB through Consul Connect.
I'm using Nomad to create the services with the proxies, with the following job definition:
job "wordpress" {
  type        = "service"
  datacenters = ["dc1"]

  group "server" {
    network {
      mode = "bridge"
      port "http" {
        static = 8080
        to     = 80
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "wordpress"
      }

      env {
        WORDPRESS_DB_HOST     = "${NOMAD_UPSTREAM_ADDR_database}"
        WORDPRESS_DB_USER     = "exampleuser"
        WORDPRESS_DB_PASSWORD = "examplepass"
        WORDPRESS_DB_NAME     = "exampledb"
      }

      resources {
        cpu    = 100
        memory = 64
        network {
          mbits = 10
        }
      }
    }

    service {
      name = "wordpress"
      tags = ["production", "wordpress"]
      port = "http"

      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "database"
              local_bind_port  = 3306
            }
          }
        }
      }
    }
  }

  group "database" {
    network {
      mode = "bridge"
      port "db" {
        to = 3306
      }
    }

    task "database" {
      driver = "docker"

      config {
        image = "mariadb"
      }

      env {
        MYSQL_RANDOM_ROOT_PASSWORD = "yes"
        MYSQL_INITDB_SKIP_TZINFO   = "yes"
        MYSQL_DATABASE             = "exampledb"
        MYSQL_USER                 = "exampleuser"
        MYSQL_PASSWORD             = "examplepass"
      }

      resources {
        cpu    = 100
        memory = 128
        network {
          mbits = 10
        }
      }
    }

    service {
      name = "database"
      tags = ["production", "mariadb"]
      port = "db"

      connect {
        sidecar_service {}
      }
    }
  }
}
However, it seems that the server can't reach the database.
MySQL Connection Error: (2006) MySQL server has gone away
[25-Aug-2020 10:46:53 UTC] PHP Warning: mysqli::__construct(): Error while reading greeting packet. PID=187 in Standard input code on line 22
[25-Aug-2020 10:46:53 UTC] PHP Warning: mysqli::__construct(): (HY000/2006): MySQL server has gone away in Standard input code on line 22
MySQL Connection Error: (2006) MySQL server has gone away
WARNING: unable to establish a database connection to '127.0.0.1:3306'
continuing anyways (which might have unexpected results)
The logs of the server and database proxies show that some sort of TLS issue is happening, but I've got no clue how to solve it.
Server Proxy Logs
[2020-08-25 12:20:35.841][18][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:344] [C1229] Creating connection to cluster database.default.dc1.internal.0198bec5-d0b4-332c-973e-372808379192.consul
[2020-08-25 12:20:35.841][18][debug][pool] [source/common/tcp/conn_pool.cc:82] creating a new connection
[2020-08-25 12:20:35.841][18][debug][pool] [source/common/tcp/conn_pool.cc:362] [C1230] connecting
[2020-08-25 12:20:35.841][18][debug][connection] [source/common/network/connection_impl.cc:704] [C1230] connecting to 172.29.168.233:29307
[2020-08-25 12:20:35.841][18][debug][connection] [source/common/network/connection_impl.cc:713] [C1230] connection in progress
[2020-08-25 12:20:35.841][18][debug][pool] [source/common/tcp/conn_pool.cc:108] queueing request due to no available connections
[2020-08-25 12:20:35.841][18][debug][main] [source/server/connection_handler_impl.cc:280] [C1229] new connection
[2020-08-25 12:20:35.841][18][trace][connection] [source/common/network/connection_impl.cc:458] [C1229] socket event: 2
[2020-08-25 12:20:35.841][18][trace][connection] [source/common/network/connection_impl.cc:543] [C1229] write ready
[2020-08-25 12:20:35.841][18][trace][connection] [source/common/network/connection_impl.cc:458] [C1230] socket event: 2
[2020-08-25 12:20:35.841][18][trace][connection] [source/common/network/connection_impl.cc:543] [C1230] write ready
[2020-08-25 12:20:35.841][18][debug][connection] [source/common/network/connection_impl.cc:552] [C1230] connected
[2020-08-25 12:20:35.841][18][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:168] [C1230] handshake error: 2
[2020-08-25 12:20:35.842][18][trace][connection] [source/common/network/connection_impl.cc:458] [C1230] socket event: 3
[2020-08-25 12:20:35.842][18][trace][connection] [source/common/network/connection_impl.cc:543] [C1230] write ready
[2020-08-25 12:20:35.842][18][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:168] [C1230] handshake error: 5
[2020-08-25 12:20:35.842][18][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:201] [C1230]
[2020-08-25 12:20:35.842][18][debug][connection] [source/common/network/connection_impl.cc:190] [C1230] closing socket: 0
[2020-08-25 12:20:35.842][18][debug][pool] [source/common/tcp/conn_pool.cc:123] [C1230] client disconnected
Database Proxy Logs
[2020-08-25 12:26:07.093][15][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:201] [C927] new tcp proxy session
[2020-08-25 12:26:07.093][15][trace][connection] [source/common/network/connection_impl.cc:290] [C927] readDisable: enabled=true disable=true
[2020-08-25 12:26:07.093][15][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:344] [C927] Creating connection to cluster local_app
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:82] creating a new connection
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:362] [C928] connecting
[2020-08-25 12:26:07.093][15][debug][connection] [source/common/network/connection_impl.cc:704] [C928] connecting to 127.0.0.1:26344
[2020-08-25 12:26:07.093][15][debug][connection] [source/common/network/connection_impl.cc:713] [C928] connection in progress
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:108] queueing request due to no available connections
[2020-08-25 12:26:07.093][15][debug][main] [source/server/connection_handler_impl.cc:280] [C927] new connection
[2020-08-25 12:26:07.093][15][trace][connection] [source/common/network/connection_impl.cc:458] [C927] socket event: 2
[2020-08-25 12:26:07.093][15][trace][connection] [source/common/network/connection_impl.cc:543] [C927] write ready
[2020-08-25 12:26:07.093][15][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:168] [C927] handshake error: 5
[2020-08-25 12:26:07.093][15][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:201] [C927]
[2020-08-25 12:26:07.093][15][debug][connection] [source/common/network/connection_impl.cc:190] [C927] closing socket: 0
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:204] canceling pending request
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:212] canceling pending connection
[2020-08-25 12:26:07.093][15][debug][connection] [source/common/network/connection_impl.cc:101] [C928] closing data_to_write=0 type=1
[2020-08-25 12:26:07.093][15][debug][connection] [source/common/network/connection_impl.cc:190] [C928] closing socket: 1
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:123] [C928] client disconnected
[2020-08-25 12:26:07.093][15][trace][main] [source/common/event/dispatcher_impl.cc:158] item added to deferred deletion list (size=1)
[2020-08-25 12:26:07.093][15][debug][main] [source/server/connection_handler_impl.cc:80] [C927] adding to cleanup list
[2020-08-25 12:26:07.093][15][trace][main] [source/common/event/dispatcher_impl.cc:158] item added to deferred deletion list (size=2)
[2020-08-25 12:26:07.093][15][trace][main] [source/common/event/dispatcher_impl.cc:76] clearing deferred deletion list (size=2)
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:236] [C928] connection destroyed
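An Envoy-to-Envoy TLS handshake failure like this is often what a denied intention looks like from the proxy's side, so that is worth ruling out before digging into certificates. A quick check with the Consul CLI, assuming it is available on a client node and using the service names from the job above:

consul intention check wordpress database

If that prints Denied, creating an allow intention should unblock the proxies:

consul intention create wordpress database

If intentions already allow the traffic, the next thing to look at is the Connect certificate authority on the Consul servers.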

Related

How to setup a distributed eventbus in a vertx Hazelcast Cluster?

Here is the sender verticle. I have enabled multicast and set the public host to my machine's IP address:
VertxOptions options = new VertxOptions()
    .setClusterManager(ClusterManagerConfig.getClusterManager());

EventBusOptions eventBusOptions = new EventBusOptions()
    .setClustered(true)
    .setClusterPublicHost("10.10.1.160");
options.setEventBusOptions(eventBusOptions);

Vertx.clusteredVertx(options, res -> {
    if (res.succeeded()) {
        Vertx vertx = res.result();
        vertx.deployVerticle(new requestHandler());
        vertx.deployVerticle(new requestSender());
        EventBus eventBus = vertx.eventBus();
        eventBus.send("some.address", "hello", reply -> {
            System.out.println(reply.toString());
        });
    } else {
        LOGGER.info("Failed: " + res.cause());
    }
});
Here's the receiver verticle:
VertxOptions options = new VertxOptions().setClusterManager(mgr);
options.setEventBusOptions(new EventBusOptions()
    .setClustered(true)
    .setClusterPublicHost("10.10.1.174"));

Vertx.clusteredVertx(options, res -> {
    if (res.succeeded()) {
        Vertx vertx1 = res.result();
        System.out.println("Success");
        EventBus eb = vertx1.eventBus();
        System.out.println("ready");
        eb.consumer("some.address", message -> {
            message.reply("hello hello");
        });
    } else {
        System.out.println("Failed");
    }
});
I get this result when I run both main verticles, so the verticles are detected by Hazelcast and a connection is established:
INFO: [10.10.1.160]:33001 [dev] [3.10.5] Established socket connection between /10.10.1.160:33001 and /10.10.1.174:35725
Jan 11, 2021 11:45:10 AM com.hazelcast.internal.cluster.ClusterService
INFO: [10.10.1.160]:33001 [dev] [3.10.5]
Members {size:2, ver:2} [
Member [10.10.1.160]:33001 - 51b8c249-6b3c-4ca8-a238-c651845629d8 this
Member [10.10.1.174]:33001 - 1cba1680-025e-469f-bad6-884111313672
]
Jan 11, 2021 11:45:10 AM com.hazelcast.internal.partition.impl.MigrationManager
INFO: [10.10.1.160]:33001 [dev] [3.10.5] Re-partitioning cluster data... Migration queue size: 271
Jan 11, 2021 11:45:11 AM com.hazelcast.nio.tcp.TcpIpAcceptor
But when the event bus tries to send a message to the given address, I encounter the error below. Is this a problem with the event bus configuration?
Jan 11, 2021 11:59:57 AM io.vertx.core.eventbus.impl.clustered.ConnectionHolder
WARNING: Connecting to server 10.10.1.174:39561 failed
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /10.10.1.174:39561
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:327)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:665)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
... 11 more
In Vert.x 3, the cluster host and cluster public host default to localhost.
If you only change the cluster public host in VertxOptions, Vert.x will bind EventBus transport servers to localhost while telling other nodes to connect to the public host.
This kind of configuration is needed when running Vert.x on some cloud providers, but in most cases you only need to set the cluster host (and then the public host will default to its value):
EventBusOptions eventBusOptions = new EventBusOptions()
    .setClustered(true)
    .setHost("10.10.1.160");
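The receiver node needs the same treatment; a sketch with the receiver's address from the question (mgr being the cluster manager already constructed there):

VertxOptions options = new VertxOptions().setClusterManager(mgr);
options.setEventBusOptions(new EventBusOptions()
    .setClustered(true)
    .setHost("10.10.1.174")); // bind the event bus to the LAN address; the public host defaults to it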

failed for get of /hbase/hbaseid, code = CONNECTIONLOSS, retries = 6

I am trying to connect a Spark application with HBase. Below is the configuration I am giving:
val conf = HBaseConfiguration.create()
conf.set("hbase.master", "localhost:16010")
conf.setInt("timeout", 120000)
conf.set("hbase.zookeeper.quorum", "2181")
val connection = ConnectionFactory.createConnection(conf)
and below are the 'jps' details:
5808 ResourceManager
8150 HMaster
8280 HRegionServer
5131 NameNode
8076 HQuorumPeer
5582 SecondaryNameNode
2798 org.eclipse.equinox.launcher_1.4.0.v20161219-1356.jar
8623 Jps
5951 NodeManager
5279 DataNode
I have also tried with HBase master 16010.
I am getting the error below:
19/09/12 21:49:00 WARN ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.SocketException: Invalid argument
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:277)
at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:287)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1024)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060)
19/09/12 21:49:00 WARN ReadOnlyZKClient: 0x1e3ff233 to 2181:2181 failed for get of /hbase/hbaseid, code = CONNECTIONLOSS, retries = 4
19/09/12 21:49:01 INFO ClientCnxn: Opening socket connection to server 2181/0.0.8.133:2181. Will not attempt to authenticate using SASL (unknown error)
19/09/12 21:49:01 ERROR ClientCnxnSocketNIO: Unable to open socket to 2181/0.0.8.133:2181
It looks like there is a problem joining ZooKeeper.
First, check that ZooKeeper is listening on your local host on port 2181:
netstat -tunelp | grep 2181 | grep -i LISTEN
tcp6 0 0 :::2181 :::* LISTEN
In your conf, the hbase.zookeeper.quorum property must contain the IP (or hostname) of your ZooKeeper node, not the port; the port belongs in hbase.zookeeper.property.clientPort.
My HBase connector is built with:
val conf = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "10.80.188.65")
conf.set("hbase.master", "10.80.188.64:60000")
conf.set("hbase.zookeeper.property.clientPort", "2181")
conf.set("zookeeper.znode.parent", "/hbase-unsecure")
val connection = ConnectionFactory.createConnection(conf)
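Applied to the question's setup, the fix would look like this (a sketch assuming a single local ZooKeeper, which the lone HQuorumPeer in the jps output suggests):

val conf = HBaseConfiguration.create()
// quorum takes the ZooKeeper host(s), not the port
conf.set("hbase.zookeeper.quorum", "localhost")
// the client port belongs in its own property
conf.set("hbase.zookeeper.property.clientPort", "2181")
val connection = ConnectionFactory.createConnection(conf)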

Receiving: 'Auth error:Error: read ECONNRESET' when connecting to google cloud platform behind a proxy

I've been working on a Node.js project that accesses the Google Cloud API. When I'm on an external network the request works fine and I receive the expected answer. Yet when I'm behind my corporate proxy, which is necessary to access our SMTP server, I receive the following error:
Auth error:Error: read ECONNRESET
To get through the corporate proxy I'm using cntlm, and I've set it as the environment proxy as mentioned in this article.
Furthermore I've enabled GRPC verbosity and GRPC handshake trace as seen in the code below.
process.env.GRPC_VERBOSITY = 'DEBUG'
process.env.GRPC_TRACE = 'handshaker'
process.env.HTTP_PROXY = 'http://127.0.0.1:3128'
process.env.http_proxy = 'http://127.0.0.1:3128'
process.env.https_proxy = 'http://127.0.0.1:3128'
process.env.HTTPS_PROXY = 'http://127.0.0.1:3128'
process.env.PROXY = 'http://127.0.0.1:3128'
process.env.proxy = 'http://127.0.0.1:3128'
// Imports the Google Cloud client library
const language = require('@google-cloud/language');

// Instantiates a client
const client = new language.LanguageServiceClient({
    projectId: 'CNL-Showcase',
    keyFilename: 'path/to/file/creds.json'
});

const document = {
    content: request,
    type: 'PLAIN_TEXT',
    language: 'DE',
    encodingType: 'UTF-8'
};

console.log('sending to Google Api');
client
    .analyzeEntities({ document: document })
    .then((results: any) => {
        ...
    }).catch((err: any) => {
        console.error('ERROR:', err);
        res.send(err);
    });
Note: the path to my creds file is changed on purpose. As mentioned above, if I'm not behind a proxy it works just fine.
The console output looks as follows:
D0809 09:49:45.850000000 3460 dns_resolver.cc:339] Using native dns resolver
sending to Google Api
D0809 09:49:46.039000000 3460 dns_resolver.cc:280] Start resolving.
I0809 09:49:46.069000000 3460 handshaker.cc:141] handshake_manager 000000000423BE30: adding handshaker http_connect [000000000286BDD0] at index 0
I0809 09:49:46.073000000 3460 handshaker.cc:141] handshake_manager 000000000423BE30: adding handshaker security [000000000286AE00] at index 1
I0809 09:49:46.076000000 3460 handshaker.cc:212] handshake_manager 000000000423BE30: error="No Error" shutdown=0 index=0, args={endpoint=0000000002868FA0, args=00000000028617C0 {size=9: grpc.primary_user_agent=grpc-node/1.13.1, grpc.client_channel_factory=000007FEEA85EBA0, grpc.channel_credentials=000000000423C960, grpc.server_uri=dns:///language.googleapis.com:443, grpc.http_connect_server=language.googleapis.com:443, grpc.default_authority=language.googleapis.com:443, grpc.http2_scheme=https, grpc.security_connector=00000000028F54E0, grpc.subchannel_address=ipv4:127.0.0.1:3128}, read_buffer=0000000002876860 (length=0), exit_early=0}
I0809 09:49:46.081000000 3460 handshaker.cc:253] handshake_manager 000000000423BE30: calling handshaker http_connect [000000000286BDD0] at index 0
I0809 09:49:46.083000000 3460 http_connect_handshaker.cc:300] Connecting to server language.googleapis.com:443 via HTTP proxy ipv4:127.0.0.1:3128
I0809 09:49:46.211000000 3460 handshaker.cc:212] handshake_manager 000000000423BE30: error="No Error" shutdown=0 index=1, args={endpoint=0000000002868FA0, args=00000000028617C0 {size=9: grpc.primary_user_agent=grpc-node/1.13.1, grpc.client_channel_factory=000007FEEA85EBA0, grpc.channel_credentials=000000000423C960, grpc.server_uri=dns:///language.googleapis.com:443, grpc.http_connect_server=language.googleapis.com:443, grpc.default_authority=language.googleapis.com:443, grpc.http2_scheme=https, grpc.security_connector=00000000028F54E0, grpc.subchannel_address=ipv4:127.0.0.1:3128}, read_buffer=0000000002876860 (length=0), exit_early=0}
I0809 09:49:46.211000000 3460 handshaker.cc:253] handshake_manager 000000000423BE30: calling handshaker security [000000000286AE00] at index 1
I0809 09:49:46.303000000 3460 handshaker.cc:212] handshake_manager 000000000423BE30: error="No Error" shutdown=0 index=2, args={endpoint=000000000287A7F0, args=0000000002862720 {size=10: grpc.primary_user_agent=grpc-node/1.13.1, grpc.client_channel_factory=000007FEEA85EBA0, grpc.channel_credentials=000000000423C960, grpc.server_uri=dns:///language.googleapis.com:443, grpc.http_connect_server=language.googleapis.com:443, grpc.default_authority=language.googleapis.com:443, grpc.http2_scheme=https, grpc.security_connector=00000000028F54E0, grpc.subchannel_address=ipv4:127.0.0.1:3128, grpc.auth_context=000000000285DD60}, read_buffer=0000000002876860 (length=0), exit_early=0}
I0809 09:49:46.304000000 3460 handshaker.cc:240] handshake_manager 000000000423BE30: handshaking complete -- scheduling on_handshake_done with error="No Error"
I0809 09:49:46.305000000 3460 subchannel.cc:608] New connected subchannel at 0000000002867790 for subchannel 00000000028A0270
Auth error:Error: read ECONNRESET
Auth error:Error: read ECONNRESET
Auth error:Error: read ECONNRESET
Auth error:Error: read ECONNRESET
Auth error:Error: read ECONNRESET
Auth error:Error: read ECONNRESET
I cannot identify what causes the issue as the handshake seems to work just fine.
After an extensive search I found a solution:
Install a proxifier that routes the requests from the library on the local machine through the proxy.
For Windows I found two apps:
http://www.proxycap.com/
https://www.proxifier.com/
I hope this helps someone trying to get PubSub running on proxy networks.

autobahn.twisted.websocket opening handshake error 400

Problem: when the Bitfinex websocket API connection is established, the application gets these errors:
2018-02-07T18:51:52+0200 connecting once using transport type "websocket" over endpoint "tcp"
2018-02-07T18:51:52+0200 Starting factory <autobahn.twisted.websocket.WampWebSocketClientFactory object at 0x106708550>
2018-02-07T18:51:52+0200 failing WebSocket opening handshake ('WebSocket connection upgrade failed (400 - BadRequest)')
2018-02-07T18:51:52+0200 dropping connection to peer tcp4:104.16.173.181:443 with abort=True: WebSocket connection upgrade failed (400 - BadRequest)
2018-02-07T18:51:52+0200 component failed: ConnectionAborted: Connection was aborted locally, using.
2018-02-07T18:51:52+0200 Connection failed: ConnectionAborted: Connection was aborted locally, using.
2018-02-07T18:51:52+0200 Stopping factory <autobahn.twisted.websocket.WampWebSocketClientFactory object at 0x106708550>
Code below:
from autobahn.twisted.component import Component, run
import json

cmp_Bitfinex = Component(
    transports=[
        {
            u'type': u'websocket',
            u'url': u'wss://api.bitfinex.com/ws',
            u'endpoint': {
                u'type': u'tcp',
                u'host': 'api.bitfinex.com',
                u'port': 443,
            },
            u'options': {
                u"open_handshake_timeout": 100,
            }
        }
    ],
    realm=u"realm1",
)

@cmp_Bitfinex.on_join
def joined(self):
    def onTicker(*args):
        print("Ticker event received:", args)
    try:
        yield from self.subscribe(onTicker, 'ticker', options=json.dumps({
            "event": "subscribe",
            "channel": "ticker",
            "pair": "BTCUSD",
            "prec": "P0",
            "freq": "F0"
        }))
    except Exception as e:
        print("Could not subscribe to topic:", e)

@cmp_Bitfinex.on_connect
def connected(session, details):
    print('Connected: {} \r\n {}'.format(session, details))

def main():
    run([cmp_Bitfinex])

if __name__ == "__main__":
    main()
I understand that the problem may be in the data the application sends, but I can't figure out what exactly is wrong in my code.
The app uses Python 3.6, the latest autobahn, and the latest Twisted.
AFAICT autobahn components are for programming with WAMP, and the Bitfinex endpoint is a plain WebSocket API rather than a WAMP router, which is why the WAMP opening handshake is rejected with a 400.
I think you should be using autobahn's plain WebSocket programming instead. Check this example.
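A minimal sketch of a plain WebSocket client against the same endpoint (the class name BitfinexClientProtocol is mine; the subscribe payload is taken from the question):

import json
from autobahn.twisted.websocket import WebSocketClientProtocol, WebSocketClientFactory, connectWS
from twisted.internet import reactor, ssl

class BitfinexClientProtocol(WebSocketClientProtocol):
    def onOpen(self):
        # send the subscription once the plain WebSocket handshake completes
        self.sendMessage(json.dumps({
            "event": "subscribe",
            "channel": "ticker",
            "pair": "BTCUSD",
            "prec": "P0",
            "freq": "F0",
        }).encode('utf8'))

    def onMessage(self, payload, isBinary):
        if not isBinary:
            print("Ticker event received:", payload.decode('utf8'))

if __name__ == '__main__':
    factory = WebSocketClientFactory(u"wss://api.bitfinex.com/ws")
    factory.protocol = BitfinexClientProtocol
    connectWS(factory, ssl.ClientContextFactory())  # wss:// needs a TLS context
    reactor.run()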

How do I reconnect to Cassandra using Hector?

I have the following code:
StringSerializer ss = StringSerializer.get();
String cf = "TEST";

CassandraHostConfigurator conf = new CassandraHostConfigurator("localhost:9160");
conf.setCassandraThriftSocketTimeout(40000);
conf.setExhaustedPolicy(ExhaustedPolicy.WHEN_EXHAUSTED_BLOCK);
conf.setRetryDownedHostsDelayInSeconds(5);
conf.setRetryDownedHostsQueueSize(128);
conf.setRetryDownedHosts(true);
conf.setLoadBalancingPolicy(new LeastActiveBalancingPolicy());

String key = Long.toString(System.currentTimeMillis());
Cluster cluster = HFactory.getOrCreateCluster("TestCluster", conf);
Keyspace keyspace = HFactory.createKeyspace("TestCluster", cluster);
Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());
int count = 0;

while (!"q".equals(new Scanner(System.in).next())) {
    try {
        mutator.insert(key, cf, HFactory.createColumn("column_" + count, "v_" + count, ss, ss));
        count++;
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I can write some values using it, but when I restart Cassandra, it fails. Here is the log:
[15:11:07] INFO [CassandraHostRetryService ] Downed Host Retry service started with queue size 128 and retry delay 5s
[15:11:07] INFO [JmxMonitor ] Registering JMX me.prettyprint.cassandra.service_ASG:ServiceType=hector,MonitorType=hector
[15:11:17] ERROR [HThriftClient ] Could not flush transport (to be expected if the pool is shutting down) in close for client: CassandraClient
org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
    at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
    at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156)
    at me.prettyprint.cassandra.connection.client.HThriftClient.close(HThriftClient.java:98)
    at me.prettyprint.cassandra.connection.client.HThriftClient.close(HThriftClient.java:26)
    at me.prettyprint.cassandra.connection.HConnectionManager.closeClient(HConnectionManager.java:308)
    at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:257)
    at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:97)
    at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
    at me.prettyprint.cassandra.model.MutatorImpl.insert(MutatorImpl.java:69)
    at com.app.App.main(App.java:40)
Caused by: java.net.SocketException: Broken pipe
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
    at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)
    ... 9 more
[15:11:17] ERROR [HConnectionManager ] MARK HOST AS DOWN TRIGGERED for host localhost(127.0.0.1):9160
[15:11:17] ERROR [HConnectionManager ] Pool state on shutdown: :{localhost(127.0.0.1):9160}; IsActive?: true; Active: 1; Blocked: 0; Idle: 15; NumBeforeExhausted: 49
[15:11:17] INFO [ConcurrentHClientPool ] Shutdown triggered on :{localhost(127.0.0.1):9160}
[15:11:17] INFO [ConcurrentHClientPool ] Shutdown complete on :{localhost(127.0.0.1):9160}
[15:11:17] INFO [CassandraHostRetryService ] Host detected as down was added to retry queue: localhost(127.0.0.1):9160
[15:11:17] WARN [HConnectionManager ] Could not fullfill request on this host CassandraClient
[15:11:17] WARN [HConnectionManager ] Exception:
me.prettyprint.hector.api.exceptions.HectorTransportException: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
    at me.prettyprint.cassandra.connection.client.HThriftClient.getCassandra(HThriftClient.java:82)
    at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:236)
    at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:97)
    at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
    at me.prettyprint.cassandra.model.MutatorImpl.insert(MutatorImpl.java:69)
    at com.app.App.main(App.java:40)
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
    at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
    at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:157)
    at org.apache.cassandra.thrift.Cassandra$Client.send_set_keyspace(Cassandra.java:466)
    at org.apache.cassandra.thrift.Cassandra$Client.set_keyspace(Cassandra.java:455)
    at me.prettyprint.cassandra.connection.client.HThriftClient.getCassandra(HThriftClient.java:78)
    ... 5 more
Caused by: java.net.SocketException: Broken pipe
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
    at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)
    ... 9 more
[15:11:17] INFO [HConnectionManager ] Client CassandraClient released to inactive or dead pool. Closing.
[15:11:17] INFO [HConnectionManager ] Client CassandraClient released to inactive or dead pool. Closing.
[15:11:17] INFO [HConnectionManager ] Added host localhost(127.0.0.1):9160 to pool
You have set:
conf.setRetryDownedHostsDelayInSeconds(5);
Try waiting for more than 5 seconds after the restart before writing again.
Also, you may need to upgrade.
What size have you set for thrift_max_message_length_in_mb?
Kind regards.
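For reference, that limit is set in cassandra.yaml; a sketch with what I believe was the default of that era:

# cassandra.yaml
thrift_max_message_length_in_mb: 16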
