failed for get of /hbase/hbaseid, code = CONNECTIONLOSS, retries = 6 - apache-spark

I am trying to connect a Spark application to HBase. Below is the configuration I am using:
val conf = HBaseConfiguration.create()
conf.set("hbase.master", "localhost:16010")
conf.setInt("timeout", 120000)
conf.set("hbase.zookeeper.quorum", "2181")
val connection = ConnectionFactory.createConnection(conf)
and below is the jps output:
5808 ResourceManager
8150 HMaster
8280 HRegionServer
5131 NameNode
8076 HQuorumPeer
5582 SecondaryNameNode
2798 org.eclipse.equinox.launcher_1.4.0.v20161219-1356.jar
8623 Jps
5951 NodeManager
5279 DataNode
I have also tried with hbase master 16010.
I am getting the error below:
19/09/12 21:49:00 WARN ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.SocketException: Invalid argument
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:277)
at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:287)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1024)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060)
19/09/12 21:49:00 WARN ReadOnlyZKClient: 0x1e3ff233 to 2181:2181 failed for get of /hbase/hbaseid, code = CONNECTIONLOSS, retries = 4
19/09/12 21:49:01 INFO ClientCnxn: Opening socket connection to server 2181/0.0.8.133:2181. Will not attempt to authenticate using SASL (unknown error)
19/09/12 21:49:01 ERROR ClientCnxnSocketNIO: Unable to open socket to 2181/0.0.8.133:2181

It looks like there is a problem connecting to ZooKeeper.
First check that ZooKeeper is started on your local host and listening on port 2181:
netstat -tunelp | grep 2181 | grep -i LISTEN
tcp6 0 0 :::2181 :::* LISTEN
In your conf, the hbase.zookeeper.quorum property must be given the IP (or hostname) of your ZooKeeper node, not the port; the client port goes in hbase.zookeeper.property.clientPort.
My HBase connection is built with:
val conf = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "10.80.188.65")
conf.set("hbase.master", "10.80.188.64:60000")
conf.set("hbase.zookeeper.property.clientPort", "2181")
conf.set("zookeeper.znode.parent", "/hbase-unsecure")
val connection = ConnectionFactory.createConnection(conf)
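Applied to the question's configuration, a corrected single-node setup would look roughly like this (a sketch assuming ZooKeeper runs on localhost with the default client port; hbase.master is usually not needed on the client side, since the master is discovered through ZooKeeper):
val conf = HBaseConfiguration.create()
// quorum takes the ZooKeeper host(s), not the port
conf.set("hbase.zookeeper.quorum", "localhost")
// the ZooKeeper client port goes in its own property
conf.set("hbase.zookeeper.property.clientPort", "2181")
val connection = ConnectionFactory.createConnection(conf)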

Related

How to set up a distributed event bus in a Vert.x Hazelcast cluster?

Here is the sender verticle.
I have enabled multicast and set the cluster public host to my machine's IP address:
VertxOptions options = new VertxOptions()
.setClusterManager(ClusterManagerConfig.getClusterManager());
EventBusOptions eventBusOptions = new EventBusOptions()
.setClustered(true)
.setClusterPublicHost("10.10.1.160");
options.setEventBusOptions(eventBusOptions);
Vertx.clusteredVertx(options, res -> {
if (res.succeeded()) {
Vertx vertx = res.result();
vertx.deployVerticle(new requestHandler());
vertx.deployVerticle(new requestSender());
EventBus eventBus = vertx.eventBus();
eventBus.send("some.address","hello",reply -> {
System.out.println(reply.toString());
});
} else {
LOGGER.info("Failed: " + res.cause());
}
});
}
Here's the receiver verticle:
VertxOptions options = new VertxOptions().setClusterManager(mgr);
options.setEventBusOptions(new EventBusOptions()
.setClustered(true)
.setClusterPublicHost("10.10.1.174") );
Vertx.clusteredVertx(options, res -> {
if (res.succeeded()) {
Vertx vertx1 = res.result();
System.out.println("Success");
EventBus eb = vertx1.eventBus();
System.out.println("ready");
eb.consumer("some.address", message -> {
message.reply("hello hello");
});
} else {
System.out.println("Failed");
}
});
I get this result when I run both main verticles, so the verticles are detected by Hazelcast and a connection is established:
INFO: [10.10.1.160]:33001 [dev] [3.10.5] Established socket connection between /10.10.1.160:33001 and /10.10.1.174:35725
Jan 11, 2021 11:45:10 AM com.hazelcast.internal.cluster.ClusterService
INFO: [10.10.1.160]:33001 [dev] [3.10.5]
Members {size:2, ver:2} [
Member [10.10.1.160]:33001 - 51b8c249-6b3c-4ca8-a238-c651845629d8 this
Member [10.10.1.174]:33001 - 1cba1680-025e-469f-bad6-884111313672
]
Jan 11, 2021 11:45:10 AM com.hazelcast.internal.partition.impl.MigrationManager
INFO: [10.10.1.160]:33001 [dev] [3.10.5] Re-partitioning cluster data... Migration queue size: 271
Jan 11, 2021 11:45:11 AM com.hazelcast.nio.tcp.TcpIpAcceptor
But when the event bus tries to send a message to the given address, I encounter the error below. Is this a problem with the event-bus configuration?
Jan 11, 2021 11:59:57 AM io.vertx.core.eventbus.impl.clustered.ConnectionHolder
WARNING: Connecting to server 10.10.1.174:39561 failed
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /10.10.1.174:39561
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:327)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:665)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
... 11 more
In Vert.x 3, the cluster host and cluster public host default to localhost.
If you only change the cluster public host in VertxOptions, Vert.x will bind EventBus transport servers to localhost while telling other nodes to connect to the public host.
This kind of configuration is needed when running Vert.x on some cloud providers, but in most cases you only need to set the cluster host (and then the public host will default to its value):
EventBusOptions eventBusOptions = new EventBusOptions()
.setClustered(true)
.setHost("10.10.1.160");

How can I connect a Wordpress service to a MariaDB service through Consul Connect?

I'm having a serious problem connecting to MariaDB through Consul Connect.
I'm using Nomad to create the services with the proxies, with the following job definition:
job "wordpress" {
type = "service"
datacenters = ["dc1"]
group "server" {
network {
mode = "bridge"
port "http" {
static = 8080
to = 80
}
}
task "server" {
driver = "docker"
config {
image = "wordpress"
}
env {
WORDPRESS_DB_HOST = "${NOMAD_UPSTREAM_ADDR_database}"
WORDPRESS_DB_USER = "exampleuser"
WORDPRESS_DB_PASSWORD = "examplepass"
WORDPRESS_DB_NAME = "exampledb"
}
resources {
cpu = 100
memory = 64
network {
mbits = 10
}
}
}
service {
name = "wordpress"
tags = ["production", "wordpress"]
port = "http"
connect {
sidecar_service {
proxy {
upstreams {
destination_name = "database"
local_bind_port = 3306
}
}
}
}
}
}
group "database" {
network {
mode = "bridge"
port "db" {
to = 3306
}
}
task "database" {
driver = "docker"
config {
image = "mariadb"
}
env {
MYSQL_RANDOM_ROOT_PASSWORD = "yes"
MYSQL_INITDB_SKIP_TZINFO = "yes"
MYSQL_DATABASE = "exampledb"
MYSQL_USER = "exampleuser"
MYSQL_PASSWORD = "examplepass"
}
resources {
cpu = 100
memory = 128
network {
mbits = 10
}
}
}
service {
name = "database"
tags = ["production", "mariadb"]
port = "db"
connect {
sidecar_service {}
}
}
}
}
However, it seems that the server can't reach the database.
MySQL Connection Error: (2006) MySQL server has gone away
[25-Aug-2020 10:46:53 UTC] PHP Warning: mysqli::__construct(): Error while reading greeting packet. PID=187 in Standard input code on line 22
[25-Aug-2020 10:46:53 UTC] PHP Warning: mysqli::__construct(): (HY000/2006): MySQL server has gone away in Standard input code on line 22
MySQL Connection Error: (2006) MySQL server has gone away
WARNING: unable to establish a database connection to '127.0.0.1:3306'
continuing anyways (which might have unexpected results)
The logs of the server and database proxies show that some sort of TLS issue is happening, but I've got no clue how to solve it.
Server Proxy Logs
[2020-08-25 12:20:35.841][18][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:344] [C1229] Creating connection to cluster database.default.dc1.internal.0198bec5-d0b4-332c-973e-372808379192.consul
[2020-08-25 12:20:35.841][18][debug][pool] [source/common/tcp/conn_pool.cc:82] creating a new connection
[2020-08-25 12:20:35.841][18][debug][pool] [source/common/tcp/conn_pool.cc:362] [C1230] connecting
[2020-08-25 12:20:35.841][18][debug][connection] [source/common/network/connection_impl.cc:704] [C1230] connecting to 172.29.168.233:29307
[2020-08-25 12:20:35.841][18][debug][connection] [source/common/network/connection_impl.cc:713] [C1230] connection in progress
[2020-08-25 12:20:35.841][18][debug][pool] [source/common/tcp/conn_pool.cc:108] queueing request due to no available connections
[2020-08-25 12:20:35.841][18][debug][main] [source/server/connection_handler_impl.cc:280] [C1229] new connection
[2020-08-25 12:20:35.841][18][trace][connection] [source/common/network/connection_impl.cc:458] [C1229] socket event: 2
[2020-08-25 12:20:35.841][18][trace][connection] [source/common/network/connection_impl.cc:543] [C1229] write ready
[2020-08-25 12:20:35.841][18][trace][connection] [source/common/network/connection_impl.cc:458] [C1230] socket event: 2
[2020-08-25 12:20:35.841][18][trace][connection] [source/common/network/connection_impl.cc:543] [C1230] write ready
[2020-08-25 12:20:35.841][18][debug][connection] [source/common/network/connection_impl.cc:552] [C1230] connected
[2020-08-25 12:20:35.841][18][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:168] [C1230] handshake error: 2
[2020-08-25 12:20:35.842][18][trace][connection] [source/common/network/connection_impl.cc:458] [C1230] socket event: 3
[2020-08-25 12:20:35.842][18][trace][connection] [source/common/network/connection_impl.cc:543] [C1230] write ready
[2020-08-25 12:20:35.842][18][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:168] [C1230] handshake error: 5
[2020-08-25 12:20:35.842][18][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:201] [C1230]
[2020-08-25 12:20:35.842][18][debug][connection] [source/common/network/connection_impl.cc:190] [C1230] closing socket: 0
[2020-08-25 12:20:35.842][18][debug][pool] [source/common/tcp/conn_pool.cc:123] [C1230] client disconnected
Database Proxy Logs
[2020-08-25 12:26:07.093][15][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:201] [C927] new tcp proxy session
[2020-08-25 12:26:07.093][15][trace][connection] [source/common/network/connection_impl.cc:290] [C927] readDisable: enabled=true disable=true
[2020-08-25 12:26:07.093][15][debug][filter] [source/common/tcp_proxy/tcp_proxy.cc:344] [C927] Creating connection to cluster local_app
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:82] creating a new connection
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:362] [C928] connecting
[2020-08-25 12:26:07.093][15][debug][connection] [source/common/network/connection_impl.cc:704] [C928] connecting to 127.0.0.1:26344
[2020-08-25 12:26:07.093][15][debug][connection] [source/common/network/connection_impl.cc:713] [C928] connection in progress
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:108] queueing request due to no available connections
[2020-08-25 12:26:07.093][15][debug][main] [source/server/connection_handler_impl.cc:280] [C927] new connection
[2020-08-25 12:26:07.093][15][trace][connection] [source/common/network/connection_impl.cc:458] [C927] socket event: 2
[2020-08-25 12:26:07.093][15][trace][connection] [source/common/network/connection_impl.cc:543] [C927] write ready
[2020-08-25 12:26:07.093][15][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:168] [C927] handshake error: 5
[2020-08-25 12:26:07.093][15][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:201] [C927]
[2020-08-25 12:26:07.093][15][debug][connection] [source/common/network/connection_impl.cc:190] [C927] closing socket: 0
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:204] canceling pending request
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:212] canceling pending connection
[2020-08-25 12:26:07.093][15][debug][connection] [source/common/network/connection_impl.cc:101] [C928] closing data_to_write=0 type=1
[2020-08-25 12:26:07.093][15][debug][connection] [source/common/network/connection_impl.cc:190] [C928] closing socket: 1
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:123] [C928] client disconnected
[2020-08-25 12:26:07.093][15][trace][main] [source/common/event/dispatcher_impl.cc:158] item added to deferred deletion list (size=1)
[2020-08-25 12:26:07.093][15][debug][main] [source/server/connection_handler_impl.cc:80] [C927] adding to cleanup list
[2020-08-25 12:26:07.093][15][trace][main] [source/common/event/dispatcher_impl.cc:158] item added to deferred deletion list (size=2)
[2020-08-25 12:26:07.093][15][trace][main] [source/common/event/dispatcher_impl.cc:76] clearing deferred deletion list (size=2)
[2020-08-25 12:26:07.093][15][debug][pool] [source/common/tcp/conn_pool.cc:236] [C928] connection destroyed

Using CCM with OpsCenter and manual agent installation

Using CCM, I installed a 4-node local cluster (127.0.0.1-127.0.0.4).
ccm create -v 2.1.5 -n 4 gp
I'm trying to manually install the agents in order to use OpsCenter (5.2.3.2015121015). I started OpsCenter with the JMX port removed, so the configuration is:
[jmx]
username =
password =
[agents]
[cassandra]
username =
seed_hosts = 127.0.0.1,127.0.0.2,127.0.0.3,127.0.0.4
password =
cql_port = 9042
I've configured all four agents. For example, agent3's configuration is:
[Giampaolo]: ~/opscenter/> cat agent3/conf/address.yaml
stomp_interface: "127.0.0.1"
agent_rpc_interface: 127.0.0.3
jmx_host: 127.0.0.3
jmx_port: 7300
I started OpsCenter in the foreground and got an error. When I start agent4 I see this:
2015-12-30 17:46:21+0100 [gp] ERROR: The state of the following nodes could not be determined, most likely due to agents on those nodes not being properly connected: [<Node 127.0.0.4='4611686018427387904'>, <Node 127.0.0.3='0'>, <Node 127.0.0.2='-4611686018427387904'>, <Node 127.0.0.1='-9223372036854775808'>]
2015-12-30 17:46:24+0100 [gp] INFO: Agent for ip 127.0.0.4 is version None
2015-12-30 17:46:24+0100 [gp] INFO: Agent for ip 127.0.0.4 is version u'5.2.3'
2015-12-30 17:46:37+0100 [gp] INFO: Nodes without agents: 127.0.0.3, 127.0.0.2, 127.0.0.1
2015-12-30 17:46:51+0100 [gp] ERROR: The state of the following nodes could not be determined, most likely due to agents on those nodes not being properly connected: [<Node 127.0.0.4='4611686018427387904'>, <Node 127.0.0.3='0'>, <Node 127.0.0.2='-4611686018427387904'>, <Node 127.0.0.1='-9223372036854775808'>]
I'm a bit confused by the fact that the log first tells me that an agent was found for node 4, while a moment later I get the error on the console.
The agent4 log shows no errors at the beginning (only INFO severity):
[Giampaolo]: ~/opscenter/>agent4/bin/datastax-agent -f
INFO [main] 2015-12-30 17:46:20,399 Loading conf files: ./conf/address.yaml
INFO [main] 2015-12-30 17:46:20,461 Java vendor/version: Java HotSpot(TM) 64-Bit Server VM/1.8.0_40
INFO [main] 2015-12-30 17:46:20,462 DataStax Agent version: 5.2.3
INFO [main] 2015-12-30 17:46:20,489 Default config values: {:cassandra_port 9042, :rollups300_ttl 2419200, :finished-request-cache-size 100, :settings_cf "settings", :agent_rpc_interface "127.0.0.4", :restore_req_update_period 60, :my_channel_prefix "/agent", :poll_period 60, :monitored_cassandra_pass "*REDACTED*", :thrift_conn_timeout 10000, :cassandra_pass "*REDACTED*", :rollups60_ttl 604800, :stomp_port 61620, :shorttime_interval 10, :longtime_interval 300, :max-seconds-to-sleep 25, :private-conf-props {:cassandra.yaml #{"broadcast_address" "rpc_address" "broadcast_rpc_address" "listen_address" "initial_token"}, :cassandra-rackdc.properties #{}}, :thrift_port 9160, :agent-conf-group "global-cluster-agent-group", :jmx_host "127.0.0.4", :ec2_metadata_api_host "169.254.169.254", :metrics_enabled 1, :async_queue_size 5000, :backup_staging_dir nil, :remote_verify_max 30000, :disk_usage_update_period 60, :throttle-bytes-per-second 500000, :rollups7200_ttl 31536000, :trace_delay 300, :remote_backup_retries 3, :cassandra_user "*REDACTED*", :ssl_keystore nil, :rollup_snapshot_period 300, :is_package false, :monitor_command "/usr/share/datastax-agent/bin/datastax_agent_monitor", :thrift_socket_timeout 5000, :remote_verify_initial_delay 1000, :cassandra_log_location "/var/log/cassandra/system.log", :restore_on_transfer_failure false, :ssl_keystore_password "*REDACTED*", :tmp_dir "/var/lib/datastax-agent/tmp/", :monitored_thrift_port 9160, :config_md5 nil, :jmx_port 7400, :jmx_metrics_threadpool_size 4, :use_ssl 0, :max_pending_repairs 5, :rollups86400_ttl 0, :monitored_cassandra_user "*REDACTED*", :nodedetails_threadpool_size 3, :api_port 61621, :monitored_ssl_keystore nil, :slow_query_fetch_size 2000, :kerberos_service nil, :backup_file_queue_max 10000, :jmx_thread_pool_size 5, :production 1, :monitored_cassandra_port 9042, :runs_sudo 1, :max_file_transfer_attempts 30, :config_encryption_active false, :running-request-cache-size 500, :monitored_ssl_keystore_password "*REDACTED*", :stomp_interface "127.0.0.1", :storage_keyspace "OpsCenter", :hosts ["127.0.0.1"], :rollup_snapshot_threshold 300, :jmx_retry_timeout 30, :unthrottled-default 10000000000, :multipart-chunk-size 5000000, :remote_backup_retry_delay 5000, :sstableloader_max_heap_size nil, :jmx_operations_pool_size 4, :slow_query_refresh 5, :remote_backup_timeout 1000, :slow_query_ignore ["OpsCenter" "dse_perf"], :max_reconnect_time 15000, :seconds-to-read-kill-channel 0.005, :slow_query_past 3600000, :realtime_interval 5, :pdps_ttl 259200}
INFO [main] 2015-12-30 17:46:20,492 Waiting for the config from OpsCenter
INFO [main] 2015-12-30 17:46:20,493 Attempting to determine Cassandra's broadcast address through JMX
INFO [main] 2015-12-30 17:46:20,494 Starting Stomp
INFO [main] 2015-12-30 17:46:20,495 Starting up agent communcation with OpsCenter.
INFO [main] 2015-12-30 17:46:24,595 Reconnecting to a backup OpsCenter instance
INFO [main] 2015-12-30 17:46:24,597 SSL communication is disabled
INFO [main] 2015-12-30 17:46:24,597 Creating stomp connection to 127.0.0.1:61620
INFO [StompConnection receiver] 2015-12-30 17:46:24,603 Reconnecting in 0s.
INFO [StompConnection receiver] 2015-12-30 17:46:24,608 Connected to 127.0.0.1:61620
INFO [main] 2015-12-30 17:46:24,609 Starting Jetty server: {:join? false, :ssl? false, :host "127.0.0.4", :port 61621}
INFO [Initialization] 2015-12-30 17:46:24,608 Sleeping for 2s before trying to determine IP over JMX again
INFO [StompConnection receiver] 2015-12-30 17:46:24,681 Got new config from OpsCenter [note values in address.yaml override those from OpsCenter]: {:cassandra_port 9042, :rollups300_ttl 2419200, :destinations [], :restore_req_update_period 1, :monitored_cassandra_pass "*REDACTED*", :cassandra_pass "*REDACTED*", :cassandra_rpc_interface "127.0.0.4", :rollups60_ttl 604800, :jmx_pass "*REDACTED*", :thrift_port 9160, :ec2_metadata_api_host "169.254.169.254", :metrics_enabled 1, :backup_staging_dir "", :rollups7200_ttl 31536000, :cassandra_user "*REDACTED*", :jmx_user "*REDACTED*", :metrics_ignored_column_families "", :cassandra_log_location "/var/log/cassandra/system.log", :monitored_thrift_port 9160, :config_md5 "e78e9aaea4de0b15ec94b11c6b2788d5", :provisioning 0, :use_ssl 0, :max_pending_repairs 5, :rollups86400_ttl -1, :monitored_cassandra_user "*REDACTED*", :api_port "61621", :monitored_cassandra_port 9042, :storage_keyspace "OpsCenter", :hosts ["127.0.0.4"], :metrics_ignored_solr_cores "", :metrics_ignored_keyspaces "system, system_traces, system_auth, dse_auth, OpsCenter", :rollup_subscriptions [], :jmx_operations_pool_size 4, :cassandra_install_location ""}
INFO [StompConnection receiver] 2015-12-30 17:46:24,693 Couldn't get broadcast address, will retry in five seconds.
INFO [Jetty] 2015-12-30 17:46:24,715 Jetty server started
INFO [Initialization] 2015-12-30 17:46:26,615 Sleeping for 4s before trying to determine IP over JMX again
INFO [StompConnection receiver] 2015-12-30 17:46:29,696 Couldn't get broadcast address, will retry in five seconds.
However, after a while:
INFO [qtp153482676-24] 2015-12-30 17:49:07,057 HTTP: :get /cassandra/conf {:private_props "True"} - 500
ERROR [qtp153482676-24] 2015-12-30 17:49:09,084 Unhandled route Exception (:bad-permissions): Unable to locate the cassandra.yaml configuration file. If your configuration file is not located with the Cassandra install, please set the 'conf_location' option in the Cassandra section of the OpsCenter cluster configuration file and restart opscenterd. Checked the following locations: /etc/dse/cassandra/cassandra.yaml, /etc/cassandra/conf/cassandra.yaml, /etc/cassandra/cassandra.yaml
INFO [qtp153482676-24] 2015-12-30 17:49:09,085 HTTP: :get /cassandra/conf {:private_props "True"} - 500
INFO [StompConnection receiver] 2015-12-30 17:49:09,845 Couldn't get broadcast address, will retry in five seconds.
ERROR [qtp153482676-19] 2015-12-30 17:49:11,102 Unhandled route Exception (:bad-permissions): Unable to locate the cassandra.yaml configuration file. If your configuration file is not located with the Cassandra install, please set the 'conf_location' option in the Cassandra section of the OpsCenter cluster configuration file and restart opscenterd. Checked the following locations: /etc/dse/cassandra/cassandra.yaml, /etc/cassandra/conf/cassandra.yaml, /etc/cassandra/cassandra.yaml
After this I tried to add the other agents, but I got strange results: it looks like two agents attach to the same C* node, so I stopped, since I think this first error is causing the others.
Questions:
What is the error in the OpsCenter log?
Is it somehow related to the error in the agent log?
Am I missing something in the configuration (are more details needed)?
Why does OpsCenter complain about a missing cassandra.yaml file? Shouldn't the agent be deployable on any host, even one without a local C* installation?
Thanks in advance
edit
I added some additional information:
[Giampaolo]: ~/> netstat -an | grep 7400
tcp4 0 0 127.0.0.1.7400 *.* LISTEN
Should it be 127.0.0.4?
This is (part of) the configuration for node4:
[Giampaolo]: ~/.ccm/gp/node4/conf/> cat cassandra.yaml | grep 127
listen_address: 127.0.0.4
rpc_address: 127.0.0.4
- seeds: 127.0.0.1,127.0.0.2,127.0.0.3,127.0.0.4
and this is address.yaml for node4:
[Giampaolo]: ~/opscenter/agent4/conf/> cat address.yaml
stomp_interface: "127.0.0.1"
agent_rpc_interface: 127.0.0.4
jmx_host: 127.0.0.4
jmx_port: 7400
alias: "the4node"
The configuration was OK, except for one little detail: for every agent, in the address.yaml file, localhost must be used instead of 127.0.0.1 in the jmx_host parameter, as suggested on the Cassandra user mailing list.
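Applied to node4, the adjusted address.yaml would therefore look roughly like this (a sketch in which only jmx_host changes from the version shown above):
stomp_interface: "127.0.0.1"
agent_rpc_interface: 127.0.0.4
# use localhost rather than a loopback IP for the JMX host
jmx_host: localhost
jmx_port: 7400
alias: "the4node"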

How do I reconnect to Cassandra using Hector?

I have the following code:
StringSerializer ss = StringSerializer.get();
String cf = "TEST";
CassandraHostConfigurator conf = new CassandraHostConfigurator("localhost:9160");
conf.setCassandraThriftSocketTimeout(40000);
conf.setExhaustedPolicy(ExhaustedPolicy.WHEN_EXHAUSTED_BLOCK);
conf.setRetryDownedHostsDelayInSeconds(5);
conf.setRetryDownedHostsQueueSize(128);
conf.setRetryDownedHosts(true);
conf.setLoadBalancingPolicy(new LeastActiveBalancingPolicy());
String key = Long.toString(System.currentTimeMillis());
Cluster cluster = HFactory.getOrCreateCluster("TestCluster", conf);
Keyspace keyspace = HFactory.createKeyspace("TestCluster", cluster);
Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());
int count = 0;
while (!"q".equals(new Scanner(System.in).next())) {
try{
mutator.insert(key, cf, HFactory.createColumn("column_" + count, "v_" + count, ss, ss));
count++;
} catch (Exception e) {
e.printStackTrace();
}
}
I can write some values using it, but when I restart Cassandra, it fails. Here is the log:
[15:11:07] INFO [CassandraHostRetryService ] Downed Host Retry service started with queue size 128 and retry delay 5s
[15:11:07] INFO [JmxMonitor ] Registering JMX me.prettyprint.cassandra.service_ASG:ServiceType=hector,MonitorType=hector
[15:11:17] ERROR [HThriftClient ] Could not flush transport (to be expected if the pool is shutting down) in close for client: CassandraClient
org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156)
at me.prettyprint.cassandra.connection.client.HThriftClient.close(HThriftClient.java:98)
at me.prettyprint.cassandra.connection.client.HThriftClient.close(HThriftClient.java:26)
at me.prettyprint.cassandra.connection.HConnectionManager.closeClient(HConnectionManager.java:308)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:257)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:97)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
at me.prettyprint.cassandra.model.MutatorImpl.insert(MutatorImpl.java:69)
at com.app.App.main(App.java:40)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)
... 9 more
[15:11:17] ERROR [HConnectionManager ] MARK HOST AS DOWN TRIGGERED for host localhost(127.0.0.1):9160
[15:11:17] ERROR [HConnectionManager ] Pool state on shutdown: :{localhost(127.0.0.1):9160}; IsActive?: true; Active: 1; Blocked: 0; Idle: 15; NumBeforeExhausted: 49
[15:11:17] INFO [ConcurrentHClientPool ] Shutdown triggered on :{localhost(127.0.0.1):9160}
[15:11:17] INFO [ConcurrentHClientPool ] Shutdown complete on :{localhost(127.0.0.1):9160}
[15:11:17] INFO [CassandraHostRetryService ] Host detected as down was added to retry queue: localhost(127.0.0.1):9160
[15:11:17] WARN [HConnectionManager ] Could not fullfill request on this host CassandraClient
[15:11:17] WARN [HConnectionManager ] Exception:
me.prettyprint.hector.api.exceptions.HectorTransportException: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at me.prettyprint.cassandra.connection.client.HThriftClient.getCassandra(HThriftClient.java:82)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:236)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:97)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
at me.prettyprint.cassandra.model.MutatorImpl.insert(MutatorImpl.java:69)
at com.app.App.main(App.java:40)
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:157)
at org.apache.cassandra.thrift.Cassandra$Client.send_set_keyspace(Cassandra.java:466)
at org.apache.cassandra.thrift.Cassandra$Client.set_keyspace(Cassandra.java:455)
at me.prettyprint.cassandra.connection.client.HThriftClient.getCassandra(HThriftClient.java:78)
... 5 more
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)
... 9 more
[15:11:17] INFO [HConnectionManager ] Client CassandraClient released to inactive or dead pool. Closing.
[15:11:17] INFO [HConnectionManager ] Client CassandraClient released to inactive or dead pool. Closing.
[15:11:17] INFO [HConnectionManager ] Added host localhost(127.0.0.1):9160 to pool
You have set:
conf.setRetryDownedHostsDelayInSeconds(5);
Try to wait for more than 5 seconds after the restart; see the sketch below.
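For example, a simple way to give the downed-host retry service time to re-add the node is to back off in the catch block of the write loop (just a sketch of the idea, not a Hector-specific API):
try {
    mutator.insert(key, cf, HFactory.createColumn("column_" + count, "v_" + count, ss, ss));
    count++;
} catch (Exception e) {
    e.printStackTrace();
    try {
        // wait longer than retryDownedHostsDelayInSeconds (5s) so the host can be re-added to the pool
        Thread.sleep(6000);
    } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
    }
}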
Also, you may need to upgrade.
What size have you set for thrift_max_message_length_in_mb?
Kind regards.

Dovecot POP3 not working with Postfix

telnet localhost pop3
Trying ::1...
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
netstat -l
tcp 0 0 *:www *:* LISTEN
tcp 0 0 localhost.localdoma:ipp *:* LISTEN
tcp 0 0 *:smtp *:* LISTEN
tcp 0 0 localhost.localdo:mysql *:* LISTEN
When I run service dovecot start I get:
start: Rejected send message, 1 matched rules; type="method_call", sender=":1.553" (uid=1000 pid=26250 comm="start) interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))
In dovecot.conf:
protocols = imap imaps pop3 pop3s
disable_plaintext_auth = no
log_timestamp = "%Y-%m-%d %H:%M:%S "
mail_location = maildir:/var/spool/mail/%d/%n
mail_access_groups = mail
first_valid_uid = 106
first_valid_gid = 106
protocol imap {
}
protocol pop3 {
listen=*:110
pop3_uidl_format = %08Xu%08Xv
}
protocol lda {
postmaster_address = samer#aiu.com
mail_plugins = quota
log_path = /var/log/dovecot-deliver.log
info_log_path = /var/log/dovecot-deliver.log
}
auth default {
mechanisms = digest-md5 plain
passdb sql {
args = /etc/dovecot/dovecot-mysql.conf
}
userdb sql {
args = /etc/dovecot/dovecot-mysql.conf
}
user = root
}
If you get this message on the command line:
start: Rejected send message, 1 matched rules; type="method_call", sender=":1.553" (uid=1000 pid=26250 comm="start) interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))
It indicates that you're not doing service dovecot restart as root. So make sure you do:
sudo service dovecot restart
The directive protocols = imap imaps pop3 pop3s should be sufficient to activate pop3 with dovecot. You can add
listen = *
to be sure Dovecot will listen on all available interfaces. You can verify this with netstat -apn | grep 110. Are there any failures while starting Dovecot? Can you post the Dovecot-related logs?
By default, dovecot logs to syslog, you can explicitly specify the log files:
# Log file to use for error messages, instead of sending them to syslog.
# /dev/stderr can be used to log into stderr.
log_path = /var/log/dovecot.log
# Log file to use for informational and debug messages.
# Default is the same as log_path.
info_log_path = /var/log/dovecot.info.log
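Putting these pieces together, the relevant part of dovecot.conf for this setup might look like the following (a sketch based only on the settings above; adjust the log paths to taste):
protocols = imap imaps pop3 pop3s
listen = *
# explicit log files instead of syslog
log_path = /var/log/dovecot.log
info_log_path = /var/log/dovecot.info.log
protocol pop3 {
  listen = *:110
  pop3_uidl_format = %08Xu%08Xv
}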
