Connecting OpsCenter but no data - Cassandra

I have deployed a DataStax Cassandra cluster on Google Cloud. The cluster has 4 nodes across two different datacenters.
The servers are running fine and I am able to view OpsCenter, but there is no data in it, and I am getting the following errors.
This is a fresh deployment and all nodes are up and running; no tables have been created yet.
Error message from the opscenterd log:
[opscenterd] ERROR: Unhandled error in Deferred: com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas available for query at consistency ONE (1 required but only 0 alive)
address.yaml from one of the nodes:
stomp_interface: 10.128.x.x [ the OpsCenter IP ]
agent_rpc_interface: 10.132.x.x [ this node's IP ]
agent_rpc_broadcast_address: 10.132.x.x
cassandra.yaml from one of the nodes:
cluster_name: 'Test Cluster'
num_tokens: 64
hinted_handoff_enabled: true
max_hint_window_in_ms: 10800000 # 3 hours
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
hints_directory: /var/lib/cassandra/hints
hints_flush_period_in_ms: 10000
max_hints_file_size_in_mb: 128
batchlog_replay_throttle_in_kb: 1024
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
role_manager: com.datastax.bdp.cassandra.auth.DseRoleManager
roles_validity_in_ms: 2000
permissions_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
data_file_directories:
- /mnt/data
commitlog_directory: /mnt/commitlog
disk_failure_policy: stop
commit_failure_policy: stop
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
counter_cache_size_in_mb:
counter_cache_save_period: 7200
saved_caches_directory: /mnt/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
    # Addresses of hosts that are deemed contact points.
    # Cassandra nodes use this list of hosts to find each other and learn
    # the topology of the ring. You must change this if you are running
    # multiple nodes!
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # seeds is actually a comma-delimited list of addresses.
          # Ex: "<ip1>,<ip2>,<ip3>"
          - seeds: "XX.XXX.X.X" [ the node IP from the other datacenter ]
concurrent_reads: 32
concurrent_writes: 32
concurrent_counter_writes: 32
concurrent_materialized_view_writes: 32
memtable_allocation_type: heap_buffers
index_summary_capacity_in_mb:
index_summary_resize_interval_in_minutes: 60
trickle_fsync: true
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: SAME NODE IP
broadcast_address: SAME NODE IP
start_native_transport: true
native_transport_port: 9042
start_rpc: true
rpc_address: 0.0.0.0
rpc_port: 9160
broadcast_rpc_address: SAME NODE IP
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
opscenterd.conf from the OpsCenter host:
[webserver]
port = 8888
interface = 0.0.0.0
[authentication]
enabled = False
[stat_reporter]
nodetool status:
Datacenter: europe-west1-b
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 10.XXX.0.2 220 KB 64 ? 5381a371-c532-4be7-a7f5-80162bef4541 europe-west1-b
UN 10.XXX.0.3 148.6 KB 64 ? e5b3dfec-f9bf-4952-b48f-b5a82749a9fe europe-west1-b
Datacenter: us-east1-b
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 10.XXX.0.2 158.72 KB 64 ? b93980e0-3c28-4bb0-9e49-220cb009d44c us-east1-b
UN 10.XXX.0.3 192.85 KB 64 ? af242b71-b7b4-4345-9888-5b89eb5e6199 us-east1-b
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless

You have a DC in your replication settings ($dc) that doesn't exist, so when OpsCenter sends a request to read data, the coordinator responds with an UnavailableException.
Try updating your keyspace with ALTER KEYSPACE "OpsCenter" WITH replication = {'class': 'NetworkTopologyStrategy', 'us-east1-b': '2'} and running a full repair if you want to keep the data that's there.
If OpsCenter hasn't run yet, so there is no data you want to save, just drop the OpsCenter keyspace (DROP KEYSPACE "OpsCenter") and restart opscenterd and the agents (after the opscenter service has started back up).
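For reference, a hedged sketch of both options in cqlsh. The second datacenter name and the replication counts are taken from the nodetool status output above, so adjust them to your own topology:
-- Option 1: keep the existing data. Point replication only at datacenters
-- that actually exist, then repair so the replicas get rebuilt.
ALTER KEYSPACE "OpsCenter" WITH replication =
    {'class': 'NetworkTopologyStrategy', 'us-east1-b': 2, 'europe-west1-b': 2};
-- afterwards, on each node: nodetool repair OpsCenter

-- Option 2: nothing worth keeping. Drop the keyspace and let OpsCenter
-- recreate it with correct settings on restart.
DROP KEYSPACE "OpsCenter";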

Related

Unable to gossip with any seeds - Cassandra

I installed DataStax DDC 3.7 on my Windows machine (IP: 10.175.12.249) and made the following changes in my cassandra.yaml file:
cluster_name: 'Test_cluster'
listen_address: "10.175.12.249"
start_rpc: true
rpc_address: "0.0.0.0"
broadcast_rpc_address: "10.175.12.249"
seeds: "10.175.12.249"
endpoint_snitch: SimpleSnitch
Now, I started the service and Cassandra is running fine on the seed node.
I tried adding another node to my cluster, so I installed DataStax DDC 3.7 on another Windows machine (IP: 192.168.158.78) and made the following changes in its cassandra.yaml file:
cluster_name: 'Test_cluster'
listen_address: "192.168.158.78"
start_rpc: true
rpc_address: "0.0.0.0"
broadcast_rpc_address: "192.168.158.78"
seeds: "10.175.12.249"
endpoint_snitch: SimpleSnitch
Now, when I start the Cassandra service on my 2nd machine, I get the following error:
INFO 09:41:27 Cassandra version: 3.7.0
INFO 09:41:27 Thrift API version: 20.1.0
INFO 09:41:27 CQL supported versions: 3.4.2 (default: 3.4.2)
INFO 09:41:27 Initializing index summary manager with a memory pool size of 100 MB and a resize interval of 60 minutes
INFO 09:41:27 Starting Messaging Service on /192.168.158.78:7000 (Intel(R) Centrino(R) Advanced-N 6235)
INFO 09:41:27 Scheduling approximate time-check task with a precision of 10 milliseconds
Exception (java.lang.RuntimeException) encountered during startup: Unable to gossip with any seeds
java.lang.RuntimeException: Unable to gossip with any seeds
at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1386)
at org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:561)
at org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:855)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:725)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:625)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:370)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:585)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714)
ERROR 09:41:58 Exception encountered during startup
java.lang.RuntimeException: Unable to gossip with any seeds
at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1386) ~[apache-cassandra-3.7.0.jar:3.7.0]
at org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:561) ~[apache-cassandra-3.7.0.jar:3.7.0]
at org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:855) ~[apache-cassandra-3.7.0.jar:3.7.0]
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:725) ~[apache-cassandra-3.7.0.jar:3.7.0]
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:625) ~[apache-cassandra-3.7.0.jar:3.7.0]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:370) [apache-cassandra-3.7.0.jar:3.7.0]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:585) [apache-cassandra-3.7.0.jar:3.7.0]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:714) [apache-cassandra-3.7.0.jar:3.7.0]
WARN 09:41:58 No local state or state is in silent shutdown, not announcing shutdown
INFO 09:41:58 Waiting for messaging service to quiesce
Below is the output of nodetool status on the seed node (IP: 10.175.12.249):
C:\Program Files\DataStax-DDC\apache-cassandra\bin>nodetool status
Datacenter: datacenter1
========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
DN 192.168.158.78 ? 256 68.1% 6bc4e927-3def-4dfc-b5e7-31f5882ce475 rack1
UN 10.175.12.249 257.76 KiB 256 65.7% 300d731e-a27c-4922-aacc-6d42e8e49151 rack1
Thanks!!!
The - seeds: value in conf/cassandra.yaml should be the same (same IP or hostname) as listen_address: in the same conf file.
I came across this error when the IP addresses were not matching. Try keeping them the same and restarting the cluster. Hope this helps...
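A minimal sketch of that point, assuming the addresses from this question: on the seed machine (10.175.12.249), the entry in the seeds list and listen_address carry the same IP.
# cassandra.yaml on the seed node: both values must agree
listen_address: "10.175.12.249"
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.175.12.249"
The second node keeps its own listen_address (192.168.158.78) and points seeds at the seed node, as in the question; the two machines must also be able to reach each other on storage port 7000 for gossip to succeed.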

Cassandra 3.4 on VirtualBox not starting

I am using macOS. I created 3 virtual machines with VirtualBox and installed the CentOS 7 minimal version on each of them.
Then I installed Cassandra on each box. After installation it started fine, and cqlsh and nodetool status worked.
But then, when I tried to link the nodes to each other by editing the cassandra.yaml file, it started showing:
('Unable to connect to any servers', {'127.0.0.1': error(111, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})
I've edited the cassandra.yaml file as follows:
cluster_name: 'Home Cluster'
num_tokens: 256
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
- seeds: "192.168.56.102,192.168.56.103"
storage_port: 7000
listen_address: 192.168.56.102
rpc_address: 192.168.56.102
rpc_port: 9160
endpoint_snitch: SimpleSnitch
My /etc/hosts file contains:
192.168.56.102 node01
192.168.56.103 node02
192.168.56.104 node03
Please tell me what I'm doing wrong; my Cassandra cluster is not working.
Solution: I got the solution from AKKI. The problem was endpoint_snitch. I set endpoint_snitch: GossipingPropertyFileSnitch and that fixed it. My output is now as follows:
[root@dbnode2 ~]# nodetool status
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 192.168.56.101 107.38 KB 256 62.5% 0526a2e1-e6ce-4bb4-abeb-b9e33f72510a rack1
UN 192.168.56.102 106.85 KB 256 73.0% 0b7b76c2-27e8-490f-8274-571d00e60c20 rack1
UN 192.168.56.103 83.1 KB 256 64.5% 6c8d80ec-adbb-4be1-b255-f7a0b63e95c2 rack1
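For context: unlike SimpleSnitch, GossipingPropertyFileSnitch reads each node's datacenter and rack from conf/cassandra-rackdc.properties and gossips them to the rest of the cluster. A minimal sketch matching the dc1/rack1 names in the output above:
# conf/cassandra-rackdc.properties on every node
dc=dc1
rack=rack1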
I had faced a similar problem, and I tried the following solution:
In the cassandra.yaml file, check that you have:
start_rpc: true
Changed my endpoint snitch to:
endpoint_snitch: GossipingPropertyFileSnitch
Opened all the ports Cassandra uses on my CentOS:
Cassandra inter-node ports:
7000 - Cassandra inter-node cluster communication
7001 - Cassandra SSL inter-node cluster communication
7199 - Cassandra JMX monitoring port
Cassandra client ports:
9042 - Cassandra client port (native protocol)
9160 - Cassandra client port (Thrift)
Command to open a port on CentOS 7 (find the equivalent for your OS):
sudo firewall-cmd --zone=public --add-port=9042/tcp --permanent
sudo firewall-cmd --reload
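To open all of the ports from the table above in one go, here is a sketch for CentOS 7 with firewalld (the public zone is an assumption; use whichever zone your interface is in):
# open every Cassandra port listed above, then reload firewalld
for port in 7000 7001 7199 9042 9160; do
    sudo firewall-cmd --zone=public --add-port=${port}/tcp --permanent
done
sudo firewall-cmd --reload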
Then restart your systems.
Also, it seems that you are changing the cassandra.yaml file after starting Cassandra.
Make sure you edit the cassandra.yaml file on all nodes before starting Cassandra,
and remember to start the seed node first.

DataStax connection exception when using beeline or the Hive2 JDBC driver (Tableau)

I have installed DataStax Enterprise 2.8 on my dev VM (CentOS 7). The install went through smoothly and the single-node cluster is working great. But when I try to connect to the cluster using beeline or the Hive2 JDBC driver, I get the error shown below. My main aim is to connect Tableau using the DataStax Enterprise driver or the Spark SQL driver.
Error observed is:
ERROR 2016-04-14 17:57:56,915
org.apache.thrift.server.TThreadPoolServer: Error occurred during
processing of message. java.lang.RuntimeException:
org.apache.thrift.transport.TTransportException: Invalid status -128
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
~[libthrift-0.9.3.jar:0.9.3]
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
~[libthrift-0.9.3.jar:0.9.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
[na:1.7.0_99]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[na:1.7.0_99]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_99]
Caused by: org.apache.thrift.transport.TTransportException: Invalid status -128
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
~[libthrift-0.9.3.jar:0.9.3]
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184)
~[libthrift-0.9.3.jar:0.9.3]
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
~[libthrift-0.9.3.jar:0.9.3]
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
~[libthrift-0.9.3.jar:0.9.3]
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
~[libthrift-0.9.3.jar:0.9.3]
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
~[libthrift-0.9.3.jar:0.9.3]
... 4 common frames omitted
ERROR 2016-04-14 17:58:59,140 org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend: Application has been killed. Reason: Master removed our application: KILLED
My cassandra.yaml config:
cluster_name: 'Cluster1'
num_tokens: 256
hinted_handoff_enabled: true
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
batchlog_replay_throttle_in_kb: 1024
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
permissions_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
data_file_directories:
- /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
disk_failure_policy: stop
commit_failure_policy: stop
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
counter_cache_size_in_mb:
counter_cache_save_period: 7200
saved_caches_directory: /var/lib/cassandra/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.33.1.124"
concurrent_reads: 32
concurrent_writes: 32
concurrent_counter_writes: 32
memtable_allocation_type: heap_buffers
index_summary_capacity_in_mb:
index_summary_resize_interval_in_minutes: 60
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: 10.33.1.124
start_native_transport: true
native_transport_port: 9042
start_rpc: true
rpc_address: 10.33.1.124
rpc_port: 9160
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
column_index_size_in_kb: 64
batch_size_warn_threshold_in_kb: 64
compaction_throughput_mb_per_sec: 16
compaction_large_partition_warning_threshold_mb: 100
sstable_preemptive_open_interval_in_mb: 50
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 2000
counter_write_request_timeout_in_ms: 5000
cas_contention_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
cross_node_timeout: false
endpoint_snitch: com.datastax.bdp.snitch.DseSimpleSnitch
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
server_encryption_options:
    internode_encryption: none
    keystore: resources/dse/conf/.keystore
    keystore_password: cassandra
    truststore: resources/dse/conf/.truststore
    truststore_password: cassandra
client_encryption_options:
    enabled: false
    optional: false
    keystore: resources/dse/conf/.keystore
    keystore_password: cassandra
internode_compression: dc
inter_dc_tcp_nodelay: false
When connecting with beeline, I get the error:
dse beeline
Beeline version 0.12.0.11 by Apache Hive
beeline> !connect jdbc:hive2://10.33.1.124:10000
scan complete in 10ms
Connecting to jdbc:hive2://10.33.1.124:10000
Enter username for jdbc:hive2://10.33.1.124:10000: cassandra
Enter password for jdbc:hive2://10.33.1.124:10000: *********
Error: Invalid URL: jdbc:hive2://10.33.1.124:10000 (state=08S01,code=0)
0: jdbc:hive2://10.33.1.124:10000> !connect jdbc:hive2://10.33.1.124:10000
Connecting to jdbc:hive2://10.33.1.124:10000
Enter username for jdbc:hive2://10.33.1.124:10000:
Enter password for jdbc:hive2://10.33.1.124:10000:
Error: Invalid URL: jdbc:hive2://10.33.1.124:10000 (state=08S01,code=0)
1: jdbc:hive2://10.33.1.124:10000>
I see similar errors when connecting through Tableau as well.
The JDBC driver connects to the Spark SQL Thrift server. If you do not start it, you cannot connect to it.
dse spark-sql-thriftserver
/Users/russellspitzer/dse/bin/dse:
usage: dse spark-sql-thriftserver <command> [Spark SQL Thriftserver Options]
Available commands:
start Start Spark SQL Thriftserver
stop Stops Spark SQL Thriftserver
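As a hedged sketch of the whole sequence (the host and port are the ones from the question; 10000 is the usual HiveServer2 default):
dse spark-sql-thriftserver start
dse beeline
beeline> !connect jdbc:hive2://10.33.1.124:10000
Once the Thrift server is up, the same jdbc:hive2:// URL should also work from Tableau's Spark SQL connector.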

Cassandra on a multi-node cluster -- nodes not linking to the seed

Below is my Cassandra setup:
192.168.80.115 (Seed)
192.168.80.116 (Node1)
192.168.80.117 (Node2)
192.168.80.118 (Node3)
I have installed Cassandra on all the nodes.
Below is the configuration of each node's cassandra.yaml:
cassandra.yaml - 192.168.80.115 (Seed)
======================================
cluster_name: 'Test Cluster'
initial_token: 0
seed_provider:
- seeds: "192.168.80.115"
listen_address: 192.168.80.115
rpc_address: 0.0.0.0
endpoint_snitch: RackInferringSnitch
cassandra.yaml - 192.168.80.116 (Node1)
========================================
cluster_name: 'Test Cluster'
initial_token: -4611686018427387904
seed_provider:
- seeds: "192.168.80.115"
listen_address: 192.168.80.116
rpc_address: 0.0.0.0
endpoint_snitch: RackInferringSnitch
cassandra.yaml - 192.168.80.117 (Node2)
========================================
cluster_name: 'Test Cluster'
initial_token: 4611686018427387904
seed_provider:
- seeds: "192.168.80.115"
listen_address: 192.168.80.117
rpc_address: 0.0.0.0
endpoint_snitch: RackInferringSnitch
cassandra.yaml - 192.168.80.118 (Node3)
========================================
cluster_name: 'Test Cluster'
initial_token: -9223372036854775808
seed_provider:
- seeds: "192.168.80.115"
listen_address: 192.168.80.118
rpc_address: 0.0.0.0
endpoint_snitch: RackInferringSnitch
When I start Cassandra and check the node status on the seed, I am able to see only my localhost and not the other nodes.
192.168.80.115 Seed
Datacenter: datacenter1
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 127.0.0.1 96.98 KB 256 ? deb3f882-1886-4e09-be08-21ebd7e99c8d rack1
And the status on the other nodes is as below.
192.168.80.116 Node 1
Datacenter: datacenter1
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 127.0.0.1 116.33 KB 256 ? 21a310ee-5652-4179-8b59-d1c7dcd538ee rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
192.168.80.117 Node 2
Datacenter: datacenter1
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 127.0.0.1 156.95 KB 256 ? b64a5a17-21d0-4cf6-bbd6-1eff8d8feb93 rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
192.168.80.118 Node 3
Datacenter: datacenter1
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 127.0.0.1 157.04 KB 256 ? 43a24dea-26cc-4094-9da7-2b26b8c5f7e5 rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
Why am I not able to see the worker nodes in the seed node's status?

sstableloader does not transmit the data, and refers to weird ports

I want to bulk-load my Cassandra data from node A to node B.
When I set the 'listen_address' in each cassandra.yaml file to localhost,
no error shows on the console, but the data is never transmitted.
When I set each node's listen address to its own local network [eth1 IPv4] address (192.168....), I get the following error.
From this error log I can read that the application is trying to access port 1..4,
and I have no idea what on earth is going on.
Each node is a virtual machine on the VirtualBox hypervisor. Both run CentOS.
[vagrant@localhost conf]$ ../bin/sstableloader -v -d 192.168.33.12 -p 9160 /db/data/moomin/hoahoa2/
Streaming revelant part of /db/data/moomin/hoahoa2/moomin-hoahoa2-hf-69-Data.db to [/192.168.33.12]
progress: [/192.168.33.12 0/1 (0)] [total: 0 - 0MB/s (avg: 0MB/s)] WARN 16:55:42,655 Failed attempt 1 to connect to /192.168.33.12 to stream /db/data/moomin/hoahoa2/moomin-hoahoa2-hf-69-Data.db sections=1 progress=0/378000000 - 0%. Retrying in 4000 ms. (java.net.SocketException: Invalid argument or cannot assign requested address)
progress: [/192.168.33.12 0/1 (0)] [total: 0 - 0MB/s (avg: 0MB/s)] WARN 16:55:46,658 Failed attempt 2 to connect to /192.168.33.12 to stream /db/data/moomin/hoahoa2/moomin-hoahoa2-hf-69-Data.db sections=1 progress=0/378000000 - 0%. Retrying in 8000 ms. (java.net.SocketException: Invalid argument or cannot assign requested address)
progress: [/192.168.33.12 0/1 (0)] [total: 0 - 0MB/s (avg: 0MB/s)] WARN 16:55:54,666 Failed attempt 3 to connect to /192.168.33.12 to stream /db/data/moomin/hoahoa2/moomin-hoahoa2-hf-69-Data.db sections=1 progress=0/378000000 - 0%. Retrying in 16000 ms. (java.net.SocketException: Invalid argument or cannot assign requested address)
progress: [/192.168.33.12 0/1 (0)] [total: 0 - 0MB/s (avg: 0MB/s)]
Here is my cassandra.yaml (the target node's cassandra.yaml is configured the same way):
# Address to bind to and tell other Cassandra nodes to connect to. You
# _must_ change this if you want multiple nodes to be able to
# communicate!
#
# Leaving it blank leaves it up to InetAddress.getLocalHost(). This
# will always do the Right Thing *if* the node is properly configured
# (hostname, name resolution, etc), and the Right Thing is to use the
# address associated with the hostname (it might not be).
#
# Setting this to 0.0.0.0 is always wrong.
listen_address: 192.168.33.12
#listen_address: localhost
rpc_address: 0.0.0.0
# port for Thrift to listen for clients on
rpc_port: 9160
# enable or disable keepalive on rpc connections
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
thrift_max_message_length_in_mb: 16
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
column_index_size_in_kb: 64
in_memory_compaction_limit_in_mb: 64
multithreaded_compaction: false
compaction_throughput_mb_per_sec: 16
compaction_preheat_key_cache: true
rpc_timeout_in_ms: 10000
endpoint_snitch: org.apache.cassandra.locator.PropertyFileSnitch
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
index_interval: 128
Can anybody give me advice? I am really struggling with this.
