java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method) ~[na:1.8.0_252]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43) ~[na:1.8.0_252]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.8.0_252]
at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.8.0_252]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:377) ~[na:1.8.0_252]
at org.apache.kafka.common.network.SslTransportLayer.readFromSocketChannel(SslTransportLayer.java:205) ~[kafka-clients-2.3.0.jar:na]
at org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:528) ~[kafka-clients-2.3.0.jar:na]
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:94) ~[kafka-clients-2.3.0.jar:na]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) ~[kafka-clients-2.3.0.jar:na]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) ~[kafka-clients-2.3.0.jar:na]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) [kafka-clients-2.3.0.jar:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) [kafka-clients-2.3.0.jar:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:483) [kafka-clients-2.3.0.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539) [kafka-clients-2.3.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:307) [kafka-clients-2.3.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238) [kafka-clients-2.3.0.jar:na]
Events get published with a plain Kafka client program, but not with Axon. The configurations are as follows:
axon.kafka.client-id=producer
axon.kafka.default-topic=test
axon.kafka.producer.transaction-id-prefix=deafultTxPrefix
axon.kafka.bootstrap-servers=****.servicebus.windows.net:9093
axon.kafka.properties.security.protocol=SASL_SSL
axon.kafka.properties.sasl.mechanism=PLAIN
axon.kafka.properties.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://****.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessPolicy;SharedAccessKey=***********;EntityPath=test";
Other dependencies:
org.springframework.boot:spring-boot-starter-parent:2.1.4.RELEASE
org.apache.kafka:kafka-clients:2.3.0
Could you please check the logs for the producer properties logged both with and without Axon? I think there might be a property that's not correctly 'forwarded' to the producer with Axon.
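A quick way to compare the two is to publish with a bare kafka-clients producer using exactly the same properties; kafka-clients logs the effective "ProducerConfig values:" block at INFO on startup, so you can diff that block between the plain run and the Axon run. A minimal sketch (the password value is a placeholder for your connection string):
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PlainProducerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "****.servicebus.windows.net:9093");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"$ConnectionString\" password=\"<connection-string>\";");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // The producer logs its effective configuration at INFO when constructed.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "key", "value")).get();
        }
    }
}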
Related
How do I start the Jenkins process in offline mode on one of my servers?
I am trying to start Jenkins by running the command java -jar jenkins.war, but it fails at the warning below because of no connectivity.
Is there any way to skip this plugin upgrade over the internet and bring Jenkins up and running?
2021-04-27 16:53:43.490+0000 [id=64] WARNING hudson.model.UpdateCenter#updateDefaultSite: Upgrading Jenkins. Failed to update the default Update Site 'default'. Plugin upgrades may fail.
2021-04-27 16:53:43.490+0000 [id=64] WARNING hudson.model.UpdateCenter#updateDefaultSite: Upgrading Jenkins. Failed to update the default Update Site 'default'. Plugin upgrades may fail.
java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:666)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:264)
at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1162)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1056)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1570)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1498)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:268)
at hudson.model.DownloadService.loadJSON(DownloadService.java:116)
at hudson.model.UpdateSite.updateDirectlyNow(UpdateSite.java:218)
at hudson.model.UpdateSite.updateDirectlyNow(UpdateSite.java:213)
at hudson.model.UpdateCenter.updateDefaultSite(UpdateCenter.java:2611)
at jenkins.install.SetupWizard.init(SetupWizard.java:212)
at jenkins.install.InstallState$InitialSecuritySetup.initializeState(InstallState.java:168)
at jenkins.model.Jenkins.setInstallState(Jenkins.java:1104)
at jenkins.install.InstallUtil.proceedToNextStateFrom(InstallUtil.java:98)
at jenkins.install.InstallState$Unknown.initializeState(InstallState.java:86)
at jenkins.model.Jenkins$16.run(Jenkins.java:3356)
at org.jvnet.hudson.reactor.TaskGraphBuilder$TaskImpl.run(TaskGraphBuilder.java:169)
at org.jvnet.hudson.reactor.Reactor.runTask(Reactor.java:296)
at jenkins.model.Jenkins$5.runTask(Jenkins.java:1131)
at org.jvnet.hudson.reactor.Reactor$2.run(Reactor.java:214)
at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:117)
at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-04-27 16:53:43.491+0000 [id=53] INFO jenkins.InitReactorRunner$1#onAttained: Completed initialization
2021-04-27 16:53:43.494+0000 [id=92] INFO hudson.util.Retrier#start: The attempt #1 to do the action check updates server failed with an allowed exception:
java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:666)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:264)
at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1162)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1056)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1570)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1498)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:268)
at hudson.model.DownloadService.loadJSON(DownloadService.java:116)
at hudson.model.UpdateSite.updateDirectlyNow(UpdateSite.java:218)
at hudson.model.UpdateSite.updateDirectlyNow(UpdateSite.java:213)
at hudson.PluginManager.checkUpdatesServer(PluginManager.java:1882)
at hudson.util.Retrier.start(Retrier.java:63)
at hudson.PluginManager.doCheckUpdatesServer(PluginManager.java:1853)
at jenkins.DailyCheck.execute(DailyCheck.java:93)
at hudson.model.AsyncPeriodicWork.lambda$doRun$0(AsyncPeriodicWork.java:100)
at java.lang.Thread.run(Thread.java:748)
2021-04-27 16:53:43.494+0000 [id=92] INFO hudson.util.Retrier#start: Calling the listener of the allowed exception 'connect timed out' at the attempt #1 to do the action check updates server
2021-04-27 16:53:43.497+0000 [id=92] INFO hudson.util.Retrier#start: Attempted the action check updates server for 1 time(s) with no success
2021-04-27 16:53:43.499+0000 [id=92] SEVERE hudson.PluginManager#doCheckUpdatesServer: Error checking update sites for 1 attempt(s). Last exception was: SocketTimeoutException: connect timed out
2021-04-27 16:53:43.504+0000 [id=36] INFO hudson.WebAppMain$3#run: Jenkins is fully up and running
2021-04-27 16:53:43.504+0000 [id=92] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$0: Finished Download metadata. 20,249 ms
2021-04-27 16:54:13.279+0000 [id=106] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$0: Started Periodic background build discarder
2021-04-27 16:54:13.280+0000 [id=106] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$0: Finished Periodic background build discarder. 1 ms
Make sure to pass all of these arguments before the -jar argument, otherwise they will be ignored. Example:
java -Dhudson.footerURL=http://example.org -jar jenkins.war
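For an offline startup specifically, two system properties are commonly suggested (a sketch; verify both against your Jenkins version): jenkins.install.runSetupWizard=false skips the setup wizard whose initialization triggers the update-site refresh, and hudson.model.UpdateCenter.never=true disables update-center checks entirely.
java -Djenkins.install.runSetupWizard=false -Dhudson.model.UpdateCenter.never=true -jar jenkins.war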
After running the command
spark-submit --class org.apache.spark.examples.SparkPi --proxy-user yarn --master yarn --deploy-mode cluster --driver-memory 4g --executor-memory 2g --executor-cores 1 --queue default ./examples/jars/spark-examples_2.11-2.3.0.jar 10000
I get this in the output and it keeps on retrying. Where am I going wrong? Am I missing some configuration?
I have created a new user for YARN and am running as that user.
WARN Utils:66 - Your hostname, ukaleem-HP-EliteBook-850-G3 resolves to a loopback address: 127.0.1.1; using 10.XX.XX.XX instead (on interface enp0s31f6)
2018-06-14 16:50:41 WARN Utils:66 - Set SPARK_LOCAL_IP if you need to bind to another address
Warning: Local jar /home/yarn/Documents/Scala-Examples/./examples/jars/spark-examples_2.11-2.3.0.jar does not exist, skipping.
2018-06-14 16:50:42 INFO RMProxy:98 - Connecting to ResourceManager at /0.0.0.0:8032
2018-06-14 16:50:44 INFO Client:871 - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
And in the end, it gives the exception
Exception in thread "main" java.net.ConnectException: Call From ukaleem-HP-EliteBook-850-G3/127.0.1.1 to 0.0.0.0:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.GeneratedConstructorAccessor4.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
at org.apache.hadoop.ipc.Client.call(Client.java:1479)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy8.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:206)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy9.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:487)
at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:155)
at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:155)
at org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
at org.apache.spark.deploy.yarn.Client.logInfo(Client.scala:59)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:154)
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1146)
at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1518)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:179)
at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:177)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:177)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
at org.apache.hadoop.ipc.Client.call(Client.java:1451)
... 28 more
2018-06-14 17:10:53 INFO ShutdownHookManager:54 - Shutdown hook called
2018-06-14 17:10:53 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-5bddb7f3-165f-451c-8ab4-bb7729f4237c
EDIT: After adding config files to my spark/conf dir, I now get this error.
The files I added are
core-site.xml
dfs.hosts
masters
slaves
yarn-site.xml
and some more. What I understand is that I only need yarn-site.xml to tell Spark the location of the YARN cluster (IDs, address, hostname, etc.).
All this time I had been thinking that even when we want to submit a job on YARN, these configs need to go in the /etc/hadoop dir, not in spark/conf. What is the purpose of installing Hadoop then (other than communicating)?
And a follow-up question: if the configs need to go in spark/conf, should HADOOP_CONF_DIR and YARN_CONF_DIR point to the etc/hadoop dir or to spark/conf?
INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
18/06/19 11:04:50 INFO retry.RetryInvocationHandler: Exception while invoking getClusterMetrics of class ApplicationClientProtocolPBClientImpl over rm2 after 1 fail over attempts. Trying to fail over after sleeping for 38176ms.
java.net.ConnectException: Call From ukaleem-HP-EliteBook-850-G3/127.0.1.1 to svc-hadoop-mgnt-pre-c2-01.jamba.net:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
at org.apache.hadoop.ipc.Client.call(Client.java:1479)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy13.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:206)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy14.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:487)
at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:155)
at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:155)
at org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
at org.apache.spark.deploy.yarn.Client.logInfo(Client.scala:59)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:154)
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1146)
at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1518)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:179)
at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:177)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:177)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
at org.apache.hadoop.ipc.Client.call(Client.java:1451)
... 29 more
Assuming you have a fully distributed YARN cluster: your spark-submit script is unable to find the configuration for the YARN ResourceManager (basically the YARN master node). Ensure that HADOOP_CONF_DIR is properly set in your environment and that it points to your cluster's configuration, specifically your yarn-site.xml.
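For example, a sketch (the path is an assumption; point it at wherever your cluster's client configs actually live):
export HADOOP_CONF_DIR=/etc/hadoop/conf
spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster ./examples/jars/spark-examples_2.11-2.3.0.jar 10000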
Edit: more detail
The Hadoop package comes with both server and client software. The server software comprises the many daemons that make up the cluster. If your workstation is acting as a client (using that term loosely, not fully related to Spark's --deploy-mode), then the Hadoop client software must know the network locations of the server daemons running in the cluster. If your yarn-site.xml is empty, then it is pulling its default values from yarn-default.xml (which is hard-coded, I believe).
Assuming your cluster is not running in HA mode and has a mostly default configuration, your workstation's yarn-site.xml should contain at least an entry like the following:
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>rm-host.yourdomain.com</value>
</property>
Obviously, replace the hostname with the hostname where your actual ResourceManager is running. And of course, any Spark interaction with HDFS will require a properly configured hdfs-site.xml, etc.
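One caveat: your second stack trace shows a failover attempt to rm2, which suggests the cluster actually is running in HA mode. In that case the client-side yarn-site.xml needs the HA entries instead; a sketch (hostnames are placeholders):
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>rm1-host.yourdomain.com</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>rm2-host.yourdomain.com</value>
</property>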
Some cluster-managing software will have something like "generate client configs" (thinking of my Cloudera experience specifically), which will give you a .tar.gz with all of the config files correctly populated to access the cluster from an external workstation.
Further recommendations:
If you plan to do Spark on YARN a lot in this cluster, Spark recommends making sure you have the external shuffle service configured to launch with your YARN NodeManagers. (Please bear in mind that this config directive has to be present in the yarn-site.xml where YARN's NodeManager services are running, not on your workstation.) A sketch follows.
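A sketch of those yarn-site.xml entries, per the Spark on YARN documentation (the aux-services list depends on what your NodeManagers already run):
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>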
If you are running this on your local machine, update your /etc/hosts file and enter 127.0.0.1 against your hostname.
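For example, with the hostname from the logs above, the /etc/hosts entry would be:
127.0.0.1 ukaleem-HP-EliteBook-850-G3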
Hi, I am getting this exception in my member logs. What might be the reason for it?
2018-04-17 15:38:59.734 - WARN --- [hz._hzInstance_1_dev.IO.thread-in-2] com.hazelcast.nio.tcp.TcpIpConnection : [172.16.42.193]:5701 [dev] [3.9.3] Connection[id=30, /172.16.42.193:5701->/172.16.15.16:54266, endpoint=[172.16.15.16]:5701, alive=false, type=MEMBER] closed. Reason: Exception in Connection[id=30, /172.16.42.193:5701->/172.16.15.16:54266, endpoint=[172.16.15.16]:5701, alive=true, type=MEMBER], thread=hz._hzInstance_1_dev.IO.thread-in-2
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at com.hazelcast.internal.networking.AbstractChannel.read(AbstractChannel.java:94)
at com.hazelcast.internal.networking.nio.NioChannelReader.handle(NioChannelReader.java:127)
at com.hazelcast.internal.networking.nio.NioThread.handleSelectionKey(NioThread.java:401)
at com.hazelcast.internal.networking.nio.NioThread.handleSelectionKeys(NioThread.java:386)
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(Unknown Source)
at com.hazelcast.internal.networking.nio.NioThread.run(Unknown Source)
The "Connection reset by peer" message essentially means the connection was closed by the other side, which in this case is the other member, [172.16.15.16]:5701. What do you see in the logs of that member?
It might be due to a forced or otherwise ungraceful shutdown on the other side, or a network error.
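If the other member is being stopped programmatically, a graceful shutdown avoids this warning on its peers. A sketch against the Hazelcast 3.x API:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class ShutdownExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        // ... use the instance ...
        // Graceful: notifies the cluster before closing connections.
        hz.shutdown();
        // Forceful alternative; peers typically log "Connection reset by peer":
        // hz.getLifecycleService().terminate();
    }
}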
I'm trying to connect Kafka with a Cassandra sink, but it seems there is a connection issue.
I first got this report:
Validating connector properties before posting
Connector properties valid. Creating connector cassandra-sink-orders
java.net.ConnectException: Verbindungsaufbau abgelehnt (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1202)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1138)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1032)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:966)
at scalaj.http.HttpRequest$$anonfun$8.apply(Http.scala:426)
at scalaj.http.HttpRequest$$anonfun$8.apply(Http.scala:424)
at scalaj.http.HttpRequest.exec(Http.scala:347)
at scalaj.http.HttpRequest.execute(Http.scala:322)
at scalaj.http.HttpRequest.asString(Http.scala:537)
at com.datamountaineer.connect.tools.ScalajHttpClient$.request(RestKafkaConnectApi.scala:39)
at com.datamountaineer.connect.tools.RestKafkaConnectApi.com$datamountaineer$connect$tools$RestKafkaConnectApi$$req(RestKafkaConnectApi.scala:129)
at com.datamountaineer.connect.tools.RestKafkaConnectApi$$anonfun$addConnector$1.apply(RestKafkaConnectApi.scala:167)
at com.datamountaineer.connect.tools.RestKafkaConnectApi$$anonfun$addConnector$1.apply(RestKafkaConnectApi.scala:168)
at scala.util.Try$.apply(Try.scala:192)
at com.datamountaineer.connect.tools.RestKafkaConnectApi.addConnector(RestKafkaConnectApi.scala:167)
at com.datamountaineer.connect.tools.ExecuteCommand$.apply(Cli.scala:55)
at com.datamountaineer.connect.tools.Cli$.main(Cli.scala:167)
at com.datamountaineer.connect.tools.Cli.main(Cli.scala)
I looked it up and changed some settings that were recommended. In cassandra.yaml, now (before):
start_rpc: true (was false)
rpc_address: 0.0.0.0 (was localhost)
broadcast_rpc_address: 255.255.255.255 (was commented out)
I also chmod'ed /var/lib/cassandra and /var/log/cassandra to 777 to have all permissions.
Now I'm getting another issue report:
Validating connector properties before posting
Connector properties valid. Creating connector cassandra-sink-orders
java.net.SocketTimeoutException: Read timed out
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:192)
at sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:192)
at java.security.AccessController.doPrivileged(Native Method)
at sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1920)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1490)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at scalaj.http.HttpRequest.exec(Http.scala:351)
at scalaj.http.HttpRequest.execute(Http.scala:322)
at scalaj.http.HttpRequest.asString(Http.scala:537)
at com.datamountaineer.connect.tools.ScalajHttpClient$.request(RestKafkaConnectApi.scala:39)
at com.datamountaineer.connect.tools.RestKafkaConnectApi.com$datamountaineer$connect$tools$RestKafkaConnectApi$$req(RestKafkaConnectApi.scala:129)
at com.datamountaineer.connect.tools.RestKafkaConnectApi$$anonfun$addConnector$1.apply(RestKafkaConnectApi.scala:167)
at com.datamountaineer.connect.tools.RestKafkaConnectApi$$anonfun$addConnector$1.apply(RestKafkaConnectApi.scala:168)
at scala.util.Try$.apply(Try.scala:192)
at com.datamountaineer.connect.tools.RestKafkaConnectApi.addConnector(RestKafkaConnectApi.scala:167)
at com.datamountaineer.connect.tools.ExecuteCommand$.apply(Cli.scala:55)
at com.datamountaineer.connect.tools.Cli$.main(Cli.scala:167)
at com.datamountaineer.connect.tools.Cli.main(Cli.scala)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1569)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
at scalaj.http.HttpRequest.exec(Http.scala:349)
... 11 more
I hope there is someone out there who can help.
Thanks in advance.
I downloaded WSO2 CEP 4.0.0-SNAPSHOT from Jenkins around two weeks ago.
When I configure the Cassandra output publisher from CEP, I tie it to an event stream. When I test the event stream, the Cassandra output publisher is invoked and I get an exception. Below is the entire log with the exception:
log4j: reset attribute= "false".
log4j: Threshold ="null".
log4j: Level value for root is [DEBUG].
log4j: root level set to DEBUG
log4j: Class name: [org.apache.log4j.ConsoleAppender]
log4j: Parsing layout of class: "org.apache.log4j.PatternLayout"
log4j: Setting property [conversionPattern] to [%d{ABSOLUTE} %-5p [%c{1}] %m%n].
log4j: Adding appender named [myAppender] to category [root].
17:12:16,449 INFO [CassandraHostRetryService] Downed Host Retry service started with queue size -1 and retry delay 10s
17:12:16,517 INFO [JmxMonitor] Registering JMX me.prettyprint.cassandra.service_EventPublisher_risultato_cassandra:ServiceType=hector,MonitorType=hector
17:12:16,543 DEBUG [HThriftClient] Creating a new thrift connection to localhost(127.0.0.1):9042
17:12:16,548 DEBUG [HThriftClient] Creating a new thrift connection to localhost(127.0.0.1):9042
17:12:16,549 DEBUG [HThriftClient] Creating a new thrift connection to localhost(127.0.0.1):9042
17:12:16,550 DEBUG [HThriftClient] Creating a new thrift connection to localhost(127.0.0.1):9042
17:12:16,551 DEBUG [HThriftClient] Creating a new thrift connection to localhost(127.0.0.1):9042
17:12:16,552 DEBUG [HThriftClient] Creating a new thrift connection to localhost(127.0.0.1):9042
17:12:16,553 DEBUG [HThriftClient] Creating a new thrift connection to localhost(127.0.0.1):9042
17:12:16,554 DEBUG [HThriftClient] Creating a new thrift connection to localhost(127.0.0.1):9042
17:12:16,558 DEBUG [HThriftClient] Creating a new thrift connection to localhost(127.0.0.1):9042
17:12:16,563 DEBUG [HThriftClient] Creating a new thrift connection to localhost(127.0.0.1):9042
17:12:16,569 DEBUG [HThriftClient] Creating a new thrift connection to localhost(127.0.0.1):9042
17:12:16,576 DEBUG [HThriftClient] Creating a new thrift connection to localhost(127.0.0.1):9042
17:12:16,579 DEBUG [HThriftClient] Creating a new thrift connection to localhost(127.0.0.1):9042
17:12:16,584 DEBUG [HThriftClient] Creating a new thrift connection to localhost(127.0.0.1):9042
17:12:16,589 DEBUG [HThriftClient] Creating a new thrift connection to localhost(127.0.0.1):9042
17:12:16,591 DEBUG [HThriftClient] Creating a new thrift connection to localhost(127.0.0.1):9042
17:12:16,592 DEBUG [ConcurrentHClientPool] Concurrent Host pool started with 16 active clients; max: 50 exhausted wait: 0
17:12:16,641 DEBUG [HThriftClient] Closing client CassandraClient<localhost:9042-1>
17:12:16,643 ERROR [HConnectionManager] MARK HOST AS DOWN TRIGGERED for host localhost(127.0.0.1):9042
17:12:16,645 ERROR [HConnectionManager] Pool state on shutdown: <ConcurrentCassandraClientPoolByHost>:{localhost(127.0.0.1):9042}; IsActive?: true; Active: 1; Blocked: 0; Idle: 15; NumBeforeExhausted: 49
17:12:16,646 INFO [ConcurrentHClientPool] Shutdown triggered on <ConcurrentCassandraClientPoolByHost>:{localhost(127.0.0.1):9042}
17:12:16,647 DEBUG [HThriftClient] Closing client CassandraClient<localhost:9042-5>
17:12:16,650 DEBUG [HThriftClient] Closing client CassandraClient<localhost:9042-15>
17:12:16,650 DEBUG [HThriftClient] Closing client CassandraClient<localhost:9042-4>
17:12:16,652 DEBUG [HThriftClient] Closing client CassandraClient<localhost:9042-11>
17:12:16,653 DEBUG [HThriftClient] Closing client CassandraClient<localhost:9042-12>
17:12:16,655 DEBUG [HThriftClient] Closing client CassandraClient<localhost:9042-14>
17:12:16,655 DEBUG [HThriftClient] Closing client CassandraClient<localhost:9042-7>
17:12:16,656 DEBUG [HThriftClient] Closing client CassandraClient<localhost:9042-13>
17:12:16,658 DEBUG [HThriftClient] Closing client CassandraClient<localhost:9042-9>
17:12:16,659 DEBUG [HThriftClient] Closing client CassandraClient<localhost:9042-6>
17:12:16,659 DEBUG [HThriftClient] Closing client CassandraClient<localhost:9042-16>
17:12:16,661 DEBUG [HThriftClient] Closing client CassandraClient<localhost:9042-2>
17:12:16,663 DEBUG [HThriftClient] Closing client CassandraClient<localhost:9042-10>
17:12:16,664 DEBUG [HThriftClient] Closing client CassandraClient<localhost:9042-3>
17:12:16,667 DEBUG [HThriftClient] Closing client CassandraClient<localhost:9042-8>
17:12:16,669 INFO [ConcurrentHClientPool] Shutdown complete on <ConcurrentCassandraClientPoolByHost>:{localhost(127.0.0.1):9042}
17:12:16,669 INFO [CassandraHostRetryService] Host detected as down was added to retry queue: localhost(127.0.0.1):9042
17:12:16,670 DEBUG [HThriftClient] Creating a new thrift connection to localhost(127.0.0.1):9042
17:12:16,670 WARN [HConnectionManager] Could not fullfill request on this host CassandraClient<localhost:9042-1>
17:12:16,671 WARN [HConnectionManager] Exception:
me.prettyprint.hector.api.exceptions.HectorTransportException: org.apache.thrift.transport.TTransportException: Read a negative frame size (-2080374784)!
at me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:39)
at me.prettyprint.cassandra.service.AbstractCluster$4.execute(AbstractCluster.java:195)
at me.prettyprint.cassandra.service.AbstractCluster$4.execute(AbstractCluster.java:185)
at me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:104)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:253)
at me.prettyprint.cassandra.service.AbstractCluster.describeKeyspace(AbstractCluster.java:199)
at it.vige.test.cassandra.CassandraWso2Test.cassandraConnection(CassandraWso2Test.java:46)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
Caused by: org.apache.thrift.transport.TTransportException: Read a negative frame size (-2080374784)!
at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:133)
at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.cassandra.thrift.Cassandra$Client.recv_describe_keyspace(Cassandra.java:1241)
at org.apache.cassandra.thrift.Cassandra$Client.describe_keyspace(Cassandra.java:1228)
at me.prettyprint.cassandra.service.AbstractCluster$4.execute(AbstractCluster.java:190)
... 28 more
17:12:16,675 ERROR [CassandraHostRetryService] Downed Host retry failed attempt to verify CassandraHost
org.apache.thrift.transport.TTransportException: Read a negative frame size (-2080374784)!
at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:133)
at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.cassandra.thrift.Cassandra$Client.recv_describe_cluster_name(Cassandra.java:1101)
at org.apache.cassandra.thrift.Cassandra$Client.describe_cluster_name(Cassandra.java:1089)
at me.prettyprint.cassandra.connection.CassandraHostRetryService.verifyConnection(CassandraHostRetryService.java:214)
at me.prettyprint.cassandra.connection.CassandraHostRetryService.access$100(CassandraHostRetryService.java:24)
at me.prettyprint.cassandra.connection.CassandraHostRetryService$1.run(CassandraHostRetryService.java:75)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
17:12:16,683 INFO [HConnectionManager] Client CassandraClient<localhost:9042-1> released to inactive or dead pool. Closing.
17:12:16,683 DEBUG [HThriftClient] Closing client CassandraClient<localhost:9042-1>
17:12:16,684 ERROR [CassandraWso2Test] Test fallito
me.prettyprint.hector.api.exceptions.HectorException: All host pools marked down. Retry burden pushed out to client.
at me.prettyprint.cassandra.connection.HConnectionManager.getClientFromLBPolicy(HConnectionManager.java:390)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:244)
at me.prettyprint.cassandra.service.AbstractCluster.describeKeyspace(AbstractCluster.java:199)
at it.vige.test.cassandra.CassandraWso2Test.cassandraConnection(CassandraWso2Test.java:46)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
Below is how I configure Cassandra 2.2.1 in conf/cassandra.yaml:
#cluster_name: 'Test Cluster'
cluster_name: 'EventPublisher_risultato_cassandra'
and how I start it:
.../bin/cassandra
Below is how the output publisher is configured in the CEP:
<?xml version="1.0" encoding="UTF-8"?>
<eventPublisher name="risultato_cassandra" statistics="disable"
    trace="disable" xmlns="http://wso2.org/carbon/eventpublisher">
  <from streamName="gpsspace_entrati" version="1.0.0"/>
  <mapping customMapping="disable" type="map"/>
  <to eventAdapterType="cassandra">
    <property name="key.space.name">seme</property>
    <property name="port">9042</property>
    <property name="hosts">localhost</property>
    <property name="column.family.name">seme</property>
  </to>
</eventPublisher>
Here is test code that emulates the error, using:
<dependency>
  <groupId>org.hectorclient.wso2</groupId>
  <artifactId>hector-core</artifactId>
  <version>1.1.4.wso2v1</version>
</dependency>
as a dependency:
package it.vige.test.cassandra;

import static org.junit.Assert.fail;
import static org.slf4j.LoggerFactory.getLogger;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;
import org.slf4j.Logger;

import me.prettyprint.cassandra.service.CassandraHostConfigurator;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.ddl.KeyspaceDefinition;
import me.prettyprint.hector.api.factory.HFactory;

public class CassandraWso2Test {

    private Logger logger = getLogger(getClass());

    @Test
    public void cassandraConnection() {
        try {
            Cluster cluster;
            // Connect to the cluster and keyspace "seme"
            Map<String, String> staticProperties = new HashMap<String, String>();
            staticProperties.put("key.space.name", "seme");
            staticProperties.put("replication.factor", null);
            staticProperties.put("port", "9042");
            staticProperties.put("hosts", "localhost");
            staticProperties.put("strategy.class", null);
            staticProperties.put("user.name", null);
            staticProperties.put("indexed.columns", null);
            staticProperties.put("column.family.name", "seme");
            CassandraHostConfigurator chc = new CassandraHostConfigurator();
            chc.setHosts(staticProperties.get("hosts"));
            if (staticProperties.get("port") != null) {
                chc.setPort(Integer.parseInt(staticProperties.get("port")));
            }
            cluster = HFactory.createCluster("EventPublisher_risultato_cassandra", chc, null);
            String keySpaceName = staticProperties.get("key.space.name");
            KeyspaceDefinition existingKeyspaceDefinition = cluster.describeKeyspace(keySpaceName);
            logger.info("existingKeyspaceDefinition = " + existingKeyspaceDefinition);
        } catch (Exception ex) {
            logger.error("Test fallito", ex);
            fail();
        }
    }
}
Enable Thrift in Cassandra to solve the problem. The Hector client used here speaks Thrift, while 9042 is the CQL native-protocol port; pointing a Thrift client at it is what produces the "Read a negative frame size" error. Run
nodetool enablethrift
and configure the event publisher to connect on port 9160, the Thrift port.
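Applied to the publisher configuration from the question, the <to> section changes only in the port property:
<to eventAdapterType="cassandra">
  <property name="key.space.name">seme</property>
  <property name="port">9160</property>
  <property name="hosts">localhost</property>
  <property name="column.family.name">seme</property>
</to>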