I have DataStax Cassandra 1.2.5 and the following settings in the cassandra.yaml file:
storage_port: 7000
ssl_storage_port: 7001
listen_address: localhost
rpc_port: 9160
I keep getting the error below. I changed the storage port once and it worked, but then the same thing happened again, and now I am not able to restart Cassandra at all:
INFO 16:33:02,714 Completed flushing /var/lib/cassandra/data/system/local/system-local-ic-17-Data.db (241 bytes) for commitlog position ReplayPosition(segmentId=1371684781848, position=50142)
ERROR 16:33:02,793 Exception encountered during startup
java.lang.RuntimeException: java.net.BindException: Can't assign requested address
at org.apache.cassandra.net.MessagingService.getServerSocket(MessagingService.java:446)
at org.apache.cassandra.net.MessagingService.listen(MessagingService.java:389)
at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:583)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:548)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:445)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:325)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:413)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:456)
Caused by: java.net.BindException: Can't assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:344)
at sun.nio.ch.Net.bind(Net.java:336)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:199)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at org.apache.cassandra.net.MessagingService.getServerSocket(MessagingService.java:436)
... 7 more
java.lang.RuntimeException: java.net.BindException: Can't assign requested address
at org.apache.cassandra.net.MessagingService.getServerSocket(MessagingService.java:446)
at org.apache.cassandra.net.MessagingService.listen(MessagingService.java:389)
at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:583)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:548)
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:445)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:325)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:413)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:456)
Caused by: java.net.BindException: Can't assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:344)
at sun.nio.ch.Net.bind(Net.java:336)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:199)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at org.apache.cassandra.net.MessagingService.getServerSocket(MessagingService.java:436)
... 7 more
Exception encountered during startup: java.net.BindException: Can't assign requested address
ERROR 16:33:02,798 Exception in thread Thread[StorageServiceShutdownHook,5,main]
java.lang.NullPointerException
at org.apache.cassandra.service.StorageService.stopRPCServer(StorageService.java:321)
at org.apache.cassandra.service.StorageService.shutdownClientServers(StorageService.java:362)
at org.apache.cassandra.service.StorageService.access$000(StorageService.java:88)
at org.apache.cassandra.service.StorageService$1.runMayThrow(StorageService.java:513)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at java.lang.Thread.run(Thread.java:722)
Most likely there's something wrong with your network configuration: localhost resolves either to an invalid hostname assigned by DHCP (something like 192-168-1-10.local) or to the local IPv6 address (::1) while IPv6 is disabled in Java.
Check /etc/hosts.
Check the output of the hostname command.
Try setting listen_address to 127.0.0.1 or to a valid IP address.
Check the rpc_address setting in cassandra.yaml and try setting it to 127.0.0.1 as well, as in the sketch below.
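For reference, a minimal sketch of the cassandra.yaml values that usually avoids this bind error on a single-node setup; the loopback addresses here are assumptions and should match whatever your machine actually resolves:
listen_address: 127.0.0.1
rpc_address: 127.0.0.1
storage_port: 7000
ssl_storage_port: 7001
rpc_port: 9160
If you keep listen_address set to localhost, make sure /etc/hosts maps localhost to 127.0.0.1 rather than only to an IPv6 entry.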
I have installed Apache Spark by following these instructions. When I get to step 5, i.e. when I have to execute start-master.sh in the terminal, I get the following output:
21/09/25 12:41:33 WARN Utils: Your hostname, petar-X580VD resolves to a loopback address: 127.0.1.1; using 192.168.0.105 instead (on interface wlp3s0)
21/09/25 12:41:33 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.apache.spark.unsafe.array.ByteArrayMethods.<clinit>(ByteArrayMethods.java:54)
at org.apache.spark.internal.config.package$.<init>(package.scala:1095)
at org.apache.spark.internal.config.package$.<clinit>(package.scala)
at org.apache.spark.deploy.SparkSubmitArguments.$anonfun$loadEnvironmentArguments$3(SparkSubmitArguments.scala:157)
at scala.Option.orElse(Option.scala:447)
at org.apache.spark.deploy.SparkSubmitArguments.loadEnvironmentArguments(SparkSubmitArguments.scala:157)
at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:115)
at org.apache.spark.deploy.SparkSubmit$$anon$2$$anon$3.<init>(SparkSubmit.scala:1022)
at org.apache.spark.deploy.SparkSubmit$$anon$2.parseArguments(SparkSubmit.scala:1022)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:85)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1039)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1048)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make private java.nio.DirectByteBuffer(long,int) accessible: module java.base does not "opens java.nio" to unnamed module #4434095f
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:357)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Constructor.checkCanSetAccessible(Constructor.java:188)
at java.base/java.lang.reflect.Constructor.setAccessible(Constructor.java:181)
at org.apache.spark.unsafe.Platform.<clinit>(Platform.java:56)
... 13 more
I don't know how to fix this.
As @werner suggested in the comments, changing to Java 11 fixed the problem.
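For anyone in the same spot, a rough sketch of switching Spark over to Java 11 on a Debian/Ubuntu-style machine; the package name and JVM path below are assumptions for that distribution:
java -version                            # check which JVM is currently active
sudo apt install openjdk-11-jdk          # install Java 11
sudo update-alternatives --config java   # select it system-wide
# or point only Spark at it, in conf/spark-env.sh:
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
The InaccessibleObjectException comes from the stricter module system in newer JDKs, which this Spark build does not expect, so running it on an older JVM sidesteps the problem.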
I have already tried setting SPARK_LOCAL_IP to "127.0.0.1" and checking if the port is occupied. Here is the full error text:
Launching java with spark-submit command /usr/hdp/2.4.0.0-169/spark/bin/spark-submit "sparkr-shell" /tmp/RtmpZo44il/backend_port998540c56917
/usr/hdp/2.4.0.0-169/spark/bin/load-spark-env.sh: line 72: export: `load-spark-env.sh': not a valid identifier
16/06/13 11:28:24 ERROR RBackend: Server shutting down: failed with exception
java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1089)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
Error in SparkR::sparkR.init() : JVM is not ready after 10 seconds
The above error occurs when launching ./bin/sparkR; spark-shell, again, executes normally.
Some more information: when spark-shell launches, it automatically searches through ports until it finds one that doesn't throw a bind exception. Even when I set the default SparkR backend port to an unused port, it still fails.
I found the issue: another user had deleted my /etc/hosts file. I recreated the file with a localhost entry and sparkR now runs. I am still curious how spark-shell could run without the file, though.
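For anyone hitting the same thing, a minimal /etc/hosts with the usual loopback entries looks roughly like this; the hostname on the second line is a placeholder and should match the output of the hostname command:
127.0.0.1   localhost
127.0.1.1   your-hostname
::1         localhost ip6-localhost ip6-loopback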
When I try to connect to the Cassandra seed node using the DataStax connector, I can't.
I have four Spark nodes: one master and three workers. This works well on its own. The same machines also have Cassandra installed, with the Spark master acting as the seed node. This works on its own as well (I have successfully written to and read from it).
Now, I'm trying to do
val info = spark_context.cassandraTable("files", "metainfo")
println( info.count )
Beforehand, I specify the Spark context as follows:
val confStandalone = new SparkConf()
.set("spark.cassandra.connection.host", "10.14.56.156")
.setMaster("spark://10.14.56.156:7077")
.setAppName("Test")
.set("spark.executor.memory", "1g")
.set("spark.eventLog.enabled", "true")
.set("spark.driver.host", "10.14.56.156")
.set("spark.broadcast.factory", "org.apache.spark.broadcast.HttpBroadcastFactory")
val spark_context = new SparkContext( confStandalone )
spark_context.addJar("SOME_PATH/spark-cassandra-connector_2.10-1.2.0-alpha1.jar")
In the cassandra.yaml file I set the rpc_address to 10.14.56.156 and used the standard ports (9160, 9042). Now when I do
sbt run
I get the following error:
15/03/18 16:38:43 INFO LocalNodeFirstLoadBalancingPolicy: Adding host 127.0.0.1 (datacenter1)
15/03/18 16:38:43 INFO LocalNodeFirstLoadBalancingPolicy: Adding host 10.14.56.156 (datacenter1)
15/03/18 16:38:43 INFO LocalNodeFirstLoadBalancingPolicy: Adding host 127.0.0.1 (datacenter1)
15/03/18 16:38:43 ERROR Session: Error creating pool to /127.0.0.1:9042
com.datastax.driver.core.TransportException: [/127.0.0.1:9042] Cannot connect
at com.datastax.driver.core.Connection.<init>(Connection.java:106)
at com.datastax.driver.core.PooledConnection.<init>(PooledConnection.java:35)
at com.datastax.driver.core.Connection$Factory.open(Connection.java:528)
...
Caused by: java.net.ConnectException: Connection refused: /127.0.0.1:9042
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
...
Now, when I change the rpc_address to 0.0.0.0, as is sometimes advised, I get the same error but with 10.14.56.156 instead of 127.0.0.1, and only the line:
15/03/18 16:38:43 INFO LocalNodeFirstLoadBalancingPolicy: Adding host 10.14.56.156 (datacenter1)
with the lines above and below it (the ones referring to 127.0.0.1) no longer appearing.
I didn't set any firewall rules in iptables, so I don't think that would be an issue. Help appreciated!
Have you looked at what broadcast_rpc_address is set to? The Java driver derives the IP to connect to from the peer entries in system.peers, and if rpc_address is set to 0.0.0.0, broadcast_rpc_address must be set.
My guess is that with your rpc_address set to 0.0.0.0, the driver is connecting to the broadcast_rpc_address even though it says [/10.14.56.156:9042] Cannot connect (you may see Connection refused: /127.0.0.1:9042 further down the stack trace).
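A sketch of the cassandra.yaml combination this answer describes, using the node IP from the question; note that broadcast_rpc_address only exists on Cassandra versions that support it, so treat its availability as an assumption about your version:
rpc_address: 0.0.0.0
broadcast_rpc_address: 10.14.56.156
The simpler alternative is to keep rpc_address set to 10.14.56.156 and not bind to all interfaces at all.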
I'm getting an exception while bulk loading with sstableloader. I'm using JDK 1.6.0_25 64-bit on Ubuntu 12.04 server. IPv6 is turned off. Network communication between the hosts works correctly. I'm going crazy ;-(
Exception in thread "Streaming to /192.168.219.36:1" java.lang.RuntimeException: java.net.SocketException: Invalid argument or cannot assign requested address
at org.apache.cassandra.utils.FBUtilities.unchecked(FBUtilities.java:628)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.SocketException: Invalid argument or cannot assign requested address
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:529)
at java.net.Socket.connect(Socket.java:478)
at java.net.Socket.<init>(Socket.java:375)
at java.net.Socket.<init>(Socket.java:276)
at org.apache.cassandra.net.OutboundTcpConnectionPool.newSocket(OutboundTcpConnectionPool.java:96)
at org.apache.cassandra.streaming.FileStreamTask.connectAttempt(FileStreamTask.java:245)
at org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
All hosts run Cassandra 1.1 (DataStax edition). Ports 7000, 7199 and 9160 are open. Any ideas?
Why are you streaming to port 1? It should be the RPC port, and the default for that is 9160.
/192.168.219.36:1
What params are you passing to the bulk loader?
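For comparison, a typical invocation in later Cassandra releases looks roughly like the line below, where -d names the initial contact points and the final argument is the SSTable directory for one keyspace/table; the path is a placeholder, and the 1.1-era tool instead reads its connection settings from the local cassandra.yaml:
bin/sstableloader -d 192.168.219.36 /path/to/keyspace/columnfamily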
When I use Cassandra's nodetool to look at the ring of a remote host (by IP address), it gives the following error. How do I make this work?
BTW, I can ping that host by IP address.
root@ServerA:~/cassandra# bin/nodetool -h 172.24.0.10 ring
Error connecting to remote JMX agent!
java.rmi.ConnectException: Connection refused to host: 127.0.1.1; nested exception is:
java.net.ConnectException: Connection refused
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:619)
at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:216)
at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:202)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:128)
at javax.management.remote.rmi.RMIServerImpl_Stub.newClient(Unknown Source)
at javax.management.remote.rmi.RMIConnector.getConnection(RMIConnector.java:2343)
at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:296)
at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:267)
at org.apache.cassandra.tools.NodeProbe.connect(NodeProbe.java:106)
at org.apache.cassandra.tools.NodeProbe.<init>(NodeProbe.java:82)
at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:405)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:310)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:176)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:163)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
at java.net.Socket.connect(Socket.java:546)
at java.net.Socket.connect(Socket.java:495)
at java.net.Socket.<init>(Socket.java:392)
at java.net.Socket.<init>(Socket.java:206)
at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:40)
at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:146)
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:613)
... 10 more
You need to enable remote JMX. To enable monitoring and management from remote systems, set this system property when you start the JVM:
com.sun.management.jmxremote.port=portNum
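For Cassandra this is normally wired up in conf/cassandra-env.sh. A hedged sketch of the relevant lines (only appropriate on a trusted network, since SSL and authentication are disabled; the java.rmi.server.hostname line is an extra assumption that helps when the hostname resolves to 127.0.1.1, as in the error above):
JMX_PORT="7199"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=172.24.0.10"
After restarting the node, bin/nodetool -h 172.24.0.10 -p 7199 ring should be able to connect.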