Error initialising SparkContext when running Spark on Bash on Ubuntu on Windows - apache-spark

I am attempting to install and configure Spark 2.0.1 on Bash on Ubuntu on Windows. I followed the instructions at Apache Spark - Installation and everything seemed to install OK; however, when I run spark-shell this happens:
16/11/06 11:25:47 ERROR SparkContext: Error initializing SparkContext.
java.net.SocketException: Invalid argument
at sun.nio.ch.Net.listen(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:224)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1089)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
Immediately prior to that error I see warnings which may or may not be related:
16/11/06 11:25:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/11/06 11:25:47 WARN Utils: Your hostname, DESKTOP-IKGIG97 resolves to a loopback address: 127.0.0.1; using 151.127.0.0 instead (on interface wifi0)
16/11/06 11:25:47 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
I'm a bit of a Linux noob, I must admit, so I'm rather clueless as to what to do next. In case it matters, here are the contents of /etc/hosts:
127.0.0.1 localhost
127.0.0.1 DESKTOP-IKGIG97
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
Hoping someone here can identify my issue. What do I need to do to investigate and fix this error?

As the warning in the log suggests, set SPARK_LOCAL_IP in the conf/spark-env.sh script in the directory where Spark is installed:
http://spark.apache.org/docs/2.0.1/configuration.html
Environment Variables
Certain Spark settings can be configured through environment variables, which are read from the conf/spark-env.sh script in the directory where Spark is installed (or conf/spark-env.cmd on Windows). In Standalone and Mesos modes, this file can give machine specific information such as hostnames. It is also sourced when running local Spark applications or submission scripts.
Note that conf/spark-env.sh does not exist by default when Spark is installed. However, you can copy conf/spark-env.sh.template to create it. Make sure you make the copy executable.
The following variables can be set in spark-env.sh:
SPARK_LOCAL_IP: IP address of the machine to bind to.
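A minimal sketch of those steps, run from the Spark installation directory (127.0.0.1 here is only an example; use whichever local address you want Spark to bind to):
cd /path/to/spark                                            # wherever Spark is installed
cp conf/spark-env.sh.template conf/spark-env.sh
chmod +x conf/spark-env.sh
echo 'export SPARK_LOCAL_IP=127.0.0.1' >> conf/spark-env.sh  # example address
./bin/spark-shell                                            # re-run to check that the bind error is gone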
If that doesn't solve your problem, please share the output of the following Unix command:
ifconfig

Related

Getting an error when running spark-shell in CDH 5.7

I am new to Spark and using CDH-5.7 to run it, but I am getting this error when I run spark-shell in the terminal. I have started all Cloudera services, including Spark, via Launch Cloudera Express. Please help.
Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_67)
Type in expressions to have them evaluated. Type :help for more information.
16/07/13 02:14:53 WARN util.Utils: Your hostname, quickstart.cloudera resolves to a loopback address: 127.0.0.1; using 192.168.44.133 instead (on interface eth1)
16/07/13 02:14:53 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
16/07/13 02:19:28 ERROR spark.SparkContext: Error initializing SparkContext. org.apache.hadoop.security.AccessControlException: Permission denied: user=cloudera, access=WRITE, inode="/user/spark/applicationHistory":spark:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:281)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:262)

SparkR and PySpark throw java.net.BindException on launch, but spark-shell does not?

I have already tried setting SPARK_LOCAL_IP to "127.0.0.1" and checking if the port is occupied. Here is the full error text:
Launching java with spark-submit command /usr/hdp/2.4.0.0-169/spark/bin/spark-submit "sparkr-shell" /tmp/RtmpZo44il/backend_port998540c56917
/usr/hdp/2.4.0.0-169/spark/bin/load-spark-env.sh: line 72: export: `load-spark-env.sh': not a valid identifier
16/06/13 11:28:24 ERROR RBackend: Server shutting down: failed with exception
java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1089)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
Error in SparkR::sparkR.init() : JVM is not ready after 10 seconds
The above error occurs when launching ./bin/sparkR. Again, spark-shell executes normally.
Some more information: spark-shell, when launched, will automatically search through ports until it finds one that doesn't hit a bind exception. Even when I set the default SparkR backend port to an unused port, it still fails.
I found the issue. Another user had deleted my /etc/hosts file. I reconfigured the file with localhost and sparkR now seems to run. I am still curious how spark-shell could run without the file, though.
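For anyone hitting the same thing, a minimal /etc/hosts along the lines described would look like this (your-hostname is a placeholder for the machine's actual hostname):
127.0.0.1 localhost
127.0.0.1 your-hostname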

Hadoop components do not start on the slave machine

I am trying to set up a multi-node Hadoop cluster with my two laptops using Michael Noll's tutorial. The OS on both machines is Ubuntu 14.04.
I managed to set up single-node clusters on each of the two laptops, but when I try to start the multi-node cluster (after all the necessary modifications as instructed in the tutorial) using sbin/start-all.sh on my master, the slave does not react at all. All five components start on the master, but not a single one starts on the slave.
My /etc/hosts looks like this on both PCs:
127.0.0.1 localhost
192.168.178.01 master
192.168.178.02 slave
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
(Furthermore, in /usr/local/hadoop/etc/hadoop there was no file called master, so I created it using: touch /usr/local/hadoop/etc/hadoop/master)
Then, when I run sbin/start-all.sh, I see the following:
hduser#master:/usr/local/hadoop$ sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/05/17 21:21:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-master.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-master.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-master.out
15/05/17 21:21:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-master.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-master.out
hduser#master:/usr/local/hadoop$ jps
3716 DataNode
3915 SecondaryNameNode
4522 Jps
3553 NameNode
4210 NodeManager
4073 ResourceManager
hduser#master:/usr/local/hadoop$
What's interesting here is that line 6 of that output says localhost. Shouldn't it be master?
I can connect to the slave from the master password-lessly using ssh slave and control the slave machine, but still, sbin/start-all.sh does not start any Hadoop components on the slave.
Very interestingly, if I run sbin/start-all.sh on the slave, it starts the NameNode on the master (!!!) and starts the NodeManager and ResourceManager on the slave itself.
Can someone help me to properly start the multi-node cluster?
P.S.: I looked at this, but in my case the location of the Hadoop home directory is identical on both machines.
There can be several causes:
Check that you can connect with password-less ssh from the slave to the master. Here is a link that teaches how to do it.
Is the hostname of each machine correct?
Is the /etc/hosts file identical on both master and slave?
Have you checked the IP of both machines with ifconfig -a? Are they the ones you expected?
Have you changed all the configuration files on the slave machine, so that instead of localhost they now name the master's hostname? You should search for the word localhost and the like in all the files under your $HADOOP_HOME directory, because there are several files configuring all sorts of things and it's easy to miss one. Something like this: sudo grep -Ril "localhost" /usr/local/hadoop/etc/hadoop
Check the same on the master, so that instead of localhost it says the master's hostname.
You should remove the localhost entry from the /etc/hosts file on the slave machine. Sometimes that entry, so typical of Hadoop tutorials, can lead to problems.
In the masters and slaves files on your master host, the masters file should say "master" and the slaves file should say "slave"; on the slave host it should say only "slave" (see the sketch after this list).
You should format the filesystem on both nodes before starting Hadoop.
Those are all the problems I remember having when I did what you are doing now. Check whether some of them help you!
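A rough sketch of some of those checks as shell commands, assuming the tutorial's /usr/local/hadoop layout and the master/slave hostnames above (adjust to your setup):
ssh slave hostname                                        # from the master: should print "slave" without a password prompt
sudo grep -Ril "localhost" /usr/local/hadoop/etc/hadoop   # find leftover localhost references on each machine
echo "master" > /usr/local/hadoop/etc/hadoop/masters      # on the master host
echo "slave" > /usr/local/hadoop/etc/hadoop/slaves        # on the master host
/usr/local/hadoop/bin/hdfs namenode -format               # reformat HDFS before restarting; this erases existing HDFS data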

Connecting to Cassandra using the DataStax connector

When I try to connect to the Cassandra seed node using the DataStax connector, I can't.
I have four Spark nodes: one master and three workers. This works well on its own. The same machines have Cassandra installed on them, with the Spark master acting as the seed node. This works on its own as well (I successfully wrote to and read from it).
Now, I'm trying to do
val info = spark_context.cassandraTable("files", "metainfo")
println( info.count )
Before that, I specify the Spark context as follows:
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._   // brings cassandraTable into scope

// driver, master and Cassandra contact point all live on the same machine here
val confStandalone = new SparkConf()
  .set("spark.cassandra.connection.host", "10.14.56.156")
  .setMaster("spark://10.14.56.156:7077")
  .setAppName("Test")
  .set("spark.executor.memory", "1g")
  .set("spark.eventLog.enabled", "true")
  .set("spark.driver.host", "10.14.56.156")
  .set("spark.broadcast.factory", "org.apache.spark.broadcast.HttpBroadcastFactory")

val spark_context = new SparkContext(confStandalone)
// ship the connector jar to the executors
spark_context.addJar("SOME_PATH/spark-cassandra-connector_2.10-1.2.0-alpha1.jar")
In the cassandra.yaml file I set the rpc_address to 10.14.56.156 and used the standard ports (9160, 9042). Now when I do
sbt run
I get the following error:
15/03/18 16:38:43 INFO LocalNodeFirstLoadBalancingPolicy: Adding host 127.0.0.1 (datacenter1)
15/03/18 16:38:43 INFO LocalNodeFirstLoadBalancingPolicy: Adding host 10.14.56.156 (datacenter1)
15/03/18 16:38:43 INFO LocalNodeFirstLoadBalancingPolicy: Adding host 127.0.0.1 (datacenter1)
15/03/18 16:38:43 ERROR Session: Error creating pool to /127.0.0.1:9042 com.datastax.driver.core.TransportException: [/127.0.0.1:9042] Cannot connect
at com.datastax.driver.core.Connection.<init>(Connection.java:106)
at com.datastax.driver.core.PooledConnection.<init>(PooledConnection.java:35)
at com.datastax.driver.core.Connection$Factory.open(Connection.java:528)
...
Caused by: java.net.ConnectException: Connection refused: /127.0.0.1:9042 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
...
Now, when I change the rpc_address to 0.0.0.0, as is sometimes advised, I get the same error but with 10.14.56.156 instead of 127.0.0.1, and only the line:
15/03/18 16:38:43 INFO LocalNodeFirstLoadBalancingPolicy: Adding host 10.14.56.156 (datacenter1)
with the one above and the one below (referring to 127.0.0.1) removed.
I didn't set any firewall rules in iptables, so I don't think that would be an issue. Help appreciated!
Have you looked at what broadcast_rpc_address is set to? The Java driver will derive the IP to connect to from the 'peer' column of system.peers. If rpc_address is set to 0.0.0.0, broadcast_rpc_address must be set.
My guess is that with your rpc_address set to 0.0.0.0, the driver is connecting from the broadcast_rpc_address even though it says [/10.14.56.156:9042] Cannot connect (you may see Connection refused: /127.0.0.1:9042 further in the stack trace).
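For example, the relevant cassandra.yaml pair might look like this (the address is the one from the question; restart Cassandra after editing it):
rpc_address: 0.0.0.0
broadcast_rpc_address: 10.14.56.156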

Can't access Hadoop with IP address

I'm following this guide for installing Hadoop on CentOS.
Everything works normally when I run Hadoop, and I have compared it with the guide too, but when I try to access mine with an IP address like 192.168.0.1:50070, nothing works.
Here is the output when I run Hadoop:
bash-4.2$ start-dfs.sh
14/10/15 16:28:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-localhost.localdomain.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-localhost.localdomain.out
14/10/15 16:29:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Do you think I have to configure the IP somewhere to access them? My configuration is exactly the same as in the above link, even the XML files...
Did you try to disable the firewall or add an iptables rule for it on the master/slaves?
For CentOS 6.5, try:
service iptables stop
to disable the firewall. If it then works properly, you just need to add an allow rule to your iptables.
Also, CentOS has SELinux. I would advise turning it off and checking whether the error persists.
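A rough sketch of those checks on CentOS 6.x (50070 is the default NameNode web UI port; adjust if yours differs):
sudo service iptables stop                               # quick test: disable the firewall entirely
sudo iptables -I INPUT -p tcp --dport 50070 -j ACCEPT    # or just open the web UI port
sudo service iptables save                               # persist the rule
sudo setenforce 0                                        # temporarily set SELinux to permissive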
