I am trying to connect to a Spark cluster from a remote system. My Java code is shown below.
Java code
import org.apache.spark.SparkConf;

SparkConf conf = new SparkConf()
        .setAppName("Java API demo")
        .setMaster("spark://192.168.XX.XX:7077")
        .set("spark.driver.host", "192.168.XX.XX")
        .set("spark.driver.port", "9929");
It gives me this error:
Error Message
16/04/14 16:08:42 ERROR NettyTransport: failed to bind to /192.168.XX.XX:9929, shutting down Netty transport
16/04/14 16:08:42 WARN Utils: Service 'sparkDriver' could not bind on port 9929. Attempting port 9930.
16/04/14 16:08:42 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
First of all, is this possible in Spark? I am using Spark 1.4. Thanks in advance.
Related
I am just getting started with the Hadoop ecosystem. I am trying to write a Python application that will connect to a Spark instance running on the Hortonworks Data Platform (HDP) Sandbox.
My host machine is running Windows 10 and Python 3.9.1.
HDP 3.0.1 is running in VirtualBox 6.1
I have tried this:
from pyspark.sql import SparkSession

appName = "hive_pyspark"
master = "spark://localhost:4040"
spark = SparkSession.builder.master(master).appName(appName).enableHiveSupport().getOrCreate()
and I see the following error:
22/07/12 11:16:33 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
22/07/12 11:16:34 WARN TransportChannelHandler: Exception in connection from localhost/127.0.0.1:4040
java.lang.IllegalArgumentException: Too large frame: 5211883372140375593
We are using Spark 2.0.2 and Hadoop 3.2.1. I have gotten SSL configured across Hadoop without any trouble. But Spark is having some trouble.
Without SSL, I can launch a job and view the Spark UI, proxied through Yarn.
When I enable SSL, I can still launch a job and run it to completion, but I cannot access the Spark web UI. YARN links to an ApplicationMaster proxy that fails with java.net.ConnectException: Connection refused (Connection refused).
Digging into the job's stdout, I see lines like this:
2020-04-14 16:21:57,258 INFO server.ServerConnector: Started ServerConnector@1484944f{HTTP/1.1}{0.0.0.0:40809}
2020-04-14 16:21:57,303 INFO server.ServerConnector: Started ServerConnector@799e8b3a{SSL-HTTP/1.1}{0.0.0.0:36015}
2020-04-14 16:21:57,303 INFO server.Server: Started @5614ms
2020-04-14 16:21:57,304 INFO util.Utils: Successfully started service 'SparkUI' on port 40809.
2020-04-14 16:21:57,309 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at https://10.10.75.40:40809
If I call up https://10.10.75.40:40809 I get an error that the server isn't speaking SSL.
If I call up http://10.10.75.40:40809 I get served a bogus redirect to 10.10.75.40:0.
If I call up https://10.10.75.40:36015 and ignore the certificate errors, or if I call it via the correct hostname, I get redirected to the ResourceManager's :8090/proxy/redirect URL, which throws the 500 error again.
My configuration consists of:
spark.ssl.enabled true
spark.ssl.keyStore /etc/sslmate/STAR.mtv.qxxxxxxxxd.com.jks
spark.ssl.keyStorePassword _the password_
spark.ssl.trustStore /etc/sslmate/STAR.mtv.qxxxxxxxxd.com.jks
spark.ssl.trustStorePassword _the password_
spark.ssl.protocol TLSv1.2
I have also experimented with setting ports explicitly, to no avail:
spark.ssl.ui.port 4440
spark.ssl.history.port 18480
spark.ui.driver.port 2020
spark.ssl.ui.driver.port 2420
I could not find this documented by Apache, but I found a note from MapR that got things working:
Spark UI SSL is not needed when running Spark on YARN because encryption is provided by the YARN protocol. To disable Spark SSL, add spark.ssl.ui.enabled false to the spark-defaults.conf file on each spark node.
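For reference, this is the single line as it would appear in spark-defaults.conf on each node (the other spark.ssl.* settings above can stay in place):

spark.ssl.ui.enabled false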
When configured in this manner, the ResourceManager correctly provides the Spark UI via a :8090/proxy/redirect type URL. In the stderr I see output along these lines:
2020-04-14 17:36:38,098 INFO server.ServerConnector: Started ServerConnector@7ef020c8{HTTP/1.1}{0.0.0.0:45951}
2020-04-14 17:36:38,098 INFO server.Server: Started @7871ms
2020-04-14 17:36:38,107 INFO util.Utils: Successfully started service 'SparkUI' on port 45951.
2020-04-14 17:36:38,113 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.10.31.230:45951
For the life of me I cannot figure out what is wrong with my PySpark install. I have installed all dependencies, including Hadoop, but PySpark can't find it. Am I diagnosing this correctly?
See the full error message below; it ultimately fails in PySpark SQL with:
pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
nickeleres#Nicks-MBP:~$ pyspark
Python 2.7.10 (default, Feb 7 2017, 00:08:15)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/opt/spark-2.2.0/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
17/10/24 21:21:58 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4041. Attempting port 4042.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4042. Attempting port 4043.
Traceback (most recent call last):
File "/opt/spark/python/pyspark/shell.py", line 45, in <module>
spark = SparkSession.builder\
File "/opt/spark/python/pyspark/sql/session.py", line 179, in getOrCreate
session._jsparkSession.sessionState().conf().setConfString(key, value)
File "/opt/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
File "/opt/spark/python/pyspark/sql/utils.py", line 79, in deco
raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
>>>
tl;dr Close all the other Spark processes and start over.
The following WARN messages say that another process (or several processes) already holds those ports.
I'm sure the process(es) are Spark processes, e.g. pyspark sessions or other Spark applications:
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4041. Attempting port 4042.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4042. Attempting port 4043.
That's why, after Spark/pyspark found a free port (4043) for the web UI, it went on to instantiate HiveSessionStateBuilder and failed.
pyspark failed because you cannot have more than one Spark application up and running that uses the same local Hive metastore (by default an embedded Derby database that accepts only one connection).
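If the extra session lives in the same Python process (e.g. a re-run notebook cell), a minimal sketch of the "start over" step looks like this; the app name is just a placeholder:

from pyspark.sql import SparkSession

# Reuse whatever session is already active instead of building a second one.
spark = SparkSession.builder.appName("demo").getOrCreate()

# When a genuinely fresh application is needed, stop the current one first
# so it releases its ports and the local metastore lock, then recreate it.
spark.stop()
spark = SparkSession.builder.appName("demo").getOrCreate()

Sessions owned by other OS processes (another pyspark shell, another notebook kernel) have to be stopped or killed from their side before a new application can use the same local metastore.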
Why does this happen?
Because we create a new session more than once, e.g. in different browser tabs of a Jupyter notebook.
Solution:
Start the session in a single Jupyter notebook tab and avoid creating new sessions in other tabs:
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('EXAMPLE').getOrCreate()
We received the same error while trying to create a Spark session from a Jupyter notebook.
We noticed that in our case the user did not have permission to the Spark scratch directory, i.e. the directory referenced by the spark.local.dir property. We changed the permissions on that directory so the user has full access to it, and the issue was resolved. Generally this directory is something like /tmp/user.
Please note that, per the Spark documentation, the scratch directory is a "Directory to use for 'scratch' space in Spark, including map output files and RDDs that get stored on disk. This should be on a fast, local disk in your system. It can also be a comma-separated list of multiple directories on different disks."
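If changing the directory permissions is not an option, an alternative sketch is to point spark.local.dir at a directory the user can definitely write to; the path below is only an illustrative example:

from pyspark.sql import SparkSession

# Use a scratch directory this user owns (example path, adjust to your system).
spark = (SparkSession.builder
         .appName("example")
         .config("spark.local.dir", "/home/myuser/spark-tmp")
         .getOrCreate())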
Another possible cause is that the Spark application failed to start because the machine did not meet the minimum resource requirements.
In the Application history tab:
Diagnostics:Uncaught exception: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested virtual cores < 0, or requested virtual cores > max configured, requestedVirtualCores=5, maxVirtualCores=4
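In that case the fix is to request no more virtual cores per executor than YARN allows (maxVirtualCores=4 in the diagnostics above), or to raise yarn.scheduler.maximum-allocation-vcores on the cluster. A hedged PySpark sketch of the first option:

from pyspark.sql import SparkSession

# Keep the per-executor core request at or below YARN's configured maximum.
spark = (SparkSession.builder
         .appName("example")
         .config("spark.executor.cores", "4")
         .getOrCreate())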
I am new to Spark and am using CDH 5.7 to run it, but I am getting the error below when I run spark-shell in a terminal. I have started all Cloudera services, including Spark, via Launch Cloudera Express. Please help.
Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_67)
Type in expressions to have them evaluated. Type :help for more information.
16/07/13 02:14:53 WARN util.Utils: Your hostname, quickstart.cloudera resolves to a loopback address: 127.0.0.1; using 192.168.44.133 instead (on interface eth1)
16/07/13 02:14:53 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
16/07/13 02:19:28 ERROR spark.SparkContext: Error initializing SparkContext. org.apache.hadoop.security.AccessControlException: Permission denied: user=cloudera, access=WRITE, inode="/user/spark/applicationHistory":spark:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:281)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:262)
I am getting an error when launching the standalone Spark driver in cluster mode. According to the documentation, cluster mode is supported as of the Spark 1.2.1 release. However, it is currently not working for me. Please help me fix whatever is preventing Spark from functioning properly.
I have a 3-node Spark cluster: node1, node2 and node3.
I run the command below on node1 to deploy the driver:
/usr/local/spark-1.2.1-bin-hadoop2.4/bin/spark-submit --class com.fst.firststep.aggregator.FirstStepMessageProcessor --master spark://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:7077 --deploy-mode cluster --supervise file:///home/xyz/sparkstreaming-0.0.1-SNAPSHOT.jar /home/xyz/config.properties
The driver gets launched on node2 in the cluster, but on node2 I get an exception showing that it is trying to bind to node1's IP:
2015-02-26 08:47:32 DEBUG AkkaUtils:63 - In createActorSystem, requireCookie is: off
2015-02-26 08:47:32 INFO Slf4jLogger:80 - Slf4jLogger started
2015-02-26 08:47:33 ERROR NettyTransport:65 - failed to bind to ec2-xx.xx.xx.xx.compute-1.amazonaws.com/xx.xx.xx.xx:0, shutting down Netty transport
2015-02-26 08:47:33 WARN Utils:71 - Service 'Driver' could not bind on port 0. Attempting port 1.
2015-02-26 08:47:33 DEBUG AkkaUtils:63 - In createActorSystem, requireCookie is: off
2015-02-26 08:47:33 ERROR Remoting:65 - Remoting error: [Startup failed] [
akka.remote.RemoteTransportException: Startup failed
at akka.remote.Remoting.akka$remote$Remoting$$notifyError(Remoting.scala:136)
at akka.remote.Remoting.start(Remoting.scala:201)
at akka.remote.RemoteActorRefProvider.init(RemoteActorRefProvider.scala:184)
at akka.actor.ActorSystemImpl.liftedTree2$1(ActorSystem.scala:618)
at akka.actor.ActorSystemImpl._start$lzycompute(ActorSystem.scala:615)
at akka.actor.ActorSystemImpl._start(ActorSystem.scala:615)
at akka.actor.ActorSystemImpl.start(ActorSystem.scala:632)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:141)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:118)
at org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$$doCreateActorSystem(AkkaUtils.scala:121)
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:54)
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:53)
at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1765)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1756)
at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:56)
at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:33)
at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: ec2-xx-xx-xx.compute-1.amazonaws.com/xx.xx.xx.xx:0
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
Kindly suggest.
Thanks.
It is not possible to bind to port 0. There are errors in your Spark configuration. Specifically, look at
spark.ui.port
It is probably set to 0.
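For example, the relevant ports can be pinned to explicit, non-zero values (the numbers below are arbitrary examples) either in spark-defaults.conf:

spark.driver.port   7078
spark.ui.port       4040

or directly on the spark-submit command line via --conf spark.driver.port=7078 --conf spark.ui.port=4040.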