Which port must be used to connect PySpark to HDP 3.0.1 running in a virtual machine?

I am just getting started with the Hadoop ecosystem. I am trying to write a Python application that will connect to a Spark instance running on the Hortonworks Data Platform Sandbox.
My host machine is running Windows 10 and Python 3.9.1.
HDP 3.0.1 is running in VirtualBox 6.1.
I have tried this:
from pyspark.sql import SparkSession
appName= "hive_pyspark"
master= "spark://localhost:4040"
spark = SparkSession.builder.master(master).appName(appName).enableHiveSupport().getOrCreate()
and I see the following error:
22/07/12 11:16:33 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
22/07/12 11:16:34 WARN TransportChannelHandler: Exception in connection from localhost/127.0.0.1:4040
java.lang.IllegalArgumentException: Too large frame: 5211883372140375593
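For what it's worth, 4040 is the default port of the Spark web UI, not a port applications can submit to; a standalone master accepts application connections on its RPC port, which defaults to 7077. A minimal sketch of the builder under that assumption (it also assumes VirtualBox forwards guest port 7077 to the host):
from pyspark.sql import SparkSession
appName = "hive_pyspark"
# 4040 serves the web UI; a standalone master's RPC port defaults to 7077
master = "spark://localhost:7077"  # assumed: VirtualBox forwards guest port 7077
spark = SparkSession.builder.master(master).appName(appName).enableHiveSupport().getOrCreate()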

Related

Fail to connect remotely to Spark Master node inside a docker container

I created a Spark cluster based on this link.
Everything went smoothly, but after the cluster was created I tried to use pyspark from another machine to connect remotely to the container inside the host.
I'm receiving 18/04/04 17:14:48 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master xxxx.xxxx:7077 even though I can connect to port 7077 on that host through telnet!
What might I be missing?

Why does pyspark fail with "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder'"?

For the life of me I cannot figure out what is wrong with my PySpark install. I have installed all dependencies, including Hadoop, but PySpark can't find it. Am I diagnosing this correctly?
See the full error message below; it ultimately fails on PySpark SQL:
pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
nickeleres@Nicks-MBP:~$ pyspark
Python 2.7.10 (default, Feb 7 2017, 00:08:15)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/opt/spark-2.2.0/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
17/10/24 21:21:58 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4041. Attempting port 4042.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4042. Attempting port 4043.
Traceback (most recent call last):
File "/opt/spark/python/pyspark/shell.py", line 45, in <module>
spark = SparkSession.builder\
File "/opt/spark/python/pyspark/sql/session.py", line 179, in getOrCreate
session._jsparkSession.sessionState().conf().setConfString(key, value)
File "/opt/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
File "/opt/spark/python/pyspark/sql/utils.py", line 79, in deco
raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
>>>
tl;dr Close all the other Spark processes and start over.
The following WARN messages say that there is another process (or multiple processes) that holds the ports.
I'm sure that the process(es) are Spark processes, e.g. pyspark sessions or Spark applications.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4041. Attempting port 4042.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4042. Attempting port 4043.
That's why, after Spark/pyspark found that port 4043 was free to use for the web UI, it tried to instantiate HiveSessionStateBuilder and failed.
pyspark failed because you cannot have more than one Spark application up and running that uses the same local Hive metastore.
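If the stray process is an earlier session of your own, a clean restart looks roughly like this (the app names are illustrative):
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("old_session").getOrCreate()
# ... work ...
spark.stop()  # frees the web UI port and releases the local metastore lock
spark = SparkSession.builder.appName("fresh_session").getOrCreate()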
Why does this happen?
Because we try to create a new session more than once, e.g. in different browser tabs of a Jupyter notebook.
Solution: start the session in a single Jupyter notebook tab and avoid creating new sessions in different tabs.
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('EXAMPLE').getOrCreate()
We received the same error while trying to create a Spark session using a Jupyter notebook.
We noticed that in our case the user did not have permission to the Spark scratch directory, i.e. the directory used as the value of the spark.local.dir property. We changed the permissions of the directory so that the user has full access, and the issue was resolved. Generally this directory resides somewhere like "/tmp/user".
Please note that, as per the Spark documentation, the scratch directory is a "directory to use for 'scratch' space in Spark, including map output files and RDDs that get stored on disk. This should be on a fast, local disk in your system. It can also be a comma-separated list of multiple directories on different disks".
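A sketch of that workaround, using a hypothetical scratch path (substitute whatever your spark.local.dir points at):
import os
from pyspark.sql import SparkSession
scratch = "/tmp/spark-scratch"  # hypothetical path; use your actual spark.local.dir
os.makedirs(scratch, exist_ok=True)
os.chmod(scratch, 0o770)  # give the Spark user full access to the scratch space
spark = SparkSession.builder.appName("EXAMPLE").config("spark.local.dir", scratch).getOrCreate()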
Another possible cause is that the Spark application failed to start because the minimum machine requirements were not met.
In the Application history tab:
Diagnostics: Uncaught exception: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested virtual cores < 0, or requested virtual cores > max configured, requestedVirtualCores=5, maxVirtualCores=4
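For example, keeping the requested cores within YARN's configured maximum (maxVirtualCores=4 in the diagnostics above) might look like this:
from pyspark.sql import SparkSession
# spark.executor.cores must not exceed yarn.scheduler.maximum-allocation-vcores
spark = SparkSession.builder.appName("EXAMPLE").config("spark.executor.cores", "4").getOrCreate()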

Getting an error when running spark-shell in CDH 5.7

I am new to Spark and am using CDH 5.7 to run it, but I am getting these errors when I run spark-shell in the terminal. I have started all Cloudera services, including Spark, via Launch Cloudera Express. Please help.
Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_67)
Type in expressions to have them evaluated. Type :help for more information.
16/07/13 02:14:53 WARN util.Utils: Your hostname, quickstart.cloudera resolves to a loopback address: 127.0.0.1; using 192.168.44.133 instead (on interface eth1)
16/07/13 02:14:53 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
16/07/13 02:19:28 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.hadoop.security.AccessControlException: Permission denied: user=cloudera, access=WRITE, inode="/user/spark/applicationHistory":spark:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:281)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:262)

Can we connect to Spark cluster from remote host via java program?

I am trying to connect to a Spark cluster from a remote system. My Java code is shown below.
JAVA Code
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

SparkConf conf = new SparkConf()
    .setAppName("Java API demo")
    .setMaster("spark://192.168.XX.XX:7077")
    .set("spark.driver.host", "192.168.XX.XX")
    .set("spark.driver.port", "9929");
JavaSparkContext sc = new JavaSparkContext(conf);
It gives me this error:
Error Message
16/04/14 16:08:42 ERROR NettyTransport: failed to bind to /192.168.XX.XX:9929, shutting down Netty transport
16/04/14 16:08:42 WARN Utils: Service 'sparkDriver' could not bind on port 9929. Attempting port 9930.
16/04/14 16:08:42 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
First of all, is this possible in Spark? I am using Spark 1.4. Thanks in advance.

spark-submit cluster mode is not working

I am getting an error when launching the standalone Spark driver in cluster mode. The documentation notes that cluster mode is supported as of the Spark 1.2.1 release, but it is currently not working properly for me. Please help me fix the issue(s) that are preventing Spark from functioning properly.
I have a 3-node Spark cluster: node1, node2, and node3.
I am running the command below on node1 to deploy the driver:
/usr/local/spark-1.2.1-bin-hadoop2.4/bin/spark-submit --class com.fst.firststep.aggregator.FirstStepMessageProcessor --master spark://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:7077 --deploy-mode cluster --supervise file:///home/xyz/sparkstreaming-0.0.1-SNAPSHOT.jar /home/xyz/config.properties
The driver gets launched on node2 in the cluster, but I am getting an exception on node2 because it is trying to bind to node1's IP.
2015-02-26 08:47:32 DEBUG AkkaUtils:63 - In createActorSystem, requireCookie is: off
2015-02-26 08:47:32 INFO Slf4jLogger:80 - Slf4jLogger started
2015-02-26 08:47:33 ERROR NettyTransport:65 - failed to bind to ec2-xx.xx.xx.xx.compute-1.amazonaws.com/xx.xx.xx.xx:0, shutting down Netty transport
2015-02-26 08:47:33 WARN Utils:71 - Service 'Driver' could not bind on port 0. Attempting port 1.
2015-02-26 08:47:33 DEBUG AkkaUtils:63 - In createActorSystem, requireCookie is: off
2015-02-26 08:47:33 ERROR Remoting:65 - Remoting error: [Startup failed] [
akka.remote.RemoteTransportException: Startup failed
at akka.remote.Remoting.akka$remote$Remoting$$notifyError(Remoting.scala:136)
at akka.remote.Remoting.start(Remoting.scala:201)
at akka.remote.RemoteActorRefProvider.init(RemoteActorRefProvider.scala:184)
at akka.actor.ActorSystemImpl.liftedTree2$1(ActorSystem.scala:618)
at akka.actor.ActorSystemImpl._start$lzycompute(ActorSystem.scala:615)
at akka.actor.ActorSystemImpl._start(ActorSystem.scala:615)
at akka.actor.ActorSystemImpl.start(ActorSystem.scala:632)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:141)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:118)
at org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$$doCreateActorSystem(AkkaUtils.scala:121)
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:54)
at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:53)
at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1765)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1756)
at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:56)
at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:33)
at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: ec2-xx-xx-xx.compute-1.amazonaws.com/xx.xx.xx.xx:0
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
Kindly suggest.
Thanks.
It is not possible to bind to port 0. There is an error in your Spark configuration. Specifically, look at
spark.driver.port
(the 'Driver' service in the log above). It is probably set to 0.
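A sketch of the suggested fix, pinning the driver to an explicit free port (the port number is illustrative):
from pyspark.sql import SparkSession
# spark.driver.port must be a free, reachable port on the driver host
spark = SparkSession.builder.appName("EXAMPLE").config("spark.driver.port", "9929").getOrCreate()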