I'm trying to start Spark Thrift Server using
D:\spark\spark-2.3.2-bin-hadoop2.7\bin>spark-class org.apache.spark.deploy.SparkSubmit --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 spark-internal
in cmd.
However, after it reaches the line below, cmd hangs forever. Does anyone know the reason? Thanks for any advice.
INFO ThriftCLIService:98 - Starting ThriftBinaryCLIService on port 10000 with 5...500 worker threads
The reason is simple - the server has started and is waiting for connections on port 10000. Try beeline or another JDBC client and connect to jdbc:hive2://localhost:10000 (in a different terminal window/tab).
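As a sanity check, beeline (shipped in the same Spark bin directory) can be pointed at the running server from a second window; the user name here is just a placeholder:

D:\spark\spark-2.3.2-bin-hadoop2.7\bin>beeline -u jdbc:hive2://localhost:10000 -n anonymous
0: jdbc:hive2://localhost:10000> show databases;

If the beeline prompt appears and the query returns, the Thrift Server is working; the original window stays "hung" simply because it is the server process itself.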
Related
I'm trying to set up a remote Spark 2.4.5 cluster on Ubuntu 18. After I start ./sbin/start-master.sh, the WebUI is available at <INSTANCE-IP>:8080, but it shows "Spark Master at spark://spark-master:7077", where spark-master is the hostname of the remote machine.
I'm only able to start a worker with ./sbin/start-slave.sh spark://spark-master:7077, but <INSTANCE-IP>:4040 doesn't work. When I try ./sbin/start-slave.sh spark://<INSTANCE-IP>:7077, I can see the process but the worker is not visible in the WebUI.
As a result, I cannot connect to the cluster from my local machine with spark-shell --master spark://<INSTANCE-IP>:7077. The error is:
StandaloneAppClient$ClientEndpoint: Failed to connect to master <INSTANCE-IP>:7077
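For reference, a common way to make the standalone master bind to and advertise an explicit address is SPARK_MASTER_HOST in conf/spark-env.sh; this is only a sketch under the assumption that <INSTANCE-IP> is the address reachable from both the worker and the local machine, and that the relevant ports are open:

# conf/spark-env.sh on the remote machine
export SPARK_MASTER_HOST=<INSTANCE-IP>
export SPARK_MASTER_PORT=7077

./sbin/start-master.sh
./sbin/start-slave.sh spark://<INSTANCE-IP>:7077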
I created a Spark cluster based on this link.
Everything went smoothly, but the problem is that after the cluster was created, I'm trying to use pyspark to connect remotely, from another machine, to the container inside the host.
I'm receiving 18/04/04 17:14:48 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master xxxx.xxxx:7077 even though I can connect through telnet to port 7077 on that host!
What might I be missing?
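For context, this is roughly how the remote connection is attempted from the other machine (a sketch; xxxx.xxxx is the placeholder host from the warning above):

telnet xxxx.xxxx 7077                      # succeeds, so the port is reachable
pyspark --master spark://xxxx.xxxx:7077    # fails with the warning above

In containerized setups the master typically has to advertise a host name or IP that resolves from outside the container, not just have the port mapped.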
We're having a hard time running a Python Spark job on EMR.
aws emr add-steps --cluster-id j-XXXXXXXX --steps \
Type=CUSTOM_JAR,Name="Spark Program",\
Jar="command-runner.jar",ActionOnFailure=CONTINUE,\
Args=["spark-submit",--deploy-mode,cluster,--master,yarn,s3://XXXXXXX/pi.py,2]
We're running the same pyspark compute-pi script that the AWS page suggests.
This script runs, but it runs forever calculating pi, whereas on a local machine it takes seconds to finish. We've tried client mode as well; in client mode it makes us transfer the files locally.
16/09/20 15:20:32 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1474384831795
final status: UNDEFINED
tracking URL: http://XXXXXXX.ec2.internal:20888/proxy/application_1474381572045_0002/
user: hadoop
16/09/20 15:20:33 INFO Client: Application report for application_1474381572045_0002 (state: ACCEPTED)
It repeats this last message over and over...
Does anyone know how to run the example python spark pi script on EMR without it running forever?
When you see the job stuck in the ACCEPTED state forever, it means that it is not actually running but is waiting for YARN to have enough resources to run the application. Usually this is because some other YARN application is already running and taking up resources. The easiest way to find out if this is the case is to look at the YARN ResourceManager UI on port 8088 of the master node. You can also run the command "yarn application -list" if you have ssh'ed to the master node.
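For example (a sketch; the application ID is a placeholder), from the master node you can list the applications YARN knows about and free resources by killing the one blocking yours:

yarn application -list
yarn application -kill <applicationId>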
I am using a Hortonworks cluster (2-node cluster) to run Spark and Flume. When I run the job with --master "local[*]", Flume is able to send the events and Spark is able to receive them, and checking localhost:4040 I can see the events being received from Flume. (We are pumping 100 events/sec from Flume using the flume-ng-sql source, with an approximate size of ~1KB each.)
Whereas when I run the same example with --master "yarn-client", I get the below error in Flume and Spark does not receive any events either.
2015-08-13 18:24:24,927 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)] Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: Failed to send events
at org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:403)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.flume.FlumeException: NettyAvroRpcClient { host: localhost, port: 55555 }: RPC connection error
at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:182)
at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:121)
at org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:638)
at org.apache.flume.api.RpcClientFactory.getInstance(RpcClientFactory.java:88)
at org.apache.flume.sink.AvroSink.initializeRpcClient(AvroSink.java:127)
at org.apache.flume.sink.AbstractRpcSink.createConnection(AbstractRpcSink.java:222)
at org.apache.flume.sink.AbstractRpcSink.verifyConnection(AbstractRpcSink.java:283)
at org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:360)
... 3 more
Caused by: java.io.IOException: Error connecting to localhost/127.0.0.1:55555
at org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:261)
at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:203)
at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:152)
at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:168)
... 10 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:496)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:452)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:365)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more
The following has also been observed on the cluster:
-- Memory consumption with YARN is considerably higher than in local mode.
-- When I pump 100 events per 30 seconds instead, Flume and Spark are able to connect and process them with yarn-client as well as in local mode.
Below are the commands I am using for Flume and Spark.
Flume:
sudo -u hdfs flume-ng agent --conf conf/ -f conf/flume_mysql_spark.conf -n agent1 -Dflume.root.logger=INFO,console > flumelog.txt
Spark:
sudo -u hdfs spark-submit --master "yarn-client" --class "org.paladion.atm.FlumeEventCount" target/atm-1.1-jar-with-dependencies.jar > sparklog.txt
sudo -u hdfs spark-submit --master "local[*]" --class "org.paladion.atm.FlumeEventCount" target/atm-1.1-jar-with-dependencies.jar > sparklog.txt
Kindly let me know what could be wrong here?
It got solved as below:
1 - If running in local mode, give the IP of the local machine in Flume as well as in Spark.
2 - If running on the cluster (yarn-client or yarn-cluster), give the IP of the cluster machine where you want the events to be sent (other than the one where you are executing the program, so for example a node that is not the master node) in Flume as well as in Spark.
Let me know if I am wrong and this could have worked for some other reason, or if there is a better solution for this.
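As an illustration only (the sink name and the receiver address are placeholders, not taken from the thread): the Avro sink in the Flume configuration and the Spark Flume receiver have to agree on a host and port that is reachable from where the executors actually run, which is why localhost can work in local mode but not on YARN.

# conf/flume_mysql_spark.conf (sketch)
agent1.sinks.avroSink.type = avro
agent1.sinks.avroSink.hostname = <node-ip-reachable-from-yarn>
agent1.sinks.avroSink.port = 55555

The Spark receiver side must then be created with the same host and port.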
I am using hadoop-2.7.1, hbase-1.0.1.1, and zookeeper-3.4.6 on my Linux server to compare HBase performance. My Hadoop, HBase, and ZooKeeper are working fine, with the processes below:
19639 DataNode
19893 SecondaryNameNode
20116 ResourceManager
20530 QuorumPeerMain
20287 NodeManager
23767 Client
20838 HMaster
21015 HRegionServer
24620 Jps
19446 NameNode
In addition, YCSB is also working fine; I have checked it with the BasicDB command './bin/ycsb load basic -P workloads/workloada'. However, when I try to run it for HBase with the simplest command './bin/ycsb load hbase -P workloads/workloada -p columnfamily=family', it does not respond at all. I don't know why I'm having this problem. Could you please help me out with this problem? Thanks in advance...
The problem has been solved. conf/hbase-site.xml had a problem: it wasn't getting the right ZooKeeper client port. Using the default 2181 is much better.
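For reference, a minimal sketch of the relevant property in conf/hbase-site.xml, using the default client port mentioned above:

<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>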