I am running a standalone Spark 3.2.1 locally on my Mac, installed via brew. This is for low-cost (free) unit-testing purposes. I start this instance via the pyspark command from a terminal and am able to access the instance's web UI.
I am also trying to run spark-submit locally (from the same Mac) to run a PySpark script on the instance described above. When specifying --master spark://<host>:7077 I get a "connection refused" error. It does not look like port 7077 is open on my Mac.
How do I open port 7077 on my Mac so that I can reach it locally via spark-submit, but other machines on the same network cannot?
Could someone share clear steps with explanations?
Much appreciated :)
Michael
Check that your Spark master process is running.
The jps output should look like the following:
jps
$PID Master
$PID Worker
If the Spark processes are not running, run the script $SPARK_HOME/sbin/start-master.sh in your shell first,
then $SPARK_HOME/sbin/start-worker.sh.
Then check whether a process is listening on port 7077 with the following command:
sudo lsof -nP -i:7077 | grep LISTEN
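To keep the master reachable from your own machine only, which is what the question asks for, you can bind it to the loopback interface. A minimal sketch, assuming $SPARK_HOME points at the brew install and my_script.py is a hypothetical script name:
# Bind the master to localhost only; other machines on the network cannot connect
$SPARK_HOME/sbin/start-master.sh --host 127.0.0.1
# Attach a worker to that master
$SPARK_HOME/sbin/start-worker.sh spark://127.0.0.1:7077
# Submit against the loopback master URL (script name is hypothetical)
spark-submit --master spark://127.0.0.1:7077 my_script.py
No firewall change is needed for this: loopback-bound ports are not reachable from other hosts.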
I may have a dumb question. I am running Spark on a remote EC2 instance and would like to use the UI it offers. According to the official doc https://spark.apache.org/docs/latest/spark-standalone.html
I need to open http://localhost:8080 in my local browser. But when I do that, my Airflow UI opens. How do I get the Spark UI there instead? Any help is appreciated.
Also, according to this doc https://spark.apache.org/docs/latest/monitoring.html, I tried http://localhost:18080, but it did not work (I did all the settings needed to see the history server).
edit:
I have also tried the command sc.uiWebUrl in Spark, which gives a private DNS name, 'http://ip-***-**-**-***.ap-northeast-1.compute.internal:4040'. But I am not sure how to use it.
I assume you SSH-ed into your EC2 instance using this command:
ssh -i /path/my-key-pair.pem my-instance-user-name@my-instance-public-dns-name
To connect to the Spark UI, you can add a port-forwarding option to ssh:
ssh -L 8080:localhost:8080 -i /path/my-key-pair.pem my-instance-user-name@my-instance-public-dns-name
and then you can just open a browser on your local machine and go to http://localhost:8080.
If you need to forward multiple ports, you can chain the -L arguments, e.g.
ssh -L 8080:localhost:8080 -L 8081:localhost:8081 -i /path/my-key-pair.pem my-instance-user-name@my-instance-public-dns-name
Note: check that the Spark port number is correct; sometimes it's 4040 and sometimes 8080, depending on how you deployed Spark.
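For the application UI that sc.uiWebUrl reported on port 4040, the same pattern applies; a sketch, reusing the key pair and user/host placeholders from above:
ssh -L 4040:localhost:4040 -i /path/my-key-pair.pem my-instance-user-name@my-instance-public-dns-name
Then browse to http://localhost:4040 while an application is running; the 4040 UI only exists for the lifetime of a Spark application.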
Stopping the standalone Spark master fails with the following message:
$ ./sbin/stop-master.sh
no org.apache.spark.deploy.master.Master to stop
Why? There is one Spark Standalone master up and running.
The Spark master was started under a different user, so the PID file
/tmp/Spark-ec2-user-org.apache.spark.deploy.master.Master-1.pid
was not accessible. I had to log in as the user who actually started the standalone cluster manager master.
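A quick way to confirm this situation (the PID file name is copied from above; the ec2-user account is an assumption based on it):
ls -l /tmp/Spark-ec2-user-org.apache.spark.deploy.master.Master-1.pid
sudo -u ec2-user $SPARK_HOME/sbin/stop-master.sh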
In my case, I was able to open the master web UI page in a browser, where it clearly said that the Spark master was running on port 7077.
However, when trying to stop it using stop-all.sh, I was also facing no org.apache.spark.deploy.master.Master to stop. So I tried a different method: finding which process is running on port 7077 using the command below:
lsof -i :7077
I got the result java with a PID of 112099.
I used the command below to kill that process:
kill 112099
After this, when I checked the web UI, it had stopped working. The Spark master was successfully killed.
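If you need this often, the lookup and the kill can be combined into one line; a sketch that assumes the only listener on 7077 is the stray master:
kill $(lsof -ti :7077)
The -t flag makes lsof print bare PIDs, which is what kill expects.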
I am running an AWS EMR cluster with Spark (1.3.1) installed via the EMR console dropdown. Spark is up and processing data, but I am trying to find out which port has been assigned to the web UI. I've tried port-forwarding both 4040 and 8080 with no connection. I'm forwarding like so:
ssh -i ~/KEY.pem -L 8080:localhost:8080 hadoop@EMR_DNS
1) How do I find out which port the Spark web UI has been assigned?
2) How do I verify the Spark web UI is running?
Spark on EMR is configured for YARN, so the Spark UI is reachable through the application URL provided by the YARN ResourceManager (http://spark.apache.org/docs/latest/monitoring.html). The easiest way to get to it is to set up your browser with SOCKS using a port opened by SSH, then from the EMR console open the ResourceManager and click the Application Master URL shown to the right of the running application. The Spark History Server is available at the default port 18080.
An example of SOCKS with EMR is at http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-web-interfaces.html
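The SSH half of that setup is a single dynamic-forwarding command; a sketch, reusing the key and DNS name from the question, with 8157 as an arbitrary local SOCKS port:
ssh -i ~/KEY.pem -N -D 8157 hadoop@EMR_DNS
Then point your browser's SOCKS v5 proxy at localhost:8157.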
Here is an alternative if you don't want to deal with the browser SOCKS setup suggested in the EMR docs.
Open an SSH tunnel to the master node with port forwarding to the machine running the Spark UI:
ssh -i path/to/aws.pem -L 4040:SPARK_UI_NODE_URL:4040 hadoop@MASTER_URL
MASTER_URL (EMR_DNS in the question) is the URL of the master node, which you can get from the EMR Management Console page for the cluster.
SPARK_UI_NODE_URL can be seen near the top of the stderr log. The log line will look something like:
16/04/28 21:24:46 INFO SparkUI: Started SparkUI at http://10.2.5.197:4040
Point your browser to localhost:4040
Tried this on EMR 4.6 running Spark 1.6.1.
Glad to announce that this feature is finally available on AWS. You won't need to run any special commands (or configure an SSH tunnel):
By clicking the link to the Spark History Server UI, you'll be able to see old application logs or access the running Spark job's UI.
For more details: https://docs.aws.amazon.com/emr/latest/ManagementGuide/app-history-spark-UI.html
I hope it helps!
Just run the following command:
ssh -i /your-path/aws.pem -N -L 20888:ip-172-31-42-70.your-region.compute.internal:20888 hadoop@ec2-xxx.compute.amazonaws.com.cn
There are 3 places you need to change:
your .pem file
your internal master node IP
your public DNS domain.
Finally, on the YARN UI you can click your Spark application's Tracking URL, then just replace the host in the URL:
"http://your-internal-ip:20888/proxy/application_1558059200084_0002/"
->
"http://localhost:20888/proxy/application_1558059200084_0002/"
This worked for EMR 5.x.
Simply use an SSH tunnel to the YARN ResourceManager port (8088).
On your local machine, run:
ssh -i /path/to/pem -L 3000:ec2-xxxxcompute-1.amazonaws.com:8088 hadoop@ec2-xxxxcompute-1.amazonaws.com
Then in your local browser, hit:
localhost:3000
I am trying to set up a Cassandra cluster with 5 nodes. I have installed DSE on all nodes and started DSE on all the nodes with the command:
sudo service dse start
DSE is running fine on all nodes.
Now I am trying to configure OpsCenter following http://www.datastax.com/documentation/opscenter/3.2/webhelp/index.html#opsc/install/../../opsc/install/opscInstallRHEL_t.html
When I execute "sudo service opscenterd start", it starts without any problem, and even the log doesn't show any problem.
But when I try "netstat -a | grep 8888", it doesn't show any listener.
Can anybody please help me identify the issue?
Thanks,
Jenish
I would first figure out whether the service is indeed starting. When you say you checked the log, was that /var/log/messages or the OpsCenter logs? I would check both.
Next I would see if it stays running. You can also check for the running process with:
ps -eaf | grep opscenterd
If everything is running but not listening on the right port, you should check your opscenterd.conf file for the proper port and interface:
[webserver]
port = 8888
interface = 127.0.0.1
Note that your interface definition may be different; for example, it may be 0.0.0.0, which signifies binding to all interfaces (rather than just localhost as above), but you should validate that it is correct for your environment.
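After any opscenterd.conf change, restart the service and re-check the listener; a sketch, using the default port from above:
sudo service opscenterd restart
netstat -lnt | grep 8888
The -l flag limits netstat to listening sockets, so any output here means the web server is actually bound.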
I am facing a problem while configuring MPJ Express in cluster mode.
I am following the guide given at
http://mpj-express.org/docs/guides/linuxguide.pdf
The environment variables are successfully initialized as follows:
1)
Set the MPJ_HOME and PATH variables:
a. export MPJ_HOME=/path/to/mpj/
b. export PATH=$PATH:$MPJ_HOME/bin
2)
The machines are also added successfully with a machines file:
mpjboot machines
It shows the message:
starting mpjd...
3)
The next step is to test the HelloWorld program.
Compile:
javac -cp .:$MPJ_HOME/lib/mpj.jar HelloWorld.java
Execute: mpjrun.sh -np 2 -dev niodev HelloWorld
When I do that, I get this error:
runtime.MPJRuntimeException: Cannot connect to the daemon at machine and port <10000>
The platform I am using is
Sun Ultra 25 workstations with Solaris 10 OS.
I have made a Fast Ethernet cluster of 2 machines.
SSH is enabled with the root user on each machine.
The network is established well.
Any help or solution is appreciated.
Thank you
Examine the cluster nodes' environment variables.
Try adding the vars to .bashrc on each cluster node (ssh to it from the main node):
echo 'export MPJ_HOME=/home/<user>/path/to/mpj' >> ~/.bashrc
echo 'export PATH=$PATH:$MPJ_HOME/bin' >> ~/.bashrc
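To verify that the variables are picked up by the non-interactive shells the MPJ launcher opens, you can check from the main node; node1 here is a hypothetical hostname, and this assumes the remote shell is bash (which sources .bashrc for non-interactive SSH commands):
ssh node1 'echo $MPJ_HOME'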
Alternatively, turn on logging in $MPJ_HOME/conf/wrapper.conf, run again, and report your findings.
This is the first time I have given an answer on Stack Overflow.
I set up MPJ Express v0_42 on my CentOS 6.3 machine with jdk1.6.0_32, and finally everything seems OK.
Question 1: cannot connect to the daemon at machine and port
This may happen because the daemon is not running. You can check as follows:
You can use sudo netstat -anp | grep <port> to check whether the port is listening and also get the process ID.
On my machine, after I started the daemon using mpjdaemon -boot localhost, I used the sudo netstat -anp | grep 4000 command to check the port and got the following:
tcp 0 0 :::40000 :::* LISTEN 8766/java
tcp 0 0 :::40001 :::* LISTEN 8766/java
The port is configured in $MPJ_HOME/conf/wrapper.conf and my configuration is as follows:
#port number for the daemon.
wrapper.app.parameter.2=40001
#Socket Server Port Number.
wrapper.app.parameter.3=40000
I also used the jps -m command to get the Java daemon process; the result is as follows:
8766 MPJDaemon 40001
30850 Jps -m
Here 8766 is the process ID that netstat showed, MPJDaemon is the daemon process, and 40001 is the listening port.
In your question, you cannot connect to the daemon process. Do the following:
Check the port. If you cannot find the port using netstat, it generally means the MPJDaemon is not running.
To be sure, you can also use jps to check the process. Maybe the process is running but the port is another one.
Do not start with the cluster commands like mpjboot machines or mpjrun.sh -np 2 -dev niodev HelloWorld. Use mpjdaemon -boot localhost to test the current machine first; if it is configured OK, you can set the machines file to a single entry of localhost and run the cluster commands, as in the smoke test below.
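Put together, a minimal single-machine smoke test looks like this (a sketch; HelloWorld.java is the example program from the MPJ Express guide):
echo localhost > machines
mpjdaemon -boot localhost
javac -cp .:$MPJ_HOME/lib/mpj.jar HelloWorld.java
mpjrun.sh -np 2 -dev niodev HelloWorld
If this works but the multi-node run does not, the problem is the remote daemons or their environment, not your program.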
Other questions that I met.
Question 2: compile failed using ant
At first I used MPJ-v0_44.zip and jdk1.6.0, but it would not compile, giving an error that ProcessBuilder has no method inheritIO (or something like that). I analyzed the source code and guessed the JDK version might be a little too low; inheritIO was added in Java 7. Given that upgrading the JDK is a little complicated, I found another MPJ version, mpj-v0_42.zip, and it compiled OK.
Question 3: mpjdaemon -boot localhost gives no error message, but MPJDaemon is not running.
After I moved the environment variables like MPJ_HOME from .bash_profile to .bashrc, the problem was resolved. I don't know why; it is likely because non-interactive SSH shells source .bashrc but not .bash_profile.
Question 4: the command jps -m says MPJDaemon is running, but mpjdaemon -status localhost says it is not running.
I used the command ssh localhost nohup 'jps -m' and it said jps: command not found, but when I ran jps directly it worked fine. I guess it is the same issue as above: jps's path is not in the PATH environment variable set in .bashrc. So after adding a line PATH=/jpspath:$PATH in .bashrc, everything works fine.
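Since jps ships with the JDK, the usual form of that fix is to put the JDK's bin directory on the PATH in .bashrc; a sketch, where the install path matches the jdk1.6.0_32 mentioned above but may differ on your machine:
export JAVA_HOME=/usr/java/jdk1.6.0_32
export PATH=$JAVA_HOME/bin:$PATH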