Running spark application doesn't show up on spark history server - apache-spark

I am creating a long-running Spark application. After the Spark session has been created and the application starts to run, I am not able to see it after clicking on "show incomplete applications" on the Spark history server. However, if I force my application to close, I can see it under the "completed applications" page.
I have the Spark parameters configured correctly on both client and server, as follows:
spark.eventLog.enabled=true
spark.eventLog.dir=hdfs://10.18.51.117:8020/history/ (an HDFS path on my Spark history server)
I also configured the same settings on the server side, so configuration shouldn't be a concern (since completed applications do show up after I force my application to stop).
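For reference, a minimal sketch of what this setup looks like on both sides, using the same HDFS directory as above (the history-server line is the standard counterpart property, shown here as an assumption about the server-side config):
spark.eventLog.enabled=true
spark.eventLog.dir=hdfs://10.18.51.117:8020/history/
spark.history.fs.logDirectory=hdfs://10.18.51.117:8020/history/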
Do you guys have any thoughts on this behavior??
When I look at the HDFS files on the Spark history server, I see a very small .inprogress file associated with my running application (close to empty, see the picture below). It seems that the results only get flushed to the file when the application stops, which is not ideal for my long-running application... Is there any way or parameter we can tweak to force flushing of the log?
[Image: very small .inprogress file shown on HDFS while the application is running]

Related

Spark history-server stays empty

I set up Spark alongside Hadoop with YARN as the resource manager. I set both spark.history.fs.logDirectory and spark.eventLog.dir to the same path in my HDFS file system. Also, spark.eventLog.enabled is set to true, and I checked the history server's logs, but there are no errors (only INFO). So I assume my problem isn't caused by permission errors. Also, I verified that application logs are actually created in the correct place, which is indeed the case. The history server's logs also indicate that it is looking in the correct folder.
I don't have any idea why there are no application logs shown in the history server. Maybe I'm missing something fundamental.
Here are all important files (if that helps)
Logs: https://pastebin.com/6TGE3NbQ
spark-defaults.conf: https://pastebin.com/ZRv4JWbV
ansible-playbook.yml: https://pastebin.com/dVqsGENk (Important lines: 166 - 192 and 370)
The ansible-playbook is used to set up the whole cluster.
Edit: The history server is even parsing files (see Logs) but it just refuses to display them.
To have a proper History server setup, you need 3 things (also documented here):
Your Spark applications need to write their logs to a certain directory. This can be done with the following configurations:
spark.eventLog.enabled true
spark.eventLog.dir someDirectory
Your History Server needs to be running. This can be done like so:
$SPARK_HOME/sbin/start-history-server.sh
Your History Server needs to be looking at the correct directory to look at the logs. This can be configured like so:
spark.history.fs.logDirectory sameDirectoryAsTheOneAbove
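Putting these three requirements together, a minimal sketch (the directory is just a placeholder for a shared location that both the applications and the History Server can reach):
spark.eventLog.enabled true
spark.eventLog.dir hdfs://namenode/shared/spark-logs
spark.history.fs.logDirectory hdfs://namenode/shared/spark-logs
$SPARK_HOME/sbin/start-history-server.sh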
So in your case, it seems like something is going wrong. There are a few things you can verify (a command sketch follows this list):
Are your Spark applications correctly writing event logs?
Go to spark.eventLog.dir and check whether there are entries in there. You should have an entry per Spark application that you ran.
Is my History Server running?
There are multiple ways to check this.
Type jps on the machine on which you're running the History Server. You should see a Java Process called HistoryServer running.
Visit port 18080 on that machine (if local, go to localhost:18080) to see if it's running
If your applications are writing to the correct location, and your history server is running but you still don't see any application entry on port 18080 from the point above, your history server might not be reading from the correct directory. Verify the value of spark.history.fs.logDirectory.
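A rough sketch of those checks from a shell, assuming the placeholder log directory from above and a History Server running on the local machine:
# One entry should exist per Spark application that was run
hdfs dfs -ls hdfs://namenode/shared/spark-logs
# The History Server should show up as a Java process called HistoryServer
jps | grep HistoryServer
# The History Server's REST API on port 18080 lists the applications it can see
curl -s http://localhost:18080/api/v1/applications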

How to send Spark driver logs to history server? (Standalone Mode)

Many of our Spark applications running in cluster mode have failures due to errors from the driver. I set up a Spark history server which reads from spark.eventLog.dir, and everything works fine: the history server is able to reconstruct application UIs, but no driver logs can be found.
Also, I checked the list of applications under spark.eventLog.dir, but everything starts with app-****; I could not find any driver-****** entries.
How can I configure driver logs to be sent to spark.eventLog.dir?

Apache Spark History Server Logs

My Apache Spark application handles giant RDDs and generates event logs that I view through the History Server.
How can I export these logs and import them to another computer to view them through History Server UI?
My cluster uses Windows 10, and for some reason with this OS the log files don't load if they weren't generated on the machine itself. Using another OS like Ubuntu, I was able to view the History Server's logs in the browser.
While running applications, Spark writes events to spark.eventLog.dir (for example, on HDFS: hdfs://namenode/shared/spark-logs), as configured in spark-defaults.conf.
These are then read by the Spark history server based on the spark.history.fs.logDirectory setting.
Both of these log directories need to point to the same location, and the Spark history server process needs permission to read those files.
So there will be JSON event log files in that directory, one per application, which you can access using the appropriate filesystem commands.
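One way to move them to another computer, sketched under the assumption that the logs live at the example HDFS path above and that the target machine has Spark installed (all paths here are examples):
# On the source cluster: copy the event logs out of HDFS
hdfs dfs -get hdfs://namenode/shared/spark-logs /tmp/spark-logs
# On the target machine: point the History Server at the copied directory and start it
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=file:///tmp/spark-logs"
$SPARK_HOME/sbin/start-history-server.sh
# Then open http://localhost:18080 in a browser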

Spark 2.1.0 reverse proxy does not work properly

I'm trying to proxy individual Spark applications. That means I need to get a single UI per Spark application. To achieve that, I use the Spark reverse proxy feature. So, if I have my Spark master UI running at http://localhost:8080, when I click on an application name from this Spark UI, I'm redirected to http://localhost:8080/proxy/{application-id}/jobs/, where application-id is the application id of the Spark application I'm trying to access. Everything looks good: I get the Spark job UI for this particular application and some other tabs displayed. But when I click on another tab, for instance "Environment", I'm redirected to http://localhost:8080/environment instead of http://localhost:8080/proxy/{application-id}/environment/
This is the single line I add in my spark-defaults.conf file
spark.ui.reverseProxy=true
I use Spark 2.1.0 in standalone mode and deploy some sample applications to reproduce the issue. Any clue? How can I make this proxy work without this issue? Thanks.
I had this problem.
Make sure that you correctly supply spark.ui.reverseProxy and spark.ui.reverseProxyUrl properties to all masters, workers and drivers.
In my case I used spark-submit (cluster mode) from remote machine and forgot to update local spark-defaults.conf on the machine I was submitting from.
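For reference, a minimal sketch of those two properties in spark-defaults.conf on every master, worker, and submitting machine (the URL is only an example, here the master UI address from the question; use your externally visible proxy address):
spark.ui.reverseProxy true
spark.ui.reverseProxyUrl http://localhost:8080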

Spark history for Standalone Cluster mode

I have seen the text below on the Spark website. I am trying to view Spark logs in the UI even after the application has ended or been killed.
Is there any way that I could view the logs in standalone mode?
If Spark is run on Mesos or YARN, it is still possible to construct the UI of an application through Spark's history server, provided that the application's event logs exist. You can start the history server by executing:
./sbin/start-history-server.sh
This creates a web interface at http://<server-url>:18080 by default, listing incomplete and completed applications and attempts.
When using the file-system provider class (see spark.history.provider below), the base logging directory must be supplied in the spark.history.fs.logDirectory configuration option, and should contain sub-directories that each represents an application’s event logs.
The spark jobs themselves must be configured to log events, and to log them to the same shared, writeable directory. For example, if the server was configured with a log directory of hdfs://namenode/shared/spark-logs, then the client-side options would be:
spark.eventLog.enabled true
spark.eventLog.dir hdfs://namenode/shared/spark-logs
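Concretely, combining those client-side options with the history-server setting mentioned above gives a minimal sketch like this (same example directory on both sides):
spark.eventLog.enabled true
spark.eventLog.dir hdfs://namenode/shared/spark-logs
spark.history.fs.logDirectory hdfs://namenode/shared/spark-logs
Then start the server with ./sbin/start-history-server.sh and browse http://<server-url>:18080.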
