When running spark-submit, you can see the println statements right in the shell. When submitting a Spark job to the spark-jobserver, I can't find where the stdout messages go. Does anyone know?
I found it in
/mnt/var/log/spark-jobserver/spark-job-server.log
I'm working on a Spark Streaming job that runs in standalone mode. By default the executors append their logs to the $SPARK_HOME/work/app_idxxxx/stderr and stdout files. The problem is that when the app runs for a long time, say a month or more, it generates a lot of logs in the stderr file. I would like to roll the stderr over daily, keep a week's worth, and delete anything older. I changed log4j.properties to use org.apache.log4j.RollingFileAppender and directed the logs to a file instead of stderr, but the file doesn't respect the rolling and keeps growing.
Creating a cron job to do the rotation doesn't work either, since Spark keeps a handle on that specific file, so renaming it probably won't work.
I couldn't find any documentation for these specific logs. Any help is appreciated.
After digging some more, I finally found how to resolve the issue, and I'm posting it here so the next person doesn't have to go through all this suffering and trial and error.
The settings for those logs live in two different places. The first is $SPARK_HOME/conf/spark-defaults.conf; add these three lines on each worker node:
spark.executor.logs.rolling.time.interval daily
spark.executor.logs.rolling.strategy time
spark.executor.logs.rolling.maxRetainedFiles 7
The other file you need to change, again on each worker node, is $SPARK_HOME/conf/spark-env.sh; add the following lines:
SPARK_WORKER_OPTS="$SPARK_WORKER_OPTS \
  -Dspark.worker.cleanup.enabled=true \
  -Dspark.worker.cleanup.interval=1800 \
  -Dspark.worker.cleanup.appDataTtl=864000 \
  -Dspark.executor.logs.rolling.strategy=time \
  -Dspark.executor.logs.rolling.time.interval=daily \
  -Dspark.executor.logs.rolling.maxRetainedFiles=7"
export SPARK_WORKER_OPTS
After these changes it started working properly. Hope this helps some people :)
If you are in standalone mode, just exporting an environment variable is enough:
export SPARK_WORKER_OPTS="-Dspark.executor.logs.rolling.strategy=time -Dspark.executor.logs.rolling.time.interval=daily -Dspark.executor.logs.rolling.maxRetainedFiles=7"
You can also refer to: http://apache-spark-user-list.1001560.n3.nabble.com/Executor-Log-Rotation-Is-Not-Working-td18024.html
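If daily rolling isn't what you want, the same mechanism also supports a size-based strategy; a sketch of the equivalent properties (the maxSize value below is just an example, in bytes):
spark.executor.logs.rolling.strategy size
spark.executor.logs.rolling.maxSize 134217728
spark.executor.logs.rolling.maxRetainedFiles 7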
I need to print some messages in Spark YARN mode. Obviously println(message) doesn't work... I want to find a way to collect the messages. Can someone point me to the current method, for example using collect?
How to use collect?
Does the below code work?
val c = message.collect()
println(c)
You can achieve this using the below command:
message.foreach(println)
You will find the output of the above call in the executor logs.
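A minimal sketch of both approaches, assuming an existing SparkContext `sc`; `message` here is just a placeholder RDD:
// Placeholder data so the sketch is self-contained.
val message = sc.parallelize(Seq("first line", "second line"))

// Runs on the executors: each element goes to that executor's stdout, so on YARN
// you will find it in the executor logs (e.g. `yarn logs -applicationId <appId>`).
message.foreach(println)

// Brings the data back to the driver first and prints there: the output appears in
// your console in client mode, or in the driver's log in cluster mode.
message.collect().foreach(println)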
I am trying to start a QProcess with
QProcess process;
process.start("javac file.java");
It starts successfully and I can see the output in Qt Creator's log window, but when I try to read it from the program using process.readAll(), nothing is read. However, when I try something like
process.start("echo Print this message");
then process.readAll() returns "Print this message".
Can anybody explain why this happens and how I can get it to work? I am trying to make a simple IDE with it.
You're reading from the process's standard output channel, but your process outputs on the standard error channel. You need to read both. You also have the option of merging them. See the QProcess documentation; read it and make sure you understand it. Edit your question to ask for clarification if anything is unclear.
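For example, a minimal sketch (Qt 5, inside your existing application, with <QProcess> and <QDebug> included; assumes javac is on your PATH) that merges the channels so readAll() sees the compiler's diagnostics:
QProcess process;
process.setProcessChannelMode(QProcess::MergedChannels); // fold stderr into stdout
process.start("javac file.java");
process.waitForFinished();
qDebug() << process.readAll(); // now includes javac's error output
// Alternatively, leave the channels separate and call process.readAllStandardError().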
I'm using the Groovy Grails Tool Suite to practice Groovy. I want to run a Groovy Shell, but when I create a new shell and try to run it, I get this error:
Could not find $jarName on the class path. Please add it manually
What does this mean, and how do I resolve this?
I believe this is happening because JLine can't be found on your classpath. I submitted a PR to make the error message in this case actually useful.
I had a similar problem with this exact same message, but the cause was that I was attempting to run a script without specifying which script to run. Ensure you have the script open in the editing window and try running it again; that got rid of the message for me.
When using ActiveMQ 5.6 and starting the broker with ./activemq start, where do stdout/stderr go?
I expected it to show up in the /data/activemq.log file, but it doesn't. Is there a way around this, perhaps with a tweak to the log4j or JavaServiceWrapper config?
When I start in console mode using ./activemq console, the stdout/stderr messages are displayed as expected. In particular, I need to get output from e.printStackTrace() to show up in the logs when running in this mode.
It seems to just get redirected to /dev/null. I changed the /bin/activemq script to redirect to ../data/start.log instead, and sure enough, stdout/stderr are there. Not sure why this isn't the default behavior, to be honest.
If I remember correctly, there is another file called wrapper.log. Look for it in the same directory as wrapper.conf.