I am using spark-jobserver-0.6.2-spark-1.6.1
(1) export OBSERVER_CONFIG=/custom-spark-jobserver-config.yml
(2) ./server_start.sh
Execution of the above start script returns without error. However, it creates a pid file: spark-jobserver.pid
When I cat spark-jobserver.pid, the pid file shows pid=126633
However, when I ran
lsof -i :9999 | grep LISTEN
It shows
java 126634 spark 17u IPv4 189013362 0t0 TCP *:distinct (LISTEN)
I deployed my Scala application to the job server as shown below, and it returned OK:
curl --data-binary @analytics_2.10-1.0.jar myhost:8090/jars/myservice
OK
When I ran the following curl command to test the REST service deployed on the job server:
curl -d "{data.label.index:15, data.label.field:ROOT_CAUSE, input.string:[\"tt: Getting operation. \"]}" 'myhost:8090/jobs?appName=myservice&classPath=com.test.Test&sync=true&timeout=400'
I got the following out-of-memory response:
{
  "status": "ERROR",
  "result": {
    "errorClass": "java.lang.RuntimeException",
    "cause": "unable to create new native thread",
    "stack": [
      "java.lang.Thread.start0(Native Method)",
      "java.lang.Thread.start(Thread.java:714)",
      "org.spark-project.jetty.util.thread.QueuedThreadPool.startThread(QueuedThreadPool.java:441)",
      "org.spark-project.jetty.util.thread.QueuedThreadPool.doStart(QueuedThreadPool.java:108)",
      "org.spark-project.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)",
      "org.spark-project.jetty.util.component.AggregateLifeCycle.doStart(AggregateLifeCycle.java:81)",
      "org.spark-project.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:58)",
      "org.spark-project.jetty.server.handler.HandlerWrapper.doStart(HandlerWrapper.java:96)",
      "org.spark-project.jetty.server.Server.doStart(Server.java:282)",
      "org.spark-project.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)",
      "org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$connect$1(JettyUtils.scala:252)",
      "org.apache.spark.ui.JettyUtils$$anonfun$5.apply(JettyUtils.scala:262)",
      "org.apache.spark.ui.JettyUtils$$anonfun$5.apply(JettyUtils.scala:262)",
      "org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1988)",
      "scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)",
      "org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1979)",
      "org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:262)",
      "org.apache.spark.ui.WebUI.bind(WebUI.scala:137)",
      "org.apache.spark.SparkContext$$anonfun$13.apply(SparkContext.scala:481)",
      "org.apache.spark.SparkContext$$anonfun$13.apply(SparkContext.scala:481)",
      "scala.Option.foreach(Option.scala:236)",
      "org.apache.spark.SparkContext.<init>(SparkContext.scala:481)",
      "spark.jobserver.context.DefaultSparkContextFactory$$anon$1.<init>(SparkContextFactory.scala:53)",
      "spark.jobserver.context.DefaultSparkContextFactory.makeContext(SparkContextFactory.scala:53)",
      "spark.jobserver.context.DefaultSparkContextFactory.makeContext(SparkContextFactory.scala:48)",
      "spark.jobserver.context.SparkContextFactory$class.makeContext(SparkContextFactory.scala:37)",
      "spark.jobserver.context.DefaultSparkContextFactory.makeContext(SparkContextFactory.scala:48)",
      "spark.jobserver.JobManagerActor.createContextFromConfig(JobManagerActor.scala:378)",
      "spark.jobserver.JobManagerActor$$anonfun$wrappedReceive$1.applyOrElse(JobManagerActor.scala:122)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)",
      "ooyala.common.akka.ActorStack$$anonfun$receive$1.applyOrElse(ActorStack.scala:33)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)",
      "ooyala.common.akka.Slf4jLogging$$anonfun$receive$1$$anonfun$applyOrElse$1.apply$mcV$sp(Slf4jLogging.scala:26)",
      "ooyala.common.akka.Slf4jLogging$class.ooyala$common$akka$Slf4jLogging$$withAkkaSourceLogging(Slf4jLogging.scala:35)",
      "ooyala.common.akka.Slf4jLogging$$anonfun$receive$1.applyOrElse(Slf4jLogging.scala:25)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)",
      "scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)",
      "ooyala.common.akka.ActorMetrics$$anonfun$receive$1.applyOrElse(ActorMetrics.scala:24)",
      "akka.actor.Actor$class.aroundReceive(Actor.scala:467)",
      "ooyala.common.akka.InstrumentedActor.aroundReceive(InstrumentedActor.scala:8)",
      "akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)",
      "akka.actor.ActorCell.invoke(ActorCell.scala:487)",
      "akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)",
      "akka.dispatch.Mailbox.run(Mailbox.scala:220)",
      "akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)",
      "scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)",
      "scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)",
      "scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)",
      "scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)"
    ],
    "causingClass": "java.lang.OutOfMemoryError",
    "message": "java.lang.OutOfMemoryError: unable to create new native thread"
  }
}
My questions:
(1) Why is the process ID different from the one shown in the pid file (126633 vs 126634)?
(2) Why is spark-jobserver.pid created? Does this mean the spark job server is not started properly?
(3) How do I start the job server properly?
(4) What causes the out-of-memory response? Is it because I did not set the heap size or memory correctly, and how do I resolve it?
The job server binds to port 8090, not 9999; maybe you should look for that process ID instead.
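For example, you can reuse the same lsof check as above against the job server's own port (a sketch; 8090 is the default spark-jobserver port, so adjust it if your config overrides it):
lsof -i :8090 | grep LISTEN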
The spark-jobserver.pid file is created for tracking purposes. It does not mean that the job server is not started properly.
You are starting spark-jobserver properly.
Maybe try increasing the value of JOBSERVER_MEMORY; the default is 1G. Did you check in the Spark UI whether the application started properly?
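A minimal sketch of bumping that value before starting the server, assuming your server_start.sh picks up JOBSERVER_MEMORY from the environment (the 4G figure is only an illustrative guess, not a recommendation):
export JOBSERVER_MEMORY=4G
./server_start.sh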
Android Studio version: 1.2.2
It shows the following error:
Error: Unable to start the daemon process.
This problem might be caused by incorrect configuration of the daemon.
For example, an unrecognized jvm option is used.
Please refer to the user guide chapter on the daemon at http://gradle.org/docs/2.2.1/userguide/gradle_daemon.html
Please read the following process output to find out more:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Please help me to fix this issue.
Can anyone help, please?
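For what it's worth, the daemon's JVM options normally come from gradle.properties, so one thing to check is the heap request configured there. A minimal sketch (the -Xmx value below is only an illustrative assumption; pick a size your machine can actually reserve):
# gradle.properties
org.gradle.jvmargs=-Xmx1024m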
While trying to start my hybris server in debug mode, I got the following error messages and the server stopped. I tried but was not able to solve it. Any help please?
FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197)
ERROR: transport error 202: bind failed: Permission denied
ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized [../../../src/share/back/debugInit.c:750]
JVM exited while loading the application.
Reloading Wrapper configuration...
Launching a JVM...
FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197)
ERROR: transport error 202: bind failed: Permission denied
ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized [../../../src/share/back/debugInit.c:750]
JVM exited while loading the application.
There were 5 failed launches in a row, each lasting less than 300 seconds. Giving up.
There may be a configuration problem: please check the logs.
<-- Wrapper Stopped
An error occurred in the process.
@thijsraets is correct. Either check what has already occupied the port (8000), or override the default value with something else in the local.properties file.
tomcat.debugjavaoptions=-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,server=y,address=8001,suspend=n
Run "ant all". This will configure debug for port 8001.
OR
You can change the JVM parameters in the wrapper-debug.conf file:
wrapper.java.additional.23=-Xrunjdwp:transport=dt_socket,server=y,address=8001,suspend=n
People who encounter this problem usually already have something else bound to the debug port; try changing the port in tomcat.debugjavaoptions.
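For example, to see what is already bound to the debug port before changing it (a sketch; 8000 is assumed to be the address in your tomcat.debugjavaoptions):
lsof -i :8000 | grep LISTEN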
The same thing happened to me, and I killed the server and restarted it safely. I followed these steps:
ps aux | grep java (this will help you find the PID, the process ID)
kill -9 PID
If you want to kill all the tomcat processes you can do
pkill -9 -f tomcat
After that, you can restart the server safely.
I'm trying to run Grails 2.3.1 in debug mode, either from the console or from IntelliJ IDEA 12, but I always get the following error:
grails run-app --debug-fork
| Running Grails application
ERROR: transport error 202: bind failed: Address already in use
FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197)
ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized [../../../src/share/back/debugInit.c:750]
First of all, I configure debug mode in IDEA.
For Grails < 2.4
I do the following:
1. grails-debug run-app
It opens port 5005 and waits for your IDE to connect.
2. Go to IDEA and hit the "bug" (Debug) button.
3. Your Grails console automatically runs the application.
For Grails 2.4
1. grails --debug run-app
2. Hit the "bug" button.
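For reference, on the IDEA side this is just a Remote debug configuration attaching to the port that grails-debug opens. A sketch of the standard JDWP options behind that setup (5005 matches the port mentioned above, and suspend=y matches the "waiting for your IDE to connect" behaviour; the exact flags your Grails version uses may differ):
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5005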
I solved it as follows:
I switched and re-switched the Grails version with gvm default ...
I checked out a fresh copy.
I can use gdb to debug my OpenGL program on the server locally, but when I debug it remotely, some errors come out in the glutCreateWindow() function. I can run my program remotely; I just can't debug it.
freeglut (/home/fshen/samuel/project_self/GLSL-learning/teapotshader/teapotshader):
ERROR: Internal error <FBConfig with necessary capabilities not found> in
function fgOpenWindow
X Error of failed request: BadWindow (invalid Window parameter)
Major opcode of failed request: 4 (X_DestroyWindow)
Resource id in failed request: 0x0
Serial number of failed request: 20
Current serial number in output stream: 23
PS:
At first I couldn't run my program remotely. After setting export LIBGL_ALWAYS_INDIRECT=yes (I put this command in .bash_profile), I can run my project; I just can't debug it remotely. So I think I should add export LIBGL_ALWAYS_INDIRECT=yes into GDB, but I don't know how to do it.
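A minimal sketch of one way to do that: instead of exporting the variable in the shell, gdb can put it into the debugged program's environment itself with its built-in set environment command before run (the binary path below is taken from the freeglut message above):
gdb /home/fshen/samuel/project_self/GLSL-learning/teapotshader/teapotshader
(gdb) set environment LIBGL_ALWAYS_INDIRECT = yes
(gdb) run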