I found a tutorial that works, too. I have only one problem: the server does not start via autostart.
"V Rising" is a game whose server unfortunately only runs on Windows. My server runs on Linux (Ubuntu 18.04), so I have to start it with "Wine" and "xvfb-run". My other game servers run normally via cronjob. V Rising doesn't work with a cronjob or with systemctl. "Wine" and "xvfb-run" start as usual, and the V Rising server also starts, but it crashes immediately and leaves a log:
FMOD failed to initialize the output device.: "Not enough memory or resources. " (43)
RtlLookupFunctionEntry returned NULL function. Aborting stack walk.
0x000000018050f71c (unityplayer)
0x0000000180514843 (unityplayer)
...
Unable to initalize any audio device (even FMOD nosound device), please check your audio
drivers and/or hardware for malfunction
RtlLookupFunctionEntry returned NULL function. Aborting stack walk.
0x000000018050f71c (unityplayer)
0x0000000180514843 (unityplayer)
...
I start the server with this script:
cd /home/steam/.steam/steamcmd/v-rising
export WINEARCH=win64
xvfb-run --auto-servernum --server-args='-screen 0 640x480x24' wineconsole ./start_server_example.bat
This script works if I start it myself in the terminal: the server starts and runs. Unfortunately, it does not work via autostart. If I close the terminal (PuTTY), the V Rising server is terminated; that is why I need the autostart.
Why doesn't this work in autostart?
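For reference, a wrapper-script sketch for such an autostart entry (the absolute tool paths and the log location are assumptions; cron and systemd start with an almost empty environment, so HOME, WINEARCH and the working directory are set explicitly here):
#!/bin/bash
# Wrapper sketch for autostart: set the environment that an interactive shell would normally provide.
export HOME=/home/steam
export WINEARCH=win64
cd /home/steam/.steam/steamcmd/v-rising || exit 1
exec /usr/bin/xvfb-run --auto-servernum --server-args='-screen 0 640x480x24' \
  /usr/bin/wineconsole ./start_server_example.bat >> /home/steam/v-rising-autostart.log 2>&1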
Related
I'm launching a simple Puppeteer application that scrapes a website. Most of the time it works, but from time to time (randomly), it fails in the middle of the process with the following error:
Error: Unable to launch browser, error message: Failed to launch the browser process!
ERROR:ozone_platform_x11.cc(247)] Missing X server or $DISPLAY
I am running Puppeteer on an Ubuntu machine, and have an X11 display server ready. The value of the $DISPLAY environment variable is also set correctly.
echo $DISPLAY
returns
:1
which corresponds to the ID of the X server.
So what's going on here?
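One hedged workaround sketch, assuming the scraper's entry point is a file like scraper.js (a placeholder name): run it under its own virtual X server, so a missing or stale $DISPLAY can never break the launch.
# start the Node/Puppeteer script inside a private virtual X display
xvfb-run --auto-servernum node scraper.js
# alternatively, launching Chromium headless (puppeteer.launch({ headless: true })) avoids needing an X server at all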
I am working with Ubuntu 16.04 and I have two shell scripts:
run_roscore.sh : This one fires up a roscore in one terminal.
run_detection_node.sh : This one starts an object detection node in another terminal and should start up once run_roscore.sh has initialized the roscore.
I need both the scripts to execute as soon as the system boots up.
I made both scripts executable and then added the following command to cron:
@reboot /path/to/run_roscore.sh; /path/to/run_detection_node.sh, but it is not running.
I have also tried adding both scripts to Startup Applications, using sh /path/to/run_roscore.sh as the command for roscore and sh /path/to/run_detection_node.sh for the detection node. It still does not work.
How do I get these scripts to run?
EDIT: I used the following command to see the system log for the CRON process: grep CRON /var/log/syslog and got the following output:
CRON[570]: (CRON) info (No MTA installed, discarding output).
So I installed an MTA, and then the syslog shows:
CRON[597]: (nvidia) CMD (/path/to/run_roscore.sh; /path/to/run_detection_node.sh)
I am still not able to see the output (which is supposed to be a camera stream with detections, as I see it when I run the scripts directly in a terminal). How should I proceed?
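For reference, a crontab sketch that at least captures the scripts' output in a log file instead of relying on mail (the log path and user are placeholders):
# run both scripts at boot and append their combined output to one log file
@reboot /path/to/run_roscore.sh >> /home/nvidia/startup.log 2>&1; /path/to/run_detection_node.sh >> /home/nvidia/startup.log 2>&1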
Since I got this working eventually, I am gonna answer my own question here.
I did the following steps to get the script running from startup:
Changed the type of the script from shell to bash (extension .bash).
Changed the shebang statement to be #!/bin/bash.
In Startup Applications, give the command bash path/to/script to run the script.
Basically, once I changed the shell type from sh to bash, the script started running as soon as the system boots up.
Note, in case this helps someone: my intention in having run_roscore.bash as a separate script was to run roscore as a background process. You can also do it from a single script (which also runs the detection node) by putting roscore& before the rosnode starts. That command fires up the master as a background process and leaves the same terminal free for the following commands to be executed, as in the sketch below.
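For completeness, a single-script sketch of that idea (the ROS setup path and the package/node names are placeholders):
#!/bin/bash
# start the ROS master in the background, give it a moment, then start the detection node
source /opt/ros/kinetic/setup.bash
roscore &
sleep 5    # crude wait; polling rostopic list until it succeeds would be stricter
rosrun my_detection_pkg my_detection_node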
If you can install immortal, you could use the require option to start your services in sequence. For example, this could be the run config for /etc/immortal/script1.yml:
cmd: /path/to/script1
log:
    file: /var/log/script1.log
wait: 1
require:
    - script2
And this for /etc/immortal/script2.yml:
cmd: /path/to/script2
log:
    file: /var/log/script2.log
What this will do is try to start both scripts at boot time; script1 will wait 1 second before starting and will also wait for script2 to be up and running. See more about the wait and require options here: https://immortal.run/post/immortal/
Based on your operating system, you will need to configure/set up immortaldir; here is how to do it for Linux: https://immortal.run/post/how-to-install/
Going deeper into the topic of supervisors, there are more alternatives; you can find some here: https://en.wikipedia.org/wiki/Process_supervision
If you want to make sure that "Roscore" (whatever it is) gets started when your Ubuntu starts up, then you should start it as a service (not via cron).
See this question/answer.
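A minimal sketch of what such a service could look like on a systemd-based Ubuntu (the unit name, user and script path are assumptions):
# create a unit file for the roscore script and enable it at boot
sudo tee /etc/systemd/system/roscore.service > /dev/null <<'EOF'
[Unit]
Description=ROS master (roscore)
After=network.target

[Service]
Type=simple
User=nvidia
ExecStart=/bin/bash /path/to/run_roscore.bash
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now roscore.service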
I have accidentally deleted the source code of my Node.js application, but the application is still running, so how can I get the source code back from the running app?
I hope the source code has been cached in some directory.
I was able to recover the full file by attaching the debugger (as TGrif suggested).
To actually recover the code (a combined session sketch follows these steps):
Use setBreakpoint('app.js', 10), where 10 is a line of the code you know will be run again in the running process.
Say pause, then next until it's paused on the script you want to recover.
Finally, say list(5000), where 5000 is an arbitrarily large number of lines to list.
You will now have your full script printed out, albeit with line numbers at the front, but you can use a site like this to remove them.
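Put together, a session might look roughly like this (legacy node debug client; the PID, script name and line number are placeholders taken from the steps above):
$ node debug -p 12345
debug> setBreakpoint('app.js', 10)
debug> pause
debug> next          # repeat until execution is paused inside the script you want
debug> list(5000)    # prints the source, prefixed with line numbers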
Hope this helps anyone who encounters this unique issue in the future, as this took me a couple hours to figure out.
There may be a way to retrieve some of your source code with the Node.js debugger.
Assuming a Linux OS, you need to get the process ID of your application:
$ ps -e | grep node
Next, you put your app into debug mode with something like this:
$ kill -s USR1 PID
where PID is the pid of your node app.
Then you start the debug console:
$ node debug -p PID
If you have an app console, you'll see:
Starting debugger agent.
Debugger listening on port 5858
In your debug console you should see a debug prompt and you can get available commands with:
debug> help
I was able to show some of the running app's source with the list command:
debug> list(NUMBER_OF_LINE)
where NUMBER_OF_LINE is the number of source code lines you want to display.
I'm not sure whether this will work in one shot for you, because my source code was not deleted.
Hope you can get some results.
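Tying those steps together, a sketch for older Node versions that still ship the legacy node debug client ('app.js' is a placeholder for your entry script):
PID=$(pgrep -f 'node.*app.js')
kill -s USR1 "$PID"     # asks the running process to start its debugger agent
node debug -p "$PID"    # attach the client, then use list() as shown above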
I have an Arch Linux setup and installed Neo4j through the Arch User Repository (yaourt -S neo4j). I'm able to run the web console fine (sudo neo4j console, with seemingly normal output and full functionality); however, when trying to start the server (sudo neo4j start), I encounter the following error message:
/usr/share/neo4j/bin/utils: line 345: [: -lt: unary operator expected
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=/etc/neo4j/neo4j-server.properties -Djava.util.logging.config.file=/etc/neo4j/logging.properties -Dlog4j.configuration=file:/etc/neo4j/log4j.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled
Starting Neo4j Server...cat: /run/neo4j/neo4j-service.pid: No such file or directory
process []... waiting for server to be ready. Failed to start within 120 seconds.
Neo4j Server may have failed to start, please check the logs.
rm: cannot remove ‘/run/neo4j/neo4j-service.pid’: No such file or directory
There's no delay before the error message is printed, so it seems to be something other than the timeout. I'm quite new to neo4j (I worked through a fair bit of the user manual using the web console, but no development or server config experience), so I'm not really sure what else might be relevant. I tried looking through the utils script and the error appears to be where it attempts to su neo4j, but it also seems to proceed to attempt to start the server. I also tried changing the port it's starting on as in this question, but no change. The only log I can find just has this over and over (with appropriate timestamps):
Oct 15, 2014 1:33:49 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
Any help at all would be appreciated!
EDIT:
The line 345 that it's failing on is the end of this snippet:
if [ $UID == 0 ] ; then
    OPEN_FILES=`su $NEO4J_USER -c "ulimit -n"`
else
    OPEN_FILES=`ulimit -n`
fi
if [ $OPEN_FILES -lt 40000 ]; then
From doing some echo debugging, it seems that su $NEO4J_USER is failing, probably because $NEO4J_USER is set to neo4j, a user that does not exist on my system. I tried setting it to root in one of the config files, but evidently that's not working properly. Arch is a continual learning experience for me, but I've never had to add a new user to get software working before.
The interesting line here is:
/usr/share/neo4j/bin/utils: line 345: [: -lt: unary operator expected
I assume that is caused by a wrong default shell for the neo4j user. What default shell is currently set for the neo4j system user? Try switching it to bash; the startup scripts should work nicely with bash.
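If it helps, a quick sketch for checking and changing that (assumes a standard passwd setup):
getent passwd neo4j              # the 7th field is the user's login shell
sudo usermod -s /bin/bash neo4j  # switch the login shell to bash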
I use Play Framework 2.2 and sbt 0.13.1. I can run sbt and start the server on the command line with:
sbt start
It works OK, but when I run:
nohup sbt start
it runs for a while and then stops with this error in the log:
(Starting server. Type Ctrl+D to exit logs, the server will remain in background) java.io.IOException: Bad file descriptor
at java.io.FileInputStream.read0(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:210)
at jline.internal.NonBlockingInputStream.read(NonBlockingInputStream.java:248)
at jline.internal.InputStreamReader.read(InputStreamReader.java:261)
at jline.internal.InputStreamReader.read(InputStreamReader.java:198)
at jline.console.ConsoleReader.readCharacter(ConsoleReader.java:2038)
at play.PlayConsoleInteractionMode$$anonfun$waitForKey$1.play$PlayConsoleInteractionMode$$anonfun$$waitEOF$1(PlayInteractionMode.scala:36)
at play.PlayConsoleInteractionMode$$anonfun$waitForKey$1$$anonfun$apply$1.apply$mcV$sp(PlayInteractionMode.scala:45)
at play.PlayConsoleInteractionMode$$anonfun$doWithoutEcho$1.apply(PlayInteractionMode.scala:52)
at play.PlayConsoleInteractionMode$$anonfun$doWithoutEcho$1.apply(PlayInteractionMode.scala:49)
at play.PlayConsoleInteractionMode$.withConsoleReader(PlayInteractionMode.scala:31)
at play.PlayConsoleInteractionMode$.doWithoutEcho(PlayInteractionMode.scala:49)
at play.PlayConsoleInteractionMode$$anonfun$waitForKey$1.apply(PlayInteractionMode.scala:45)
at play.PlayConsoleInteractionMode$$anonfun$waitForKey$1.apply(PlayInteractionMode.scala:34)
at play.PlayConsoleInteractionMode$.withConsoleReader(PlayInteractionMode.scala:31)
at play.PlayConsoleInteractionMode$.waitForKey(PlayInteractionMode.scala:34)
at play.PlayConsoleInteractionMode$.waitForCancel(PlayInteractionMode.scala:55)
at play.PlayRun$$anonfun$24$$anonfun$apply$9.apply(PlayRun.scala:373)
at play.PlayRun$$anonfun$24$$anonfun$apply$9.apply(PlayRun.scala:352)
at scala.util.Either$RightProjection.map(Either.scala:536)
at play.PlayRun$$anonfun$24.apply(PlayRun.scala:352)
at play.PlayRun$$anonfun$24.apply(PlayRun.scala:334)
at sbt.Command$$anonfun$sbt$Command$$apply1$1$$anonfun$apply$6.apply(Command.scala:72)
at sbt.Command$.process(Command.scala:95)
at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:100)
at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:100)
at sbt.State$$anon$1.process(State.scala:179)
at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:100)
at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:100)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18)
at sbt.MainLoop$.next(MainLoop.scala:100)
at sbt.MainLoop$.run(MainLoop.scala:93)
at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:71)
at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:66)
at sbt.Using.apply(Using.scala:25)
at sbt.MainLoop$.runWithNewLog(MainLoop.scala:66)
at sbt.MainLoop$.runAndClearLast(MainLoop.scala:49)
at sbt.MainLoop$.runLoggedLoop(MainLoop.scala:33)
at sbt.MainLoop$.runLogged(MainLoop.scala:25)
at sbt.StandardMain$.runManaged(Main.scala:57)
at sbt.xMain.run(Main.scala:29)
at xsbt.boot.Launch$$anonfun$run$1.apply(Launch.scala:57)
at xsbt.boot.Launch$.withContextLoader(Launch.scala:77)
at xsbt.boot.Launch$.run(Launch.scala:57)
at xsbt.boot.Launch$$anonfun$explicit$1.apply(Launch.scala:45)
at xsbt.boot.Launch$.launch(Launch.scala:65)
at xsbt.boot.Launch$.apply(Launch.scala:16)
at xsbt.boot.Boot$.runImpl(Boot.scala:32)
at xsbt.boot.Boot$.main(Boot.scala:21)
at xsbt.boot.Boot.main(Boot.scala)
[error] java.io.IOException: Bad file descriptor
[error] Use 'last' for the full log.
Does anyone know which file has the bad file descriptor, and how can I solve this problem?
The error happens because standard input gets redirected from /dev/null by nohup; you get the same error if you do play start < /dev/null. The sbt process starts the actual server in a separate process, then sets itself up to display logs and wait for you to type Ctrl+D or Ctrl+C. It uses JLine to wait for user input, which attempts to attach to standard input as a terminal. /dev/null can't be used that way, so it dies complaining of a bad file descriptor. However, the background server process continues running.
If you want to start Play non-interactively, you need to use the stage task. See Using the stage task in the Play documentation.
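A sketch of that approach for Play 2.2 (the application name in the generated path is a placeholder):
# build a start script with the stage task, then run the staged server detached from the terminal
sbt clean stage
nohup target/universal/stage/bin/your-app-name > server.log 2>&1 &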