I am trying to run both a server and a client from a makefile:
target:
	./server&
	./client
The problem is that server& never returns control, even though I assume it should run in the background. It keeps listening for the client, which is never invoked because the makefile never seems to get control back from the server. How can I solve this issue without writing any additional targets or scripts?
You should be able to do this by combining the commands on a single line:
target:
	./server& ./client
Make hands command lines to the shell ($(SHELL)) one line at a time.
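For example, here is a minimal sketch of such a single-recipe-line target, using the paths from the question; the trailing kill $$! is an optional cleanup that is not part of the original question, and $$ is make's escape for the shell's $:
# single recipe line: the server is backgrounded, the client runs, then the server is killed
target:
	./server & ./client; kill $$!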
Alternatively, you could define two independent targets:
target: run_server run_client
run_server:
	./server
run_client:
	./client
and run make with the -j option so that it parallelizes build steps:
make -j2
This may not seem the most natural solution for launching your programs (e.g. for a test), but it works well when you have large numbers of build rules that can partly be built in parallel. (For a little more control over make's parallelization of targets, see also .NOTPARALLEL, quoted here from the GNU make manual:)
.NOTPARALLEL
If .NOTPARALLEL is mentioned as a target, then this invocation of make will be run serially, even if the ‘-j’ option is given. Any recursively invoked make command will still run recipes in parallel (unless its makefile also contains this target). Any prerequisites on this target are ignored.
The server runs in the background. You can bring it to the foreground with the fg command and then kill it with Ctrl-C.
Or maybe this method: killall server
I'm developing a search engine that should compile both under Visual Studio and to WebAssembly using Emscripten (cengine). The core code works very well. The WebAssembly module runs within a web worker, and commands issued through the HTML input of the page are sent as messages to this worker, which has a JavaScript message handler that invokes the necessary function of the WebAssembly module using cwrap.
However, I'm having a problem when I want to use an extra thread in the WebAssembly code, so that the worker remains responsive to incoming commands during a search.
My build command is this:
emcc src/attack.cpp src/bitboard.cpp src/cengine.cpp src/matein4.cpp src/piece.cpp src/search.cpp src/square.cpp src/state.cpp -s WASM=1 -s TOTAL_MEMORY=30MB -s ALLOW_MEMORY_GROWTH=1 -s USE_PTHREADS=1 -s PTHREAD_POOL_SIZE=1 -D WASM -o wasmbuild/cengine.html -s EXPORTED_FUNCTIONS="['_main','_execute_uci_command']" -s EXTRA_EXPORTED_RUNTIME_METHODS="['cwrap']"
This compiles without problems. The web worker runs, but when the worker tries to invoke the search thread, instead of executing the code I get a message in my message handler like:
{cmd: "run", start_routine: 11, arg: 0, threadInfoStruct: 25501264, selfThreadId: 25501264, …}
arg: 0
cmd: "run"
parentThreadId: 2295120
selfThreadId: 25501264
stackBase: 23404096
stackSize: 2097152
start_routine: 11
threadInfoStruct: 25501264
time: 3276.1050000008254
__proto__: Object
Normally, when the module is loaded, Emscripten adds a web worker to start the thread and generates a separate output file for this. But I already use the module inside a web worker of my own, so my own message handler receives the message that the Emscripten-generated worker code should be handling.
I'm at a loss as to what to do.
Solved.
I used the code in a worker for responsiveness in the first place, as a worker runs in a true thread and does not block the application main thread.
However, if you use Emscripten pthreads, this is taken care of for you, so you can move the code back into the app instead of spawning it in a worker yourself. You then include the glue code that Emscripten generates in index.html (which is the default, by the way).
What happened is that I used a makeshift solution to a problem for which a proper solution exists, and once you use the proper solution, it conflicts with the makeshift one. Emscripten essentially handles the worker creation by itself if you direct it to in the build command with the switches -s USE_PTHREADS=1 -s PTHREAD_POOL_SIZE=2.
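For reference, here is a sketch of the build command from the question adjusted as described, i.e. the same sources and flags with the pool size raised to 2; treat it as a starting point rather than a verified recipe, since Emscripten's pthread flags have changed across versions:
emcc src/attack.cpp src/bitboard.cpp src/cengine.cpp src/matein4.cpp \
     src/piece.cpp src/search.cpp src/square.cpp src/state.cpp \
     -D WASM -s WASM=1 \
     -s TOTAL_MEMORY=30MB -s ALLOW_MEMORY_GROWTH=1 \
     -s USE_PTHREADS=1 -s PTHREAD_POOL_SIZE=2 \
     -s EXPORTED_FUNCTIONS="['_main','_execute_uci_command']" \
     -s EXTRA_EXPORTED_RUNTIME_METHODS="['cwrap']" \
     -o wasmbuild/cengine.html
The generated JS glue (cengine.js) is then included from your page, which is what the generated cengine.html does by default, and the glue spawns the pthread workers for you.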
I am working with Ubuntu 16.04 and I have two shell scripts:
run_roscore.sh: This one fires up a roscore in one terminal.
run_detection_node.sh: This one starts an object detection node in another terminal and should start up once run_roscore.sh has initialized the roscore.
I need both the scripts to execute as soon as the system boots up.
I made both scripts executable and then added the following command to cron:
@reboot /path/to/run_roscore.sh; /path/to/run_detection_node.sh, but it is not running.
I have also tried adding both scripts to the Startup Applications using this command for roscore: sh /path/to/run_roscore.sh and following command for detection node: sh /path/to/run_detection_node.sh. And it still does not work.
How do I get these scripts to run?
EDIT: I used the following command to see the system log for the CRON process: grep CRON /var/log/syslog and got the following output:
CRON[570]: (CRON) info (No MTA installed, discarding output).
So I installed an MTA, and then the syslog shows:
CRON[597]: (nvidia) CMD (/path/to/run_roscore.sh; /path/to/run_detection_node.sh)
I am still not able to see the output (which is supposed to be a camera stream with detections, as I see it when I run the scripts directly in a terminal). How should I proceed?
Since I got this working eventually, I am going to answer my own question here.
I did the following steps to get the script running from startup:
Changed the type of the script from shell to bash (extension .bash).
Changed the shebang statement to be #!/bin/bash.
In Startup Applications, give the command bash path/to/script to run the script.
Basically, when I changed the shell type from sh to bash, the scripts started running as soon as the system boots up.
Note, in case this helps someone: my intention in having run_roscore.bash as a separate script was to run roscore as a background process. One can run it directly from a single script (which also runs the detection node) by putting roscore& as a command before the ROS node starts. This will fire up the master as a background process and leave the same terminal open for the following commands to execute.
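A minimal sketch of such a combined script, assuming the detection node is started with rosrun; the package and node names (my_package, detection_node) and the 5-second delay are placeholders, not taken from the original setup:
#!/bin/bash
# Start the ROS master in the background.
roscore &

# Give the master a moment to come up before launching nodes (placeholder delay).
sleep 5

# Launch the detection node (package and node names are placeholders).
rosrun my_package detection_node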
If you can install immortal, you can use the require option to start your services in sequence. For example, this could be the run config for /etc/immortal/script1.yml:
cmd: /path/to/script1
log:
  file: /var/log/script1.log
wait: 1
require:
  - script2
And for /etc/immortal/script2.yml
cmd: /path/to/script2
log:
  file: /var/log/script2.log
What this will do is try to start both scripts at boot time; the first one, script1, will wait 1 second before starting and will also wait for script2 to be up and running. See more about the wait and require options here: https://immortal.run/post/immortal/
Depending on your operating system, you will need to configure/set up immortaldir; here is how to do it for Linux: https://immortal.run/post/how-to-install/
Going deeper into the topic of supervisors, there are more alternatives; you can find some here: https://en.wikipedia.org/wiki/Process_supervision
If you want to make sure that "Roscore" (whatever it is) gets started when your Ubuntu system starts up, then you should start it as a service (not via cron).
See this question/answer.
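A minimal sketch of that approach, assuming a systemd-based Ubuntu (16.04 uses systemd) and the script path from the question; the unit name roscore.service and its contents are illustrative, not a verified configuration:
# Create a unit file for the startup script (contents are illustrative; the
# script itself is assumed to source whatever ROS environment it needs).
sudo tee /etc/systemd/system/roscore.service <<'EOF'
[Unit]
Description=Start roscore at boot
After=network.target

[Service]
Type=simple
ExecStart=/bin/bash /path/to/run_roscore.bash
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Register, enable at boot, and start the service now.
sudo systemctl daemon-reload
sudo systemctl enable roscore.service
sudo systemctl start roscore.service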
I am building some Docker Spark images, and I am a little puzzled about how to pass environment (ENV) variables defined in the Dockerfile all the way down into the container via docker run -e, on into supervisord, and then into the spark-submit shell script, without having to hard-code them again in the supervisord.conf file (as seems to be the suggestion in something somewhat similar here: supervisord environment variables setting up application).
To help explain, imagine the following components:
Dockerfile (contains about 20 environment variables, "ENV FOO1 bar1", etc.)
run.sh (docker run -d -e my_spark_program)
conf/supervisord.conf ([program:my_spark_program] command=sh /opt/spark/sbin/submit_my_spark_program.sh etc.)
submit_my_spark_program.sh (contains a spark-submit of the jar I want to run; probably also needs something like --files, --conf 'spark.executor.extraJavaOptions=-Dconfig.resource=app', and --conf 'spark.driver.extraJavaOptions=-Dconfig.resource=app', but this doesn't quite seem right?)
I would like to define my ENV variables once, and only once, in the Dockerfile, and I think it should be possible to pass them into the container via run.sh using the -e switch, but I can't figure out how to pass them from there through supervisord and on into the spark-submit shell script (submit_my_spark_program.sh) so that they are ultimately available to my spark-submitted jar file. This seems a little over-engineered, so maybe I am missing something here...?
Apparently the answer (or at least the workaround) in this case is not to use System.getProperty(name, default) to get the Docker ENV variables through supervisord, but instead to use the somewhat less useful System.getenv(name), as this seems to work.
I was hoping to be able to use System.getProperty(name, default), since it offers an easy way to supply default values, but apparently that does not work in this case. If someone can improve on this answer by providing a way to use System.getProperty, then by all means join in. Thanks!
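To make the chain concrete, here is a rough sketch under the assumptions above: one variable (FOO1 from the question's Dockerfile) is passed in at docker run time, inherited by supervisord and its child process, and read back inside the Spark job with System.getenv; the image, class, and jar names are placeholders, and it is worth verifying that supervisord really does hand its environment on in your setup:
# run.sh: pass the variable into the container at run time (image name is a placeholder).
docker run -d -e FOO1=bar1 my-spark-image

# submit_my_spark_program.sh: supervisord normally hands its own environment on to
# the programs it starts, so the variable is visible here as $FOO1 and, inside the
# submitted driver, via System.getenv("FOO1"). Class and jar names are placeholders.
echo "FOO1 is: $FOO1"   # quick sanity check that shows up in the supervisord log
spark-submit --class com.example.Main /opt/spark/jars/my_spark_program.jar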
On a Unix server, I am running an application engine via the process scheduler.
In it, I am attempting to use the Unix "zip" command from within an "Exec" PeopleCode function.
However, I only get the error
PS_Exec(P): Error executing batch command with reason: No such file or directory (2)
I have tried it several ways. The most logical approach, I thought, was to change directory back to the root, then change to the specified directory so that I could easily use the zip command, such as the following:
Exec("cd / && cd /opt/psfin/pt850/dat/PSFIN1/PYMNT && zip INVREND INVREND.XML");
1643 12.20.34 0.000048 72: Exec("cd /opt/psfin/pt850/dat/PSFIN1/PYMNT");
1644 12.20.34 0.001343 PS_Exec(P): Error executing batch command with reason: No such file or directory (2)
I've even tried the following, just to see if anything works from within an Exec:
Exec("ls");
Sure enough, it gave the same error.
Now, some of you may be wondering: does the account associated with the Process Scheduler actually have authority on this particular directory path on the server? Well, I was able to create the XML file given in the previous command with no problems.
I just cannot seem to be able to modify it with the Exec issuance of Unix commands.
I'm wondering if this is a rights and permissions issue on the Unix server with regard to the operator ID that the Process Scheduler runs under. However, given that it can create and write to a file there, I cannot understand why the Exec command would be met with any resistance... Just my gut shot in the dark...
Any help would be GREATLY appreciated!!!
Thanks,
Flynn
Not sure if you're still having an issue, but in your Exec code, adding the optional %FilePath_Absolute constant should help. When that constant is left off, PS automatically prefixes all commands with <PS_HOME>. You'll have to specify absolute paths with this flag on though. I've changed the command to something that should work.
Exec("zip /opt/psfin/pt850/dat/PSFIN1/PYMNT/INVREND /opt/psfin/pt850/dat/PSFIN1/PYMNT/INVREND.XML", %FilePath_Absolute);
The documentation at PeopleBooks is a little confusing sometimes, but it explains it fairly well in this case.
You can always store the absolute location in a variable and prefix that to your commands so you don't have to keep typing out /opt/psfin/pt850/dat/PSFIN1/PYMNT/.
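As a plain-shell illustration of that idea, this is essentially the command the Exec hands to the OS, with the path held in a variable; in PeopleCode you would do the same with a string variable concatenated into the Exec argument:
# Store the directory once and reuse it (same path as in the answer above).
PYMNT_DIR=/opt/psfin/pt850/dat/PSFIN1/PYMNT
zip "$PYMNT_DIR/INVREND" "$PYMNT_DIR/INVREND.XML"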
IntelliJ IDEA 13 has really excellent support for Mocha tests through the Node.js plugin: https://www.jetbrains.com/idea/webhelp/running-mocha-unit-tests.html
The problem is that while I edit code on my local machine, I have a VM (Vagrant) in which I run and test the code, so it's as production-like as possible.
I wrote a small bash script to run my tests remotely on this VM whenever I invoke "Run" from within IntelliJ, and the results pop up in the console well enough; however, I'd love to use the excellent interface that appears whenever the Mocha test runner is invoked.
Any ideas?
Update: There's a much better way to do this now. See https://github.com/TechnologyAdvice/fake-mocha
Success!!
Here's how I did it. This is specific to connecting back to vagrant, but can be tweaked for any remote server to which you have key-based SSH privileges.
Somewhere on your remote machine, or even within your codebase, store the NodeJS plugin's mocha reporter (6 .js files at the time of this writing). These are found in NodeJS/js/mocha under your main IntelliJ config folder, which on OSX is ~/Library/Application Support/IntelliJIdea13. Know the absolute path to where you put them.
Edit your 'Run Configurations'
Add a new one using 'Mocha'
Set 'Node interpreter' to the full path to your ssh executable. On my machine, it's /usr/bin/ssh.
Set the 'Node options' to this behemoth, tweaking as necessary for your own configuration:
-i /Users/USERNAME/.vagrant.d/insecure_private_key vagrant@MACHINE_IP "cd /vagrant; node_modules/mocha/bin/_mocha --recursive --timeout 2000 --ui bdd --reporter /vagrant/tools/mocha_intellij/mochaIntellijReporter.js test" #
REMEMBER! The # at the end is IMPORTANT, as it will cancel out everything else the Mocha run config adds to this command. Also, remember to use an absolute path everywhere that I have one.
Set 'Working directory', 'Mocha package', and 'Test directory' to exactly what they should be if you were running mocha tests locally. These will not impact the test execution, but this interface WILL check to make sure these are valid paths.
Name it, save, and run!
Fully integrated, remote testing bliss.
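Before wiring this into the run configuration, it can help to sanity-check the SSH key, host, and mocha paths by running essentially the same command from a local terminal, with mocha's built-in spec reporter in place of the IntelliJ one (USERNAME and MACHINE_IP are the same placeholders as above):
# Verify the key, host, and remote mocha path work outside IntelliJ first.
ssh -i /Users/USERNAME/.vagrant.d/insecure_private_key vagrant@MACHINE_IP \
  "cd /vagrant; node_modules/mocha/bin/_mocha --recursive --timeout 2000 --ui bdd --reporter spec test"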
1) In Webstorm, create a "Remote Debug" configuration, using port 5858.
2) Make sure that port is open on your server or VM.
3) On the remote server, execute Mocha with the --debug-brk option: mocha test --debug-brk
4) Back in Webstorm, start the remote-debug configuration you created in Step 1, and execution should pause on your breakpoints.
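If port 5858 on the VM is not directly reachable from your machine (step 2), one workaround, an assumption on my part rather than part of the original steps, is to tunnel the debug port over SSH before starting the remote-debug configuration:
# Forward local port 5858 to port 5858 on the VM (user and host are placeholders).
ssh -N -L 5858:localhost:5858 vagrant@MACHINE_IP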