I am trying to launch a local webserver instance of PhantomJS on an Azure Web (or Worker) Role to use with Highcharts for rendering chart images server side.
PhantomJS ships as a single .exe that can be launched as a webserver with the following command:
phantomjs highcharts-convert.js -host 127.0.0.1 -port 3003
... and then local HTTP POST requests can be made against it.
I have included this command in a startup.cmd batch script that is configured to execute with my Azure Web Role when published via ServiceDefinition.csdef:
<Startup>
<Task commandLine="startup.cmd" executionContext="elevated" taskType="background" />
</Startup>
startup.cmd:
START phantomjs highcharts-convert.js -host 127.0.0.1 -port 3003
EXIT /B 0
From what I can tell, this appears to execute fine on startup; however, the process does not stay running. It simply executes and closes. For example, I can remote into the box, find the deployment, and run startup.cmd manually (a command window opens and stays open), and everything works fine.
How do I launch the PhantomJS webserver on instance startup so that it keeps running and does not close?
I have tried setting taskType to simple and background in the ServiceDefinition.csdef declaration, yet it doesn't seem to change anything.
It could be a timing issue if it is executing. You could add something like:
ping 1.1.1.1 -n 1 -w 300000 > nul
before you execute the script.
You could also redirect output to a log file (>> log.txt) so that, if it is executing, you can see what it is doing.
If it's not executing, I would look at the working directory and paths, given that it runs in the background and not interactively.
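Putting both suggestions together, the startup.cmd might look like the sketch below. The cd line and filenames are assumptions about your deployment layout; %~dp0 expands to the directory the batch file lives in, which matters because background startup tasks do not necessarily start in it.

```bat
rem Sketch of startup.cmd -- paths are assumptions
cd /d "%~dp0"

rem crude delay hack: ping an unreachable host with a long timeout
ping 1.1.1.1 -n 1 -w 300000 > nul

rem launch PhantomJS, capturing stdout and stderr for diagnosis
START phantomjs highcharts-convert.js -host 127.0.0.1 -port 3003 >> log.txt 2>&1
EXIT /B 0
```

If log.txt shows a "cannot find" error, that points at a path or missing-file problem rather than the task configuration.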
Turns out that I did not have the supporting .js files set to "Copy To Output Directory", so the startup.cmd could not find them.
Thanks to Steve for the output logging suggestion, which allowed me to see the error that it could not find "highcharts-convert.js".
I start a Node.js app using pm2; the out-file log setting is left at its default.
Now I find the log is too large, so I need to redirect the log to /dev/null without restarting the process and without using pm2-logrotate.
Is there any way around this issue?
You will have to restart the process if you want a fresh redirection. If restarting is okay, then it could be like:
pm2 start app.js --output /dev/null --error /dev/null
If not, then you can possibly write a very basic shell script and schedule it via cron. It would just clean your logs (pm2 flush), or compress and dump (whichever you prefer).
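If restarting is not okay, the cron-job approach can be as simple as truncating the log file in place, which is essentially what pm2 flush does for you. A minimal sketch; the path below is a stand-in, not pm2's actual default (which is usually ~/.pm2/logs/<app>-out.log):

```shell
# Stand-in for pm2's out-file; substitute your real log path.
LOG=/tmp/demo-out.log

# Simulate a log that a running process has been appending to.
printf 'old log data\n' > "$LOG"

# Truncate in place. Because pm2 opens its logs in append mode, the
# running writer keeps working after truncation; no restart needed.
: > "$LOG"
```

Scheduled from cron, this keeps disk usage bounded without touching the running process.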
I have set up an auto-scaling policy to launch a Windows Server 2016 AMI EC2 instance once certain thresholds are crossed.
After windows is booted up, I want to open command prompt, change to a particular directory and start my node http server.
I have specified the following command in user data while setting up launch configuration.
start cmd /k cd c:\pizza-luvrs-master|npm start
My instances are getting launched but the commands are not getting executed!
The problem is in launching the command window itself; the rest of the command is fine.
Any solutions?
Assuming that the image you use in your auto scaling group is working correctly, you can copy your command into a .cmd file and then add it to the Task Scheduler. You should set the trigger to "At Startup".
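As a sketch, the .cmd file and its one-time registration could look like the following. The task name and paths are assumptions; schtasks with /sc onstart is the command-line equivalent of the "At Startup" trigger.

```bat
rem start-server.cmd -- directory is an assumption
cd /d c:\pizza-luvrs-master
npm start

rem Register it once from an elevated prompt so it runs at boot:
rem   schtasks /create /tn "StartNodeServer" /tr "c:\scripts\start-server.cmd" /sc onstart /ru SYSTEM
```

Running the task as SYSTEM avoids depending on an interactive desktop session, which is what the start cmd /k approach in user data implicitly needs.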
We have a web app that executes a small command from CMD with
require('child_process').execSync
Everything worked perfectly when I was running the service with npm start, but the moment we moved it to iisnode it stopped working. For example, p4 depots doesn't work anymore.
IIS is run by Admin user.
If I run the command from cmd directly it works, but when I call it from the iisnode it doesn't.
The error:
{"Error":true,"Message":"Error executing p4 CMD","Origmsg":{"killed":false,"code":1,"signal":null,"cmd":"C:\Windows\system32\cmd.exe /s /c \"p4 depots\""}}
Has anyone had the same issue in the past?
Looks like the problem was only with Perforce.
The solution was to run p4 set -s for P4PORT, plus the user and password settings.
The -s option saves the setting for all users on the current machine.
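A hedged sketch of the commands involved (server address and credentials are placeholders; p4 set -s must be run from an elevated prompt since it writes machine-wide settings):

```sh
p4 set -s P4PORT=perforce.example.com:1666
p4 set -s P4USER=builduser
p4 set -s P4PASSWD=********
```

This matters under iisnode because the worker process runs as a different account than your interactive session, so per-user p4 settings are not visible to it.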
I set up an Ubuntu Server in my home for the purpose of hosting a web application served by nodejs. I have a connect app on my server. When I ssh in and just do something like
node app.js &> server.log &
logout
After I log out, the server seems to be put on hold: it will not serve any requests, but when I ssh back in it starts serving requests again.
It looks like the forever package is designed to solve this problem, so I installed forever and am doing this:
forever start -al forever.log -ao serverout.log -ae servererror.log app.js
I get the same results from this command. My server will serve requests while I'm ssh'ed in, but once I log out my server stops serving requests. What else can I do to troubleshoot this?
Consider using cron. You can run crontab in bash with no options to get the cron configuration for your account. You may need root access, and then have to specify the user account using the -u option. See man crontab for more information about your distribution's implementation.
I'm sure there are better ones out there, but this is a decent tutorial on cron's grammar.
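If you go the cron route, the relevant piece is an @reboot entry. A sketch, assuming node is at /usr/bin/node and the app at /home/me/app.js:

```
# crontab -e, then add:
@reboot /usr/bin/node /home/me/app.js >> /home/me/server.log 2>&1
```

Note that cron only starts the app at boot; unlike a process supervisor, it will not restart the app if it crashes later.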
I wouldn't use Forever for this but production-ready tools like Supervisord with Monit; see "Running and managing nodejs applications on a single server".
For each of your applications, just create a Supervisor configuration file like this:
[program:myapp]
command=node myapp.js ; the program (relative uses PATH, can take args)
directory=/www/app/ ; directory to cwd to before exec (def no cwd)
process_name=myapp ; process_name expr (default %(program_name)s)
autorestart=true ; whether/when to restart (default: unexpected)
startsecs=1 ; number of secs prog must stay running (def. 1)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
stdout_logfile=/var/log/myapp.log ; stdout log path, NONE for none; default AUTO
stderr_logfile=/var/log/myapp.err.log ; stderr log path, NONE for none; default AUTO
More information: NodeJS process management at Brin.gr.
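A hedged sketch of the follow-up steps, assuming the file is dropped where your supervisord.conf includes configs (often /etc/supervisor/conf.d/ on Debian/Ubuntu):

```sh
sudo supervisorctl reread          # pick up the new config file
sudo supervisorctl update          # start newly added programs
sudo supervisorctl status myapp    # check it is RUNNING
sudo supervisorctl restart myapp   # restart after deploying new code
```

With autorestart=true, Supervisor also brings the app back up if it crashes, which is the main advantage over launching it from a login shell.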
I have just gotten a VPS to bring my first node.js project online, but I am wondering where I should place the node files like app.js if I want the app to be accessible at http://www.mywebsite.com:3000?
Right now, to host a website, I am using WHM to create a cPanel account, which creates /home/cpanelusername and my HTML/PHP files all go into /home/cpanelusername/public_html. Where does node.js files go to? Or did I get this step wrong as well?
On my Mac where I developed the node app, I simply cd into the directory containing the node file and run node app.js
You have to execute the app.js file using the node binary, just like you do in local development. You should probably run it as a service, the details of which depend on your Linux distro. If it's not run as a service, executing it over ssh means the app stops working once you log out of ssh.
For example, in Ubuntu server (which I use) I have an Upstart script which automatically runs my node.js app on system start and logs to /var/log. An example of the file, named /etc/init/myapp.js.conf, is:
description "myapp server"
author "Me"
# used to be: start on startup
# until we found some mounts weren't ready yet while booting:
start on started mountall
stop on shutdown
script
# We found $HOME is needed. Without it we ran into problems
export HOME="/root"
exec node /home/me/myapp/myapp.js >> /var/log/myapp.log 2>&1
end script
Replace names, etc. as necessary.
Edit to add: You can then start and stop your service by running:
sudo start myapp.js or sudo stop myapp.js