I have a Perl CGI script that processes local data based on user requests through IIS. Users can start the CGI script with arguments over HTTP. For the one-request-one-instance scenario, this works fine.
The next scenario is that multiple requests happen simultaneously. IIS will start a separate instance of the CGI script for each request, which raises a concurrency issue. The question is whether IIS can handle this concurrency situation, ideally in a non-blocking fashion?
Thanks,
Use FastCGI to speed up the CGI, and Catalyst as a web framework.
My aim is to start/stop services (like httpd, sshd, iptables, etc.) from a Perl CGI script.
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
print "Content-type: text/html\n\n";
print <<EOM;
<html>
<body>
EOM
# Stop the web server, then capture its status for display.
`/etc/init.d/httpd stop`;
my $res = `/etc/init.d/httpd status`;
print <<EOM;
<h3>$res</h3>
</body>
</html>
EOM
Here the first command inside backticks isn't working, whereas the second command, whose output is assigned to $res, is working.
Output on the browser is as follows:
httpd (pid 15657) is running...
I suggest displaying the output from the stop command. I strongly suspect that you will see an error indicating that you do not have permission to stop the service.
A correctly configured web server process will be owned by a user that has almost no permissions on the system. This is a security feature. CGI programs on your web server can be run by anyone who can access the web server. For that reason, the web server user is usually configured to only run a very limited set of programs.
Starting and stopping your web server is something that you will usually need root permissions for. Your web server process will not have root permissions (for, hopefully, obvious reasons). But it's entirely possible that every user on the system (including the web server user) will have permissions to get the status of the web server process. This is why your httpd status command works, while the httpd stop command doesn't.
You could give the web server user temporary permission to start or stop services, using sudo or something like that. But you would need to do it very carefully - perhaps requiring a password on the web page (and transmitting that password securely over https).
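For illustration only, a sudoers rule along those lines might look like this (the user name "apache" and the init script path are assumptions about your system; edit sudoers with visudo and keep the rule as narrow as possible):
# Let the web server user run exactly these two commands as root, nothing else.
apache ALL=(root) NOPASSWD: /etc/init.d/httpd stop, /etc/init.d/httpd start
The CGI script would then invoke the command as sudo /etc/init.d/httpd stop.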
But it's probably a better idea to reconsider this approach completely.
Can I also point out that it's a bad idea to use backticks to run external commands when you don't want to collect their output. In cases like that, it is more efficient to use the system() function.
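For example, a small sketch of the same call using system() rather than backticks (the path is taken from your script; system() returns the wait status, so the exit code lives in the high byte):
# system() runs the command without capturing its output; 0 means success.
my $status = system('/etc/init.d/httpd', 'stop');
if ($status != 0) {
    warn "httpd stop failed, exit code ", $status >> 8, "\n";
}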
I also note that you are loading the CGI module but not using any of its functionality. You even manually create your Content-Type header, ignoring the module's header() function.
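For example, the header line could come from the module you are already loading:
use CGI;
my $q = CGI->new;
# header() emits the Content-Type line and the blank line that follows it.
print $q->header('text/html');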
And here's my traditional admonition that writing new CGI programs in 2017 is a terrible idea. Please read CGI::Alternatives and consider a PSGI-based approach instead.
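For comparison, a minimal PSGI sketch of the same kind of page might look like this (save it as app.psgi and run it with a PSGI server such as plackup; the page content here is just a placeholder):
# app.psgi - a PSGI application is simply a code reference.
my $app = sub {
    my $env = shift;
    return [
        200,
        [ 'Content-Type' => 'text/html' ],
        [ '<html><body><h3>Hello from PSGI</h3></body></html>' ],
    ];
};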
You should not even think of having a CGI script which has the privileges to start/stop services on a computer. There are, of course, valid reasons to want to have remote control over servers, but if giving the HTTP daemon super user privileges is the only way you can think of achieving that end, you need to realize that you ought not to be the person implementing that functionality.
What do you think is the best way to synchronize PHP code between two servers? One server is production and the second is a testing server. They run Linux and are separate machines (each with its own IP). How would you synchronize the PHP code when you want to push code from the testing server to production?
Thanks for your answer
I'm wondering how the Common Unix Printing System (CUPS) handles user actions and changes its configuration files. From my humble background, a web page can only access/edit files when there is a web server and a server-side script, so how does it work without installing a web server?
Does it work through some shell script? If yes, how does that happen?
It is not the web frontend that alters the configuration files. At least not if you compare it to the 'typical' setup: HTTP server, scripting engine, script.
CUPS itself contains a daemon, which also acts as a minimal web server. That daemon has control over the configuration files, and it is free to accept commands from a web client it serves. So no magic here.
Turning that around, you could also set up a system running a 'normal' HTTP server with rights that allow it to alter all system configuration files. That is all a question of how that server/daemon is set up and started; it comes down to simple rights management. You certainly do not want to do that, though ;-)
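To make the pattern concrete - this is not how CUPS itself is written, just a Perl sketch of the idea of one daemon that both serves HTTP and owns its configuration file (HTTP::Daemon ships with libwww-perl; the port and file path are made up):
use strict;
use warnings;
use HTTP::Daemon;
use HTTP::Response;
my $conf = '/tmp/example.conf';    # hypothetical config file owned by this daemon
my $d = HTTP::Daemon->new(LocalPort => 8631) or die "Cannot listen: $!";
print "Admin interface at ", $d->url, "\n";
while (my $client = $d->accept) {
    while (my $req = $client->get_request) {
        if ($req->method eq 'POST' and $req->uri->path eq '/set') {
            # The daemon itself rewrites the file; no external web server is involved.
            open my $fh, '>', $conf or die "Cannot write $conf: $!";
            print {$fh} $req->content;
            close $fh;
            $client->send_response(HTTP::Response->new(200, 'OK', undef, "updated\n"));
        }
        else {
            $client->send_response(HTTP::Response->new(200, 'OK', undef, "daemon is running\n"));
        }
    }
    $client->close;
}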
Does anyone know a way to have a JavaScript file or set of files always running under IISNode without the need for a client request? The idea would be to have scripts that behave as services, but have them running under IISNode.
Thanks!
How about trying node-windows? It allows Node.js applications to run as a Windows service. A nice feature is that it also exposes a way to write to the Event Log.
It probably fits your scenario better, assuming you don't need any IIS features other than the long-running aspect.
Hope this points you in a more applicable direction.
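A rough sketch of what that looks like (the service name and script path are placeholders; this assumes node-windows has been installed via npm and follows its documented Service interface):
// install-service.js - run once with node to register the service.
var Service = require('node-windows').Service;
var svc = new Service({
  name: 'MyBackgroundWorker',            // placeholder service name
  description: 'Runs worker.js as a Windows service instead of under iisnode.',
  script: 'C:\\apps\\worker\\worker.js'  // placeholder path to your long-running script
});
// Start the service once installation has finished.
svc.on('install', function () {
  svc.start();
});
svc.install();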
I guess you have some reason to use iisnode, but you are trying to run a service in IIS, which is not a good idea; if you want to run it as a service, then run it as a service. How?
If you still insist on using iisnode, then your options are:
use Application Initialization for IIS
or write a scheduled job that pings your iisnode page
or use a Pingdom-like service to ping your iisnode application.
I'm looking at an existing website deployed on a NearlyFreeSpeech (NFS) server. I'd like to rewrite some portions of it to run on Node.js. As far as I can tell, Node.js isn't supported by the NFS folks, but I am constrained to using their servers.
So, is there a way to shoe-horn Node.js onto a NearlyFreeSpeech server? Has anyone tried this successfully?
As of 24 September 2014, NFS now supports persistent processes:
Intro and overview - More power, more control, more insight, less cost
Official example - How-To: Django on NearlyFreeSpeech.NET
3rd party example - Run node.js on NearlyFreeSpeech.Net
To summarise the process described in mopsled.com's third-party example:
1) In NFS.N's admin UI, select your site's domain shortname under Sites, then change that site's "Server Type" to "Custom" instead of PHP / Apache.
2) Put your Node server code somewhere in /home/protected/
3) Create a shell script (e.g. run.sh) somewhere in /home/protected/ that contains the command(s) to start your server (e.g. npm run start or node server.js); a minimal sketch is shown after these steps. NFS.N will automatically run this script as a continuous process using a "Daemon", which we'll configure in the next step.
4) Select "Daemons" in your site's NFS.N admin UI, and enter your server's startup shell script path in the "command line" field. Complete the other fields as you see fit.
5) NFS.N will now ensure that your custom server process runs indefinitely. Your web server will be available at the port it listens on. However, NFS.N doesn't give root access, so your server can't bind to the normal low-numbered ports (e.g. :80 and :443); if you want to serve those, you must use NFS.N's "Proxy" feature described in the next step.
6) If you need to listen on low-level ports: select "Add a Proxy" in your site's NFS.N admin UI and enter the relevant settings, checking the "Bypass Apache entirely" option and giving the port your server is listening on for the "Target Port" option.
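For reference, a minimal run.sh along the lines of step 3 might look like this (the paths are placeholders for your own layout; adjust the node path if it is not on PATH):
#!/bin/sh
# Keep the Node process in the foreground so the NFS.N daemon can supervise it.
exec node /home/protected/app/server.js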
That's it! You can now stop/restart the server's continuous process (the shell script that the Daemon is maintaining) in the Daemon's configuration page.
NFS.net has a new "NFGI" architecture that may make this possible:
NFGI can be made to work with other languages as well, making them first-class citizens of our service, just as fast and integrated as PHP currently is. This paves the way for making all sorts of frameworks viable that have traditionally been too slow when run through CGI. Rails. Catalyst. Django. We also believe it can be leveraged to make node.js work on our service, but we’re not 100% sure about that.
(Source: http://blog.nearlyfreespeech.net/2013/09/21/cgissh-upgrades/)
If you want this feature you can vote for it in their feature request system at https://members.nearlyfreespeech.net/support/voting
Although, to be honest, I concur with the earlier answers: using Node via CGI would lose some of the benefit... but it would not be without its charms. Something like http://larsjung.de/node-cgi/ on NFS.net would be an interesting JavaScript replacement for PHP.
The problem is not that NFS.net will not support Node.js. The thing is that you can't have "long running processes", i.e. servers. Since you can't run servers, you can't run Node.
In fact, the only way you can have anything dynamic there is by using CGI. There's no reason why a JavaScript engine could not be used to generate pages in response to requests, but I am not sure that can be done with Node.