Does anyone know a way to have a JavaScript file or set of files always running under IISNode without the need for a client request? The idea would be to have scripts that behave as services, but have them running under IISNode.
Thanks!
How about trying node-windows? It allows Node.js applications to run as a Windows service. A nice feature is that it also exposes a way to write to the Event Log.
It probably fits your scenario better, considering that you don't need any of the IIS features other than the long-running aspect.
Hope this points you in a more applicable direction.
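For illustration, a minimal sketch of what wrapping a script with node-windows might look like (the service name and script path are placeholders, not from the question):

    // install-service.js - minimal node-windows sketch; name and path are illustrative
    var Service = require('node-windows').Service;

    var svc = new Service({
      name: 'My Background Worker',
      description: 'Node.js script running as a Windows service.',
      script: 'C:\\apps\\worker\\worker.js'  // the script you want to keep running
    });

    // Start the service once installation completes.
    svc.on('install', function () {
      svc.start();
    });

    svc.install();

node-windows also ships an EventLogger class, which is the Event Log integration mentioned above.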
I guess you have some reason to use iisnode, but you are trying to run a service inside IIS, which is not a good idea; if you want to run a service, run it as an actual service.
If you still insist on using iisnode, then your options are:
Use Application Initialization for IIS.
Or write a scheduled job that pings your iisnode page (a minimal ping script is sketched below).
Or use a Pingdom-like service to ping your iisnode application.
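If you go with the scheduled-job option, the ping can be a tiny Node script run from Windows Task Scheduler; a rough sketch, where the URL is just a placeholder for your iisnode endpoint:

    // keep-alive.js - hit the iisnode page so IIS keeps the worker process warm
    // (schedule this every few minutes; the URL is illustrative)
    var http = require('http');

    http.get('http://localhost/myapp/', function (res) {
      console.log('Pinged app, status: ' + res.statusCode);
      res.resume(); // discard the response body
    }).on('error', function (err) {
      console.error('Ping failed: ' + err.message);
    });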
I've been developing a Next.js website locally and now want to set it up on my Apache server (with cPanel). However, I'm very new to Next.js and Node apps and not too sure how to go about it.
Has anyone done this successfully? Can you list the required steps and what files should be on the server?
Also, can this be done on a subdomain?
Thank you!
To start with some clear terms just so we're on the same page, there are two or three very different things people mean when they say "server":
A Server Machine is a computer that is connected to the internet that you intend to use to serve something to people on the internet.
A Server Program is some software you run on your Server Machine. The job of the Server Program is to actually calculate the responses to various requests.
A Server as a Service is a webapp provided by a company that stores your code and then puts it onto Server Machines with the right Server Program as needed.
While we're here, let's also define:
A Programming Language is the language your website is written in. Some sites have no language (and are just raw HTML/CSS files that are meant to be returned directly to the user). Many sites, though, have some code that should be run on the server and then the result of that code should be returned to the user.
In your case, you have a Machine whose condition we don't know other than that it is running the Program Apache (or probably "Apache HTTP Server"). Apache HTTP server is very old and proven and pretty good at serving raw files back to users. It can also run some Programming Languages like PHP and return the result.
However, Next.JS is built on top of the Programming Language Javascript, which Apache does not have the ability to run. Next.JS instead wants its Server Program to be Node.
So the problem here is basically that you have a hammer, but only screws. You can't use the tool you have, Apache, to solve the problem you need solved, running Node code and returning the result. To get around this you have two options:
First, you can find a way to access the Server Machine that is currently running Apache and tell it, instead, to run Node pointed at your Next.JS code whenever it starts up. This might not be possible, depending on who owns this machine and how they've set it up.
Second, and probably easier, is to abandon this Machine and instead use a Server as a Service. Heroku, AWS, and Netlify all support Next.JS and have a free tier. The easiest solution, though, is probably to just deploy it on Vercel, which is a Server as a Service run by the same team that makes Next.JS and which has a very generous free tier for you to get started with.
The good news, though, is that yes, Next.JS does totally support being hosted from a subdomain.
Next.JS allows you to build fully functional Node Applications, as well as simple statically-generated sites like Jekyll or DocPad. If your use case is a simple statically generated site, look here: https://nextjs.org/docs/advanced-features/static-html-export
In particular, the next build && next export command will create all the HTML and assets necessary to host a site directly via an HTTP server like Apache or Nginx. Contents will be output to an out directory that can serve as the server root.
Pay very close attention to what features are not supported via this approach.
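If you do take the static-export route and serve the output with Apache, one setting worth knowing about is trailingSlash; a minimal next.config.js sketch, assuming a plain file server with no rewrite rules:

    // next.config.js - sketch for a static export served by Apache/Nginx
    // trailingSlash makes the export write pages as /about/index.html rather than
    // /about.html, so a plain file server can resolve the URLs without rewrite rules.
    module.exports = {
      trailingSlash: true,
    };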
Currently I am running a Node.js server as a background process on the host machine to achieve server-side rendering for my Angular app.
On Linux, for example: npm run server & (the ampersand puts the process in the background).
But I am looking for a solution, like Apache Server, that manages start/stop with host reboots.
I think the best way for you to achieve what you're looking for is to use a process management solution like PM2 or Forever. These will pretty easily manage your application in the background for you.
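For example, with PM2 a minimal setup might look like the sketch below (the app name and script path are placeholders for your SSR build output). After starting it once with pm2 start ecosystem.config.js, running pm2 startup followed by pm2 save makes PM2 re-launch the app on reboot, which covers the start/stop-with-host concern.

    // ecosystem.config.js - minimal PM2 sketch; name and script path are illustrative
    module.exports = {
      apps: [
        {
          name: 'angular-ssr',
          script: 'dist/server/main.js', // adjust to wherever your SSR entry point lives
          instances: 1,
          autorestart: true              // restart the process if it crashes
        }
      ]
    };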
Rather than having Apache/Nginx manage the start/stop of your Node application, you can create a service to run it. It will then run without any manual intervention.
Start your Node app on some port other than 80, as your main web server might be running on that port (see the sketch after these steps).
Create a service file in /etc/init to start your Node app.
Configure Apache/Nginx with a reverse proxy to the Node application.
Start both of the services: 'service nodeapp start' and 'service apache2 start'.
This would make your life for handling these services pretty easy.
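As a rough sketch of step 1, the Node app just needs to listen on some unprivileged port (3000 here is arbitrary) so that Apache/Nginx can own port 80 and proxy to it:

    // server.js - minimal Node app listening behind a reverse proxy; the port is arbitrary
    var http = require('http');

    var PORT = 3000; // anything other than 80/443, which the main web server owns

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello from the Node app behind the proxy\n');
    }).listen(PORT, '127.0.0.1', function () {
      console.log('Node app listening on http://127.0.0.1:' + PORT);
    });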
Yes. You should be able to.
Start by creating a proper deployment directory - https://angular.io/guide/deployment
Then copy/ftp/whatever to the web server.
The tricky part is in your route controllers, etc. and getting all of the paths right if you end up deploying into a different directory than what you developed for.
You need to use Apache/Nginx in front of Node.js as a reverse proxy.
Look into this link and see if it helps:
https://blog.daudr.me/painless-angular-ssr/
I get the impression, though it's not explicitly stated anywhere, that using pserve on my Pyramid app when it's deployed to production is not the best idea. I don't know that it deals with concurrency, for example -- and I suspect it doesn't at all. I don't know if paster is right, either.
For context: I have a Pyramid app with a PasteDeploy configuration file, which I can serve up using a command like pserve config.ini. So, in production, would I just run that command as a daemon and reverse-proxy it through nginx?
What's the best practice here?
pserve is just an application loader and server runner. It's capable of launching many different WSGI servers (one of which you need to select for deployment). There are few WSGI servers that cannot be run via pserve (the main one coming to mind is Apache's mod_wsgi).
As far as production, the main thing you want is reliability, which supervisor can greatly help with. You'll want to look at the nginx deployment recipe, but the cookbook actually has recipes for several different deployment scenarios which you will need to evaluate based on your current infrastructure.
I have a Windows application that does some calculations and is called from command line. On my Windows machine, I have a PHP script running under Apache that executes the application and shows the output.
Is there any hosting solution that I can use to do the same? I can't figure out if EC2 or Azure are the right solutions. Basically, I need a web server + ability to execute my application.
Suggestions? Thanks.
You can host your application on AppHarbor, the .NET Platform-as-a-Service. You can either port your web frontend to .NET or try to get your PHP stuff working with Phalanger. AppHarbor is working on Background Tasks, which might be a good match for your workload.
I would just run the PHP script you already have under IIS in a Windows Azure web role.
If it is a Windows application and you have the source code, I would go with an Azure Worker Role. The advantage of using a PaaS (such as Azure) instead of an IaaS (such as Amazon) is that you won't have to bother with keeping the server up to date.
The real investment in time will be in rewriting your application to make it work as a Worker Role. The time needed depends on how your application works right now. If it uses a lot of disk access it might be difficult, and perhaps an Amazon server would be better. But if it only crunches numbers in memory, an Azure Worker Role is a very good candidate.
The real advantage of using an Amazon server is that you probably won't need to do any work at all, except maintaining the server.
As described in the question both Azure and EC2 will do the job very well. This is the kind of task both systems are designed for.
So the question becomes really: which is best? That depends on two things: what the application needs to do and your own experience and preference.
As it's a Windows application there should probably be a leaning towards Azure. While EC2 supports Windows, the tooling and support resources for Azure are probably deeper at this point.
If cost is a factor then a (somewhat outdated) resource is here: http://blog.mccrory.me/2010/10/30/public-cloud-hourly-cost-comparison/ -- the conclusion is that, by and large, Azure and Amazon are roughly similar for compute charges.
Steve Marx has a blog post that describes how to run another web server (i.e. not IIS) on Azure.
This potentially has everything you need - you can deploy Apache and your executable and run it in exactly the same way.
Alternatively, you can deploy your executable alongside a bit of code in a worker role that runs that application periodically, all depending on your exact requirements.
I'm looking at an existing website, deployed on a NearlyFreeSpeech (NFS) server. I'd like to rewrite some portions of it to run on nodejs. As far as I can tell, nodejs isn't supported by the NFS folk, but I am constrained to using their servers.
So, is there a way to shoe-horn nodejs onto a nearlyfreespeech server? Has anyone tried this successfully?
As of 24 September 2014, NFS now supports persistent processes:
Intro and overview - More power, more control, more insight, less cost
Official example - How-To: Django on NearlyFreeSpeech.NET
3rd party example - Run node.js on NearlyFreeSpeech.Net
To summarise the process described in mopsled.com's third-party example:
1) In NFS.N's admin UI, select your site's domain shortname under Sites, then change that site's "Server Type" to "Custom" instead of PHP / Apache.
2) Put your Node server code somewhere in /home/protected/
3) Create a shell script file (e.g. run.sh) somewhere in /home/protected/ that contains the command(s) to start your server (e.g. npm run start or node server.js; a minimal server.js sketch follows these steps). NFS.N will automatically run this script as a continuous process using a "Daemon", which we'll configure in the next step.
4) Select "Daemons" in your site's NFS.N admin UI, and enter your server's startup shell script path in the "command line" field. Complete the other fields as you see fit.
5) NFS.N will now ensure that your custom server process runs indefinitely. Your web server will now be available at the port your server listens on. However, NFS.N doesn't give root access for your server to communicate through the normal "low-level" internet ports (e.g. :80 and :443), so if you want to serve those, you must use NFS.N's "Proxy" feature described in the next step.
6) If you need to listen on low-level ports: select "Add a Proxy" in your site's NFS.N admin UI and enter the relevant settings, checking the "Bypass Apache entirely" option and giving the port your server is listening on for the "Target Port" option.
That's it! You can now stop/restart the server's continuous process (the shell script that the Daemon is maintaining) in the Daemon's configuration page.
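For reference, the server that run.sh launches can be the usual minimal Node HTTP server; a sketch where the port is arbitrary and is what you would enter as the "Target Port" in step 6:

    // server.js - minimal Node server to run under an NFS.N daemon; the port is illustrative
    var http = require('http');

    var PORT = 8080; // an unprivileged port; expose it on :80/:443 via the proxy step if needed

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello from Node on NearlyFreeSpeech\n');
    }).listen(PORT, function () {
      console.log('Listening on port ' + PORT);
    });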
NFS.net has a new "NFGI" architecture that may open up this possibility:
NFGI can be made to work with other languages as well, making them first-class citizens of our service, just as fast and integrated as PHP currently is. This paves the way for making all sorts of frameworks viable that have traditionally been too slow when run through CGI. Rails. Catalyst. Django. We also believe it can be leveraged to make node.js work on our service, but we’re not 100% sure about that.
(Source: http://blog.nearlyfreespeech.net/2013/09/21/cgissh-upgrades/)
If you want this feature you can vote for it in their feature request system at https://members.nearlyfreespeech.net/support/voting
Although to be honest, I concur with the earlier answers: using Node via CGI would lose some of the benefit... but it would not be without its charms. Something like http://larsjung.de/node-cgi/ for NFS.net would be an interesting JavaScript replacement for PHP.
The problem is not that NFS.net will not support NodeJS. The thing is that you can't have "long running processes", i.e. servers. Since you can't run servers, you can't run Node.
In fact, the only way you can have anything dynamic there is by using CGI. There's no reason why a JavaScript engine could not be used to generate pages in response to requests, but I am not sure that can be done with Node.