I would like to set up Scalatra to run on a box running nginx.
I already have nginx set up correctly and can serve static HTML pages; however, I now want to point it at a Scalatra app. All of the documentation I can find appears to assume that the server used will be Jetty, for example: http://blog.everythings-beta.com/?p=430
I assume that I cannot simply point it at a folder, because the Jetty configuration requires, in addition to this, a class name and a servlet mapping.
How do I configure nginx to point to a Scalatra app?
Thanks!
Additional Info:
Ubuntu 12.04 is my operating system - so answers may either be specific to this, or anything that would generally work on Linux.
Bonus:
Throw MongoDB into your answer as well - i.e. how to set up nginx with Scalatra and MongoDB - for extra points!
Setting up with nginx is easy; check out the docs on deploying Java servers behind it.
Once the app is up and running, install Casbah (the MongoDB Scala driver) and you are good to go.
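A minimal sketch of the nginx side, assuming the Scalatra app is packaged as a WAR and already running in Jetty on port 8080 (the hostname and port are placeholders):

server {
    listen 80;
    server_name example.com;  # placeholder hostname

    location / {
        # hand every request to the Jetty instance hosting the Scalatra app
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

nginx never executes the Scalatra code itself; it only proxies HTTP to the servlet container, so Jetty still has to be started and configured (class name, servlet mapping) separately.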
Related
I installed Node on my GoDaddy's Linux shared hosting plan via SSH. Running Node on a specific port number isn't a problem.
(I'm not at all clear about this ecosystem, which is why my question might sound stupid)
The primary (WordPress) website runs on Apache and I cannot disturb that permanently (that is, I cannot stop that website from running and enable Node on all ports)
Is there any way to run Node on one specific URL of the website, instead of a port number?
I read a few answers and a few articles and tried to modify the .htaccess file to enable a reverse proxy, but I can't get it to work, especially since I'm not even sure how it works.
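For reference, the kind of .htaccess rule I have been experimenting with looks roughly like this (the /node/ path and port 3000 are just examples; as I understand it, the [P] flag needs mod_proxy enabled on the server, which may be the catch on shared hosting):

RewriteEngine On
# send everything under /node/ to the Node process listening locally on port 3000
RewriteRule ^node/(.*)$ http://127.0.0.1:3000/$1 [P,L]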
I have a VPS that runs Apache with many domains (all on the same IP), but I want to start running nodeJS/feathersJS for some of my sites.
I can't seem to figure out how to run multiple instances other than on different ports. However, all the feathers sites are reachable from all the other domains provided you add the port (even domains that serve an Apache site).
Is there an easy way to limit domain-1.com to show feathersjs site, and domain-2.com to still use apache?
Is there something I am missing?
I am new to node and transitioning from a PHP person to a nodeJS person... so please forgive my ignorance.
I found some non-feathersJS modules that I could figure out how to use, but there has to be an easier way than modifying feathersJS... no?
*Edit: I found the Apache proxy solution already and implemented it. However, now I need to make sure that the port running node isn't reachable from my other domains.
Example.com, using the Apache proxy approach, now runs off localhost:3030, but so does anotherexample.com:3030.
Is there a way to limit this?
I found some libraries that do this for node, but none that seem to be nicely implemented in feathersJS.
*Edit again: I think the mentioned vhost feathers approach is what I am looking for; I will update when I test this.
There are different ways to go about it, but one would be to use mod_proxy for Apache. In your domain configuration you then point to the port the application you want is running on:
ProxyPass / http://www.example.com:8001/
ProxyPassReverse / http://www.example.com:8001/
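For the two-domain case from the question, a rough sketch of the corresponding virtual hosts could look like this (domain names, paths, and port 3030 are placeholders; having the Feathers app listen only on 127.0.0.1 also keeps anotherexample.com:3030 from reaching it directly):

<VirtualHost *:80>
    ServerName domain-1.com
    # everything on this domain is proxied to the Feathers app listening locally
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:3030/
    ProxyPassReverse / http://127.0.0.1:3030/
</VirtualHost>

<VirtualHost *:80>
    ServerName domain-2.com
    # this domain keeps serving ordinary Apache content
    DocumentRoot /var/www/domain-2.com
</VirtualHost>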
While putting an Apache or Nginx proxy in front of a Node application (and using it to serve static content) is usually a good idea for higher-traffic sites, for smaller projects you can also just use Node without having to worry about Apache. To host different apps on different domains, you can use the vhost Express middleware. An example of how to set it up with Feathers can be found here.
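A minimal sketch of the vhost approach (the domain names are placeholders, and the sub-apps are plain Express apps for brevity; a Feathers app built on Express can be mounted the same way):

const express = require('express');
const vhost = require('vhost');

// sub-app served for domain-1.com
const site1 = express();
site1.get('/', (req, res) => res.send('domain-1.com app'));

// sub-app served for domain-2.com
const site2 = express();
site2.get('/', (req, res) => res.send('domain-2.com app'));

// one parent app routes requests to the right sub-app based on the Host header
const app = express();
app.use(vhost('domain-1.com', site1));
app.use(vhost('domain-2.com', site2));

app.listen(80);  // or a high port behind an Apache/Nginx proxy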
I currently run a Red Hat Linux server with Plesk to host a hundred or so domains. For multiple reasons I'd like to transition away from Plesk and to Docker containers with each virtual host as one or more containers. I'm unclear from what I've read so far what would be the best approach to this.
A typical site includes the doc root file area and one or two MySQL databases. We run PHP on all the sites. Some sites may have constraints on the version of PHP they can run. Some of the sites use SSL. I don't believe there are any constraints on the MySQL versions, but it's of course possible that future MySQL versions could deprecate some feature that is needed. I don't believe there's any dependency on the Apache version, but I do rely on some specific Apache modules being installed. There may be a site or two that have dependencies outside of their doc root and not part of the basic virtual host setup, but I don't believe any require a specific version of Linux.
I would like the containers to have maximum portability so that I can have flexibility in moving sites to whatever server or cloud service I choose. Part of my goal is to retire the current server and move sites to servers which best fit them.
I would also like to try upgrading the PHP version after the containers are created.
So would a single container include the entire doc root file system, including the data directories where users can upload/ftp files? Would it include the MySQL database, or would that be separate? I assume I would include the current version of PHP so that I could upgrade each one when I was ready. Would it include Apache when specific Apache modules are required? Is there a reason to include Apache and/or MySQL in all containers?
One last piece. I'm looking into using CoreOS which utilizes Docker as an integral part.
Any and all inputs are appreciated.
The whole idea of Docker is running processes/components in isolation, to keep them easily upgradable. I have tinkered with this in the past and have come up with the following.
Create four containers per instance (customer):
Apache or nginx
php-fpm
MySQL
Busybox (as a data container)
Link all of them together and put all data that should persist into volumes held by the data container: the MySQL data and /var/www plus site config files, for example.
This way you can always swap out one of the components while keeping the others. It's questionable, though, whether Docker is a solution for a full virtual server, as Docker containers do not have a full init system and you'll have to bend things quite a bit to resemble a full virtual machine. Think of them more as "application containers", hence the idea of separating concerns.
Update:
Newer Docker versions come with the docker-compose tool which greatly eases this task.
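A rough docker-compose sketch of the per-customer stack described above (image tags, the password, and paths are placeholders, and the nginx site config that forwards .php requests to the php service is omitted; with named volumes the separate Busybox data container is no longer needed):

# docker-compose.yml - one stack per customer
version: "2"
services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - sitedata:/var/www
    depends_on:
      - php
  php:
    image: php:7-fpm
    volumes:
      - sitedata:/var/www
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder password
    volumes:
      - dbdata:/var/lib/mysql

volumes:
  sitedata:
  dbdata: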
I am trying to solve the same issues with cPanel instead of Plesk.
We can try to accomplish this using plugins for cPanel or Plesk; however, we have to worry about a few things,
and we have to create some premade template images for the containers that our clients can use. It cannot be just any container from Docker Hub, user Dockerfiles, etc., because cPanel/Plesk will look for specific log files in specific locations for bandwidth calculations, disk quotas, etc.
The biggest advantage of this solution is that we can provide CloudLinux-style isolation and easy resource allocation/fair sharing. However, it is not that easy.
To answer your question:
Every container will be nearly a complete system, so you will be able to host fewer clients per host: each container might be around 1 GB and by default has to run its own web server/PHP, hence a larger RAM footprint.
It's painful to run a MySQL instance inside each container; it is better to run MySQL on the host or in one dedicated container and share it. That way Plesk's tools will help.
You may also have to use the standard Apache and reverse proxy to each container after SSL termination, so that Plesk's standard tools are used. But then I think each container will have to run its own web server, or we may have to do some trickery with php-fpm to allow the host's Apache to talk to each container's php-fpm processes. This is more painful than letting each container just run its own Nginx, but it is possible.
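As a rough illustration of that php-fpm trickery (the hostname, port 9001, and paths are assumptions), the host's Apache could hand .php requests to a container's php-fpm with mod_proxy_fcgi on a recent Apache 2.4:

<VirtualHost *:80>
    ServerName customer1.example.com
    DocumentRoot /var/www/customer1
    # forward PHP execution to the php-fpm process exposed by customer1's container on port 9001
    <FilesMatch "\.php$">
        SetHandler "proxy:fcgi://127.0.0.1:9001"
    </FilesMatch>
</VirtualHost>

This assumes the document root is shared between the host and the container (e.g. a volume mounted at the same path), since php-fpm resolves the script paths itself.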
It doesn't prevent users from installing their own MySQL server inside their container if they need to.
This kind of stuff is easy for someone from cPanel or Plesk to do, but for others it will need a lot of dedicated development time plus testing to make sure all this works.
I was going to invest some time in creating this kind of plugin for cPanel but am still undecided. I may try this if I can rope in some investors.
You can see the amount of interest cPanel shows in this issue: http://features.cpanel.net/responses/dockerio-support
I will leave you to decide.
Also as an alternative solution:
So instead of playing to cPanel's tune, I created this: https://github.com/paimpozhil/WhatPanel
Here every site runs in its own container (and its own VM if needed).
Migration is as simple as exporting/importing a container with tools like github.com/paimpozhil/docker-volume-backup and acaranta/docker-backuper.
I didn't complete the migrator/PHP upgrade tools etc. here, but will do so when I have free time.
I use Nginx to serve my php applications for dev purposes.
On Ubuntu it works out of the box.
I want to do the same for Node.js apps.
Is this possible without doing nodejs app.js before?
How to achieve this in a single Nginx conf file?
PHP and node.js are oil and water. PHP requires a web server to run .php files, whereas node.js typically creates its own web server. Since you are creating your own web server, in many cases you won't find it necessary to serve your application from Nginx; however, if you truly insist on "serving" it from Nginx, you will need to proxy it.
This is not possible without doing nodejs app.js before, due to the way node.js works.
This question best answers your question regarding proxying via Nginx.
As a closing remark, it's good to remember that node.js does in fact (in most cases) implement its own web server, and PHP does not.
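For completeness, a proxying server block might look roughly like this (the hostname and port 3000 are placeholders; the Node process still has to be started separately, e.g. with nodejs app.js or a process manager):

server {
    listen 80;
    server_name myapp.local;  # placeholder hostname

    location / {
        # hand every request to the Node process listening on port 3000
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}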
So I thought about giving node.js a try, seeing the possibilities it has, for a little test chat project (with MySQL) I'm doing.
But what I couldn't find out is where to run the file from and what's most common.
What I currently have:
A FreeBSD server with the latest Node and PHP 5.3.x
A vhost
some tutorials on how to start with node (which I looked through and got excited about)
knowledge on how to run it from terminal without having to keep my terminal open (screen)
So far so good.
What I need:
Some basic information of where to put the (lets say:) chat.js file.
Most logical port to run it on
So the web root (www) runs under a user (not root, obviously), and the web root has an underlying folder where I could put the script (away from visitors' grabby little hands). This seems to me the safest place to put it, since nobody can get to it, which is probably what I want, seeing as I'm going to connect to a DB and don't want my DB login data out there (I don't know how this works yet, but I'll figure out DB connections with node later; no answer required).
But if a file is not in the web root, it seems to me a connection cannot be made from outside, because my web root is configured to only allow incoming traffic on 80 (or SSL on 443), which is logical. Outgoing obviously has no problems.
All the examples that I found don't really help me. They just do everything on a local machine, which sucks for me because I don't want to do that.
So what I would like to is the best practice for:
Where to put the file
port to run it on.
What is Node.js?
A lot of the confusion for newcomers to Node is misunderstanding exactly what it is. The description on nodejs.org definitely doesn't help.
An important thing to realize is that Node is not a web server. By itself it doesn't do anything. It doesn't work like Apache. There is no config file where you point it to your HTML files. If you want it to be an HTTP server, you have to write an HTTP server (with the help of its built-in libraries). Node.js is just another way to execute code on your computer. It is simply a JavaScript runtime.
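As a tiny illustration of that point (the port number and response text are arbitrary), an HTTP server in Node is something you write yourself with the built-in http module:

const http = require('http');

// Node only acts as a web server because this code asks it to
const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Node\n');
});

server.listen(3000);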
A nice tutorial: How to Deploy Node JS Applications, With Examples
You'll need to have your non-node application on port 9000 (for Apache, this will be in /etc/apache2/ports.conf and in your sites-available file for your site), and you'll need your node application to listen on 8080. You'll also need to set up DNS 'A' records for the different hostnames you'll be using for your servers.
Companies like Heroku allow for automated deployment of apps from the desktop to the cloud.
Nodejitsu provides a tool called jitsu that makes deploying a Node.js application super simple. You can install jitsu with npm.
npm install jitsu -g
Heroku How To
Getting started with jitsu
Use monit, forever, upstart or systemd to start your node server. Use Varnish, HAProxy, or Nginx in front of it (note that older Nginx versions do not support WebSockets).
Ultimately you can stick it anywhere you want. I recommend running your application using Forever or similar instead of directly with Node. I usually keep my apps in /var/ and let each one run under a unique user. I do not recommend keeping them in your http root, as your .js files should NOT be interpreted by Apache, PHP, etc.
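As a sketch of the Forever approach (the path under /var/ is just an example location):

npm install -g forever
forever start /var/apps/chat/chat.js   # keeps the process running and restarts it if it crashes
forever list                           # show what is currently managed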
To be clear - you do NOT need a traditional web server, nor do you need PHP, MySQL, or anything else. Node is all you need. It'll serve content directly - it IS the web server.
Often you'll have each app use a high port number (3000+) and use NGINX to proxy them all to different hostnames off of port 80 (allowing you to easily have multiple apps on a single machine). If you aren't using some sort of proxy, then 3000 is a very common default, but there is no correct or incorrect port, so long as you don't use a reserved port.
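A sketch of that NGINX-in-front pattern, with two apps on one machine (hostnames and ports are placeholders):

server {
    listen 80;
    server_name app-one.example.com;   # placeholder hostname
    location / {
        proxy_pass http://127.0.0.1:3000;   # first Node app
    }
}

server {
    listen 80;
    server_name app-two.example.com;   # placeholder hostname
    location / {
        proxy_pass http://127.0.0.1:3001;   # second Node app
    }
}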