How to run server-side JavaScript within a shared server service? - node.js

I know there are Node.js and Rhino, amongst other platforms, for running server-side JS. However, we can only afford a shared server, since a VPS is much more expensive, and shared servers normally do not provide such tools. We need to run some cron jobs, which by default are run by the server, and our core functions are pure JS with no interaction with the browser/client.
Is there a simple way of running server-side JS without having to install server-side-specific software?

1) Go to the Node.js download page and copy the link for the Linux Binaries (.tar.gz) (right-click -> copy link address).
2) Then (thanks to user niutech) create the following PHP file, named install_node.php:
<?php
//Download and extract the latest node
exec('curl http://the_URL_you_copied | tar xz');
//Rename the folder for simplicity, adapt accordingly
exec('mv node-v#bla_bla-linux node');
?>
3) Run the PHP file from the Unix terminal:
$ php -q install_node.php
4) You can then run the Node executable at ./node/bin/node, for example from a cron job as sketched below.
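As a quick sanity check that the binary works on the shared host, you can drop a tiny script next to it and call it from cron; the file names, paths and schedule below are only illustrative assumptions.

// hello-cron.js - hypothetical test script, adjust paths to your own account
// Appends a timestamp to a log file so you can confirm the cron job really ran.
const fs = require('fs');
fs.appendFileSync('/home/youruser/cron.log', 'ran at ' + new Date().toISOString() + '\n');

A crontab entry such as */5 * * * * /home/youruser/node/bin/node /home/youruser/hello-cron.js would then exercise it every five minutes.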

Related

Serving Node.js files from an LXC Turnkey container - Apache configuration needed?

I hope that I will not waste everyone's time, nor embarrass myself, but please hear/read my problem. I am new to this, so please bear with me.
Someone at work wrote some crude code in Node.js, and I can see the .html files by browsing to localhost:8080 while Visual Studio starts the app with the npm start command. Am I explaining this clearly enough?
The webpages are displayed and all, but now comes the hurdle.
How can I have those pages served from a Linux server?
By analogy: if I put some .html page inside /var/www/ on an Apache server, I can view it by pointing at the server's IP/somepage.html. What needs to be set up on a similar Node.js server?
Where do I have to put those files, inside what directory and what configuration is needed?
I thought of creating a small LXC container and saving those files and services as a template, but first I need to set this up correctly. Can Apache serve those files, or do I have to do some other configuration first?
I have those files served from a Windows machine on localhost, and I put the same files in /node and /opt/www directories on a Linux machine, but no dice.
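For orientation, since the /var/www analogy does not quite carry over: a Node app is not dropped into a document root and served by Apache; the app itself listens on a port and serves its own files, from whatever directory you start it in. A minimal sketch of such a server (the file and directory names are assumptions, not your colleague's code) looks like this:

// server.js - minimal static file server sketch, assumed names
const http = require('http');
const fs = require('fs');
const path = require('path');

http.createServer((req, res) => {
  // Map the URL to a file under ./public, defaulting to index.html
  const file = path.join(__dirname, 'public', req.url === '/' ? 'index.html' : req.url);
  fs.readFile(file, (err, data) => {
    if (err) { res.writeHead(404); res.end('Not found'); return; }
    res.end(data);
  });
}).listen(8080, () => console.log('listening on port 8080'));

On the Linux box you would copy the project to any directory, run node server.js (or npm start), and browse to the server's IP on that port; Apache is not required, though it can sit in front as a reverse proxy.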

watch for file changes on remote server

I want to create a Node.js app to watch for changes on a remote server, e.g. //10.9.8.7/files_to_watch/*.
All I need is to watch whether new files/folders appear. I have found an npm package named chokidar and I managed to watch local files, but I don't know how to watch a remote disk (which is protected by a password).
How should I do that? Should I first connect to that server (with an FTP node package) and then run the watcher, or what? And what about optimisation? What if there are a lot of files and changes? Is my way of thinking a good one?
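For reference, the local watching mentioned above looks roughly like the sketch below. To watch a password-protected share you would typically mount it on the local filesystem first (e.g. via CIFS/SMB) and point the watcher at the mount point, with polling enabled because network mounts often don't deliver native filesystem events; the mount path is an assumption.

// watch.js - sketch: watch a locally mounted copy of //10.9.8.7/files_to_watch
const chokidar = require('chokidar');

const watcher = chokidar.watch('/mnt/files_to_watch', {
  ignoreInitial: true, // don't fire events for files that already exist
  usePolling: true,    // network mounts rarely propagate inotify events
  interval: 5000       // poll every 5 s to keep the load down with many files
});

watcher.on('add', p => console.log('new file:', p));
watcher.on('addDir', p => console.log('new folder:', p));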

How can I use (Node) Livereload on a development server in my network

Background: my PHP projects (CakePHP, WordPress) run on an Ubuntu server in my network; I access them through a development TLD (.dev, for example) set up through a local DNS server, and I edit the files through a Samba share.
I would like to utilize Livereload for my development, preferably have it running on the server itself. I have basic Node/Gulp knowledge, but haven't been able to get this running.
Livereload (or a middleware server) should proxy the 'real' URLs, so that all websites run as they normally would, and Livereload should be available over the network (so not just on localhost, because that is the development server).
Desired result:
Livereload runs on my dev server (IP: 10.0.0.1), my project is called helloworld.dev, I browse to 10.0.0.1:3000 on my machine and see helloworld.dev proxied through Livereload. I now edit a CSS file over the Samba share and the CSS is reloaded without a refresh.
I've tried a few npm packages (gulp-livereload, livereload, node-livereload) with the examples they ship with, but haven't been able to get the desired result. They all expect you to run them locally, don't support access to the Livereload URL over the network, cannot proxy the 'real' URLs, or require static content.
Can anyone provide an example or 'proof of concept' code of my wish, so I can see where to start?
I found the answer: http://nitoyon.github.io/livereloadx/
This does EXACTLY what I need.
I can run
livereloadx -y http://helloworld.dev -l
then open http://serverip:35729 and I'm ready to roll.
The -y option creates the proxy to the 'real' URL and -l makes it serve files from the local filesystem instead of through its proxy.

Minimal http server for testing cgi-bin

I have developed and maintain a web application which acts as a front end for some scripts in cgi-bin, which in turn call C programs on the server. The web server is Apache2, hosted both on my office Linux box for testing and on Amazon EC2 for the real deployment.
My problem is that I'm off travelling, mostly without any internet connection and with only a small portable Linux machine, yet I want to develop the next release of the web pages, scripts, data sets and programs. Testing static web pages is no issue, but testing pages which call server-side cgi-bin scripts is always problematic, so my idea is to put a minimal HTTP server on the portable Linux box (Ubuntu 14.04) which will let the server and client be on the same machine with no internet (and maybe just a socket) in between.
Of course I can and do test scripts and programs directly, but this does not exercise features such as handling top-bit-set characters in $POST_DATA or setting and retrieving cookies, so it would inevitably result in some divergence of the code base.
So:
Is this way sensible or is there a better or simpler means to do what I want?
If it is sensible, what HTTP server would you recommend? I thought of miniWeb but have no experience of it.
PS: I'm expert in the (maths of the) server-side programs but have much less experience as an Apache sysadmin.
For many things this is sufficient:
python3 -m http.server --cgi
Unfortunately, it's so minimal that it doesn't support things like setting the HTTP Status header: https://bugs.python.org/issue10487
I'm not using lighttpd because I don't want to have to write a configuration file. Another minimal server that can be used is mini-httpd:
sudo apt install mini-httpd
/usr/sbin/mini_httpd -D -p 8000 -c 'cgi-bin/*'
The -D option keeps the server in the foreground instead of daemonizing it. The -p option is the port and -c is a pattern for my cgi scripts.
I also found that the built-in webserver of busybox can handle cgi scripts just fine:
busybox httpd -p 8000 -f
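As a hypothetical smoke test for exactly the cookie and POST-data handling mentioned in the question (your real cgi-bin scripts can of course stay as they are), a small CGI script along these lines can be dropped into cgi-bin and exercised through any of the servers above:

#!/usr/bin/env node
// cgi-bin/echo.js - hypothetical test script; make it executable with chmod +x
// Echoes the POST body and request cookies, and sets a test cookie of its own.
let body = '';
process.stdin.setEncoding('utf8');
process.stdin.on('data', chunk => { body += chunk; });
process.stdin.on('end', () => {
  // CGI output: headers, a blank line, then the response body
  process.stdout.write('Content-Type: text/plain; charset=utf-8\r\n');
  process.stdout.write('Set-Cookie: cgi_test=1\r\n\r\n');
  process.stdout.write('Cookies received: ' + (process.env.HTTP_COOKIE || 'none') + '\n');
  process.stdout.write('Body received: ' + body + '\n');
});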

Pulling remote 'client uploaded assets' into a local website development environment

This is an issue we come up against again and again: how to get live website assets uploaded by a client into the local development environment. The options are:
Download all the things (can be a lengthy process and has to be repeated often)
Write some insane script that automates #1
Some uber clever thing which maps a local folder to a remote URL
I would really love to know how to achieve #3, have some kind of alias/folder in my local environment which is ignored by Git but means that when testing changes locally I will see client uploaded assets where they should be rather than broken images (and/or other things).
I do wonder if this might be possible using Panic Transmit and the 'Transmit disk' feature.
Update
OK, thus far I have managed to get a remote server folder mapped to my local machine as a drive (of sorts) using the 'Web Disk' option in cPanel and then the 'Connect to Server' option in OS X.
However, although I can then browse the folder contents in a safe, read-only fashion on my local machine, when I alias that drive to a folder in /sites Apache just prompts me to download the alias file rather than follow it as a symlink might... :/
KISS: I'd go with #2.
I usually put a small script like update_assets.sh in the project's folder, which uses rsync to download the files:
rsync --recursive --stats --progress -az -e ssh user@site.net:~/path/to/remote/files local/path
I wouldn't call that insane :) I prefer to have all the files locally so that I can work with them when I'm offline or on a slow connection.
rsync is quite fast, and maybe you also want to check out the --delete flag to delete local files when they have been removed from the remote.
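If you still want to experiment with option #3, one way to approximate it is a small Node helper that serves a client upload from the local folder when it exists and otherwise redirects the browser to the live site's copy. The hostname, port and folder below are assumptions, and there is no path sanitising, so treat it strictly as a local development sketch:

// assets-fallback.js - serve local uploads if present, else redirect to the live site
const http = require('http');
const fs = require('fs');
const path = require('path');

const LOCAL_ROOT = path.join(__dirname, 'uploads');  // assumed local uploads folder
const LIVE_ORIGIN = 'https://www.example.com';       // assumed live site

http.createServer((req, res) => {
  const urlPath = decodeURIComponent(req.url.split('?')[0]);
  const localPath = path.join(LOCAL_ROOT, urlPath);
  fs.access(localPath, fs.constants.R_OK, err => {
    if (err) {
      // Not downloaded locally: send the browser to the live asset instead
      res.writeHead(302, { Location: LIVE_ORIGIN + req.url });
      res.end();
    } else {
      fs.createReadStream(localPath).pipe(res);
    }
  });
}).listen(8081, () => console.log('asset fallback on port 8081'));

You would then point the site's upload URLs (or an Apache alias) at that port while developing.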
