How do I support multiple versions in Node.js

I have a web-site that needs to be up all the time. I also, of course, need to do new releases. Each page tends to be very long-lived, with lots of JavaScript doing AJAX calls to the server.
What I do is build a new WAR file and put it in Tomcat's webapps directory, which ends up looking like this:
20110701-7f077d        20110701-7f077d.war
20110711-aa8db4        20110711-aa8db4.war
20110715-6f4a12        20110715-6f4a12.war
live
The war file is named after the date of its release and the first few characters of its Git commit id, just so I can keep track of everything. Tomcat automatically unpacks the war file into a directory of the same name. The live directory just contains a file giving the name of the "live" version.
This way, each user can continue using the version of the back-end that works with the version of the front-end that he has loaded into his browser. And obviously, version upgrade and reversion is painless.
Now, I'm switching to node.js and I want to do the same thing. I am reliably informed that node.js doesn't support independent applications in one instance. So, what to do?
The only thing I can think of is to designate n slots (where n is some small number like 10 or 100), where each slot corresponds to a port (i.e., slot 1 is 8081 and so on), put Apache in front of several node.js instances, each representing a slot, and have Apache use mod_proxy or mod_redirect to proxy requests like '/slot01' to port 8081. "live" would point to the current slot.
This would be clumsy and error-prone, it would require an otherwise useless Apache instance, and most of all I cannot believe that node.js doesn't have a good solution to what seems like a near-universal problem.

You can use node-http-proxy and write some code to monitor your 'deployment directory' for new versions; when such a version is found, start the corresponding script and proxy it under the directory name (to make myself clear: if you find a new directory 'version-11-today', your parent node-http-proxy script could start the new script, assign it a port passed as a parameter, and then proxy to the new app under the path '/version-11-today').
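A minimal sketch of that idea (assuming node-http-proxy is installed, and that each version directory contains an app.js that takes its port as a command-line argument; both are assumptions for illustration):

// parent.js - watch a deploy dir, spawn one child per version, proxy by path prefix
var http = require('http');
var httpProxy = require('http-proxy');   // the node-http-proxy module
var fs = require('fs');
var path = require('path');
var spawn = require('child_process').spawn;

var DEPLOY_DIR = path.join(__dirname, 'deploy');   // hypothetical deployment directory
var nextPort = 9000;
var apps = {};                                     // directory name -> port of the running child

function startVersion(name) {
  var port = nextPort++;
  // each version directory is assumed to contain an app.js that listens on process.argv[2]
  spawn('node', [path.join(DEPLOY_DIR, name, 'app.js'), String(port)], { stdio: 'inherit' });
  apps[name] = port;
}

// start whatever is already on disk, then watch for new versions
fs.readdirSync(DEPLOY_DIR).forEach(startVersion);
fs.watch(DEPLOY_DIR, function (event, name) {
  if (name && !apps[name] && fs.existsSync(path.join(DEPLOY_DIR, name))) startVersion(name);
});

var proxy = httpProxy.createProxyServer({});
http.createServer(function (req, res) {
  var version = req.url.split('/')[1];             // e.g. /version-11-today/whatever
  if (!apps[version]) { res.statusCode = 404; return res.end('unknown version'); }
  proxy.web(req, res, { target: 'http://127.0.0.1:' + apps[version] });
}).listen(8080);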
A similar solution could be built with nginx; in that case you would write a script to monitor the deployment directory and generate a new nginx configuration when new apps are found.
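As a rough illustration only (the deployment directory, port scheme and output path below are all assumptions), such a script could emit one location block per app directory and then have nginx reload its configuration:

// gen-nginx.js - emit one location block per app directory found in the deploy dir
var fs = require('fs');

var DEPLOY_DIR = '/srv/deploy';    // hypothetical deployment directory
var basePort = 9000;               // assumed port assignment scheme: base + index

var blocks = fs.readdirSync(DEPLOY_DIR).map(function (name, i) {
  return 'location /' + name + '/ {\n' +
         '    proxy_pass http://127.0.0.1:' + (basePort + i) + '/;\n' +
         '}';
});

fs.writeFileSync('/etc/nginx/versions.locations', blocks.join('\n') + '\n');
// add "include /etc/nginx/versions.locations;" inside your server {} block,
// then reload nginx (nginx -s reload) so the new locations take effect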
If you are afraid you might run out of ports, I believe both node.js and nginx can listen on and proxy to Unix sockets as well as inet sockets.
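For reference, a hedged sketch of the Unix-socket side in node (the socket path is made up); whether your proxy layer can target Unix sockets depends on the proxy you choose:

var http = require('http');

// listen on a Unix domain socket path instead of a TCP port
// (remove any stale socket file first if a previous process did not exit cleanly)
var server = http.createServer(function (req, res) { res.end('hello\n'); });
server.listen('/tmp/myapp.sock', function () {
  // core http can talk to it directly via the socketPath option
  http.get({ socketPath: '/tmp/myapp.sock', path: '/' }, function (res) {
    res.pipe(process.stdout);
  });
});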
An advantage of the above is that each app runs in its own process protecting the other apps from crashes and enabling individual app restarts.
A third solution, if you are not afraid that some bug will crash your app, is to have a parent script that loads all the app versions into the same process and maps them under different paths depending on the directory they were found in. You can still restart your server without downtime, as in this example: http://codegremlins.com/28/Graceful-restart-without-downtime
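A bare-bones sketch of that third option, assuming (purely for illustration) that each version directory exports a plain (req, res) handler from its index.js:

// load every version into one process and dispatch on the first path segment
var http = require('http');
var fs = require('fs');
var path = require('path');

var DEPLOY_DIR = path.join(__dirname, 'deploy');   // hypothetical deployment directory
var handlers = {};
fs.readdirSync(DEPLOY_DIR).forEach(function (name) {
  // each version's index.js is assumed to export a plain function (req, res)
  handlers[name] = require(path.join(DEPLOY_DIR, name));
});

http.createServer(function (req, res) {
  var version = req.url.split('/')[1];             // e.g. /version-11-today/api/...
  var handler = handlers[version];
  if (!handler) { res.statusCode = 404; return res.end('unknown version'); }
  req.url = req.url.slice(version.length + 1);     // strip the version prefix before delegating
  handler(req, res);
}).listen(8080);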

Related

How do I update a live Node.js Heroku App remotely?

I have a Node.js app that is live on Heroku
The Node.js folders/files that I uploaded on Heroku also reside on my computer
Whenever I update my Node.js folders/files on my computer, I want these updates to also be applied to the folders/files that are live on Heroku.
I want to be able to do that without having to stop, update and restart my Heroku app every time.
What I'm describing is basically the equivalent of the standard FTP connection we all use to push local updates of a standard website's static files to the remote server.
The git support that apparently Heroku offers doesn't do that. It requires for the app to be stopped (by running the appropriate commands on the terminal), then I need to make a git push (using the terminal) that updates all of the files (which takes forever) and not just the ones that need to be updated, and then the app needs to be restarted (again using the terminal). This is extremely frustrating for an app that is still in development, requires constant updates and cannot be tested locally (for a number of reasons).
Whenever a Node.js app is tested locally, the app can be started by calling supervisor app.js instead of node app.js.
What this does is allow the app to be updated, and as soon as that happens (i.e. as soon as I hit "save") supervisor automatically restarts the app locally.
I'm looking for something similar to the above, i.e. linking my local app folder to my remote app folder and starting my remote app (on Heroku) using some supervisor mode so that as soon as my local folder is updated, my remote folder is also changed and the app automatically restarted.
It's extremely frustrating trying to test a Heroku app (that obviously needs constant updates) currently.
Testing it locally and then publishing it on Heroku (for good) will not do because some apps simply cannot be tested on localhost.
Any help would be much appreciated!
The git support that apparently Heroku offers doesn't do that. It requires for the app to be stopped (by running the appropriate commands on the terminal), then I need to make a git push (using the terminal) that updates all of the files (which takes forever) and not just the ones that need to be updated, and then the app needs to be restarted (again using the terminal). This is extremely frustrating for an app that is still in development, requires constant updates and cannot be tested locally (for a number of reasons).
First, you don't need to stop your app before running git push heroku master. Just push, and the platform will build, and then restart your app with the new code, automatically. Second, git uses a diffing algorithm, so you aren't pushing all of the files - you're in fact just pushing the differences (assuming you're using git correctly). Third, you don't need to do that final, manual restart - the platform has already done this for you on push. Finally, I would advise that if your app is impossible to test locally, you might want to reconsider the architecture of that app. It sounds very un-portable. Perhaps refer to 12factor.net for a good architecture checklist.
Testing it locally and then publishing it on Heroku (for good) will not do because some apps simply cannot be tested on localhost.
What type of app are you building that would be impossible to test outside of a production environment?
In any case, the closest thing I'm aware of to what you're looking for is Dropbox Sync:
https://devcenter.heroku.com/articles/dropbox-sync

How to setup IIS Express from a script the way Visual Studio does it?

When we configure a web application to run in IIS Express there are certain things VS does, like:
Creating the application host configuration file in the IISExpress subfolder of the user documents folder.
Creating a dedicated site section for each web application in the solution, including ours.
Maybe more things are done, which I am unaware of.
I would like to replicate the same process from a script, so that running the web application from the script would be equivalent to running it from VS. Including for the very first time.
Right now I start IISExpress with the /port and /path flags, because this is how I used to run Cassini. However, Cassini supported an additional flag - /vpath. They removed it from IISExpress, meaning I have to use another set of flags - /config, /site, /siteid. But I suspect it must be done in conjunction with the Appcmd.exe utility.
This second approach is still something I haven't managed to master. So, my question is this - suppose I am given the port, path and vpath of a web application (i.e. no need to read them from the web application's csproj file, like VS does). What command sets up the right application host configuration file and how do I run IISExpress to take advantage of it?

Launch a local file with default file handler from chrome packaged app (or extension)

I'm building a launcher for internal use with a Chrome packaged app which includes links to internal resources (databases, web links, etc.).
The problem is with local files. I want them to launch using whatever program is the default handler for them. For example, access databases open in Access, etc.
I've tried:
Creating a file link file:///. Nothing happens in this scenario on click and the link is not followed.
I found an extension (locallinks) here: https://code.google.com/p/locallinks/, which will open local file links. I've tried borrowing from that extension and passing the file link to the background script in my packaged app which would then open a new window with that url. Unfortunately, that results in a file not found, even for simple types such as text files. So obviously the local filesystem is sandboxed. Not surprising.
I thought maybe it would work to pass the link to an extension to open, but in that case, the file would be opened in Chrome and if Chrome does not support it, it would attempt to download the file locally.
The reason I'm using Chrome Packaged Apps is:
1. This will be updated often and the Chrome Web Store update feature would make it easy to keep clients updated without having to build our own update mechanism.
2. We can restrict installation of the app through CWS to internal users.
3. The app would be used in Windows, Linux and Mac environments. Obviously the file paths here would be different, but since they would point to a samba share, and mount points and network share drives are known, this is an easy problem to overcome.
4. There is additional functionality we will be building into the Chrome app in the future other than the launcher which fits very well with how Chrome Apps are designed.
My thoughts are:
Native Client? I have read a bit about these, but I think I would end up with the same limitations where the native client app would be sandboxed and may not actually have any better way of launching a local file.
Sockets? Maybe a simple Qt app listening on a socket to launch apps? Since the Qt app would be run with user permissions, and the socket would only accept connections from localhost, I guess the socket could in theory be used by a non-privileged app to launch something with user-level permissions. Is there a way for me to limit connections through the socket to only be accessible from my extension?
The sockets solution isn't ideal but may work since the app would not be updated often (if ever) since functionality is so simple.
Am I missing an obvious way of doing this that wouldn't require another component (a Qt app?)
Relating to your thought #2, not sure what local installation footprint you are willing to tolerate, but you may consider:
Hosting a minuscule local web server, or the Qt app you mention, which can also launch local programs (any of those lightweight web server frameworks would do). Have your packaged app or your own Chrome extension rewrite links so that they point at your web server along with the URL of the original link, and the server can then launch whatever program is appropriate. Downside: depending on the implementation, this may bypass some of the browser's security screening of the original links.
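As a hedged sketch of such a tiny local server in node (the port, query parameter and platform commands are all assumptions for illustration), it could accept a file path and hand it to the OS default handler:

// launcher.js - tiny localhost-only server that opens a requested file with its default application
var http = require('http');
var url = require('url');
var execFile = require('child_process').execFile;

// pick the platform's "open with default handler" command
var opener = process.platform === 'win32' ? 'cmd'
           : process.platform === 'darwin' ? 'open'
           : 'xdg-open';

http.createServer(function (req, res) {
  var file = url.parse(req.url, true).query.file;   // e.g. /?file=//server/share/db.accdb
  if (!file) { res.statusCode = 400; return res.end('missing file parameter'); }
  var args = process.platform === 'win32' ? ['/c', 'start', '', file] : [file];
  execFile(opener, args);
  res.end('launched ' + file);
}).listen(9123, '127.0.0.1');   // bind to localhost only; the extension would rewrite links to this port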
You may also look at this stackoverflow question if it helps.
You can limit access by confirming that requests originate from the local machine, or by embedding a key or hash inside your Chrome extension. You may generate the key upon installation so that it's unique per machine. None of this will stand up to serious security scrutiny, so it depends on your risk profile; you will have a hard time justifying that each part is secure and free of exploitation potential.
It seems you will need both a Chrome extension and a minuscule local web server to make this work. Maybe it's easier to let users just download the files and click them...
Sorry if this isn't enough help, but basically you are trying to do something that is by design not possible in Chrome, so at this state of affairs there is unlikely to be a simple solution.

Deployment race condition causing CDN to cache old or broken files

Our current deploy process goes something like this:
Use grunt to create production assets.
Create a datestamp and point files at our CDN (e.g. /scripts/20140324142354/app.min.js).
Sidenote: I've heard this process called "versioning" before but I'm not sure if it's the proper term.
Commit build to github.
Run git pull on the web servers to retrieve the new code from github.
This is a node.js site and we are using forever -w to watch for file changes and update the site accordingly.
We have a route set up in our app to serve the latest version of the app via /scripts/*/app.min.js.
The reason we version like this is because our CDN is set to cache JavaScript files indefinitely and this purposely creates a cache miss so that the code is updated on the CDN (and also in our users' browsers).
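For illustration only (the framework and file locations are assumptions, not taken from the question), such a wildcard route might look like this; the datestamp segment is ignored by the server and exists purely to defeat the CDN cache:

var express = require('express');   // assumed framework; the question does not say which one is used
var path = require('path');
var app = express();

// /scripts/20140324142354/app.min.js -> always serve the current build;
// the datestamp segment exists only to force a CDN cache miss
app.get('/scripts/:version/app.min.js', function (req, res) {
  res.set('Cache-Control', 'public, max-age=31536000');   // let the CDN cache it indefinitely
  res.sendFile(path.join(__dirname, 'dist', 'app.min.js'));
});

app.listen(3000);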
This works fine most of the time. But where it breaks down is if one of the servers lags a bit in checking out the new code.
Sometimes a client hits the page while a deploy is in progress and tries to retrieve the new JavaScript code from the CDN. The CDN tries to retrieve it but hits a server that isn't finished checking out the new code yet and caches an old or partially downloaded file causing all sorts of problems.
This problem is exacerbated by the fact that our CDN has many edge locations and so the problem isn't always immediately visible to us from our office. Some edge locations may have pulled down old/bad code while others may have pulled down new/good code.
Is there a better way to do these deployments that will avoid this issue?
As a general rule of thumb:
Don't do live upgrades (unless the language supports it, and even then think twice).
Pulling code using git pull and then waiting for the app to notice changes to files sounds a lot like the 90's: uploading PHP files to an Apache web server using FTP (or SFTP if you are cool) and waiting for Apache to notice that they were updated. It can't happen atomically, so of course there is a race condition. Some users WILL get a half-built and broken site.
I recommend only upgrading your live and running application while no one is using it. Hopefully you have a pool of servers behind a load balancer of some sort, which will allow you to remove them one at a time and upgrade them.
This will mean that users will be able to use both the old and the new site at the same time, depending on how and when they access it, but that is much better than not being able to access it at all.
Ideally you would be able to spin up copies of each of the web servers you have, running the new version of the site. Check that the new version works, then atomically update the load balancer so that everyone gets bumped to the new site at the same time. Only once everything is verified to be working perfectly are the old machines shut down and decommissioned, or reused.
Step 4 in your procedure should be:
git archive --remote $yourgithubrepo --prefix=$timestamp/ | tar -xf -    # unpack the new build into its own timestamped directory
stop-server
ln -sf $timestamp current    # repoint the "current" symlink at the new build
start-server
Your server would use the current directory (well, a symlink) at all times. No matter how long the deploy takes, your application is in a consistent state.
I'll go ahead and post our far-from-ideal monkey-patch that we're using right now.
We deploy once, which may or may not go as planned. Once we're sure the code is deployed on all the servers, we do another build where the only thing that changes is the version number.
Then we deploy again server by server.
The race condition still exists, but because the application code is the same between the two versions, the issue is masked: no matter which server the CDN hits, it gets the "latest" code.

How can I test whether jmx-console.war is being used in JBoss 4.2.2?

There is a file named "jmx-console.war" within the .\jboss-4.2.2.GA\server\default\deploy folder. I am getting a security vulnerability report about this module. How can I tell whether our application is using it? I implemented an open source tool, but I'm not sure how to test whether it's being used.
Nessus vulnerability of High Severity:
JBoss JMX Console Unrestricted Access
http://www.tenable.com/plugins/index.php?view=single&id=23842
If you see that war file in the deploy folder, then most likely your application is using it. That is to say, it is most likely being loaded. It should be fairly easy to test for, assuming you know the HTTP port the JBoss instance is listening on. By default, it is 8080 so point your browser to http://[your jboss host]:8080/jmx-console and see if the console comes up, keeping in mind that it might be password protected, and your HTTP port might not be 8080.
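If you would rather script that check than use a browser, a small sketch along these lines (the host name is a placeholder) just inspects the HTTP status code:

// check-jmx.js - report whether /jmx-console responds on a given host and port
var http = require('http');

http.get({ host: 'your-jboss-host', port: 8080, path: '/jmx-console/' }, function (res) {
  // 200 = deployed and open, 401/403 = deployed but protected, 404 = not deployed
  console.log('HTTP ' + res.statusCode);
}).on('error', function (err) {
  console.log('request failed: ' + err.message);
});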
You should also see something like this in the server.log or configured equivalent:
11:52:30,165 INFO main [TomcatDeployer] deploy, ctxPath=/jmx-console,
warUrl=.../deploy/jmx-console.war/
Having said that, there are a couple of ways I can think of that would indicate or cause the jmx-console not to be deployed:
The folder you referenced is in the default server directory. This is only one instance out of 3 (default, all, minimal) and you may be running one of the others, or even a custom configured server. That is to say, if you were running the minimal server instance, or one that did not contain the jmx-console.war, then the presence of that file in the default server's deploy directory would not cause it to be deployed in another server's instance. (that all sounds more complicated than it really is)
War files in the deploy directory depend on another directory called jboss-web.deployer which actually deploys war files. If that directory is not there, my guess is that war deployment has been disabled. Highly unlikely though, as there are easier ways of doing this, and if someone went to the trouble of removing this folder, they probably would have removed the wars too.
Bottom line: the easiest way would be to find the HTTP port, then hit the jmx-console URL and see if it responds, or check the log file. It is conceivable that someone could rename jmx-console.war to something else (in an ill-conceived attempt to hide it, perhaps?), in which case you would need to run a battery of HTTP request scans and try to find a jmx-console signature, but that's outside my (otherwise quite large...) area of expertise.
