Node.js Hosting: low bandwidth, high CPU

I've been looking around for a Node.js hosting service that suits my (probably rather unusual) needs. It's basically a web app for CMS editors to preview CMS pages. (The actual website is statically hosted.)
So it handles only a few requests, but every page request (*.html) triggers quite a series of actions; to simplify, let's say each one rebuilds a good part of the website.
What I need is a service that delivers high performance on those few occasions. It should also support continuous deployment, so that the app stays up while we push updates. (Also, the simpler the better: I'm a frontend developer, not an ops person.)
I've tried Google Cloud (App Engine): the deployment mechanism is painfully slow and the platform is rather complex, but it's stable and fast, though only if you pay a lot.
Heroku is very simple, but their plans are aimed at standard web apps, focusing on many requests, high bandwidth, etc. Still, the $250 plan is rather OK in terms of performance. But again: pricey.
Jelastic would offer the flexible vertical scaling I need, but it's hard to do continuous deployment there, and I have not yet figured out how to update an app without an interruption of service.
I also thought about renting a virtual private server, but again I would not know how to set up continuous delivery. Besides, I'd rather use a managed service.
It feels like there must be a simple service that I've just missed. I'm grateful for any help or hints!

I have finally found an answer to my search: Google Cloud Run.
You only pay per usage, and it still offers very good performance. For my scenario it cuts cost by a crazy factor: we'll pay less than 1% of what the previous solution (running on Google App Engine) cost.
I also tried AWS Lambda, but I ran into many issues with our rather large Node app (dynamic requires, for example).
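For anyone heading the same way, the core contract is small: Cloud Run runs a container that serves HTTP on the port passed in the PORT environment variable, and it scales to zero between requests, which is what makes a low-traffic/high-CPU workload cheap. A minimal sketch of a compatible Node server (Express is just one option; the route and handler are placeholders):

    // server.js - sketch of a Cloud Run-compatible preview service
    const express = require('express');
    const app = express();

    // page requests trigger the heavy rebuild work described above
    app.get(/\.html$/, (req, res) => {
      // placeholder: rebuild and return the requested preview page
      res.send('preview rendered');
    });

    // Cloud Run tells the container which port to listen on via PORT
    const port = process.env.PORT || 8080;
    app.listen(port, () => console.log(`listening on ${port}`));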

Related

Node.js - I need to know my resource usage before deploying to google cloud

EDIT: STILL NOT ANSWERED. I appreciate the advice I have received so far, but I still have not found a proper way to test the amount of resources my server is using. I decided to use GCE instead of GAE, but I still want to measure the resource usage.
I have searched all over Google as well as SO and can't seem to figure this one out.
I would like to deploy my (very small) Node.js server to either Google App Engine or Google Compute Engine (not sure which to use yet).
I see that they charge based on how many resources you use, but how can I check this before making my decision? Basically, what I would like to do is find a way to profile my server and see what CPU/disk/network/RAM/etc. it uses, and then possibly make some refinements to my code to get the usage down as low as possible.
I am a hobbyist programmer and this server is just for personal stuff, so I don't need anything fancy. I just want to get it hosted on Google and not on my home server. My real fear is that, since I am not a professional, my code might be doing some crazy background stuff repeatedly that would rack my usage up for nothing.
A quick rundown of what my server does:
It's the basic Node.js Express template that IntelliJ generated for me, plus my own code that sits and listens to a Firebase database. When Firebase receives a message (once or twice a day maybe, around text-message size), the server sends a quick GCM/FCM notification to a few devices. Extremely simple server, very little code. Nothing crazy.
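For reference, a server like that boils down to something of this shape. This is only a sketch with made-up ref paths and tokens, using the firebase-admin SDK:

    // sketch: forward new Firebase entries as FCM pushes
    const admin = require('firebase-admin');
    admin.initializeApp(); // credentials and databaseURL come from the environment

    const tokens = ['device-token-1', 'device-token-2']; // placeholder device tokens

    admin.database().ref('messages').on('child_added', async (snapshot) => {
      const text = String(snapshot.val());
      // one push per registered device
      await Promise.all(tokens.map((token) =>
        admin.messaging().send({ token, notification: { title: 'New message', body: text } })
      ));
    });

A process like this is idle almost all the time, but it has to stay resident to keep the listener open, which is worth bearing in mind with platforms that stop idle instances (see the answer below).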
As a little bonus for me, if you have a suggestion as to which platform I should use, I am all-ears.
If you do not need this server to run 24x7, use App Engine. It stops an instance if it is not being used for 15 minutes. The startup time for new instances depends on your code, but for Node.js instances it should not be long.
Generally speaking it is easier to run an app on App Engine than Compute Engine, but if you use a single instance and don't change code often the difference is negligible.
App Engine has a generous free quota. You may end up paying nothing until the usage gets over a certain threshold.
You can run some diagnostic tools on your existing server, but even then you will get an approximation - a server with a different combination of resources sitting on a different network may use resources differently. You may be able to get a rather accurate estimate of memory usage, though.
If this is a small app with not too many users, even a small instance should be able to handle it. There is no harm in trying - start with the smallest instance, test, go to the next instance up if tests fail. Your key concern should be to have enough memory to handle a small number of requests.
As for the number of requests your server can handle, you can configure automatic scaling. It is the default in App Engine and can be enabled for the flexible runtime. Then you can run the smallest instance on which your server does not crash for lack of memory, and another instance will be added if and when that small instance is not enough.
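For completeness, in App Engine those scaling knobs live in app.yaml. A minimal sketch for the standard environment (the instance class and limits are illustrative, not recommendations):

    # app.yaml - sketch of a small Node.js standard-environment config
    runtime: nodejs18
    instance_class: F1        # the smallest standard instance class
    automatic_scaling:
      min_instances: 0        # allow scaling to zero when idle
      max_instances: 2        # cap cost; raise this if load tests fail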
Well, after over a month I figure I might as well answer this myself.
What I ended up doing was creating a basic instance on Compute Engine (the micro, the smallest one available) and letting it just sit there for a few weeks. I looked back at the data to see what some good baselines were and took note.
Then I took my server code and ran it on the instance. I left it there for a few days, changed it, updated it, etc., just trying to simulate the things I would normally be doing. I sent messages from my client app (that's what this server is for, after all is said and done) and I let this go on for a few more weeks.
The rest is history. I compared the baseline against my new memory, CPU, network and disk usage, and there we go. Good to go. My free trial still isn't even over, so it was a free experiment.
The good news is that my server is more 'lightweight' than I thought.
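If waiting weeks is not an option, Node can report a rough picture of its own footprint from inside the process. A minimal sketch (the numbers are indicative only; as the answer above notes, a machine with different resources on a different network will behave differently):

    // sketch: log this process's memory, CPU and system load once a minute
    const os = require('os');

    setInterval(() => {
      const mem = process.memoryUsage();
      const cpu = process.cpuUsage(); // microseconds of CPU time since process start
      console.log(
        `rss=${(mem.rss / 1048576).toFixed(1)}MB ` +
        `heapUsed=${(mem.heapUsed / 1048576).toFixed(1)}MB ` +
        `cpuUser=${(cpu.user / 1e6).toFixed(1)}s ` +
        `load1m=${os.loadavg()[0].toFixed(2)}`
      );
    }, 60 * 1000);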

Extremely high latency on Azure Web App

We currently self-host our website, but we've had a few downtime incidents outside of our control and we're looking at moving it into Azure. It's an ASP.NET website using Umbraco as the CMS.
Yesterday I signed up for an Azure trial, migrated a copy of our database onto an Azure SQL Server instance, spun up a new Web App and used Web Deploy to upload the app. This was my first experience with Azure, and I was pleasantly surprised at how easy it was. There were a few issues working out how to hook up my new app to my new database but overall it was a simple process.
But the performance is awful. The database is a Standard S2 and I initially created the web app on the Free tier. I was experiencing both poor download speed and latency. The first thing I tried was bumping up the Web App's scale, so I took it to Standard Medium. This seems to have fixed the download speed, but the latency is still impressively bad.
I'm using Google Chrome's network panel to test the speed. Downloading an image straight from our own server is obviously going to be fast, as it's going over our local network, but it does at least show that the application itself is not the issue.
With Standard S2 hosted in Australia East, the speed once the download has started is not too bad, but a 41.92 s TTFB is insane! It's not consistent; sometimes I get as low as 8 s, but that's still unacceptable.
I don't have this issue when visiting other sites, so my internet is not the issue. I've tried using Small S2 and Large S2 with no change in results.
Am I doing something wrong? I find it difficult to believe that every Azure customer experiences this level of performance.
EDIT: Here's what we've learned in the comments so far:
Setting Always On does not help.
Using the Azure CDN is just as slow.
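One more way to take the browser out of the equation is to time the first byte from a script. A minimal Node sketch (the URL is a placeholder for an asset on the Azure app):

    // sketch: measure time-to-first-byte for a single request
    const https = require('https');

    const start = process.hrtime.bigint();
    https.get('https://yourapp.azurewebsites.net/some-image.jpg', (res) => {
      res.once('data', () => {
        const ms = Number(process.hrtime.bigint() - start) / 1e6;
        console.log(`status ${res.statusCode}, TTFB ${ms.toFixed(0)} ms`);
        res.destroy(); // only the first byte matters here
      });
    }).on('error', console.error);

Running that in a loop from a couple of different networks shows quickly whether the latency follows the client or the server.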
I also had enormous performance problems within the Azure environment. The cause was having Application Insights enabled. After I deactivated it, response times were back in the millisecond range instead of 2-3 seconds.
This turned out to be an issue with my own network's configuration. I'm not sure how to resolve it, but I can't reproduce the problem when using my phone's internet, so it's clearly not an Azure problem.

Using Nodejs for writing a web application

I am considering developing a web site which has many characteristics of a social networking site. The site I have in mind will have a lot of apps that interact with the database, scrape other websites for information, and provide a multiuser chat. It will also feature a forum, a blog, and other similar CRUD applications. The key things I am looking at are:
Response time
Max number of developers: maybe 1 to 3 during the initial stages
I expect the website to scale up to around 1000 concurrent users within a year, and then hopefully grow exponentially.
Users are expected to spend a lot of time on the site.
With these requirements in mind I looked at Django and Web2Py, since I am knowledgeable in Python. They mostly fit the bill, but I am concerned about scalability: as the site scales I will need to add more servers, which means additional cost, and I don't have any plans to monetize the app in the near future for various reasons. So I have to make do with a limited amount of resources.
Can you kindly advise me?
Thx
Ik
From what you have described, Node.js is a perfect fit. Not only does it have a low memory footprint and handle thousands of concurrent clients out of the box, but you can definitely use it for scraping websites and for building chats (check out nodechat, for example).
The response time depends on your application, but if you code the right way (don't block the Node.js event loop; keep your heavy lifting outside the server process), Node.js is really fast.
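To make "keep the heavy lifting outside the server process" concrete, here is a minimal sketch using the worker_threads module of current Node (in the Node of this answer's era, child_process.fork played the same role); the workload is a stand-in:

    // sketch: push CPU-heavy work off the event loop onto a worker thread
    const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

    if (isMainThread) {
      // main side: stays responsive while the worker crunches numbers
      const runHeavyJob = (n) => new Promise((resolve, reject) => {
        const worker = new Worker(__filename, { workerData: n });
        worker.once('message', resolve);
        worker.once('error', reject);
      });
      runHeavyJob(1e8).then((sum) => console.log('result:', sum));
    } else {
      // worker side: the blocking loop runs here, not on the server's event loop
      let sum = 0;
      for (let i = 0; i < workerData; i++) sum += i;
      parentPort.postMessage(sum);
    }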
Developer count depends on you, but consider that Node.js is JavaScript on the server side, so there is already a great pool of developers who know JS and could learn the Node.js-specific parts fast.
There were some official benchmarks on the Node.js blog a few weeks ago; look here: http://blog.nodejs.org/2011/11/05/node-v0-6-0/ A simple Node.js server can handle 5-6 thousand requests per second, so you can imagine that's really something.
Spending a lot of time on the site means users will be making many requests, so see my point about response time above.
http://highscalability.com/blog/2011/2/22/is-nodejs-becoming-a-part-of-the-stack-simplegeo-says-yes.html
Scaling node.js

What are good ways to create real-time stats for high-load webservers?

Say I have a bunch of webservers each serving hundreds of requests/s, and I want to see real-time stats like:
Request rate over the last 5 s, 60 s, 5 min, etc.
Number of unique users seen, again per time window
Or in general for a bunch of timestamped events, I want to see real-time derived statistics - what's the best way to go about it?
I've considered having each GET request update a global counter somewhere, then sampling that at various intervals, but at the event rates I'm seeing it's hard to get a distributed counter that's fast enough.
Any ideas welcome!
Added: Servers are Linux running Apache/mod_wsgi, with a Python (Django) stack.
Added: To give a sense of the event rates I want to track stats for, they're coming in at over 10K events/s. Even incrementing a distributed counter at that rate is a challenge.
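One standard pattern for exactly this, assuming a Redis instance is available: instead of one hot global counter, increment a counter keyed by time bucket, and use a HyperLogLog for approximate uniques. Sketched in Node.js for brevity (the same commands work from Python); key names and the bucket size are made up:

    // sketch: time-bucketed request counters plus HyperLogLog uniques in Redis
    const { createClient } = require('redis');
    const redis = createClient();

    const BUCKET_S = 5; // counter resolution in seconds

    // call once per request/event
    async function record(userId) {
      const bucket = Math.floor(Date.now() / 1000 / BUCKET_S);
      await redis.incr(`hits:${bucket}`);          // request counter
      await redis.pfAdd(`uniq:${bucket}`, userId); // approximate unique users
      await redis.expire(`hits:${bucket}`, 3600);  // let old buckets age out
    }

    // average request rate over the last windowS seconds
    async function rate(windowS) {
      const now = Math.floor(Date.now() / 1000 / BUCKET_S);
      const keys = [];
      for (let b = now - windowS / BUCKET_S; b < now; b++) keys.push(`hits:${b}`);
      const counts = await redis.mGet(keys);
      return counts.reduce((sum, c) => sum + Number(c || 0), 0) / windowS;
    }

    async function main() {
      await redis.connect();
      await record('user-42');
      console.log('req/s over last 60 s:', await rate(60));
    }
    main();

At 10K events/s you would pipeline these commands rather than pay a round trip per event, but the shape stays the same.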
You might like to help us try out the beta of our agent for application performance monitoring in Python web applications.
http://newrelic.com
It delves more into application performance than just the web server, but since the bottlenecks generally aren't going to be in the web server but in your application, that is going to be more useful anyway.
Disclaimer: I work for New Relic and this is the project I am working on. It is a paid product, but while in beta it is free with all features. Later, when that changes, if you don't want to pay for it there is still a Lite subscription level, which is free and gives you basic web metrics reporting that still covers some of what you are after. Anyway, right now would be a great opportunity to use it to debug your performance while you can.
Virtually all good servers provide this kind of functionality out of the box. For example, Apache has the mod_status module and Glassfish supports JMX. Furthermore, there are many commercial packages for monitoring clusters, such as Hyperic and Zenoss.
What web or application server are you using? It is difficult to provide a solution without that information.
Look at using WebSockets: their overhead is much smaller than that of an HTTP request, and they are very well suited to real-time web applications. See http://nodeknockout.com/ for Node-based WebSocket examples.
http://en.wikipedia.org/wiki/WebSocket
You will need to run a separate daemon if you want to use WebSockets alongside your Apache server.
Also take a look at http://kaazing.com/ if you want less hassle but are willing to fork out some cash.
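If you go the WebSocket route, pushing derived stats out to a dashboard is only a few lines with a library. A sketch using the ws package for Node (the stats object is a placeholder for whatever counters you maintain):

    // sketch: broadcast current stats to all connected dashboards every second
    const WebSocket = require('ws');
    const wss = new WebSocket.Server({ port: 8081 });

    setInterval(() => {
      const stats = { reqPerSec: 0 }; // placeholder: read from your real counters
      const payload = JSON.stringify(stats);
      for (const client of wss.clients) {
        if (client.readyState === WebSocket.OPEN) client.send(payload);
      }
    }, 1000);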
On the Windows side, Performance Monitor (perfmon) is the tool you should investigate.
As Jared O'Connor said, you should specify what kind of web server you want to monitor.

hardware infrastructure for public web application

I'd like to start a free budget/personal-finance site and will need plenty of horsepower and storage. I'm definitely a newbie, so how does one get started in terms of hardware infrastructure? Do I need to get a dedicated IP from my ISP and obtain my own servers? Do I go with Amazon or SQL Server Data Services/Azure or something like that? Are the latter services free, or is a discounted offering available for non-profit/free services such as the budget/personal-finance site I'm looking to start?
If you don't mind writing your web application in Python, I'd suggest using Google App Engine. See: What Is Google App Engine?
What I like to do when I have new ideas for a site is to find an inexpensive hosting solution ($10 per month). This allows me to test the idea and see if the site is going to be successful. If it is a flop, I haven't wasted much money and if it is successful I can upgrade to better hosting (dedicated server).
There are many hosting options available and several of them have great tools such as an online SQL Server management studio. Your other option would be to host it yourself if you are prepared to deal with firewall issues, backups, storage, etc.
Whether it is feasible to DIY varies a lot by country... if you have a decent broadband connection with a fixed IP, this can be the cheapest route to play around with first, especially if you need an awful lot of storage.
Note however that many fast broadband connections are only fast for downloads; when you're running a server, the speed your users see is your upload speed, which is usually a lot lower. Also, you'll need to do your own administration, backups, etc.
Apart from this, most hosting options have a price tag on top, varying from virtual hosts (sharing a real machine), to colocation (your machine in somebody's data center), to cloud services like Amazon et al. (which have good scaling ability), and you will need to shop around for the software stack and hardware features you really need.
There are really two ways to answer this question, and what differentiates them is budget.
One is to properly design this solution, prototype it, benchmark the prototype, extrapolate anticipated user load, add overhead and scale accordingly. This takes time, costs but gives you a supportable solution that serves your customers well.
The other is to just give something, anything a go and fix the problems as they come along. This is quicker and cheaper but might be a headache for a while and might p*** off your customers.
Basically it comes down to budget.
Best of luck.
