Node.js - I need to know my resource usage before deploying to Google Cloud

EDIT: STILL NOT ANSWERED. I appreciate the advice I have received so far, but I still have not found a proper way to test the amount of resources my server is using. I decided to use GCE instead of GAE but I still want to measure the resource usage.
I have searched all over Google as well as SA and can't seem to figure this one out.
I would like to deploy my (very small) node.js server to either Google App Engine or Google Compute Engine (not sure which to use yet).
I see that they charge based on how many resources you use, but how can I check this before I make my decision? Basically what I would like to do is find a way to analyse my server and see what CPU/DISK/NETWORK/RAM/Etc it uses, and then possibly make some refinements to my code to get the usage down as low as possible.
I am a hobbyist programmer and this server is just for personal stuff so I don't need anything fancy. I just want to get it hosted on google and not my home server. My real fear is that, since I am not a professional, my code might be doing some crazy background stuff repeatedly that would rack my usage up for nothing.
Quick rundown on what my server does:
A basic Node.js Express template that IntelliJ generated for me, to which I added my code to sit and listen to a Firebase database. When Firebase receives a message (once or twice a day maybe, roughly the size of a text message), the server sends a quick GCM/FCM message to a few devices. Extremely simple server, very little code. Nothing crazy.
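For illustration, the core of a server like that can be sketched in a few lines with the firebase-admin SDK; the database path, project URL, and device tokens below are placeholders, not the asker's actual values:

```javascript
// Rough sketch of the described setup: watch a Firebase Realtime Database path
// and forward new entries to a few devices via FCM. All identifiers are placeholders.
const admin = require('firebase-admin');

admin.initializeApp({
  credential: admin.credential.applicationDefault(),
  databaseURL: 'https://<your-project>.firebaseio.com'
});

const deviceTokens = ['<device-token-1>', '<device-token-2>'];

admin.database().ref('/messages').on('child_added', async (snapshot) => {
  const text = snapshot.val();
  // Send one FCM notification per registered device.
  for (const token of deviceTokens) {
    await admin.messaging().send({
      token,
      notification: { title: 'New message', body: String(text) }
    });
  }
});
```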
As a little bonus for me, if you have a suggestion as to which platform I should use, I am all-ears.

If you do not need this server to run 24x7, use App Engine. It stops an instance if it is not being used for 15 minutes. The startup time for new instances depends on your code, but for Node.js instances it should not be long.
Generally speaking it is easier to run an app on App Engine than Compute Engine, but if you use a single instance and don't change code often the difference is negligible.
App Engine has a generous free quota. You may end up paying nothing until the usage gets over a certain threshold.
You can run some diagnostic tools on your existing server, but even then you will get an approximation - a server with a different combination of resources sitting on a different network may use resources differently. You may be able to get a rather accurate estimate of memory usage, though.
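If you just want a rough first number from inside Node itself, the process object can report its own memory and CPU consumption; a minimal sketch (the 60-second interval is arbitrary):

```javascript
// Log this process's memory and CPU usage at a fixed interval.
let lastCpu = process.cpuUsage();

setInterval(() => {
  const mem = process.memoryUsage();            // bytes
  const cpu = process.cpuUsage(lastCpu);        // microseconds since the last sample
  lastCpu = process.cpuUsage();

  console.log(
    `rss=${(mem.rss / 1024 / 1024).toFixed(1)}MB ` +
    `heapUsed=${(mem.heapUsed / 1024 / 1024).toFixed(1)}MB ` +
    `cpuUser=${(cpu.user / 1e6).toFixed(2)}s cpuSystem=${(cpu.system / 1e6).toFixed(2)}s`
  );
}, 60 * 1000);
```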
If this is a small app with not too many users, even a small instance should be able to handle it. There is no harm in trying - start with the smallest instance, test, go to the next instance up if tests fail. Your key concern should be to have enough memory to handle a small number of requests.
As for the number of requests your server can handle, you can configure automatic scaling. It is the default option in App Engine and can also be enabled for the flexible runtime. Then you can run the smallest instance on which your server does not crash due to lack of memory, and another instance will be added if and when that small instance is not enough.
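In the standard environment this behaviour is configured in app.yaml; a minimal sketch might look like the following (the runtime version, instance class, and instance caps are illustrative, not requirements):

```yaml
runtime: nodejs20      # whichever Node runtime you actually target
instance_class: F1     # smallest standard instance class

automatic_scaling:
  min_instances: 0     # allow scaling to zero when idle
  max_instances: 2     # cap costs while you measure real usage
```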

Well, after over a month I figure I might as well answer this myself.
What I ended up doing was creating a basic instance on Compute Engine (the micro, the smallest one available) and letting it just sit there for a few weeks. I looked back at the data to see what some good baselines were and took note.
Then I took my server code and ran it on the instance. I left it there for a few days, changed it, updated it, etc., just trying to simulate the things I would normally be doing. I sent messages from my client app (that's what this server handles, after all is said and done) and let this go on for a few more weeks.
The rest is history. I looked at the baseline then looked at my new memory, CPU, network and disk usage and there we go. Good to go. My free trial still isn't even over so it was a free experiment.
The good news is that my server is more 'lightweight' than I thought.

Related

Node.js Hosting: low bandwidth, high CPU

I've been looking around for a Node.js hosting service that suits my (probably rather exceptional) needs. It's basically a web app for CMS editors to preview CMS pages. (The actual website is statically hosted.)
So it handles only a few requests, but every page request (*.html) triggers quite a series of actions; to simplify, let's say it rebuilds a good part of the website.
What I need is a service that delivers high performance on rare occasions. It should also support continuous deployment, so that when we update something the app stays up the whole time (and the simpler the better: I'm a frontend developer, not a DevOps engineer).
I've tried Google Cloud: a painfully slow update mechanism and rather complex, but stable and fast, although only if you pay a lot.
Heroku is very simple, but their plans are for standard web apps, focusing on many requests, high bandwidth etc. Still, the $250 plan is rather ok in terms of performance. But again: pricey.
Jelastic would offer flexible vertical scaling, but it's hard to do continuous deployment there and I have not yet figured out how to update an app without interrupting service.
I also thought about renting a virtual private server, but again I would not know how to provide continuous delivery. Also, I'd rather have a dedicated service.
It feels like there must be a simple service that I've just missed. I'm grateful for any help or hints!
I have finally found an answer to my search: Google Cloud Run
You only pay by usage, and it still offers very good performance. For my scenario it decreases cost by a crazy factor: we'll pay less than 1% of what the previous solution (running on Google App Engine) cost.
I've also tried AWS Lambda, but I had many issues with the rather extensive node app (like dynamic requires).
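Coming back to Cloud Run for anyone evaluating it for a similar setup: it runs an ordinary container and only requires the process to listen on the port passed in the PORT environment variable, so the entry point can stay as plain Express. A rough sketch (the route and response are placeholders, not the poster's actual app):

```javascript
const express = require('express');
const app = express();

// Placeholder preview route; the real app would rebuild/render the CMS page here.
app.get(/\.html$/, (req, res) => {
  res.send(`Preview for ${req.path}`);
});

// Cloud Run tells the container which port to listen on via PORT.
const port = process.env.PORT || 8080;
app.listen(port, () => console.log(`listening on ${port}`));
```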

How to affordably release a web app for my company

First of all, I'm not really sure if this question belongs here on Stack Overflow or if I should ask it somewhere else. If that's the case, please point me in the right direction :)
So, for context, this is an app that I was asked to develop for my job. At first I thought of building a web app and hosting it inside the company's servers and domain (intranet), but that isn't possible due to external issues that I can't control.
Is there another way to achieve this? The app must have a database and should be accessible for a bunch of users at the same time.
Of course we want to spend the least amount of money possible to make this happen. Also, using a workstation of our own to host everything is not possible either.
Edit: I didn't finish developing, but for now I'm developing it in Python Flask.
The number of users is small really, just up to five people.
OK - I guess a lot of what you'll get in response to this is that your description is too vague. Things such as scale, number of users, the programming languages used to create the web app, etc. are important when talking about hosting.
However, for me, there are three very good options out there for free hosting, up to a certain amount of traffic.
1.) Heroku - Heroku.com
A world known web hosting platform. You can publish code through GitHub, and it has some extensive coverage for different types of web apps. Definitely worth a look.
2.) Netlify - netlify.com
Similar to Heroku, but used by some major companies. Allows you to host for free to a point, and is relatively simple to get started with.
3.) Vercel - vercel.com
A bit more technical in my opinion - but again, very similar to the above two and has a free tier.
All three are great options, and I'd recommend looking into them in more detail to see what option is best for you. Can't go wrong with any of them.
I had a similar problem: A Python-Flask-SQLite app for me and my office pals to use together.
The solution was creating a single .exe file with PyInstaller and hosting it and the database file on a network drive (one that everyone who will use the app has access to). As everybody (~10 people) sees the same DB, things work fine!

Where to host my NodeJS powered socket.io API?

I'm currently working on an API for an app that I'm developing, which I don't want to say too much about. I'm a solo dev with no patents, so it's probably a good idea to keep it anonymous. It uses socket.io together with Node.js to interact with clients and vice versa, which I might swap out later for Elixir and its sockets, but that isn't relevant for now. Now I'm trying to look into cloud hosting, but I'm having a rough time finding a good service to use.
These are my requirements:
24/7 uptime
Low memory and performance needed (at least to start with). About 1+ GB with 2+ cores will most likely suffice (need 2 threads or more for Node to handle async work well)
Preferably free for maybe even a year, or just really cheap, but that might be too much to ask
Must somehow be able to run some sort of database. Haven't really settled on this yet, but I want to implement a custom currency at some point, and probably have the ability to add some cooldowns. So it can be fairly simple and small. If anybody has any tips on what database I should use, that would also be very welcome. I was thinking of Cassandra because of its blazing fast performance and expandability, but I also want to look into remote databases, especially if I'm going to go international with the product
Ability to keep open socket.io connections, as you've probably guessed :P
Low ping, decently high bandwidth internet. The socket.io connections are lightweight and not a lot of data has to be sent; mostly packets of a few kilobytes every now and then across all of the clients.
If this information is too vague or you want to know some other requirements I haven't thought of, let me know.
Check out Heroku (PaaS), they have a free version to start with
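For reference, the socket.io side of a server like the one described can be very small; a sketch under assumed names (the 'chat' event and port are hypothetical):

```javascript
const http = require('http');
const { Server } = require('socket.io');

const httpServer = http.createServer();
const io = new Server(httpServer);

io.on('connection', (socket) => {
  console.log(`client connected: ${socket.id}`);

  // Hypothetical lightweight message from a client, relayed to everyone else.
  socket.on('chat', (payload) => {
    socket.broadcast.emit('chat', payload);
  });

  socket.on('disconnect', () => console.log(`client left: ${socket.id}`));
});

httpServer.listen(process.env.PORT || 3000);
```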

Using Node.js for writing a web application

I am considering developing a website which has many characteristics of a social networking site. The website I am considering will have a lot of apps, which will interact with the database, as well as scrape other websites for information and host a multiuser chat. It will also feature a forum, a blog, and other similar CRUD applications. The key things I am looking at are
Response time
Max number of developers may be 1 to 3 during the initial stages
I expect the website to scale up to around 1000 concurrent users in a year, and then hopefully grow exponentially.
The users are expected to spend a lot of time, in the site.
With these requirements in mind, I looked at Django and Web2py, since I am knowledgeable in Python. They mostly fit the bill, but I am concerned about scalability, and as the site scales I will need to add more servers. This means additional cost, and I don't have any way to monetize the app in the near future for various reasons. So I have to be satisfied with a limited amount of resources.
Can you kindly advise me?
Thx
Ik
From what you have described, Node.js is perfect. Not only does it have a low memory footprint and handle thousands of concurrent clients out of the box, but you can also use it for scraping websites (see this and this) and creating chats (check nodechat and this other nice tutorial).
The response time depends on your application, but if you code the right way (don't block the Node.js event loop; keep your 'heavy lifting' outside the main server process) Node.js is really fast.
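As a concrete illustration of keeping heavy lifting off the event loop, Node's built-in worker_threads module can run CPU-bound work in a separate thread; the 'heavy job' below is just a stand-in loop:

```javascript
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  // Main process: keep serving requests; push CPU-heavy work to a worker thread.
  function runHeavyJob(input) {
    return new Promise((resolve, reject) => {
      const worker = new Worker(__filename, { workerData: input });
      worker.on('message', resolve);
      worker.on('error', reject);
    });
  }

  runHeavyJob(10000000).then((result) => console.log('job done:', result));
} else {
  // Worker thread: stand-in for expensive work (scraping, parsing, etc.).
  let sum = 0;
  for (let i = 0; i < workerData; i++) sum += i;
  parentPort.postMessage(sum);
}
```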
This depends on you, but consider that Node.js is JavaScript on the server side, so there is already a large pool of developers who know JS and could pick up the Node.js-specific things quickly.
There were some official benchmarks on the Node.js blog a few weeks ago; look here: http://blog.nodejs.org/2011/11/05/node-v0-6-0/ A simple Node.js server can handle 5-6 thousand requests per second, so you can imagine that's really something.
Spending a lot of time on the site means that users will be making many requests, so see point 3) above.
http://highscalability.com/blog/2011/2/22/is-nodejs-becoming-a-part-of-the-stack-simplegeo-says-yes.html
Scaling node.js

What are good ways to create real-time stats for high-load webservers?

Say I have a bunch of web servers each serving hundreds of requests per second, and I want to see real-time stats like:
Request rate over the last 5s, 60s, 5 min, etc.
Number of unique users seen, again per time window
Or in general for a bunch of timestamped events, I want to see real-time derived statistics - what's the best way to go about it?
I've considered having each GET request update a global counter somewhere, then sampling that at various intervals, but at the event rates I'm seeing it's hard to get a distributed counter that's fast enough.
Any ideas welcome!
Added: Servers are Linux running Apache/mod_wsgi, with a Python (Django) stack.
Added: To give a sense of the event rates I want to track stats for, they're coming in at over 10K events/s. Even incrementing a distributed counter at that rate is a challenge.
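One approach that avoids a shared per-event counter is to aggregate locally on each web server into small time buckets and only ship the per-bucket totals to a central store. A rough sketch of the per-server side, written in Node.js for brevity (the same shape works in the Python/Django stack described; the bucket size and flush destination are assumptions):

```javascript
// Per-server aggregation: count events into 5-second buckets and flush the
// finished bucket periodically, rather than hitting a shared counter per event.
const BUCKET_MS = 5000;
let bucketStart = Date.now();
let requestCount = 0;
const uniqueUsers = new Set();

function recordEvent(userId) {
  requestCount++;
  uniqueUsers.add(userId);
}

setInterval(() => {
  const finished = {
    start: bucketStart,
    requests: requestCount,
    uniqueUsers: uniqueUsers.size,
  };
  bucketStart = Date.now();
  requestCount = 0;
  uniqueUsers.clear();

  // Ship the aggregate to whatever central store you use (hypothetical sink).
  console.log('flush bucket:', finished);
}, BUCKET_MS);
```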
You might like to help us try out the beta of our agent for application performance monitoring in Python web applications.
http://newrelic.com
It delves more into application performance rather than just the web server, but since any bottlenecks generally aren't going to be in the web server but in your application, that is going to be more useful anyway.
Disclaimer: I work for New Relic and this is the project I am working on. It is a paid product, but the beta means it is free for now with all features. Later, when that changes, if you don't want to pay for it there is still a Lite subscription level which is free and gives you basic web metrics reporting, which still covers some of what you are after. Anyway, right now would be a great opportunity to make use of it to debug your performance while you can.
Virtually all good servers provide this kind of functionality out of the box. For example, Apache has the mod_status module and Glassfish supports JMX. Furthermore, there are many commercial packages for monitoring clusters, such as Hyperic and Zenoss.
What web or application server are you using? It is difficult to provide a solution without that information.
Look at using WebSockets: their overhead is much smaller than that of an HTTP request, and they are very well suited to real-time web applications. See http://nodeknockout.com/ for Node-based WebSocket examples.
http://en.wikipedia.org/wiki/WebSocket
You will need to run a daemon if you want to run it alongside your Apache server.
Also take a look at:
http://kaazing.com/ if you want less hassle but are willing to fork out some cash.
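As one concrete shape for the 'run a daemon' idea, a small Node process using the ws package could push fresh stats snapshots to any connected dashboard every few seconds (the payload and port are placeholders):

```javascript
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8081 });

// Broadcast a (placeholder) stats snapshot to every connected dashboard.
setInterval(() => {
  const snapshot = JSON.stringify({ ts: Date.now(), requestsLast5s: 0 /* fill in */ });
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(snapshot);
  }
}, 5000);
```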
On the Windows side, Performance Monitor is the tool you should investigate.
As Jared O'Connor said, you should specify what kind of web server you want to monitor.
