Node.js Desktop App vs Web-based App Performance - node.js

My employer is at a crossroads:
We've got an offer to create an app for a large multinational company interested in monitoring a large fleet of vehicles simultaneously on a map. I'm talking about 5000 at a time. We tried to do that in our current web-based app and it chokes on the quantity of objects, despite our efforts to optimize the code. My question is: can we gain a performance boost if we convert our web-based app into a desktop one via Node.js modules like node-webkit or atom-shell? Does a desktop app have better access to system resources? The web page freezes beyond help and even asks me to mercy-kill it because processing is taking too long, yet in Task Manager it only uses about 18% of the CPU and 2 GB of RAM out of 16.

No, that won't help. Your code still runs in a WebKit browser.
The trick is not to show all 5000 objects at a time.
Showing 5000 pins on a map is not useful to the user anyway; group markers that are close together (https://developers.google.com/maps/articles/toomanymarkers?hl=en) and, as the user zooms in, show a more and more detailed view.
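As a rough sketch of that grouping approach, assuming the classic MarkerClusterer utility library and a hypothetical vehicles array of {lat, lng} positions:

    // Cluster nearby vehicle markers instead of drawing all 5000 at once.
    // MarkerClusterer merges nearby markers into cluster icons and splits
    // them apart automatically as the user zooms in.
    function showFleet(vehicles) {
      var map = new google.maps.Map(document.getElementById('map'), {
        zoom: 5,
        center: { lat: 52.0, lng: 5.0 } // arbitrary starting viewport
      });
      var markers = vehicles.map(function (v) {
        return new google.maps.Marker({ position: { lat: v.lat, lng: v.lng } });
      });
      new MarkerClusterer(map, markers, { gridSize: 60, maxZoom: 15 });
    }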

Related

Determining cause of CPU spike in Azure

I am relatively new to Azure. I have a website that has been running for a couple of months with not too much traffic... when users are on the system, the various dashboard monitors go up and then flatline the rest of the time. This week, the CPU time went way up when there were no requests and no data going in or out of the site. Is there a way to determine the cause of this CPU activity when the site is not active? It doesn't make sense to me that I should have CPU activity attributed to my site when there is no site activity.
If your website does significant processing at application start, it is possible your VM got rebooted or your app pool recycled, and your Application_Start handler got executed again (which would cause CPU to spike without any requests).
You can analyze this by adding application logs to your Application_Start event (after initializing tracing). There is another comment detailing how to enable logging, but you can also consult this link.
You need to collect data to understand what's going on, so the first things I would say are:
1. Go to the Azure management portal -> your website (assuming you are using Azure Websites) -> dashboard -> operation logs, and check whether there is any suspicious activity going on.
2. Download the logs for your site using any FTP client and analyze what's happening. If there is not much data, I would suggest adding more logging in your application to see what is happening or which module is spinning.
A great way to detect CPU spikes, and even to pinpoint slow-running areas of your application, is to use a profiler like New Relic. It's a free add-on for Azure that collects data and provides you with a dashboard. You might find it useful for determining the exact cause of the CPU spike.
We regularly use it to monitor the performance of our applications. I would recommend it.

How does memory usage, CPU time, data out, and file system storage apply to my website?

Pardon my ignorance, but I have a few questions that I cannot seem to find answers to by searching here or on Google. These questions will seem completely dumb, but I honestly need help with them.
On my Azure website portal there are a few things I am curious about.
How does CPU time apply to my website? I am unaware of how I am using CPU unless this applies to hosting some type of application? I am using this "site" as a form to submit data to my database.
What exactly does "data out" mean? I am allowed 165 MB per day.
What exactly is file system storage? Is this the actual space available on my Azure server to store my project and any other things I might directly host on it?
Last question: how does memory usage apply in this scenario as well? I am allowed 1024 MB per hour.
I know what CPU time is in desktop computing, as well as memory usage, but I am not exactly sure how these apply to my website. I do not know how to project whether I will go over any of these limits so that I can upgrade my site.
How does CPU time apply to my website? I am unaware of how I am using CPU unless this applies to hosting some type of application? I am using this "site" as a form to submit data to my database.
This is CPU time used by your code. If you use a WebSite project (in ASP.NET), you may want to do precompilation for your WebSite project before deploying to your Azure Website (read about precompilation here). Compiling your code is one side of things; the rest is executing it. Each web request that goes to a server handler/mapper/controller/aspx page/etc. uses some CPU time, especially writing to the database and so on. All these actions count toward CPU time.
But exactly how the CPU time is measured is not documented.
What exactly does "data out" mean? I am allowed 165 MB per day.
Every single HTTP request to your site generates a response. All the data that goes out from your website is counted as "data out". Basically, any and all data that leaves the data center where your website is located counts as data out. This includes any outgoing HTTP/web requests your code might perform against remote sources, and also the data that goes out if you are using an Azure SQL Database that is not in the same data center as your website.
What exactly is file system storage? Is this the actual space available on my Azure server to store my project and any other things I might directly host on it?
Exactly: your project + anything uploaded to it (if you allow file uploads, for example) + server logs.
Last question: how does memory usage apply in this scenario as well? I am allowed 1024 MB per hour.
Memory is metered like CPU time, but my guess is that it is much easier to gauge. Your application lives in its own .NET App Domain (check this SO question on AppDomains), and it is relatively easy to measure memory usage for an App Domain.

Deploying a CPU-intensive web service on the cloud

I have an application which I want to expose as a web service (SaaS). The application is CPU-intensive, multithreaded, and takes a good amount of time to execute (15-20 secs on average). Since I want to expose it as SaaS, I want to use existing cloud services like Amazon or Google App Engine so that the cost and the work involved in scaling my service stay low. I have a couple of questions in mind:
1.) The application is multithreaded, and the number of threads invoked depends on the number of results returned by the service (so the number of threads is a dynamic quantity). Right now I have a 6-core processor, so I have set the thread-pool size to 6, but since I am moving to the cloud, how can I optimally use the cloud infrastructure?
2.) Do the cloud service providers (which?) give the option to select the number of CPU cores required for each request (or something similar that serves my purpose)?
3.) What changes are needed in the code (related to the threads)?
4.) Any other specific area I should look at when moving to the cloud?
In Amazon EC2 you are basically paying for different types of instances; you are free to pick one with a single core or one with sixteen. You get what you pay for.
how can I optimally use the cloud infrastructure?
Your approach is fine: if your task is CPU-intensive, have a thread pool with the same number of threads as CPU cores/CPUs.
select number of CPU cores required for each request
No, at least not on Amazon. You run your application on a given instance and that's all you get. You have to pick the instance type in advance, but of course you are free to switch between types, add new instances, etc. at any time. The cloud!
In Google App Engine you can't create threads, so it's not an option for you. See also: Why does Google App Engine support a single thread of execution only?
3.) What changes are needed in the code (related to the threads)?
None. It's a standard PC, after all.
4.) Any other specific area which I should give a sight for moving to the cloud?
Well, see above: some services, like GAE, are completely useless for you. Do some research before you actually pay for something.

What are the most important statistics to look at when deploying a Node.js web-application?

First, a little bit about my background: I have been programming for some time (10 years at this point) and am fairly competent when it comes to coding up ideas. I started working on web-application programming just over a year ago and thankfully discovered Node.js, which made web-app creation feel a lot more like traditional programming. Now I have a Node.js app that I've been developing for some time, which is running in production on the web. My main confusion stems from the fact that I am very new to the world of web development and don't really know what's important and what isn't when it comes to monitoring my application.
I am using a Joyent SmartMachine, and looking at the analytics options that they provide is a little overwhelming. There are so many different options and configurations, and I have no clue what purpose each analytic really serves. For the questions below, I'd appreciate any answer, whether it's specific to Joyent's Cloud Analytics or completely general.
QUESTION ONE
Right now, my main concern is to figure out how my application is utilizing the server it runs on. I want to know if my application has the right amount of resources allocated to it. Does the number of requests it receives make the server overkill, or does it warrant extra resources? What analytics are important to look at for a Node.js app for that purpose? (I'm using both MongoDB and Redis on separate servers, if that makes a difference.)
QUESTION TWO
What other statistics are generally really important to look at when managing a server that's in production? I'm used to programs that run once to do something specific (e.g. a raytracer that finishes running once it has computed an image), as opposed to web-apps which are continuously running and interacting with many clients. I'm sure there are many things that are obvious to long-time server administrators that aren't to newbies like me.
QUESTION THREE
What's important to look at when dealing with NodeJS specifically? What are statistics/analytics that become particularly critical when dealing with the single-threaded event loop of NodeJS versus more standard server systems?
I have other questions about how databases play into the equation, but I think this is enough for now...
We have been running Node.js in production for nearly a year, starting from 0.4 and currently on the 0.8 series. The web app is Express 2- and 3-based with Mongo, Redis, and memcached.
A few facts:
Node cannot handle a large V8 heap; when it grows over 200 MB you will start seeing increased CPU usage.
Node always seems to leak memory, or at least to grow a large heap without actually using it. I suspect memory fragmentation, as V8 profiling and Valgrind show no leaks in JS space or in the resident heap. Early 0.8 was awful in this respect; RSS could be 1 GB with a 50 MB heap.
Hanging requests are hard to track. We wrote our own middleware to monitor them, especially since our app is long-poll based.
My suggestions:
Use multiple instances per machine, at least one per CPU. Balance with HAProxy, nginx, or similar, with session affinity.
Write middleware to report hung connections, i.e. ones your code never responded to or whose latency went over a threshold (see the first sketch after this list).
Restart instances often, at least weekly.
Write a poller that prints out memory stats from the process module once per minute (see the second sketch after this list).
Use supervisord and Fabric for easy process management.
Monitor CPU and the reported memory stats, and restart on a threshold.
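A minimal sketch of such latency-reporting middleware for Express, assuming a hypothetical slowRequestMonitor name and threshold; a real version would report to your monitoring system rather than the console:

    // Hypothetical Express middleware: warn when a response takes longer
    // than thresholdMs, or never finishes before the socket closes.
    function slowRequestMonitor(thresholdMs) {
      return function (req, res, next) {
        var start = Date.now();
        var done = false;
        res.on('finish', function () {
          done = true;
          var elapsed = Date.now() - start;
          if (elapsed > thresholdMs) {
            console.warn('slow request:', req.method, req.url, elapsed + 'ms');
          }
        });
        res.on('close', function () {
          if (!done) {
            console.warn('request never answered:', req.method, req.url);
          }
        });
        next();
      };
    }

    // app.use(slowRequestMonitor(5000));

And a minimal memory-stats poller, using only the built-in process object (the one-minute interval and console output are just placeholder choices):

    // Log V8 heap and RSS once per minute; no external modules required.
    setInterval(function () {
      var mem = process.memoryUsage();
      console.log(new Date().toISOString(),
        'rss=' + Math.round(mem.rss / 1048576) + 'MB',
        'heapUsed=' + Math.round(mem.heapUsed / 1048576) + 'MB',
        'heapTotal=' + Math.round(mem.heapTotal / 1048576) + 'MB');
    }, 60 * 1000);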
Whatever the type of web app, Node.js or otherwise, load testing will answer whether your application has the right amount of server resources. A good website I recently found for this is Load Impact.
The real question to answer is: WHEN does the load time begin to increase as the number of concurrent users increases? A tipping point is reached at a certain number of concurrent users, after which server performance starts to degrade. So load test according to how many users you expect to reach your website in the near future.
How can you estimate the number of users you expect?
Installing Google Analytics or another analytics package on your pages is a must! This way you will be able to see how many daily users are visiting your website and how your visits grow from month to month, which helps in predicting future expected visits and therefore the expected load on your server.
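For reference, installing classic Google Analytics amounted to pasting its asynchronous ga.js snippet into each page, roughly as below (UA-XXXXX-X stands for your own property ID):

    // Classic asynchronous ga.js tracking snippet; UA-XXXXX-X is a placeholder.
    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', 'UA-XXXXX-X']);
    _gaq.push(['_trackPageview']);
    (function () {
      var ga = document.createElement('script');
      ga.type = 'text/javascript';
      ga.async = true;
      ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') +
               '.google-analytics.com/ga.js';
      var s = document.getElementsByTagName('script')[0];
      s.parentNode.insertBefore(ga, s);
    })();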
Even if I know the number of users, how can I estimate actual load?
The answer is in the F12 developer tools available in all browsers. Open up your website in any browser and push F12 (or Ctrl+Shift+I for Opera), which should open the browser's developer tools. On Firefox make sure you have Firebug installed; on Chrome and Internet Explorer it should work out of the box. Go to the Net or Network tab and refresh your page. This will show you the number of HTTP requests and the bandwidth usage per page load!
So the formulas to work out daily server load are simple:
Number of HTTP requests per page load x average number of page loads per user per day x expected number of daily users = total HTTP requests to the server per day
And...
Number of MBs transferred per page load x average number of page loads per user per day x expected number of daily users = total bandwidth required per day
I've always found it easier to calculate these figures on a daily basis and then extrapolate to weeks and months.
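As a worked example with made-up inputs (50 requests and 1.5 MB per page load, 4 page loads per user per day, 10,000 daily users):

    // Worked example of the formulas above; all input numbers are hypothetical.
    var requestsPerPageLoad = 50;
    var mbPerPageLoad = 1.5;
    var pageLoadsPerUserPerDay = 4;
    var dailyUsers = 10000;

    var requestsPerDay = requestsPerPageLoad * pageLoadsPerUserPerDay * dailyUsers;
    var mbPerDay = mbPerPageLoad * pageLoadsPerUserPerDay * dailyUsers;

    console.log(requestsPerDay);   // 2000000 requests/day
    console.log(mbPerDay / 1024);  // ~58.6 GB/day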
Node.js is single-threaded, so you should definitely start a process for every CPU your machine has. Cluster is by far the best way to do this and has the added benefit of being able to restart dead workers and to detect unresponsive ones.
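For example, a bare-bones cluster setup, forking one worker per CPU and restarting workers when they exit, might look like this (port 8000 and the trivial HTTP handler are placeholders):

    // One worker process per CPU core; the master restarts workers that die.
    var cluster = require('cluster');
    var http = require('http');
    var os = require('os');

    if (cluster.isMaster) {
      os.cpus().forEach(function () {
        cluster.fork();
      });
      cluster.on('exit', function (worker) {
        console.log('worker ' + worker.process.pid + ' died, restarting');
        cluster.fork();
      });
    } else {
      http.createServer(function (req, res) {
        res.end('hello from worker ' + process.pid + '\n');
      }).listen(8000);
    }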
You also want to do load testing until your requests start timing out or exceed what you consider a reasonable response time. This will give you a good idea of the upper limit your server can handle. Blitz is one of the many options to have a look at.
I have never used Joyent's statistics, but NodeFly and their node-nodefly-gcinfo module are great tools for monitoring Node processes.

Number of instances needed for a Windows Azure application

I'm fairly new to Windows Azure and want to host a survey application that will be filled out by approximately 30,000 users simultaneously.
The application consists of one .aspx page that is sent to the client once, asks 25 questions, and gives a wrap-up of the given answers at the end. When the user has given an answer and hits the 'next question' button, the answer is sent via an .ashx handler to the server. The response is the next question and its answer options. The wrap-up is sent to the client after a full postback.
The answers are saved in an Azure Table that is partitioned so that each partition can hold a maximum of 450 users.
I would like to ask if someone can make an educated guess about how many web-role instances we need to start in order to keep this application running. (If that is too hard to say, is it more likely to be 5, 50, or 500 instances?)
Which is the better way to go: 20 small instances or 5 large instances?
Thanks for your help!
The most obvious answer: you would be best served by testing this yourself and seeing how your application holds up. You can easily get performance counters and other diagnostics out of Windows Azure; for instance, you can connect Microsoft SCOM (System Center Operations Manager) to monitor your environment during the test. Site Hammer is a simple load-testing tool for Windows Azure (on the MSDN code gallery).
Apart from this very obvious answer, I will share some guesstimates: given the type of load, you are probably better off with more small instances as opposed to a lower number of large ones, especially since you already have your storage partitioned. If you are really going to have 30K visitors simultaneously and give them a ~15-second interval between reading a question and posting their answer, you are looking at 30,000 / 15 = 2,000 requests per second. Ten nodes should be more than enough to handle that load. Remember that this is just a simple estimate, lacking any insight into your architecture, etc. For these types of loads, caching is a very good idea; it will dramatically increase the load each node can handle.
However, the best advice I can give you is to make sure that you are actively monitoring. It takes less than 30 minutes to spin up additional instances, so if you monitor your environment and/or make sure that you are notified whenever it starts to choke, you can easily upgrade your setup. Keep in mind that you do need to contact customer support to be able to go over 20 instances (this is a default limit, in place to protect you from over-spending).
Aside from the sage advice tijmenvdk gave you, let me add my opinion on instance size. In general, go with the smallest size that will support your app, and then scale out to handle increased traffic. This way, when you scale back down, your minimum compute cost stays low. If you ran, say, a pair of extra-large instances as your baseline (since you always want a minimum of two instances to get the uptime SLA), your cost footprint starts at $0.12 x 8 cores x 2 instances = $1.92 per hour, even during low-traffic times. If you go with small instances, you'd be at $0.12 x 1 x 2 = $0.24 per hour.
Each VM size has associated CPU, memory, and local (non-durable) disk storage, so pick the smallest size unit that your app works efficiently in.
For load/performance-testing, you might also want to consider a hosted solution such as Loadstorm.
How simultaneous are the requests in reality?
Will they all type the address in at exactly the same time?
That said, profile your app locally; this will enable you to estimate CPU, network, and memory usage on Azure. Then, rather than looking at how many instances you need, look at how you can reduce the requirement! Apply these tips, and profile locally again.
Most performance tips involve a trade-off between CPU, memory, and bandwidth usage; the idea is to ensure that they scale equally. If your application runs out of memory while you still have loads of CPU and network headroom, you are paying for capacity you can't use, so rebalance before scaling out.
For a single-page survey, ensure your HTML, CSS & JS are minified and cacheable.
Combine them if possible, and to get really scalable, push static files (CSS, JS & images) to a CDN. This all reduces the number of requests the web server has to deal with, and therefore reduces the number of web roles you will need = less network.
How does the .ashx return its response? i.e. is it sending HTML, XML, or JSON?
Personally, I'd have it return JSON, as this will require less network bandwidth and most likely less server-side processing = less memory and network.
Use asynchronous APIs to access Azure storage (these use I/O completion ports to free up the IIS thread to handle more requests until Azure storage comes back = enabling the CPU to scale).
tijmenvdk has already mentioned using queues to write. Does the list of questions change? If not, cache it, so that the app only has to read from table storage once on start-up and once for each client for the final wrap-up = saves network and CPU at the expense of memory.
All of these tips are equally applicable to a normal web application, on a single server or web-farm environment.
The point I'm trying to make is that what you can't measure, you can't improve; and measurement, improvement, and cost all go hand in hand. Dynamic scaling will reduce costs, but fundamentally, if your application hasn't been measured and its resource usage optimised, asking how many instances you need is pointless.
