Remote Performance Monitor - Browser Based - Node.js

I work with an ARM-based embedded system with a Linux kernel and a fairly large filesystem image (~1 GB). The kernel and filesystem are under my control, so I can add modules and rebuild if necessary.
The system has Node.js and, on top of this, Node-RED for an IoT application. I want to leverage Node-RED's simple server capability to serve a web page showing the system performance statistics graphically.
I am considering building collectd for the target system and using it to write performance data to the filesystem. Then I will use Node-RED/Node.js to present this information as a web page. This approach seems straightforward enough to be doable.
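For concreteness, here is a minimal sketch of the serving side I have in mind, assuming collectd's csv plugin writes under /var/lib/collectd/csv (the paths, plugin directory and port below are placeholders for illustration); the same logic could also sit inside a Node-RED function node behind an http-in/http-response pair:

const fs = require('fs');
const path = require('path');
const http = require('http');

const COLLECTD_DIR = '/var/lib/collectd/csv'; // assumed csv plugin DataDir

function latestSample(pluginDir) {
  // collectd's csv plugin writes one file per day; take the newest one.
  const dir = path.join(COLLECTD_DIR, pluginDir);
  const files = fs.readdirSync(dir).sort();
  const lines = fs.readFileSync(path.join(dir, files[files.length - 1]), 'utf8')
    .trim().split('\n');
  const header = lines[0].split(',');              // e.g. epoch,shortterm,midterm,longterm
  const last = lines[lines.length - 1].split(',');
  const sample = {};
  header.forEach((name, i) => { sample[name] = Number(last[i]); });
  return sample;
}

http.createServer((req, res) => {
  if (req.url === '/stats') {
    // 'myhost/load' is a placeholder; use your own <hostname>/<plugin> directory.
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ load: latestSample('myhost/load') }));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);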
My question is: is there an alternative, established way of implementing such a remote system performance viewer? Or can anyone suggest a lightweight performance monitor and a method of displaying the statistics graphically on a web page?

I started node-spm for exactly this: it logs to Sematext Logsene and writes custom metrics to SPM. It also collects process and OS metrics. It's at an early stage, and I will do more on it in the next few weeks.
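For comparison, the basic process and OS metrics mentioned above can be gathered with nothing but Node's built-in os and process APIs; a rough sketch (the field names and 10-second interval are my own illustrative choices, not node-spm's actual format):

const os = require('os');

function snapshot() {
  const mem = process.memoryUsage();
  return {
    time: Date.now(),
    loadavg: os.loadavg(),             // 1, 5 and 15 minute load averages
    freeMemBytes: os.freemem(),
    totalMemBytes: os.totalmem(),
    processRssBytes: mem.rss,
    processHeapUsedBytes: mem.heapUsed,
    uptimeSeconds: process.uptime()
  };
}

// Print a snapshot every 10 seconds; a real collector would ship this to a
// logging/metrics backend instead of the console.
setInterval(() => console.log(JSON.stringify(snapshot())), 10000);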

Related

How to monitor performance counters of HP Web Tour sample application Apache server installed on local machine

How can I monitor performance counters of the HP Web Tour sample application's Apache server, installed locally, using jvisualvm or any other utility?
Looking into Monitoring Your Server and 9 Key Apache Web Server Performance Metrics to Monitor, it appears that:
You need to keep an eye on the Apache error log.
You need to consider Apache-specific metrics like requests per second, bytes per second and bytes per request. You should also be able to extract these metrics from your performance testing tool; such tools normally report this kind of stat (see the sketch after this list).
You need to consider infrastructure metrics like CPU, RAM, disk, network and swap usage on the machine where you're running this sample application. The majority of operating systems come with built-in monitoring tools, e.g. Windows Performance Monitor or a number of command-line utilities for Linux, and there are third-party cross-platform monitoring solutions like PerfMon or Zabbix.
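If you want to pull the Apache-specific numbers yourself, one common approach is to scrape mod_status's machine-readable output. The sketch below assumes mod_status is enabled with ExtendedStatus On and reachable at http://localhost/server-status?auto (both assumptions about your setup):

const http = require('http');

http.get('http://localhost/server-status?auto', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    const stats = {};
    body.split('\n').forEach((line) => {
      const idx = line.indexOf(':');
      if (idx > 0) stats[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
    });
    // These field names come from mod_status; they only appear with ExtendedStatus On.
    console.log('Requests/sec:', stats['ReqPerSec']);
    console.log('Bytes/sec:   ', stats['BytesPerSec']);
    console.log('Bytes/req:   ', stats['BytesPerReq']);
    console.log('Busy workers:', stats['BusyWorkers']);
  });
}).on('error', console.error);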

Check the load of an application with synthetic users

I am looking to find out the response times for my Node.js application when (let's say) 1000 users use it simultaneously. I believe this is called stress testing. How can I achieve this?
I am new to the testing area and have yet to learn the tools that would be used.
Edit: I need to know how to have virtual users for the application.
In order to understand the response times for an application under load, you need two components:
A load driver
There are a number of available tools that will simulate users making HTTP requests. A simple open-source option is Apache JMeter. The following page in the documentation shows you how to create a web test plan, including adding virtual users:
http://jmeter.apache.org/usermanual/build-web-test-plan.html
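JMeter is the more complete answer, but purely to illustrate what "virtual users" means, here is a very rough Node.js sketch that runs a fixed number of concurrent users, each making sequential requests, and reports response times (the target URL and user counts are placeholders; don't treat this as a substitute for a real load testing tool):

// Toy load driver: N concurrent "virtual users", each making sequential requests.
const http = require('http');

const TARGET = 'http://localhost:3000/';  // placeholder URL for the app under test
const USERS = 50;                         // number of virtual users
const REQUESTS_PER_USER = 20;
const times = [];

function virtualUser(remaining, done) {
  if (remaining === 0) return done();
  const start = Date.now();
  http.get(TARGET, (res) => {
    res.resume();                         // drain the response body
    res.on('end', () => {
      times.push(Date.now() - start);
      virtualUser(remaining - 1, done);
    });
  }).on('error', done);
}

let finished = 0;
for (let i = 0; i < USERS; i++) {
  virtualUser(REQUESTS_PER_USER, () => {
    if (++finished === USERS) {
      times.sort((a, b) => a - b);
      console.log('requests:', times.length,
                  'median ms:', times[Math.floor(times.length / 2)],
                  'max ms:', times[times.length - 1]);
    }
  });
}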
A performance monitoring tool
In order to understand how your application is performing under load and to measure application responsiveness, you need a performance monitoring tool. One option is to track 'http' monitoring events using Node Application Metrics, either via its API directly or through an open-source monitoring stack like Elasticsearch/Kibana with the ELK Connector for Node Application Metrics. There's a getting-started guide to monitoring Node.js application performance with Elasticsearch/Kibana here:
https://developer.ibm.com/open/2015/09/07/using-elasticsearch-and-kibana-with-node-application-metrics/
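For reference, using the API directly looks roughly like the following; this is a minimal sketch based on the appmetrics package's 'http' event, and the 200 ms threshold and logging are my own additions:

const appmetrics = require('appmetrics');
const monitoring = appmetrics.monitor();

monitoring.on('http', (event) => {
  // event.url and event.duration (ms) come from the 'http' event payload.
  if (event.duration > 200) {
    console.log('slow request:', event.url, event.duration.toFixed(1) + ' ms');
  }
});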

Remote monitoring of system stats with node.js

We have implemented a monitoring solution in Node.js which does some basic checks for database integrity and API up-time. We want to expand this system to collect basic system stats of our Linux servers, like CPU and disk usage. Some of these servers are behind a firewall which is out of our control, with only some very basic ports open (ssh, ftp, http, https).
How can I gather system information from these servers in Node.js? Are there monitoring systems which expose this information through a (secured) RESTful API?
I've had a lot of success with this SSH client written in JavaScript:
https://github.com/mscdex/ssh2
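As a concrete example, here is roughly how you could pull CPU load and disk usage over SSH with ssh2 (the host, credentials and commands are placeholders):

// Gather basic system stats from a remote Linux host over SSH using ssh2.
const fs = require('fs');
const { Client } = require('ssh2');

const conn = new Client();
conn.on('ready', () => {
  // /proc/loadavg gives load averages; df -h / gives root filesystem usage.
  conn.exec('cat /proc/loadavg && df -h /', (err, stream) => {
    if (err) throw err;
    let output = '';
    stream.on('data', (chunk) => { output += chunk; });
    stream.stderr.on('data', (chunk) => { console.error('stderr:', chunk.toString()); });
    stream.on('close', () => {
      console.log(output);
      conn.end();
    });
  });
}).connect({
  host: 'server.example.com',                      // placeholder host
  port: 22,
  username: 'monitor',                             // placeholder user
  privateKey: fs.readFileSync('/home/me/.ssh/id_rsa')
});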
There are tons of available solutions for monitoring system stats: Nagios, Zabbix, Scout, Cacti. There are even some hosted ones like ServerDensity.
All of these systems should cover the top-level stats: CPU, RAM, disk I/O and network. They all have a plug-in infrastructure, so you can collect custom stats (API uptime, DB availability) and send them along with the regular stats.
If you're running on a cloud infrastructure somewhere, many of these platforms provide this information "out of the box", generally in your account dashboard (see providers like Joyent or Azure).
The big question here is: what else do you need?
Use NRPE from Nagios as a client on the box you want to monitor. It's fairly simple to set up and its API is documented. http://exchange.nagios.org/directory/Addons/Monitoring-Agents/NRPE--2D-Nagios-Remote-Plugin-Executor/details
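From Node.js, one simple way to use it is to shell out to the check_nrpe plugin. This sketch assumes check_nrpe is installed locally and that a check_load command is defined in the remote host's nrpe.cfg; the host address and command name are placeholders:

const { execFile } = require('child_process');

// Query a remote NRPE agent by shelling out to the check_nrpe plugin.
execFile('check_nrpe', ['-H', '192.168.1.10', '-c', 'check_load'], (err, stdout) => {
  // check_nrpe exits non-zero for WARNING/CRITICAL, so only treat "no output" as failure.
  if (err && !stdout) return console.error(err);
  console.log('NRPE says:', stdout.trim());   // e.g. "OK - load average: 0.10, 0.15, 0.20"
});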

Monitoring a Node.js application running in Windows Azure

Is there a way to enable Performance Counters to monitor Node.js application performance in Windows Azure?
I haven't experimented with it myself yet, but there is node-perfmon, which is a wrapper around typeperf. It says it allows you to write performance counters, as well as do simple memory/CPU monitoring. Is this the sort of monitoring you were looking for?
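I haven't used node-perfmon's own API, but the underlying idea of wrapping typeperf is simple enough to sketch with child_process; the counter path and parsing below are illustrative assumptions, not node-perfmon's code:

const { execFile } = require('child_process');

const counter = '\\Processor(_Total)\\% Processor Time';  // example counter path

// "-sc 1" asks typeperf for a single sample; output is CSV with a quoted header row,
// one or more quoted data rows, and some trailing status text.
execFile('typeperf', [counter, '-sc', '1'], (err, stdout) => {
  if (err) return console.error(err);
  const rows = stdout.split(/\r?\n/).filter((l) => l.startsWith('"') && l.includes(','));
  const value = rows[rows.length - 1].split(',').pop().replace(/"/g, '');
  console.log('CPU %:', parseFloat(value).toFixed(1));
});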
Just adding more to the above answers.
For application stats monitoring on Node.js you can use Hummingbird. It supports status over HTTP, so you can integrate the code into your Node.js app and open one port to get the monitoring data over HTTP. There is no need to use Azure Storage diagnostics, and all the information is available in real time on the same machine. It's still in pre-alpha, but it handles a few tasks really well.
http://projects.nuttnet.net/hummingbird/
I know about the Node.js "monitor" plugin, which is best suited to Linux machines for system-specific performance and also uses HTTP to provide system-specific data. I am not sure whether it can be ported to Windows Server, but if it can, it is a great choice. Read more about monitor usage here:
http://www.sys-con.com/node/2275314
You may also want to look at these; they aren't directly using perfmon, but they allow you to monitor the performance of your Node.js server:
http://search.npmjs.org/#/Probes.js
http://search.npmjs.org/#/nodetime
The NPM registry is a great tool for finding Node.js packages.

CPU usage of an Oracle database machine

I am using Oracle 11g, and I have an application coded in the Spring framework. When I configure the database on a Sun Fire X4170 running Linux, the machine's CPU utilization is around 80-100%; however, when I shift the same database to a Sun M3000 server running a Unix OS (supposedly a more powerful machine), application performance goes down and CPU utilization remains at 90-100%. I can't figure out whether it is the application causing such utilization or the database design.
To add: the database is not relational; things are handled by the application.
Well you certainly can find some interesting opinions on the intertubes.
Oracle does not have a true server architecture (others have it). Rather than performing classic server tasks, such as multi-threading, caching of data pages, parallel processing (split a query across many devices) etc. within itself, it uses the o/s to do all that. That means for each user process (PL/SQL connection) there is one unix process; 1000 users means 1000 unix processes, all competing for the same resources.
You might note that Oracle has had:
a connection pooling architecture (multi-threaded server) since version 7 (1992).
a cache for data pages (known helpfully as the buffer cache) since forever
parallel query (splitting a query across many processes) since version 7.1 (1993)
splitting queries across multiple servers since OPS (version 6) or across distributed databases (version 5)
It's also noteworthy that even if all that was said were correct rather than incorrect, it doesn't actually help you in determining the root cause.
Especially noteworthy, because it uses file system files (not raw partitions), and the "caching" is outside, it relies heavily on (and is very sensitive to) the file system cache that you have set up. Likewise, Oracle needs a massive amount of memory for these processes.
Oracle certainly can use raw partitions, again dating back to the last millennium. Moreover, if you wish to cache within the database (using the buffer cache that PerformanceDBA has forgotten about) and bypass the filesystem cache, this feature is available on all current filesystems. Oracle also supplies its own combined filesystem/volume manager in ASM, which you can use if you wish.
Oracle is also rather well instrumented (and if you have access to dtrace, so is Solaris). It can certainly tell you which sessions and processes are using the CPU, and what consumes the time the application spends in the database (down to individual block read times if you care), so it is very susceptible to profiling. I'd recommend that you check out Thinking Clearly about Performance, available at http://www.method-r.com/downloads/cat_view/38-papers-and-articles and written by one of the top Oracle performance experts in the world. If you have access to the Oracle Diagnostics Pack, then checking out first ADDM reports and then AWR reports would be profitable.
Trying to avoid a flame war here.
I should probably have separated the "how to find out" part of my response more clearly from my responses to PerformanceDBA's comments about server architecture. I share Stephanie's suspicions about the Spring framework, but without properly scoped measurement evidence there is no point in blaming any particular attribute of the environment; that would just be bias. Fortunately, the instrumentation built into the Oracle kernel allows you to trace and then profile the slow sessions to determine exactly where the issue lies. So I would do the following:
1) Enable tracing for a representative session (you can use the dbms_monitor package for that).
2) Also gather an execution plan for the statement(s) involved, using the gather_plan_statistics hint.
3) Profile the trace file by time using an appropriate profiler (tkprof, orasrp, Method R Profiler).
Investigate the problem statements in order of their contribution to response time.
If you can't carry out the above, then you can use ADDM and/or AWR (if licensed for the Diagnostics Pack, as I originally suggested) or Statspack if not. ADDM naturally concentrates on the biggest time consumers; I suggest that if you are forced down the Statspack route you do the same.
The M3000 is certainly a more powerful machine, but it is more suitable for true servers. The X4170 with hyper-threads is more suited for file servers.
I'm not so certain about that. Have any data to support that claim?
An M3000 has one SPARC64 VII processor with 4 cores (tech specs), while an X4170 has 1 or 2 Intel Xeon 5500-series "Nehalem-EP" processors, each with 4 cores (tech specs). I know that I would expect much more from even a single-processor Nehalem-EP system than from the M3000. Obviously the data will vary slightly with the workload, but I know where I'd put my money.
