HP LoadRunner - measurements to watch outbound active TCP ports - windows-server-2003

I am trying to find out whether there are any relevant measurements that LoadRunner can track during a load test that would let me monitor the number of active outbound TCP ports on a given Windows 2003 box.
I can see that there are various measurements specific to the CLR and IIS, such as current connections, but I am looking for something that can show the active outbound ports at any given time during the load test.
Thank you.

The Analysis (v9.52) help file states that:
The Connections graph shows the number of open TCP/IP connections (y-axis) at each point in time of the load test scenario (x-axis). One HTML page may cause the browser to open several connections, when links on the page go to different Web addresses. Two connections are opened for each Web server.
This is probably as close as you can get with the standard graphs in the analysis tool.

Let's take LoadRunner out of the picture for a moment. All LoadRunner does is leverage the native monitoring capabilities of whatever operating system or application it is monitoring. So, with LoadRunner removed, how would you monitor these items using Perfmon or other utilities in Windows? Once you have that path set, you should be able to point LoadRunner at the same items.
Try this:
Run Performance Monitor (Perfmon.exe)
Hit the '+' sign to add a measurement
Object = .NET CLR Networking
Counter = Connections Established
Similarly, there are other counters related to the Network Interface object that may have items of interest to you. Once you have the counters you want, simply add them via LoadRunner's native capability to query a Windows host, or (preferred) set up a monitor group in SiteScope that LoadRunner leverages. Current releases of LoadRunner (at least since 8.x) ship with a 500-point SiteScope license for use as a monitoring foundation.
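If you want to sanity-check the same numbers outside LoadRunner, a quick script on the host works too. Below is a minimal sketch (not a LoadRunner feature) that counts established TCP connections by parsing netstat -ano -p tcp output; it assumes Node.js is available on the box, and the ephemeral-port filter is only a rough heuristic for "outbound".

```typescript
// Minimal sketch: sample established TCP connections on a Windows host by
// parsing `netstat -ano -p tcp`. Assumes Node.js is installed on the box.
import { execSync } from "child_process";

function countEstablishedOutbound(): number {
  const output = execSync("netstat -ano -p tcp", { encoding: "utf8" });
  return output
    .split(/\r?\n/)
    .filter((line) => line.includes("ESTABLISHED"))
    // Rough heuristic: outbound connections normally use a high ephemeral
    // local port, while listeners accepting inbound traffic use fixed low ports.
    .filter((line) => {
      const localAddress = line.trim().split(/\s+/)[1] || "";
      const localPort = parseInt(localAddress.split(":").pop() || "0", 10);
      return localPort >= 1024;
    }).length;
}

// Sample once per second while the load test runs.
setInterval(() => {
  console.log(new Date().toISOString(), "established outbound-ish connections:", countEstablishedOutbound());
}, 1000);
```

Running this alongside the test gives you a per-second series you can line up against the Analysis graphs afterwards.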

Related

Check the load of an application with synthetic users

I am looking to find out the response times for my Node.js application when (let's say) 1000 users use it simultaneously. I believe this is called stress testing. How can I achieve this?
I am new to the testing area and have yet to learn the tools that are typically used.
Edit: I need to know how to have virtual users for the application.
In order to understand the response times for an application under load, you need two components:
A load driver
There are a number of available tools that will simulate users making HTTP requests. A simple open source option is Apache JMeter. The following page in the documentation shows you how to create a web test plan, including adding virtual users:
http://jmeter.apache.org/usermanual/build-web-test-plan.html
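If you just want to see what a load driver does before installing anything, here is a minimal sketch (not JMeter, and not production-grade) that fires a batch of concurrent "virtual users" at an endpoint and reports response times; the target URL and user count are placeholders.

```typescript
// Minimal sketch of a load driver: N concurrent "virtual users" each make one
// request and we record how long it took. URL and user count are placeholders.
const TARGET_URL = "http://localhost:3000/";  // replace with your endpoint
const VIRTUAL_USERS = 100;                    // step towards 1000 gradually

async function virtualUser(): Promise<number> {
  const start = Date.now();
  await fetch(TARGET_URL);                    // global fetch needs Node 18+
  return Date.now() - start;
}

async function run(): Promise<void> {
  const timings = await Promise.all(
    Array.from({ length: VIRTUAL_USERS }, () => virtualUser())
  );
  timings.sort((a, b) => a - b);
  const avg = timings.reduce((sum, t) => sum + t, 0) / timings.length;
  const p95 = timings[Math.floor(timings.length * 0.95)];
  console.log(`users=${VIRTUAL_USERS} avg=${avg.toFixed(1)}ms p95=${p95}ms`);
}

run().catch(console.error);
```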
A performance monitoring tool
In order to understand how your application is performing under load, and to measure application responsiveness, you need a performance monitoring tool. One option is to track the 'http' monitoring events emitted by Node Application Metrics, either by using the API directly, or by feeding them into an open source monitoring stack such as Elasticsearch/Kibana via the ELK Connector for Node Application Metrics. There's a getting-started guide to monitoring Node.js application performance with Elasticsearch/Kibana here:
https://developer.ibm.com/open/2015/09/07/using-elasticsearch-and-kibana-with-node-application-metrics/
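For the "use the API directly" route, here is a minimal sketch; it assumes the appmetrics npm package and a plain Node HTTP server, and the event field names should be checked against the version you install.

```typescript
// Minimal sketch of using the Node Application Metrics API directly.
// appmetrics should be loaded before the rest of the app so it can instrument it;
// it ships without TypeScript types, so it is require()'d untyped here.
const appmetrics = require("appmetrics");
const monitoring = appmetrics.monitor();

// Emitted for each inbound HTTP request handled by the instrumented process.
// Field names (method, url, duration) are taken from the appmetrics docs;
// verify them against the version you install.
monitoring.on("http", (event: { method: string; url: string; duration: number }) => {
  console.log(`${event.method} ${event.url} took ${event.duration} ms`);
});

// A trivial server to exercise the monitoring; your real app goes here instead.
const http = require("http");
http.createServer((_req: unknown, res: any) => res.end("ok")).listen(3000);
```

With the ELK Connector route described in the linked article, the same events are shipped into Elasticsearch rather than logged by hand.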

Azure - IPSec VPN Network Speed

We have a Microsoft DC R2 server running only an Interbase database application. Everything works fine and we can access this application via both Point-to-Site and Site-to-Site VPN.
Our file transfer speeds are coming in at about 5Mbps, which is fantastic.
When we access our software (locally), which pulls data from the server (in Azure), we see it clock in at about 125KBps.
This results in a 3-6 second wait before the dataset appears on screen within our application.
In a local environment this is done within 0.5 seconds.
I'm trying to get to the bottom of the issue as the Internet connection we are on is a 100Mbps feed.
I looked at the Draytek router and assumed that this was the problem; however, we have tested from multiple sites and ISPs and can't seem to get any improvement in application DB access speeds. SMB speeds remain impressive.
We're not too experienced in the Azure area and we can't work out any way of improving those speeds; if anybody has any suggestions, that would be fantastic.
FYI, we're using an A2 Windows deployment (approx 4GB).
Regards,
Pottre11

Remote monitoring of system stats with node.js

We have implemented a monitoring solution in node.js, which does some basic checks for database integrity and API up-time. We want to expand this system to collect basic system stats of our Linux servers, like CPU and disk usage. Some of these servers are behind a firewall that is out of our control, with only some very basic ports open (ssh, ftp, http, https).
How can I gather the system information of these servers in node.js? Are there monitoring systems which expose this information through a (secured) RESTful API?
I've had a lot of success with this SSH client written in JavaScript:
https://github.com/mscdex/ssh2
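As an illustration, here is a minimal sketch using that ssh2 package to pull basic stats over the already-open SSH port; the host, username, key path and the commands run are placeholders you would adapt.

```typescript
// Minimal sketch using the ssh2 package: run a few stat commands over SSH and
// collect the output. Host, username and key path are placeholders.
import { readFileSync } from "fs";
const { Client } = require("ssh2");  // ssh2 is plain JavaScript; required untyped here

const conn = new Client();
conn
  .on("ready", () => {
    // Load average, memory and root-disk usage in one round trip.
    conn.exec("cat /proc/loadavg; free -m; df -P /", (err: Error | undefined, stream: any) => {
      if (err) throw err;
      let output = "";
      stream.on("data", (chunk: Buffer) => { output += chunk.toString(); });
      stream.on("close", () => {
        console.log(output);  // parse this and feed it into your existing checks
        conn.end();
      });
    });
  })
  .connect({
    host: "server.behind.firewall",   // placeholder
    port: 22,
    username: "monitor",              // placeholder
    privateKey: readFileSync("/path/to/id_rsa"),
  });
```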
There are tons of available solutions for monitoring system stats: Nagios, Zabbix, Scout, Cacti. There are even some hosted ones like ServerDensity.
All of these systems cover the top-level stats: CPU, RAM, disk IO and network. They all have a plug-in infrastructure so that you can collect custom stats (API uptime, DB availability) and send them along with the regular stats.
If you're running on a cloud infrastructure somewhere, many of these provide information "out of the box", generally in your account dashboard (see guys like Joyent or Azure).
The big question here is: what else do you need?
Use NRPE from Nagios as a client on the box you want to monitor. It's fairly simple to set up and its API is documented. http://exchange.nagios.org/directory/Addons/Monitoring-Agents/NRPE--2D-Nagios-Remote-Plugin-Executor/details

Linux vs Win runtime timings

I have an application which was ported from Windows to Linux. The same code now compiles with VS C++ and g++, but there is a difference in performance between running it on Windows and running it on Linux. The purpose of this application is caching. It sits as a node between a server and a client, caching client requests and server responses in a list, so that when another client makes a request that the server has already processed, this node responds instead of forwarding the request to the server.
When this node runs on Windows, the client gets everything it needs in about 7 seconds. But when the same node runs on Linux (Ubuntu 9.04), the client takes 35 seconds to start up. Every test is run from scratch. I'm trying to understand why there is this timing difference. A weird scenario is when the node runs on Linux inside a virtual machine hosted on Windows: in this case the load time is around 7 seconds, just as when it runs on Windows natively. So my impression is that there is a networking problem.
This node uses the UDP protocol for sending and receiving network data, with boost::asio as the implementation. I have tried changing all supported socket flags and the buffer sizes, but nothing helped.
Does anyone know why this is happening, or of any UDP-related network settings that might influence the performance?
Thanks.
If you suspect a network problem take a network capture (Wireshark is great for this kind of problem) and look at the traffic.
Find out where the time is being spent, either based on the network capture or based on the output of a profiler.
Once you know that, you're halfway to a solution.
These timing differences can depend on many factors, but the first one that comes to mind is that you are using a modern Windows version. XP already had features to keep recently used applications in memory, but in Vista this was optimized much further. For each application you load, a special load file is created that mirrors how the application looks in memory. The next time you load the application, it should load a lot faster.
I don't know about Linux, but it is quite possible that it needs to load your app completely each time. You can compare the performance of the two systems much more fairly if you measure them while the application is already running. Leave your application open (if your design allows it) and compare again.
These differences in how the system optimizes memory are consistent with what you saw in the VM scenario.
Basically, if you rule out other running applications and run your application at high priority, the performance should be close to equal, but it also depends on whether you use operating-system-specific code, how you access the file system, how you use the UDP protocol, and so on.

IIS7: View resources used by specific website

Is there a way to view the resources (RAM, CPU, etc) used by specific websites hosted in IIS? Something like task manager, but specifically for IIS.
Launch IIS Manager and select your server name. In the Features view, select 'Worker Processes' under the 'IIS' section. You can see all currently running app pools; if you double-click on an app pool, you can see its current activity as well (nothing will show up if the site is under low load, as requests are processed too quickly to register).
If you want more information, or you want to collect historical data, you're going to have to use Performance Monitor.
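If you go the Performance Monitor route and want something scriptable, here is a minimal sketch that shells out to the built-in typeperf tool to sample Process counters for the w3wp worker processes; the wildcard instance name and sample count are assumptions, and mapping an instance back to a specific app pool (for example with appcmd list wp) is left to you.

```typescript
// Minimal sketch: sample CPU and memory for the IIS worker processes using the
// built-in typeperf tool. The w3wp* wildcard and sample count are assumptions;
// with several app pools you will see instances named w3wp, w3wp#1, and so on.
import { execSync } from "child_process";

const counters = [
  '"\\Process(w3wp*)\\% Processor Time"',
  '"\\Process(w3wp*)\\Working Set"',
];

// Take five one-second samples and print the CSV that typeperf produces.
const csv = execSync(`typeperf ${counters.join(" ")} -sc 5`, { encoding: "utf8" });
console.log(csv);
```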
