I need to measure the response time for a file export using the TruClient protocol in LoadRunner

I need to measure the response time for a file export using the TruClient protocol in LoadRunner. After I click on the export button, the file gets downloaded, but I am not able to measure the time for the download accurately.

Pull that data from the HTTP request log, which will show the download request and, if the W3C time-taken value is included in the log, the time required to fulfill the download.
You can process the log at the end of the test for the response time data. If you need to, you can import a set of datapoints into Analysis for representation alongside the rest of your data.
You might want to consider a normalized value for your download instead of a raw response time. I imagine that the files are of different sizes, so naturally they will have different download times. However, if you divide downloaded bytes by the time (in seconds), then you have a normalized measurement of bytes per second, which allows you to compare one download to the next for consistent operation.
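If the log carries W3C-style sc-bytes and time-taken fields, a minimal post-test sketch of that normalization in Python might look like the following; the field positions are an assumption, so match them to your server's #Fields: directive.

# Minimal sketch, assuming a W3C-style log where each request line ends
# with sc-bytes and time-taken (in milliseconds) as the last two fields.
def download_rates(log_path):
    rates = []
    with open(log_path) as log:
        for line in log:
            if line.startswith("#"):  # skip W3C directive lines
                continue
            fields = line.split()
            bytes_sent, time_taken_ms = int(fields[-2]), int(fields[-1])
            if time_taken_ms > 0:
                # bytes per second: comparable across different file sizes
                rates.append(bytes_sent / (time_taken_ms / 1000.0))
    return rates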
Also, keep in mind that since you are downloading a file and writing it to local disk for (presumably) multiple users on a host, you run the risk of turning your local file system into a bottleneck. You can see the same effect if you turn logging up to the highest level for all users and run your test: the wait for lock and wait for write, plus the actual writing of data, become a drag anchor on the performance of your virtual users. This is why the recommended log level is "log on error", or sending the error to the output window of the Controller via lr_output_message() or lr_vuser_status_message().
Consider a control load generator of the same hardware definition as the others, with only a single virtual user of this type on it. If the control group and the global group degrade together, then you have an app issue. If your control user does not degrade but your other users do, then you have a test-bed-induced influence on your results.
These are all issues independent of the tool you are using for the test.

Related

How to get the net response time of a single system during performance testing using LoadRunner or other tools

There are multiple systems involved in my application in order to get the proper response to the final system.
The image above shows that the request first goes from system 1 to system 2, and so on to system n. The final response is visible in system 1. Here I want to know how we can get the net response time of one request from system 2 to system 3, or from system 1 to system 2, and so on. I am a beginner in performance testing. Please let me know how we can achieve this.
Thank you so much!
APM tool integration at systems 1 through n (assuming your systems are supported by your APM tool), or log analysis of messages with timestamps indicating the start and completion of an event. As long as you have a unique correlating key in the logs, you can reconstruct timing records for the various event types, which are key to your understanding of performance in your n-tier model.
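As a sketch of the log-analysis route, assuming each system writes a start and an end line containing a timestamp, a system name, an event marker, and the shared correlation key (the line format here is hypothetical):

# Minimal sketch: reconstruct per-system timings from log lines of the
# hypothetical form "<ISO-8601 timestamp> <system> <start|end> <key>".
from collections import defaultdict
from datetime import datetime

def hop_timings(lines):
    events = defaultdict(dict)  # (key, system) -> {"start": t, "end": t}
    for line in lines:
        ts, system, event, key = line.split()
        events[(key, system)][event] = datetime.fromisoformat(ts)
    return {
        (key, system): (e["end"] - e["start"]).total_seconds()
        for (key, system), e in events.items()
        if "start" in e and "end" in e
    }

Each (key, system) entry then gives the net time one request spent in one system, which you can aggregate per hop.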

Objective-C for loops with @autoreleasepool and ARC

As part of an app that allows auditors to create findings and associate photos with them (saved as Base64 strings due to a limitation on the web service), I have to loop through all findings and their photos within an audit and set their sync value to true.
While I perform this loop I see a memory spike from around 40MB up to 500MB (for roughly 350 photos and 255 findings), and this number never goes down. On average our users are creating around 1000 findings and 500-700 photos before attempting to use this feature. I have attempted to use @autoreleasepool blocks to keep the memory down, but it never seems to get released.
for (Finding * __autoreleasing f in self.audit.findings) {
    @autoreleasepool {
        [f setToSync:@YES];
        NSLog(@"%@", f.idFinding);
        for (FindingPhoto * __autoreleasing p in f.photos) {
            @autoreleasepool {
                [p setToSync:@YES];
                p = nil;
            }
        }
        f = nil;
    }
}
The relationships and retain cycles look like this:
Audit has a strong reference to Finding
Finding has a weak reference to Audit and a strong reference to FindingPhoto
FindingPhoto has a weak reference to Finding
What am I missing in terms of being able to effectively loop through these objects and set their properties without causing such a huge spike in memory? I'm assuming it's got something to do with so many Base64 strings being loaded into memory when looping through but never being released.
So, first, make sure you have a batch size set on the fetch request. Choose a relatively small number, but not too small because this isn't for UI processing. You want to batch a reasonable number of objects into memory to reduce loading overhead while keeping memory usage down. Try 50 or 100 and see how it goes, then consider upping the batch size a little.
If all of the objects you're loading are managed objects then the correct way to evict them during processing is to turn them into faults. That's done by calling refreshObject:mergeChanges: on the context. BUT - that discards any changes, and your loop is specifically there to make changes.
So, what you should really be doing is batch saving the objects you've modified and then turning those objects back into faults to remove the data from memory.
So, in your loop, keep a counter of how many objects you've modified, save the context each time you hit that count, and then refresh all the objects processed so far. The batch size on the fetch and the batch size for saving should be the same number.
There's probably a big difference in size between your Finding objects and the associated images, so your primary aim should be to redesign your database so that unfaulting (loading) a Finding object does not automatically load the Base64-encoded image.
That's actually one of the major strengths of Core Data: loading only part of an object hierarchy. Just move the Base64-encoded data to its own managed object so that Core Data does not load it eagerly. It will still be loaded as needed when the relationship is touched.

Are there metrics for REDHAWK performance?

I have a dual channel radio where I have two RX_DIGITIZER_CHANNELIZERs and two DDCs. My waveform allocates both channels. The waveform just takes the data from each channel and outputs it to two DataConverters. I am using the snapshot function to capture data. When I start to collect data at higher rates, some of the packets get dropped. Is there a way to measure how long a call such as pushPacket takes? If I used the logging function, it would produce too much output to measure how long it takes.
@michael_sw, can you plot the data coming from the device in the IDE instead of saving it to disk?
How are you monitoring the packet drops?
Do you need to go through the data converter? If you have to, it is possible to set a blocking flag in the SRI in the downstream REDHAWK device (see chapter 15 in the manual) to cause back pressure and block until the data converter is done consuming the previous data. This only helps if the data converter is dropping packets.
In the IDE there is a port monitoring mode where you can actually tell when data is being dropped by a component (right-click on the port and select port monitoring).
Another option: in the data converter you could modify the code to watch the getPacket call for inputQueueFlushed being true.
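A minimal sketch of that check for a Python-based component, assuming getPacket() returns a DataTransfer object carrying the inputQueueFlushed flag; the function shape is illustrative, not taken from the asker's waveform:

# Sketch: poll an input port and warn when the input queue was flushed,
# i.e. when packets were discarded because the queue overflowed.
def check_for_drops(port, log):
    packet = port.getPacket()
    if packet is None:
        return None  # nothing queued
    if packet.inputQueueFlushed:
        log.warning("input queue flushed - packets were dropped upstream")
    return packet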
I commonly use timestamping: make a call to one of the system clock functions and either log the time or print it to the console. If you do this in the function that calls pushPacket and again in the pushPacket handler, then you simply take the difference. If this produces too much data, you can use a counter and log only every 1000 calls, etc., or collect the data in an array for a period of time and log/print it after the component is shut down. Calls to the system clock do not affect performance much compared to CORBA calls.
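A minimal sketch of that timestamp-and-counter approach in Python, measuring the caller's side of pushPacket and reporting only every 1000 calls; all names here are illustrative:

import time

# Sketch: wrap pushPacket calls, accumulate durations in memory, and
# print a summary every `report_every` calls to keep output manageable.
class PushTimer:
    def __init__(self, report_every=1000):
        self.samples = []
        self.report_every = report_every

    def timed_push(self, port, data, ts, eos, stream_id):
        start = time.time()
        port.pushPacket(data, ts, eos, stream_id)
        self.samples.append(time.time() - start)
        if len(self.samples) % self.report_every == 0:
            avg = sum(self.samples) / len(self.samples)
            print("pushPacket avg over %d calls: %.6f s"
                  % (len(self.samples), avg))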

Give reads priority over writes in Elasticsearch

I have an EC2 server running Elasticsearch 0.9 with an nginx server for read/write access. My index has about 750k small-to-medium documents. I have a pretty continuous stream of minimal writes (mainly updates) to the content. The speed/consistency I get with search is fine with me, but I have some sporadic timeout issues with multi-get (/_mget).
On some pages in my app, our server will request a multi-get of a dozen to a few thousand documents (this usually takes less than 1-2 seconds). The requests that fail do so with a 30,000 millisecond timeout from the nginx server. I am assuming this happens because the index is temporarily locked for writing/optimizing purposes. Does anyone have any ideas on what I can do here?
A temporary solution would be to lower the timeout and return a user-friendly message saying the documents couldn't be retrieved (however, users would still have to wait ~10 seconds to see the error message).
Another of my thoughts was to give reads priority over writes: any time someone is trying to read a part of the index, don't allow any writes/locks to that section. I don't think this would be scalable, and it may not even be possible.
Finally, I was thinking I could have a read-only alias and a write-only alias. I can figure out how to set this up through the documentation, but I am not sure if it will actually work like I expect it to (and I'm not sure how I can reliably test it in a local environment). If I set up aliases like this, would the read-only alias still have moments where the index was locked due to information being written through the write-only alias?
I'm sure someone else has come across this before; what is the typical solution to make sure a user can always read data from the index with a higher priority than writes? I would consider increasing our server power if required. Currently we have 2 m2.xlarge EC2 instances, one holding the primary and one the replica, each with 4 shards.
An example dump of cURL info from a failed request (with an error of Operation timed out after 30000 milliseconds with 0 bytes received):
{
  "url": "127.0.0.1:9200\/_mget",
  "content_type": null,
  "http_code": 100,
  "header_size": 25,
  "request_size": 221,
  "filetime": -1,
  "ssl_verify_result": 0,
  "redirect_count": 0,
  "total_time": 30.391506,
  "namelookup_time": 7.5e-5,
  "connect_time": 0.0593,
  "pretransfer_time": 0.059303,
  "size_upload": 167002,
  "size_download": 0,
  "speed_download": 0,
  "speed_upload": 5495,
  "download_content_length": -1,
  "upload_content_length": 167002,
  "starttransfer_time": 0.119166,
  "redirect_time": 0,
  "certinfo": [],
  "primary_ip": "127.0.0.1",
  "redirect_url": ""
}
After more monitoring using the Paramedic plugin, I noticed that I would get timeouts when my CPU hit ~80-98% (with no obvious spikes in indexing/searching traffic). I finally stumbled across a helpful thread on the Elasticsearch forum. It seems this happens when the index is refreshing and large merges are occurring.
Merges can be throttled at the cluster or index level, and I've updated indices.store.throttle.max_bytes_per_sec from the default 20mb to 5mb. This can be done at runtime with the cluster update settings API.
PUT /_cluster/settings HTTP/1.1
Host: 127.0.0.1:9200

{
  "persistent" : {
    "indices.store.throttle.max_bytes_per_sec" : "5mb"
  }
}
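The same update can be sent from a script; this sketch uses the Python requests library and simply mirrors the request above:

# Sketch: apply the persistent merge-throttle setting via the cluster
# update settings API; host and value match the example above.
import requests

resp = requests.put(
    "http://127.0.0.1:9200/_cluster/settings",
    json={"persistent": {"indices.store.throttle.max_bytes_per_sec": "5mb"}},
)
print(resp.status_code, resp.text)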
So far Paramedic is showing a decrease in CPU usage, from an average of ~5-25% down to an average of ~1-5%. Hopefully this will help me avoid the 90%+ spikes that were locking up my queries before; I'll report back by selecting this answer if I don't have any more problems.
As a side note, I guess I could have opted for more balanced EC2 instances (rather than memory-optimized ones). I think I'm happy with my current choice, but my next purchase will take CPU into account as well.

How does RealVNC work?

I would like to know how the RealVNC remote viewer works.
Does it frequently send screenshots to the client in real time, or does it use another approach?
As a very high-level overview, there are two types of VNC servers:
Screen-grabbing. These servers will capture the current display into a buffer, compare it to the client state, and send only the rectangles that differ to the client.
Hook-assisted. Hooking into the display update process, these servers will be informed when the screen changes by the display manager or OS. They can then use that information to send only the changed rectangles to the client.
In both cases, it is effectively a stream of screen updates; however, only the changed regions of the screen are transmitted to the client. Depending on the version of the VNC protocol in use, these updates may be compressed as well.
(Note that the client is free to request a complete screen update any time it wants to, but the server will only do this on its own if the entire screen is changed.)
Also, screen updates are not the only things transmitted. There are separate channels that the server can use to send clipboard updates and mouse position updates (since a user physically at the remote machine may be able to move the mouse too).
The display side of the protocol is based around a single graphics primitive: “put a rectangle of pixel data at a given x,y position”. At first glance this might seem an inefficient way of drawing many user interface components. However, allowing various different encodings for the pixel data gives us a large degree of flexibility in how to trade off various parameters such as network bandwidth, client drawing speed and server processing speed. A sequence of these rectangles makes a framebuffer update (or simply update). An update represents a change from one valid framebuffer state to another, so in some ways is similar to a frame of video. The rectangles in an update are usually disjoint but this is not necessarily the case.
Read here to find out more about how it works.
Yes, it essentially sends a stream of screenshots, compressed and reusing unchanged portions of the previous screenshot.
This is, by the way, how the VNC protocol itself works; every client works that way (although the actual way the images are compressed, etc., may change).
Essentially, the server sends Frame Buffer Updates to the client, and the client sends keyboard and mouse input and frame buffer update requests to the server.
Frame Buffer Update messages can have different encodings, but in essence they are different ways of representing rectangular screen areas of pixel data. Generally the client asks for Frame Buffer Updates for the entire screen, but it can ask for just an area of the screen (for example, small-screen clients showing a viewport of the server's screen). The server then sends an FBU (Frame Buffer Update) that contains rectangles where the screen has changed since the last FBU was sent to the client.
The best reference for the RFB/VNC protocol is here. The IETF has a recent (2011) standards document, RFC 6143, that covers RFB, although it is not as extensive as the reference guide.
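To make the message layout concrete, here is a minimal sketch that decodes the fixed-size FramebufferUpdate header and the first rectangle header as laid out in RFC 6143; the pixel data following each rectangle depends on its encoding, so a full parser must decode it before reading the next rectangle header:

import struct

# Sketch: decode the header of an RFB FramebufferUpdate (message type 0).
# All integers in the RFB protocol are big-endian.
def parse_fbu_header(buf):
    # U8 message-type, U8 padding, U16 number-of-rectangles
    msg_type, _pad, n_rects = struct.unpack_from(">BBH", buf, 0)
    if msg_type != 0:
        raise ValueError("not a FramebufferUpdate message")
    # First rectangle header: x, y, width, height (U16 each), S32 encoding
    x, y, w, h, encoding = struct.unpack_from(">HHHHi", buf, 4)
    return n_rects, (x, y, w, h, encoding)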
It essentially works by sending screenshots on the fly. ("Real time" is something of a misnomer here in that there is no clear deadline.) It does attempt to optimize by only sending areas of the screen that have changed, and some forks of the VNC code line use a mirror driver to receive notification when areas of the display are written to, while others use window message hooks to detect repaint requests.
