JMeter View Results Tree displays incorrect number of threads - multithreading

My Thread Group contains 10 threads with a 20-second ramp-up period. Inside it there are 2 samplers called HTTP Request (one for login, the other for getting a form) and a View Results Tree listener. Once run, the listener only displays three threads under each HTTP request. The question is how to configure it so that it displays all of the threads that ran?

View Results Tree doesn't display "threads", it displays HTTP Requests and associated sample result(s). There are cases when a single request results in multiple nested requests, i.e.
URL Redirect
Embedded Resources (images, scripts, styles)
So for instance, if you run 1 request to some site with 1 user and it results in 3 sample results, it may be due to:
Response to the main request
Redirect somewhere else
Downloading associated image
Theoretically you can configure JMeter not to follow redirects and not to download embedded resources, but that way your test won't be realistic, and most probably that is not what you want to achieve.
Don't forget to disable or delete the View Results Tree listener from your test plan when it comes to the real load test, as it can be very memory-intensive and will impact your results in a negative way. See the Greedy Listeners - Memory Leeches of Performance Testing article for a more detailed explanation.

Best way for Node.js server to return an error before process runs out of heap memory

I'm running Node.js / Express server on a container with pretty strict memory constraints.
One of the endpoints I'd like to expose is a "batch" endpoint where a client can request a list of data objects in bulk from my data store. The individual objects vary in size, so it's difficult to set a hard limit on how many objects can be requested at one time. In most cases a client could request a large number of objects without any issues, but in certain edge cases even requests for a small number of objects will trigger an OOM error.
I'm familiar with Node's process.memoryUsage() & process.memoryUsage.rss(), but I'm worried about the performance implications of constantly checking heap (or service) memory usage while serving an individual batch request.
In the longer term, I might consider using memory monitoring to bake in some automatic pagination for the endpoint. In the short term, however, I'd just like to be able to return an informative error to the client in the event that they are requesting too many data objects at a given time (rather than have the entire application crash with an OOM error).
Are there any more effective methods or tools I could be using to solve the problem instead?
You have a couple of options.
Option 1.
What is the biggest object you have in the store? I would allow some {max object count} on the API and set the container memory to {biggest object size} x {max object count}. You can even add a pagination concept if required, where page size = {max object count}.
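A minimal sketch of this option; the memory figures and the validateBatchRequest helper are assumed placeholders (the real container limit and worst-case object size would come from your deployment and data):

```javascript
// Sketch of Option 1: cap the batch size from the container's memory budget
// and the largest object in the store. All three numbers are assumptions.
const CONTAINER_MEMORY_BYTES = 512 * 1024 * 1024; // hypothetical 512 MB limit
const BIGGEST_OBJECT_BYTES = 4 * 1024 * 1024;     // hypothetical 4 MB worst case
const SAFETY_FACTOR = 0.5; // leave half the memory for the runtime itself

const MAX_OBJECT_COUNT = Math.floor(
  (CONTAINER_MEMORY_BYTES * SAFETY_FACTOR) / BIGGEST_OBJECT_BYTES
);

// Reject oversized batch requests up front instead of risking an OOM crash.
function validateBatchRequest(ids) {
  if (ids.length > MAX_OBJECT_COUNT) {
    return { ok: false, error: "At most " + MAX_OBJECT_COUNT + " objects per request" };
  }
  return { ok: true };
}
```

With these placeholder numbers the cap works out to 64 objects per request; a real deployment would tune the safety factor empirically.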
Option 2.
Using process.memoryUsage() should be fine too; I don't believe it is a costly call, unless you have read that somewhere. Before each object pull, check current memory and go ahead only if a safe amount of memory is available. The response in this case contains only the pulled data and lets the client pull the remaining ids in the next call. This is implementable via some paging logic too.
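A sketch of this option, assuming a fixed heap budget and a hypothetical fetchObject lookup; the endpoint stops pulling once the budget is reached and returns the leftover ids for a follow-up call:

```javascript
// Sketch of Option 2: check memory before each pull and stop early.
// HEAP_BUDGET_BYTES and fetchObject are assumptions, not part of the question.
const HEAP_BUDGET_BYTES = 256 * 1024 * 1024; // stay under a 256 MB heap

function underBudget() {
  // heapUsed is cheap to read; rss could be used instead if the container
  // limit applies to resident memory rather than the JS heap
  return process.memoryUsage().heapUsed < HEAP_BUDGET_BYTES;
}

async function pullBatch(ids, fetchObject) {
  const results = [];
  const remaining = [...ids];
  while (remaining.length > 0 && underBudget()) {
    results.push(await fetchObject(remaining.shift()));
  }
  // the client can re-request `remaining` in the next call (paging)
  return { results, remaining };
}
```

The budget check runs once per object rather than continuously, which keeps the overhead to a single cheap syscall-backed read per pull.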
Option 3.
Explore streams. I will not be able to add much info on this for now.

JMeter - How to prevent dashboard from showing every single thread's request - graphs contain 1000s of lines

Basically, how do I configure either the report or my JMX so that the graphs are much simpler and don't show all of the requests for every single thread, like the below.
Clarification: I want to see all of the requests, but I don't want to see Request-1, Request-2, ..., Request-100 if there are 100 threads. It gets very unwieldy even if the test has only a few requests, since they get multiplied by the number of threads.
I run the JMX headless from the command line. I disabled all of the listeners in the JMX; there are only HTTP requests, variables, and cookie/cache/header managers.
I read the JMeter documentation on dashboard generation, but I didn't notice anything helpful.
In response to the comment, no, the request names do not have dynamic thread numbers in them. Snapshot:
I was using Transaction Controllers:
I tried the suggestion to use Apply Naming Policy, but that did not work.
The Response Times Over Time graph is still overcrowded with lines.
If you're using Transaction Controllers and want only the transactions to appear in the HTML Reporting Dashboard, you need to apply a naming policy to the controllers and tick the "Generate parent sample" box on each Transaction Controller.
This way the Transaction Controllers' children will be treated like Embedded Resources and your charts will be cleaner.

Many routes with Skobbler/Scout maps

We have a requirement whereby we need to present rough pedestrian walking times between the users current location and approximately 12 locations, all on the screen at the same time.
We don't, by default, need to present the routes on the map, but we do wish to calculate them very quickly and update these values in real time as the user's location changes.
Now we could use RouteManager to calculate routes, but there seems to be no real way of identifying which SKRouteInfo in the completion callback is associated with the route settings that were used to kick off the routing operation in the first place. Note that we are assuming here that it is safe to kick off multiple routing calls at the same time.
So, other than queueing up the routing requests one at a time and waiting for completion, is there any way of matching up the route info with the routing requests? Or is there another approach we could take?
This scenario is not supported.
When the route calculation process is executed, only a single computation runs at a time, and the next computation is launched only when the first one has been notified as completely finished.

Using Google map objects within a web worker?

The situation:
Too much stuff is running in the main thread of a page that builds a Google map with overlays representing ZIP territories from US census data, plus features the client has asked for that group territories into discrete groups. While there is no major issue on desktops, mobile devices (iPad) decide that the thread is taking too long (a maximum of 6 seconds after the data returns) and therefore must have crashed.
Solution: Offload the looping function that gathers the points for the shape from each row to a web worker, which can work as fast or slow as resources allow on a mobile device. (Three for loops: the 1st to select the row, the 2nd to select the column, the 3rd for each point within the column. Execution time: 3-6 seconds total for 2000+ rows with numerous points.)
The catch: For this to be properly efficient, the points must be made into a shape (polygon) within the web worker. HOWEVER, since it is a google.maps.Polygon object made up of google.maps.LatLng objects, the web worker needs some knowledge of what those items are. Web workers cannot use window or the DOM, so the worker must import the script, and the intent was to pass back just the object as a JSON-encoded item. The code fails on any reference to google objects, even with importScripts(), because those items rely on the window object.
Further complications: Google's API is technically proprietary. The web app code that this is for is bound by NDA so pointed questions could be asked but not a copy/paste of all code.
The solution/any vague ideas:???
TLDR: I need to access the google.maps.LatLng object and (minimally) create new instances of it within a web worker. The web worker should either return objects ready to be popped into a google.maps.Polygon, or return a google.maps.Polygon itself. How do I reference the Google Maps API if I cannot use the default method of importing scripts, due to the scripts requiring the window object?
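One common way around this (a sketch, with the [lng, lat] row shape and the rowsToPoints helper assumed): keep the worker free of the Maps API entirely and only ship plain {lat, lng} objects back, rebuilding the real google.maps objects on the main thread:

```javascript
// Worker-side helper: flatten raw [lng, lat] rows (shape assumed) into plain
// {lat, lng} objects that survive postMessage/JSON serialization. No Maps
// API types are needed inside the worker.
function rowsToPoints(rows) {
  return rows.map(([lng, lat]) => ({ lat, lng }));
}

// In the worker:
//   self.postMessage(JSON.stringify(rowsToPoints(rows)));
//
// On the main thread, where window and the Maps API exist:
//   const pts = JSON.parse(event.data).map((p) => new google.maps.LatLng(p.lat, p.lng));
//   const shape = new google.maps.Polygon({ paths: pts });
```

Recent versions of the Maps JavaScript API also accept plain {lat, lng} literals directly in a Polygon's paths, which may remove the LatLng conversion step altogether.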
UPDATE: Since this writing I've managed to offload the majority of the grunt work from the main thread to the web worker, allowing it to parse through the data asynchronously and assign the data to a custom-made latlng object.
The catch now is getting the returned values to run the function in the proper context, to see if the custom latlng is sufficient for google.maps.Polygon to work its magic.
Excerpt from the file that calls the web worker and listens for its response (CoffeeScript):
shapeWorker.onmessage = (event) ->
  console.log "--------------------TESTING---------------"
  data = JSON.parse(event.data)
  console.log data
  generateShapes(data.poly, data.center, data.zipNum)
For some reason, it's trying to evaluate generateShapes in the context of the web worker rather than in the context of the class it's in.
Once again it was a complication of too many things going on at once. The scope was restricted due to the usage of -> rather than =>, which widens the scope to allow access to the parent class's functions.
Apparently the issue resided with the version of iOS this web app needed to run on, and a bug with the storage being set arbitrarily low (a tenth of its previous size). With some shrinking of the data and a fix for the iOS version in question, I was able to get it running without the usage of web workers. One day I may be able to come back to it with web workers to increase efficiency.

Good approaches for queuing simultaneous NodeJS processes

I am building a simple application to download a set of XML files and parse them into a database using the async module (https://npmjs.org/package/node-async) for flow control. The overall flow is as follows:
Download list of datasets from API (single Request call)
Download metadata for each dataset to get link to XML file (async.each)
Download XML for each dataset (async.parallel)
Parse XML for each dataset into JSON objects (async.parallel)
Save each JSON object to a database (async.each)
In effect, for each dataset there is a parent process (2) which sets off a series of asynchronous child processes (3, 4, 5). The challenge I am facing is that, because so many parent processes fire before all of the children of a particular process are complete, child processes seem to get queued up in the event loop, and it takes a long time for all of the child processes of a particular parent to resolve and allow garbage collection to clean everything up. The result is that even though the program doesn't appear to have any memory leaks, memory usage is still too high, ultimately crashing the program.
One solution which worked was to make some of the child processes synchronous so that they can be grouped together in the event loop. However, I have also seen an alternative solution discussed here: https://groups.google.com/forum/#!topic/nodejs/Xp4htMTfvYY, which pushes parent processes into a queue and only allows a certain number to be running at once. My question then is: does anyone know of a more robust module for handling this type of queueing, or any other viable alternative for handling this kind of flow control? I have been searching but so far no luck.
Thanks.
I decided to post this as an answer:
Don't launch all of the processes at once. Let the callback of one request launch the next one. The overall work is still asynchronous, but each request runs in series. You can then pool a certain number of connections to run simultaneously and maximize I/O throughput. Look at async.eachLimit and replace each of your async.each calls with it.
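To make the mechanism concrete, here is a hedged sketch of what async.eachLimit does, written with plain promises (the item list and worker function are placeholders):

```javascript
// Sketch of the eachLimit idea: iterate over `items` with at most `limit`
// workers in flight at once, instead of launching everything immediately.
async function eachLimit(items, limit, worker) {
  const queue = [...items];
  async function lane() {
    // each lane pulls the next item as soon as its previous one finishes
    while (queue.length > 0) {
      await worker(queue.shift());
    }
  }
  // start `limit` lanes that drain the shared queue concurrently
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, lane));
}
```

With the async module itself this would be async.eachLimit(datasets, 5, iteratee, callback), keeping at most 5 datasets in flight at any moment so their children can complete and be garbage-collected before the next batch starts.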
Your async.parallel calls may be causing issues as well.
