I am currently trying to set up JMeter load tests for a JSF application. Everything is fine and I get the correct results if I use the standard "1 thread, 1 repeat" configuration. It also works with as many repetitions as I want, as long as it's a single thread.
As soon as I increase the number of threads, for example to 4 threads running at the same time, I get ViewExpiredExceptions and similar errors. It feels like the view state gets lost somewhere, but I am unable to work out why, since everything works fine in a single thread. This does not happen with every request; some seem to work fine.
Scenario #1: A single thread and any number of repetitions - it works fine.
Scenario #2: More than a single thread and any number of repetitions - the server throws a ViewExpiredException on some requests (no pattern visible).
Any tips? Google couldn't help. Thanks in advance.
Edit: I forgot to add: the view state seems to get sent as expected. It is never the default view state.
JSF applications are easy to load test once you have understood the basics of dynamic parameter correlation. Have you properly correlated the following dynamic parameter:
javax.faces.ViewState
I suggest using the following regular expressions ($1$ template):
\sname\s*?=\s*?"javax\.faces\.ViewState"[^^]*?\svalue\s*?=\s*?"(.*?)"[^^]*?>
And:
<update id="javax\.faces\.ViewState"><!\[CDATA\[(.*?)\]\]>
You should extract those values from the responses and reinject them into subsequent requests. These parameters are dynamically generated on each request.
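To illustrate what the first regular expression captures, here is a minimal, self-contained Java sketch. The sample HTML input and the view-state value in it are made up for the example; only the regex itself comes from above:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ViewStateExtract {
    public static void main(String[] args) {
        // Hypothetical fragment of a JSF response containing the hidden view-state field
        String html = "<input type=\"hidden\" name=\"javax.faces.ViewState\" "
                + "id=\"javax.faces.ViewState\" value=\"-1234567890:987654321\" />";

        // The first regex from above; [^^] is the usual JMeter trick for "any character"
        Pattern p = Pattern.compile(
                "\\sname\\s*?=\\s*?\"javax\\.faces\\.ViewState\"[^^]*?\\svalue\\s*?=\\s*?\"(.*?)\"[^^]*?>");

        Matcher m = p.matcher(html);
        if (m.find()) {
            // Group 1 is the value you would reinject into the next request
            System.out.println(m.group(1));  // -1234567890:987654321
        }
    }
}
```

In JMeter itself you would put the same pattern into a Regular Expression Extractor (template `$1$`) attached to the sampler, then reference the extracted variable in the next request's `javax.faces.ViewState` parameter.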
With JMeter, performing correlation really hurts, because it has to be done manually each time you re-record your script. We managed to make this task a lot easier by implementing Auto Correlation Rules on top of JMeter.
Basically, how do I configure either the report or my JMX so that the graphs are much simpler and do not show all of the requests for every single thread, like the one below?
Clarification: I want to see all of the requests, but I don't want to see Request-1, Request-2, ..., Request-100 if there are 100 threads. It gets very unwieldy even if the test has only a few requests, since they get multiplied by the number of threads.
I run the JMX headless from the command line. I disabled all of the listeners in the JMX; there are only HTTP requests, variables, and cookie/cache/header managers.
I read the JMeter documentation on dashboard generation, but I didn't notice anything helpful.
In response to the comment, no, the request names do not have dynamic thread numbers in them. Snapshot:
I was using Transaction Controllers:
I tried the suggestion to use Apply Naming Policy, but that did not work.
The Response Times Over Time graph is still overcrowded with lines.
If you're using Transaction Controllers and want only the transactions to appear in the HTML Reporting Dashboard, you need to apply a naming policy to the controllers.
This way the Transaction Controllers' children will be treated like embedded resources and your charts will be "cleaner".
Over 2 years ago, Remy Lebeau gave me invaluable tips on threads in Delphi. His answers were very useful to me and I feel like I made great progress thanks to him. This post can be found here.
Today, I face a "conceptual problem" about threads. This is not really about code; it is about the approach one should choose for a certain problem. I know we are not supposed to ask for personal opinions; I am merely asking whether, from a technical point of view, one of these approaches must be avoided, or whether they are both viable.
My application has a list of unique product numbers (called SKUs) in a database. Querying an API with these SKUs, I get back a JSON file containing details about the products. This JSON file is processed, and the results are displayed on screen and saved in the database. So, at one step, a download process is involved, and it is executed in a worker thread.
I see two different approaches possible for this whole procedure:
When the user clicks the start button, a query is fired, building a list of SKUs based on the user's criteria. A TStringList is then built and, for each element of the list, a thread is launched that downloads the JSON, sends the result back to the main thread, and terminates.
This can be pictured like this:
When the user clicks the start button, a query is fired, building a list of SKUs based on the user's criteria. Instead of sending the SKU numbers one after another to the worker thread, the whole list is sent, and the worker thread iterates through it, sending results back to the main thread (via a Synchronize event) for display and saving. So we only have one worker thread, which works through the whole list before terminating.
This can be pictured like this:
I have coded both approaches and they both work... each with downsides that I have experienced.
I am not a professional developer; this is a hobby. Before working my way further down one path or the other for "polishing", I would like to know whether, from a technical point of view and according to your knowledge and experience, one of the approaches I described should be avoided, and why.
Thanks for your time
Mathias
Another thing to consider in this case is latency to your API that is producing the JSON. For example, if it takes 30 msec to go back and forth to the server, and 0.01 msec to create the JSON on the server, then querying a single JSON record per request, even if each request is in a different thread, does not make much sense. In that case, it would make sense to do fewer requests to the server, returning more data on each request, and partition the results up among different threads.
The other thing is that threads are not a solution to every problem. I would question why you need to dedicate a thread to each SKU. How long does each individual thread run, and how much processing does each thread do? In general, creating lots of threads, each of which works for a fraction of a msec, does not make sense. You want the threads to be alive for as long as possible, processing as much data as they can for the job. You don't want the computer to spend as much time creating/destroying threads as doing useful work.
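To make the latency arithmetic concrete, here is a small back-of-the-envelope sketch in Java. The 30 msec round trip and 0.01 msec server time are the hypothetical figures from above; the 1000 SKUs and the batch size of 100 are made-up numbers for illustration:

```java
public class LatencyMath {
    public static void main(String[] args) {
        int skus = 1000;
        double rttMs = 30.0;      // hypothetical network round trip per request
        double serverMs = 0.01;   // hypothetical server time to build one JSON record

        // Approach A: one SKU per request (even across many threads,
        // the total network cost is paid once per SKU)
        double perItemTotal = skus * (rttMs + serverMs);

        // Approach B: batch 100 SKUs per request, then partition the
        // returned data among worker threads for processing
        int batchSize = 100;
        int requests = (skus + batchSize - 1) / batchSize;  // ceiling division
        double batchedTotal = requests * (rttMs + batchSize * serverMs);

        System.out.printf("per-item: %.0f ms, batched: %.0f ms%n",
                perItemTotal, batchedTotal);  // roughly 30010 ms vs 310 ms
    }
}
```

With these (assumed) numbers the network round trips dominate completely, which is the point: fewer, larger requests cut total latency by roughly the batch factor.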
I've joined a legacy project where there is virtually no logging. A few days ago we had a production release that failed massively, and we had no clear idea what was going on. That's why improving logging is one of the priorities now.
I'd like to introduce something like a "correlation id", but I'm not sure what approach to take. Googling almost always brings me to solutions suitable for a "microservices talking via REST" architecture, which is not my case.
The architecture is a mix of Spring Framework and NodeJS running on the same Unix box - it looks like this:
Spring receives a Request (first thread is started) and does minor processing.
Processing goes to a thread from ThreadPool (second thread is started).
This second thread starts a separate NodeJS process that does some HTML processing.
Process ends, second thread ends, first thread ends.
Options that come to my mind are:
Generate a UUID and pass it around as an argument.
Generate a UUID and store it in a ThreadLocal, passing it on when changing threads or when starting a process.
Any other ideas how it can be done correctly?
You are on the right track. Generate a UUID and pass it as a header on the request. For any requests that do not have this header, add a filter that checks for it and adds it.
Your filter will pick up such a header and can put it into a ThreadLocal, where MDC can pick it up. Thereafter, any logging you do will include the correlation id. When making a call to any other process/request, you need to make sure you pass this id along as an argument/header, and the cycle repeats.
The thread doing the task should just be aware of this id. It's up to you to decide how you want to pass it. Try to keep such concerns separate from your business logic (using aspects or any other way you see fit); the more you can keep this under the hood, the easier it will be for you.
You can refer to this example
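As a minimal sketch of the hand-off part, here is a stdlib-only Java example (no MDC or Spring dependency; the class and method names are made up for illustration). The key pitfall it demonstrates: a ThreadLocal value does not follow you onto a pool thread, so the id must be captured in the first thread and re-set in the worker:

```java
import java.util.UUID;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CorrelationId {
    // One correlation id per thread; MDC works the same way underneath
    private static final ThreadLocal<String> CORRELATION_ID = new ThreadLocal<>();

    static String get() { return CORRELATION_ID.get(); }
    static void set(String id) { CORRELATION_ID.set(id); }

    public static void main(String[] args) throws Exception {
        // First thread: generate the id when the request arrives
        String id = UUID.randomUUID().toString();
        set(id);

        // Hand-off to a pool thread: capture the value explicitly,
        // because the worker thread has its own (empty) ThreadLocal slot
        ExecutorService pool = Executors.newFixedThreadPool(1);
        String captured = get();
        String seenInWorker = pool.submit(() -> {
            set(captured);   // re-establish the id in the worker
            return get();
        }).get();
        pool.shutdown();

        System.out.println(id.equals(seenInWorker)); // true
    }
}
```

When the second thread launches the NodeJS process, the same captured id can be passed as a command-line argument or environment variable so the NodeJS logs carry it too.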
I have a question concerning the integration of split() and resequence() together with multithreading. My (naive) routes look like this (abbreviated to explain the problem):
from("file:input")
    .process(prioAssign)
    .split(body().tokenize("\n")).streaming()
        .resequence().simple("${in.header.prio}").allowDuplicates().reverse()
        .to("direct:process")
    .end()
    .process(exportProcessor)

from("direct:process")
    .threads(10, 100, "process")
    .process(importProcessor) // takes some time for processing
I'd like to accomplish the following things:
The importProcessor work should be distributed over several threads
The items (coming from the splitter) should be processed by priority (resequenced)
The exportProcessor must be triggered when all split items are processed (from one file)
The problem with the code above is that if I include the resequence step, the export is triggered immediately and the resequencing itself doesn't work. It seems I don't understand the threading model behind Camel.
Thanks a lot in advance for all hints!
Couldn't it be that your prioAssign processor doesn't build a body that can be split later, and so the split ends instantly and everything moves to the exportProcessor?
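Setting Camel aside, the coordination the question asks for can be sketched with plain java.util.concurrent (the item bodies, priorities, and pool size here are made up): a priority queue stands in for the resequencer, and a CountDownLatch ensures the export step only fires after every split item has been processed:

```java
import java.util.*;
import java.util.concurrent.*;

public class PriorityThenExport {
    public static void main(String[] args) throws Exception {
        // Simulated splitter output: (body, priority) pairs,
        // like lines of the file with a prio header
        List<Map.Entry<String, Integer>> lines = List.of(
                Map.entry("a", 1), Map.entry("b", 3), Map.entry("c", 2));

        // Resequence step: highest priority first ("reverse" order)
        PriorityQueue<Map.Entry<String, Integer>> resequenced = new PriorityQueue<>(
                Comparator.comparingInt((Map.Entry<String, Integer> e) -> e.getValue())
                        .reversed());
        resequenced.addAll(lines);

        ExecutorService pool = Executors.newFixedThreadPool(10);
        CountDownLatch allDone = new CountDownLatch(lines.size());
        List<String> submissionOrder = new ArrayList<>();

        // Drain in priority order; each worker signals completion via the latch
        while (!resequenced.isEmpty()) {
            String body = resequenced.poll().getKey();
            submissionOrder.add(body);
            pool.submit(() -> {
                // the importProcessor equivalent would run here
                allDone.countDown();
            });
        }

        allDone.await();   // only now is it safe to run the export step
        pool.shutdown();
        System.out.println(submissionOrder);   // [b, c, a]
    }
}
```

In Camel terms, the latch corresponds to letting the splitter aggregate all child exchanges before exportProcessor runs; if the export fires immediately, the splitter is seeing its children complete (or the body fail to split) before the resequenced work actually finishes.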
I am writing a Rails 3.1 app, and I have a set of three cucumber feature files. When run individually, as with:
cucumber features/quota.feature
-- or --
cucumber features/quota.feature:67 # specifying the specific individual test
...each feature file runs fine. However, when all run together, as with:
cucumber
...one of the tests fails. It's odd because only one test fails; all the other tests in the feature pass (and many of them do similar things). It doesn't seem to matter where in the feature file I place this test; it fails if it's the first test or way down there somewhere.
I don't think it can be the test itself, because it passes when run individually or even when the whole feature file is run individually. It seems like it must be some effect related to running the different feature files together. Any ideas what might be going on?
It looks like there is coupling between your scenarios. Your failing scenario assumes that the system is in some state. When the scenario runs individually, the system is in that state, so the scenario passes. But when you run all the scenarios, the scenarios that ran previously change this state, and so it fails.
You should solve it by making your scenarios completely independent. The work of any scenario shouldn't influence the results of other scenarios. This is highly encouraged in The Cucumber Book and Specification by Example.
I had a similar problem and it took me a long time to figure out the root cause.
I was using @selenium tags to test jQuery scripts on a Selenium client.
My page had an Ajax call that was sending a POST request. I had a bug in the JavaScript and the POST request was failing. (The feature wasn't complete and I hadn't yet written steps to verify the result of the Ajax call.)
This error was recorded in Capybara.current_session.server.error.
When the following non-Selenium feature was executed, a Before hook within Capybara called Capybara.reset_sessions!
This then called
def reset!
  driver.reset! if @touched
  @touched = false
  raise @server.error if @server and @server.error
ensure
  @server.reset_error! if @server
end
@server.error was not nil for each scenario in the following feature(s), and Cucumber reported each step as skipped.
The solution in my case was to fix the Ajax call.
So Andrey Botalov and Doug Noel were right: I had carry-over from an earlier feature.
I had to keep debugging until I found the exception that was being raised and investigate what was generating it.
I hope this helps someone else who didn't realise they had carry-over from an earlier feature.