[Screenshot: JMeter error]
[Screenshot: erroneous rows in the JTL results file]
My API calls take a token which is valid for 4 minutes. In the CSV Data Set Config file we have records with a token placed in each row for each user. I am running this test for 20 minutes in JMeter CLI mode, and I run another thread that updates the file every 2 minutes; that thread uses a custom library to create the tokens.
Now the issue is that in some cases JMeter reads the file while it is being updated by the separate thread, and this causes errors.
How I know this is caused by the thread: the error appears right after the thread updates the file; before that, everything works fine.
My CSV has the parameters
server,portNumber,userId,username,password,teamspaceID,Token
and in the JMeter script the URL is built like "http://${server}:${portNumber}",
but in the .jtl file a few of the records show "http://<some part of the token string>:8082".
Is there any other efficient way to tackle this?
It's a classic race condition: JMeter's CSV Data Set Config doesn't expect the file to change at runtime. It's hard to come up with an exact solution without seeing your test plan, but you can consider the following alternatives:
Generating the token right before the request using a JSR223 PreProcessor (see the sketch after this list); by default the time taken by PreProcessors is not included in the sampler's elapsed time, so the results file will still show only the HTTP request execution time
Putting the logic which is not thread-safe under a Critical Section Controller
Using the Inter-Thread Communication Plugin instead of an interim CSV file for keeping/passing the tokens
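If you go the JSR223 PreProcessor route, a minimal Groovy sketch could look like the one below; the TokenService class and its createToken() call stand in for your custom token library and are assumptions, while vars.get()/vars.put() are the real JMeter scripting API:

// JSR223 PreProcessor (language: groovy) attached to the HTTP sampler
import my.company.auth.TokenService        // hypothetical import for your custom token library

def userId   = vars.get('userId')          // still read from the CSV Data Set Config row
def password = vars.get('password')

// Generate a fresh token for this iteration only; nothing is written back to the CSV,
// so there is no shared file for two threads to race on
def token = TokenService.createToken(userId, password)   // hypothetical call
vars.put('Token', token)                   // referenced later as ${Token} in the Header Manager

Since the token lives only in a JMeter variable and is generated immediately before the sampler runs, its 4-minute validity window is no longer a concern either.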
My team lead solved this by using a database. The thread now writes the header parameters into a database table instead of the CSV file, and in JMeter a PreProcessor reads the values from the database and updates the header properties.
In addition, we keep three sets of data: the thread updates the oldest set, and JMeter always uses the latest one by ordering on a timestamp in the PreProcessor query.
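For reference, a JSR223 PreProcessor doing the "read the newest row" part could look roughly like this; the JDBC URL, credentials, table and column names are all assumptions (and in practice you would take connections from a pool rather than opening one per sampler):

import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:oracle:thin:@db-host:1521/ORCL', 'jmeter', 'secret',
        'oracle.jdbc.OracleDriver')                      // hypothetical connection details
try {
    // the updater thread keeps several generations of header values; always take the newest one
    def row = sql.firstRow('''SELECT server, port_number, token
                              FROM   jmeter_headers
                              ORDER  BY created_at DESC
                              FETCH FIRST 1 ROWS ONLY''')
    vars.put('server',     row.server)
    vars.put('portNumber', row.port_number as String)
    vars.put('Token',      row.token)
} finally {
    sql.close()
}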
Related
Is it possible to run a table-to-table mapping scenario in parallel (multi-threading)?
We have a huge table and we have already created a table mapping and a scenario on that mapping.
We also execute it from a load plan.
But is there a way I can run the scenario in multiple threads to make the data transfer faster?
I am using Groovy to script all these tasks, so it would be better if there were some way to script this in Groovy as well.
A load plan with parallel steps, or a package with scenarios executed in asynchronous mode, will cover the parallelism part.
An issue you might run into, depending on which KMs are used, is that the same name will be used for the temporary tables in all mappings. To avoid that, select the "Use Unique Temporary Object Names" checkbox that appears in the Physical tab of your mapping; it will generate a different name for these objects on each execution.
It is possible on the ODI side, though you may need some modifications to the mapping so that no duplicate data is loaded. We have a similar flow where we use a modulo function on a numeric key to split the source data into partitions (a filter along the lines of MOD(numeric_key, <number of partitions>) = #MODULO_VALUE), and this data then gets loaded into the target.
To run the interface in a multi-threaded way, we have a package with a loop that executes the scenario of this mapping asynchronously, passing a different MODULO_VALUE variable on each iteration.
For loading the data we use the Oracle SQL*Loader utility, which is able to load data into a single target table in parallel. I am not sure whether the Data Pump utility has this ability as well, but I do know that if you try to load the data with plain SQL in a multi-threaded fashion you will hit an "ORA-00054: resource busy and acquire with NOWAIT specified" error.
As you can see, there is no Groovy code involved in this flow; everything is handled by ODI mappings, packages and KMs. I hope this helps.
I have around 100 threads running in parallel and dumping data into a single table using a sqlldr control (ctl) file. The control file generates values for ID using the expression ID SEQUENCE(MAX,1).
The process fails to load the files properly under parallel execution, presumably because two or more threads get the same ID; it works fine when I run it sequentially with a single thread.
Please suggest a workaround.
Each CSV file contains data associated with one test case, and the cases are supposed to run in parallel, so I cannot simply concatenate all the files into a single load.
You could load the data first and then run a separate update that assigns ID from a traditional Oracle sequence.
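A sketch of that two-phase idea, written as a Groovy/JDBC snippet since the loading threads are scripted anyway; the table name, sequence name and connection details below are assumptions:

import groovy.sql.Sql

def sql = Sql.newInstance('jdbc:oracle:thin:@db-host:1521/ORCL', 'loader', 'secret',
        'oracle.jdbc.OracleDriver')
// Phase 1: let every parallel sqlldr session load its file with ID left NULL (drop the
// SEQUENCE(MAX,1) clause from the ctl file). Phase 2: assign IDs once, serially, from a
// real Oracle sequence so no two rows can ever collide.
sql.execute('UPDATE target_table SET id = id_seq.NEXTVAL WHERE id IS NULL')
sql.close()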
I am working with a Spring Batch application for the first time, and since the framework is very flexible, I have a few questions on performance and best practices for implementing jobs that I couldn't find clear answers to in the Spring docs.
My Goals:
Read an ASCII file with fixed-length columns, sent by a third party in a previously specified layout (STEP 1 reader)
Validate the read values and log the errors with custom messages (STEP 1, log file)
Apply some business logic in the processor to filter out any undesirable lines (STEP 1 processor)
Write the valid lines to the Oracle database (STEP 1 writer)
After the previous step finishes, update a table in the database with the STEP 1 finish timestamp (STEP 2 tasklet)
Send an email when the job is stopped with a summary of the quantities already processed, the error and written line counts, and the start and finish times (is this information available in the jobRepository metadata?)
Assumptions:
The file is incremental: the third party always sends the prior file's lines (possibly with some changed values) plus any new lines (~120 million lines in total). A new file is sent every 6 months.
We must validate the input file lines while processing (are the required values present? can some be converted to numbers and dates?)
The job must be stoppable/restartable since it is intended to run within a time window.
What I am planning to do:
To achieve some performance on reading and writing, I am avoiding Spring's out-of-the-box reflection-based beans and using a JdbcBatchItemWriter to write the processed lines to the database.
The file reader reads the lines with a custom FieldSetMapper and pulls every column with FieldSet.readString (which implies no ParseException during reading). A bean injected into the processor then performs the parsing and validation, so we avoid the skip-on-exception handling during reading, which seems expensive, and we can count the invalid lines and pass that count on to future steps by saving the info in the step/job execution context.
The processor bean converts the read object and returns a wrapper holding the original object, the parsed values (i.e. dates and longs), the first exception thrown by the parsing, if any, and a boolean indicating whether the validation was successful. After the parsing, another custom processor checks whether the record should be inserted into the database by querying for similar or identical records already inserted. In the worst case this business rule implies one query against the database per valid line.
A JdbcBatchItemWriter writes the valid records to the database; items for which the processors return null are discarded and never reach the writer.
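A compressed sketch of the wrapper/validation processor described above; the column names, the date format and the use of plain maps instead of real domain classes are all simplifications for illustration:

import org.springframework.batch.item.ItemProcessor

class ParsingProcessor implements ItemProcessor<Map, Map> {
    Map process(Map raw) {
        def wrapper = [original: raw, valid: true, firstError: null]
        try {
            wrapper.parsedDate   = Date.parse('yyyyMMdd', raw.dateField as String)
            wrapper.parsedAmount = raw.amountField as Long
        } catch (Exception e) {
            wrapper.firstError = e     // keep the first parsing failure for the error log
            wrapper.valid = false
        }
        wrapper                        // returning null here would filter the line out before the writer
    }
}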
So, the real questions regarding the batch processing:
What performance tips could I use to improve the batch throughput? In a preliminary attempt, loading a perfectly valid mock input file into the database took 15 hours, even without querying the database to verify whether each processed record should be inserted. What would be the simplest solution for the local processing?
Have you seen partitioning? http://docs.spring.io/spring-batch/reference/html/scalability.html Remote chunking, with control kept on the reader side, may also be helpful in Spring Batch.
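In case it helps, here is a rough local-partitioning sketch in Groovy-flavoured Spring Batch configuration; the bean names, the number of partitions and the worker step are assumptions, only the Partitioner contract and the step builder calls are the real Spring Batch API:

import org.springframework.batch.core.Step
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory
import org.springframework.batch.core.partition.support.Partitioner
import org.springframework.batch.item.ExecutionContext
import org.springframework.core.task.SimpleAsyncTaskExecutor

// split the line range into N partitions; each worker step reads only its own slice
class LineRangePartitioner implements Partitioner {
    long totalLines = 120_000_000L
    Map<String, ExecutionContext> partition(int gridSize) {
        def result = [:]
        long slice = totalLines.intdiv(gridSize)
        (0..<gridSize).each { i ->
            def ctx = new ExecutionContext()
            ctx.putLong('minLine', i * slice)
            ctx.putLong('maxLine', i == gridSize - 1 ? totalLines : (i + 1) * slice)
            result["partition$i".toString()] = ctx
        }
        result
    }
}

// master step delegating to a hypothetical 'workerStep' that honours minLine/maxLine
Step masterStep(StepBuilderFactory steps, Step workerStep) {
    steps.get('masterStep')
         .partitioner('workerStep', new LineRangePartitioner())
         .step(workerStep)
         .gridSize(8)                                    // 8 parallel workers, tune to your hardware
         .taskExecutor(new SimpleAsyncTaskExecutor())
         .build()
}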
What's the best approach for generating a page that is the result of a complex calculation / data manipulation / API calls (e.g. 5 minutes per page)? Obviously I can't do the calculation within my Rails web request.
A scheduled task can produce the data, but where should I store it? Should I store it in a Postgres table? In a document-oriented database? In memory? Should I generate static HTML?
I have the feeling of being second-level ignorant about the subject: is there a well-known set of tools to deal with this kind of architectural problem?
Thanks.
I would suggest the following approach:
1. Once you receive the initial request:
Start the processing in a separate thread when the first request arrives with the input for the calculation, and send back some token/unique identifier for the request.
2. Store the result:
Run the calculation and store the result in memory using a tool like memcached.
3. Poll for the result:
The request for fetching the result should keep polling with the generated token/unique request identifier. As Adriano said, you can use AJAX for that (I am assuming the requests come from a web browser).
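Very roughly, the whole token-and-poll flow looks like the sketch below. It is written in Groovy syntax only because the rest of this page uses Groovy; in Rails the separate thread would normally be a background job and the cache a memcached/Redis client, and the names here are made up for illustration:

import java.util.concurrent.ConcurrentHashMap

def cache = new ConcurrentHashMap()          // stands in for memcached, only to keep the sketch self-contained

def startCalculation = { Map params ->
    def token = UUID.randomUUID().toString()
    Thread.start {                           // in Rails: enqueue a background job instead of a raw thread
        sleep 5000                           // placeholder for the ~5 minute computation
        cache.put(token, "result for ${params}")
    }
    [token: token]                           // the client keeps this and polls with it
}

def pollResult = { String token ->
    def result = cache.get(token)
    result != null ? [status: 'done', result: result]
                   : [status: 'pending']     // the client retries via AJAX until the result is ready
}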
Hi, I have a Grails web application where users can save data by importing it from an Excel or CSV file; my application parses the data and saves it in the database, but if there are too many records, say 40,000, it takes a long time.
So what I want to do is run this task in the background and notify the users through an email after the task is done, so that users don't have to sit there idle and can work on some other task.
Can you suggest a way I can save the records to the database using a background thread?
You can launch the long-running task in a separate thread using the @Async annotation; see the documentation for it here:
25.5.2 The @Async Annotation
The @Async annotation can be provided on a method so that invocation of that method will occur asynchronously. In other words, the caller will return immediately upon invocation and the actual execution of the method will occur in a task that has been submitted to a Spring TaskExecutor. In the simplest case, the annotation may be applied to a void-returning method.
This is a code example of how to use it:
@Async
void doSomething() {
    // this will be executed asynchronously
}
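Note that @Async is only honoured when asynchronous method execution is switched on in the Spring configuration, for example with @EnableAsync on a configuration class (or <task:annotation-driven/> in XML). A minimal sketch:

import org.springframework.context.annotation.Configuration
import org.springframework.scheduling.annotation.EnableAsync

@Configuration
@EnableAsync
class AsyncConfig {
    // an explicit TaskExecutor bean can be declared here to control the thread pool
}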
To give an ETL (Extract, Transform, Load) style structure to the batch, have a look at Spring Batch. This is an example of how to read a CSV and upload it to the database using Spring Batch: CSV File Upload.
You can use the Quartz plugin and then create a job that runs on demand and does the inserting for you without blocking the users. Link to the plugin: http://grails.org/plugin/quartz
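With the Quartz plugin the on-demand job could look something like this minimal sketch; the job and service names are made up, and triggerNow() is the plugin's way of firing a job programmatically:

// grails-app/jobs/ImportRecordsJob.groovy
class ImportRecordsJob {
    static triggers = {}              // no schedule: this job only runs when triggered on demand
    def importService                 // hypothetical Grails service that parses and saves the rows

    def execute(context) {
        def filePath = context.mergedJobDataMap.filePath
        importService.importFile(filePath)   // hypothetical service method
    }
}

// from the controller, after the uploaded file has been saved to disk:
//   ImportRecordsJob.triggerNow(filePath: savedFile.absolutePath)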