JMeter skips steps and does not print to the console - Groovy

I am facing an issue that I cannot figure out how to resolve.
I created a test plan that needs to connect to a database and count the results.
The problem is that JMeter does not perform any validation afterwards. I added a JSR223 element under the JDBC Request just to print the results, but JMeter prints nothing.
I created another sampler to print the DB results, and still JMeter prints nothing.
JMeter just skips these steps.
In the View Results Tree I can see that it connects to the DB and fails the assertion, but why does it skip the other steps and move straight to the Debug Sampler?
I cannot print the results, and I cannot debug anything since it is just a black box.
Can someone please advise?
Highlighted in yellow are all the steps that JMeter did not perform and that simply do not exist in the results tree.

Get used to checking the jmeter.log file; it normally contains information about what went wrong, and you should be able to figure out the root cause by looking into it. If you cannot, update your question with the jmeter.log file contents (at least the essential parts).
My expectation is that your ${Conv_sense} variable is not defined (or cannot be cast to an Integer). Double-check whether it is defined using the Debug Sampler and View Results Tree listener combination.
Also, don't refer to JMeter variables like ${Conv_sense} in Groovy script bodies; use vars.get('Conv_sense') instead, otherwise the reference may conflict with Groovy's GString templates, resulting in undefined behavior.
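For illustration, a minimal JSR223 Assertion along these lines would log the value to jmeter.log and produce a readable failure message instead of a silent skip. This is only a sketch: it assumes Groovy as the script language and takes the Conv_sense variable name from the question; everything else is illustrative.

// Read the JMeter variable safely instead of inlining ${Conv_sense}
def conv = vars.get('Conv_sense')
log.info('Conv_sense = ' + conv)   // written to jmeter.log
if (conv == null) {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('Conv_sense is not defined - check the JDBC Request "Variable names" field')
} else if (!conv.isInteger()) {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('Conv_sense is not an integer: ' + conv)
}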

Related

Logstash-logback-encoder: Using StructuredArguments without formatting in the message?

I wonder what the best practices are with using StructuredArguments inside logging calls when using logstash-logback-encoder to log in JSON format.
I want to log some structured arguments in separate fields, but I don't want to format these arguments into the literal string message.
If I write my log lines like this, everything works as I want, but both IntelliJ IDEA and the Sonarqube static code analysis consider these lines problematic:
log.info("Query executed successfully!", StructuredArguments.value("hits", result.getHits()));
(or more concisely)
log.info("Query executed successfully!", v("hits", result.getHits()));
IntelliJ shows this warning on the line:
more arguments provided (1) than placeholders specified (0)
How can I avoid this? Of course I can silence the warnings and add exceptions for them, but I wonder if that is a best practice.
If you don't want to include the data inside the log message, and want to avoid the static analysis warnings, use Markers instead of StructuredArguments:
import net.logstash.logback.marker.Markers;
log.info(Markers.append("hits", result.getHits()), "Query executed successfully!");
The marker is attached to the log event and serialized as an extra JSON field, so the data stays out of the formatted message and the placeholder count matches the argument count. See the logstash-logback-encoder documentation for more details.

How to stop the whole test execution but with PASS status in RobotFramework?

Is there any way I can stop the whole robot test execution with PASS status?
For some specific reasons, I need to stop the whole test but still get a GREEN report.
Currently I am using the Fatal Error keyword, which raises an assertion error and returns FAIL in the report.
I was trying to create a user keyword to do this, but I am not really familiar with Robot Framework's error handling process. Could anyone help?
There is an attribute ROBOT_EXIT_ON_FAILURE in BuiltIn.py, and I am thinking about creating another attribute like ROBOT_EXIT_ON_SUCCESS, but I have no idea how.
Environment: robotframework==3.0.2 with Python 3.6.5
There is nothing built-in to support this. By design, a fatal error will cause all remaining tests and suites to have a FAIL status.
Just about your only choice is to write a keyword that sets a global variable (for example via Set Global Variable), and then have every test include a setup that uses Pass Execution If to skip the test when that flag is set.
If I understood you correctly, you need to forcefully pass the test execution and return a green status for that test, is that right? There is a built-in keyword, Pass Execution, for exactly that. Did you try using it?

Why requests in the View Results Tree listener show "No data to display" in non-GUI mode

The expected requests and responses are shown after executing the JMeter script in GUI mode, but when the same script is executed in non-GUI mode, a few requests show "No data to display" and the response is empty.
This was encountered in JMeter v4. Can anybody explain why the requests vary like this when the script is executed in non-GUI mode?
This is done intentionally: JMeter removes request and especially response data from sample results (mainly because it cannot be stored in CSV format) in order to reduce the memory consumption and disk IO required to aggregate and store the data.
If you really need to see request and response data, you can "tell" JMeter to store it by adding the following lines to the user.properties file (which lives in the "bin" folder of your JMeter installation):
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.response_data=true
jmeter.save.saveservice.samplerData=true
jmeter.save.saveservice.requestHeaders=true
jmeter.save.saveservice.url=true
jmeter.save.saveservice.responseHeaders=true
A JMeter restart will be required to pick the properties up. Once done, re-run the test and make sure to use a different output .jtl file (or delete the existing one) - this time it will contain much more data, suitable for display in the View Results Tree listener.
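For a one-off run you can also pass the same properties on the command line with the -J prefix instead of editing user.properties; the test plan and result file names below are just examples:

jmeter -n -t test.jmx -l results.jtl -Jjmeter.save.saveservice.output_format=xml -Jjmeter.save.saveservice.response_data=true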
References:
Configuring JMeter
Apache JMeter Properties Customization Guide
Results File Configuration

Jest snapshot is redundant

I am writing snapshot tests using Jest for a Node.js and React app and have installed the snapshot-tools extension in VS Code.
Some of my tests are displaying this warning in the editor:
[snapshot-tools] The snapshot is redunant
(Presumably it is supposed to say redundant)
What does this warning mean? I am wondering how I can fix it.
I was having the same problem, so I took a look at the snapshot-tools code. It marks a snapshot section as redundant if it doesn't see a corresponding test in the test file that has a matching name and that calls "expect().toMatchSnapshot()" or something similar.
The problem is (as the "Limitations" section of the plugin's marketplace page says) that it does a static analysis of the test file to find the tests that use snapshots, and static analysis cannot detect tests that have dynamically generated names or that don't call "expect().toMatchSnapshot()" directly in the test body.
For example, I was getting false positive "redundant" warnings, because I had some tests that were doing "expect().toMatchSnapshot()" in their "afterEach()" function, rather than directly in the test body.
This could indicate that the snapshot is no longer linked to a valid test - have you changed your describe/it strings without updating the snapshots? Try running the tests with -- -u appended (e.g. npm test -- -u). If that doesn't work, take a look at your snapshots file and compare the titles to your test descriptions.

Read Jenkins build console output and set email content

I raise certain errors in my code which should add corresponding error messages to the content of the final build email.
I was thinking of printing something such as "EMAIL CONTENT ERROR: _______" to the console, reading through it (in a pre-send Groovy script?), and adding a corresponding error message for each error found.
I was thinking of using a pre-send Groovy script and setting the mimeMessage object (I was reading the Jenkins email-ext documentation). Would this be viable?
Also, I have never used Groovy before, so pointers on how to approach this would be extremely helpful (or a link to an implementation of something with a similar idea of reading the console). Any advice would be greatly appreciated. Thanks in advance!
Have you tried attaching the build log? email-ext can attach the console log to the email, which would capture the whole build process.
This is a very similar concept to an existing question. The technique there was to use the Log Parser plugin to scan the console output for you, and then use Groovy to add all the errors into an email.
This is generally a pretty good idea, because it also means you can view the same set of highlighted errors from Jenkins itself rather than only in the email.
There are a couple of ways this differs from your setup; the main points are:
Yes, write errors directly to the console in a known format.
Set the Log Parser plugin up with regular expressions that find your error format.
Instead of a pre-send script, in this case you would use a Groovy template for your email generation.
The template reads the error list from the console parser and adds them to your email, each one as a link back to the Jenkins console output.
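If you would rather stick with the pre-send script idea from your question, a rough Groovy sketch could look like the following. It assumes a plain-text message and the "EMAIL CONTENT ERROR:" prefix proposed above, and the 5000-line cap is an arbitrary choice; in the email-ext pre-send script, build (the current build) and msg (the outgoing MimeMessage) are available:

// Collect the marked lines from the console log (capped at the last 5000 lines)
def errors = build.getLog(5000).findAll { it.contains('EMAIL CONTENT ERROR:') }
if (!errors.isEmpty()) {
    // Append the collected errors to the plain-text body of the outgoing mail
    msg.setText(msg.content.toString() + '\n\nErrors found in console output:\n' + errors.join('\n'))
}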
