Logstash-logback-encoder: Using StructuredArguments without formatting in the message? - logstash

I wonder what the best practices are with using StructuredArguments inside logging calls when using logstash-logback-encoder to log in JSON format.
I want to log some structured arguments in separate fields, but I don't want to format these arguments into the literal string message.
If I write my log lines like this, everything works exactly as I want, but both IntelliJ IDEA and the SonarQube static code analysis flag it as a problem:
log.info("Query executed successfully!", StructuredArguments.value("hits", result.getHits()));
(or, more concisely)
log.info("Query executed successfully!", v("hits", result.getHits()));
IntelliJ shows this warning on that line:
more arguments provided (1) than placeholders specified (0)
How can I avoid this? Of course I can suppress the warnings and add exceptions for them, but I wonder whether that is really the best practice.

If you don't want to include the data inside the log message, and want to avoid the static analysis warnings, use Markers instead of StructuredArguments:
import net.logstash.logback.marker.Markers;
log.info(Markers.append("hits", result.getHits()), "Query executed successfully!");
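For comparison, here is a minimal sketch of both options (the class and the hits field are illustrative, not from the original code): StructuredArguments only avoids the placeholder warning if you actually add a {} to the message, which also formats the value into the message text, whereas the marker keeps the message literal and only adds the JSON field.
import static net.logstash.logback.argument.StructuredArguments.value;
import net.logstash.logback.marker.Markers;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class SearchService {
    private static final Logger log = LoggerFactory.getLogger(SearchService.class);
    void logResult(long hits) {
        // Adds a "hits" JSON field AND formats the value into the message text,
        // so the placeholder count matches and the IDE warning disappears.
        log.info("Query executed successfully! hits={}", value("hits", hits));
        // Adds a "hits" JSON field only; the message stays a plain literal,
        // so there is no placeholder/argument mismatch to warn about.
        log.info(Markers.append("hits", hits), "Query executed successfully!");
    }
}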
See the logstash-logback-encoder documentation on event-specific custom fields for more details.

Related

How to print function call timeline?

I would like to print the location and name of each function as it is called during a run.
It would help during debugging to identify which function is being called when multiple functions share the same name in different places.
It is possible, but time-consuming, to add messages by hand such as:
println!("(function_name) file = {}, line = {}", file!(), line!());
Do you know of such a solution? Do you have suggestions for easily identifying which function is called and who calls it?
The easiest option would probably be to use something like dtrace or eBPF: hook onto the function-entry probe (I'm not sure what it's called in Linux/eBPF land, but I'd expect it to exist) and just print the relevant information. You may want to add stack-based indentation, though, and because of compiler optimisations you might have functions going missing. You might also get mangled names, which is not great, but demangling is a thing.
You might be able to do something similar by running your program in gdb and creating some sort of programmatic breakpoints that print and immediately continue.
Alternatively, a module-level attribute procedural macro could work: if you get a token stream for the entire module, it might be possible to automatically inject logging into every function header.
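If a full procedural macro is more than you need, a plain macro_rules! helper already removes most of the hand-typing: it recovers the enclosing function's path via the nested-fn/type_name trick and combines it with file!() and line!(). A minimal sketch (the crate and module names are made up):
// Declarative macro that prints the enclosing function's fully-qualified
// name plus the file and line of the call site.
macro_rules! trace_call {
    () => {{
        // Nested-fn + type_name trick to recover the enclosing function's path.
        fn f() {}
        fn type_name_of<T>(_: T) -> &'static str {
            std::any::type_name::<T>()
        }
        let name = type_name_of(f);
        let name = &name[..name.len() - 3]; // strip the trailing "::f"
        println!("({}) file = {}, line = {}", name, file!(), line!());
    }};
}
mod geometry {
    pub fn area(w: f64, h: f64) -> f64 {
        trace_call!(); // e.g. "(demo::geometry::area) file = src/main.rs, line = 18"
        w * h
    }
}
fn main() {
    geometry::area(2.0, 3.0);
}
It still requires a call in every function you care about, which is why the attribute-macro route (or an external tracing tool) scales better.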

Jest snapshot is redundant

I am writing snapshot tests using Jest for a Node.js and React app and have installed the snapshot-tools extension in VS Code.
Some of my tests are displaying this warning in the editor:
[snapshot-tools] The snapshot is redunant
(Presumably it is supposed to say redundant)
What does this warning mean? I am wondering how I can fix it.
I was having the same problem, so I took a look at the snapshot-tools code. It marks a snapshot section as redundant if it doesn't see a corresponding test in the test file with a matching name that calls expect().toMatchSnapshot() or something similar.
The problem is (as the "Limitations" section of the plugin's marketplace page says) that it does a static analysis of the test file to find the tests that use snapshots, and that static analysis cannot detect tests with dynamically generated names, or tests that don't call expect().toMatchSnapshot() directly in the test body.
For example, I was getting false-positive "redundant" warnings because I had some tests that called expect().toMatchSnapshot() in their afterEach() hook rather than directly in the test body.
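A stripped-down sketch of the shape that triggered it for me (the names are made up):
// The snapshot assertion lives in afterEach, so a static scan of the test body
// never sees toMatchSnapshot() and the snapshot gets flagged as redundant.
let rendered: string;
afterEach(() => {
  expect(rendered).toMatchSnapshot();
});
test('renders the greeting header', () => {
  rendered = '<h1>Hello</h1>'; // imagine real renderer output here
});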
This could indicate that the snapshot is no longer linked to a valid test: have you changed your describe/it strings without updating the snapshots? Try running the tests with -- -u appended (e.g. npm test -- -u). If that doesn't work, have a look at your snapshot file and compare the titles to your test descriptions.

JMeter skips steps and does not print to the console

I am facing an issue that I cannot understand how to resolve.
I created a test plan that needs to connect to a DB and count the results.
The problem is that JMeter does not perform any of the validations afterwards. I added a JSR223 element to the JDBC request just to print the results, but JMeter prints nothing.
I created another sampler to print the DB results, and JMeter still does not print anything.
JMeter simply skips these steps.
In the results tree I saw that it connects to the DB and fails the assertion, but why does it skip the other steps and jump straight to the Debug Sampler?
I cannot print the results and I cannot do any debugging, since it is just a black box.
Can someone please advise?
You can see in yellow all the steps that JMeter did not perform; they simply do not appear in the results tree.
Get used to checking the jmeter.log file; it normally contains information about what went wrong, and you should be able to figure out the root cause by looking into it. If you cannot, update your question with the jmeter.log contents (at least the essential parts).
My expectation is that your ${Conv_sense} variable is not defined (or cannot be cast to an Integer). Double-check whether it is defined using the Debug Sampler and View Results Tree listener combination.
Also, don't refer to JMeter variables like ${Conv_sense} inside Groovy script bodies; use vars.get('Conv_sense') instead, otherwise the reference may conflict with Groovy's GString templates, resulting in undefined behavior.
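For example, a minimal JSR223 (Groovy) sketch of the above; Conv_sense is the variable name from your test plan, everything else is illustrative (vars and log are JMeter's standard JSR223 bindings):
def convSense = vars.get('Conv_sense') // not "${Conv_sense}" inside the script body
if (convSense == null) {
    log.warn('Conv_sense is not defined at this point in the test plan')
} else {
    log.info('Conv_sense = ' + convSense)
}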

Read Jenkins build console output and set email content

I have certain errors that I raise in my code which should add corresponding error messages to the content of the final build email.
I was thinking of printing something such as "EMAIL CONTENT ERROR: _______" to the console, reading through it (in a pre-send Groovy script?), and adding a corresponding error message for each error found.
I was thinking of using a pre-send Groovy script and setting the mimeMessage object (I was reading the Jenkins email-ext documentation). Would this be viable?
Also, I have never used Groovy before, so pointers on how to approach this would be extremely helpful (or a link to an implementation of something similar that reads the console). Any advice would be greatly appreciated. Thanks in advance!
Can you check attaching the "Build Log" to the email? That would include the output of the whole build process.
This is a very similar concept to an existing question: the technique there was to use the Log Parser plugin to scan the console output for you, and then use Groovy to add all the errors into an email.
This is generally a pretty good idea, because it also means you can view the same set of highlighted errors from Jenkins itself, rather than only in the email.
It differs from your setup in a couple of ways; the main points are:
Yes, write errors directly to the console in a known format
Set the Log Parser plugin up with regular expressions that find your error format
Instead of a pre-send script, you would use a Groovy template for your email generation
The template reads the error list from the log parser and adds the errors to your email; each one is a link back to the Jenkins console output.
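If you want to prototype the idea before setting up the Log Parser plugin, the core of it is only a few lines of Groovy that work in either a pre-send script or a Groovy template. A rough, untested sketch (the EMAIL CONTENT ERROR marker is your own convention and the 2000-line limit is arbitrary):
// 'build' is the variable email-ext binds to the current build;
// getLog(n) returns the last n console lines as a list of strings.
def markers = build.getLog(2000).findAll { it.contains('EMAIL CONTENT ERROR:') }
def summary = markers.isEmpty()
        ? 'No EMAIL CONTENT ERROR lines were found in the console.'
        : 'Errors reported during the build:\n' + markers.join('\n')
// In a pre-send script you could append summary to the message body;
// in a Groovy template you would simply emit it with ${summary}.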

Auto-correlation callback function issue - loadrunner

I'm working on a new application written in Siebel 8.1. The issue appears when I try to replay the script, and I can't get past it.
Replay Output:
Error -27086: Auto-correlation callback function
"flCorrelationCallbackParseWebPage" failed (rc=1) for parameter
"Siebel_Parse_Web_Page40"
web_reg_save_param("Siebel_Parse_Web_Page40",
"LB/IC=",
"RB/IC=",
"Ord=1",
"Search=Body",
"RelFrameId=1",
"AutoCorrelationFunction=flCorrelationCallbackParseWebPage",
"AutoCorrelationDll=LrwiSiebelCorrelationWrapper",
LAST);
I have followed all the steps for preparing the recording options from: http://software-qe.blogspot.se/2008/01/siebel-7x-record-and-replay-for.html
I'm using LoadRunner 11.52 (Siebel Web protocol) and IE8.
We've been using the autocorrelation library for quite a few years on my team and we see this a lot. Unfortunately, it's not an easy problem to diagnose.
First I would check your test results and your VUser log to see if something happened before the autocorrelation failed. (Make sure your logging is set to parameter substitution in runtime settings).
Check your parameter files for extra spaces, commas, etc. Sometimes I've seen that error right after it rejects something about your parameter file.
Worst case scenario, your script is corrupted and you'll have to start over. We've gotten in the habit of making frequent backups of our scripts just because of this issue. Usually, we'll be able to start from our backup and continue or create a new script and paste the old code in. Autocorrelation error "magically" goes away with the same code in a new script.
If auto(magical)correlation does not work, then use manual correlation.
Record twice with the same data and compare: you will find session, state, and time-related data.
Change the credentials and re-record. Compare: you will find the credential-related correlation.
Change the business record but keep the same business process. Re-record and compare: you will find the business-data-related correlation.
Do not expect autocorrelation to provide a magically working script. You have about a 0.0001% chance of that happening without LoadRunner script development intervention.
