How to use jq try with multiple conditions like a case?

I need to output, in a single column, the value of field A if it's not null, or the value of field B if it's not null, or nothing if both A and B are null.
When I just had field A to test, I wrote this, which worked OK:
try .fieldA catch ""
But now that I need to take field B when A is null, I tried the following, and nothing worked:
try .fieldA catch try .fieldB catch "" => this only ever returned empty values
try (.fieldA or .fieldB) catch "" => this one outputs true or false, 2 results instead of 1
try (.fieldA,.fieldB) catch "" => this one outputs both A and B if both are not null, so 2 results instead of 1
I'd like the try to stop evaluating as soon as the first result is not null.
Thanks

Use the alternative operator //, which takes the second alternative if the first one is null or false, and empty for the "nothing" result.
If accessing any field might fail, additionally use the optional operator ? on those fields.
{
"fieldA": "A1 present",
"fieldB": "B1 present",
"fieldC": "irrelevant"
}
{
"fieldB": "B2 present",
"fieldC": "irrelevant"
}
{
"fieldA": "A3 present",
"fieldC": "irrelevant"
}
{
"fieldC": "irrelevant"
}
jq '.fieldA // .fieldB // empty'
"A1 present"
"B2 present"
"A3 present"
Addressing @peak's "warning": If you want to capture a field that has the explicit boolean value false, while not capturing it if it is either missing or explicitly set to null, you can use values, which selects non-null values only, and first to retrieve the first one that exists among a given stream of alternatives:
{
"fieldA": "A1 present",
"fieldB": "B1 present",
"fieldC": "irrelevant"
}
{
"fieldA": false,
"fieldB": "B2 present",
"fieldC": "irrelevant"
}
{
"fieldA": null,
"fieldB": "B3 present",
"fieldC": "irrelevant"
}
{
"fieldB": "B4 present",
"fieldC": "irrelevant"
}
{
"fieldC": "irrelevant"
}
jq 'first(.fieldA, .fieldB | values)'
"A1 present"
false
"B3 present"
"B4 present"
Using this approach, there's no need to provide the explicit empty case. However, if you want to have a default fallback, add it as the last item in the stream, e.g. first(.fieldA, .fieldB, "" | values).

jq's // operator has quite complicated semantics (*)
and in particular, when evaluating E // F, no distinction is made between E being null, false, or the empty stream.
Also, given an object as input, the expression .x will evaluate to the JSON value null if the key "x" is explicitly
specified as null or if the key "x" is not present at all.
Thus, assuming an object has been given as input,
if we want .fieldA if .fieldA is non-null,
and otherwise .fieldB if .fieldB is non-null,
and otherwise the string "missing", then we would have to write something along the lines of:
if .fieldA != null then .fieldA
elif .fieldB != null then .fieldB
else "missing"
end
Simply replace "missing" by empty to achieve the objective stated in the original question.
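For illustration, here is how the two approaches differ on a hypothetical input where fieldA is explicitly false, {"fieldA": false, "fieldB": "B present"}:
jq '.fieldA // .fieldB'
"B present"
jq 'if .fieldA != null then .fieldA elif .fieldB != null then .fieldB else "missing" end'
false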
(*) See the jq FAQ
for complete details.

Related

Groovy slurper.parser variable of a variable

Here is a snippet of my Groovy script:
jsonFileData = slurper.parse(jsonFile)
Here is my JSON file
{
"MEMXYZ": {
"LINKOPT": {
"RMODE": "31",
"AMODE": "ANY"
},
"PROCESSOR": "PROCESSOR XYZ",
"DB2": {
"OWNER": "USER1",
"QUALIFER": "DB2ADMIN",
"SSID": "DBC1"
},
"COBOL": {
"VERSION": "V6",
"CICS": "V5R6M0",
"OPTIONS": "LIST,MAP,RENT",
"DB2": "YES"
}
}
}
println "Print1 ***** Parsing PROCESSOR = ${jsonFileData.MEMXYZ.PROCESSOR}"
println "Print2 ***** Parsing PROCESSOR = ${jsonFileData}.${Member}.PROCESSOR"
Print1 is working fine with the explicit member name "MEMXYZ", but I have a problem with Print2, where I need dynamic ${Member} variable substitution. Please help!
${Member} is MEMXYZ
Please help me fix the Print2 statement.
".. ${abc} ..." just injects the value of abc variable into string.
To access values of map (result of slurper.parse(...) in your case) you could use one of approaches:
jsonFileData[Member].PROCESSOR
jsonFileData[Member]['PROCESSOR']
So, your print line could look like:
println "PROCESSOR = ${jsonFileData[Member].PROCESSOR}"

Inject matchesJsonPath from Groovy into Spring Cloud Contract

When writing a Spring Cloud Contract in Groovy,
I want to specify an explicit JSON path expression.
The expression:
"$.['variants'][*][?(#.['name'] == 'product_0004' && #.['selected'] == true)]"
shall appear in the generated json, like so:
{
"request" : {
"bodyPatterns": [ {
"matchesJsonPath": "$.['variants'][*][?(#.['name'] == 'product_0004' && #.['selected'] == true)]"
} ]
}
}
in order to match e.g.:
{ "variants": [
{ "name": "product_0003", "selected": false },
{ "name": "product_0004", "selected": true },
{ "name": "product_0005", "selected": false } ]
}
and to not match e.g.:
{ "variants": [
{ "name": "product_0003", "selected": false },
{ "name": "product_0004", "selected": false },
{ "name": "product_0005", "selected": true } ]
}
Is this possible using consumers, bodyMatchers, or some other facility of the Groovy DSL?
There are some possibilities for matching on JSON path, but you wouldn't necessarily use it to match on explicit values; rather, you'd use it to make a flexible stub for the consumer by using regexes.
The body section is your static request body with hardcoded values, while the bodyMatchers section gives you the ability to make the stub matching from the consumer side more flexible.
Contract.make {
    request {
        method 'POST'
        url '/some-url'
        body([
            id: id,
            items: [
                [
                    foo: foo,
                    bar: bar
                ],
                [
                    foo: foo,
                    bar: foo
                ]
            ]
        ])
        bodyMatchers {
            jsonPath('$.id', byEquality()) //1
            jsonPath('$.items[*].foo', byRegex('(?:^|\\W)foo(?:$|\\W)')) //2
            jsonPath('$.items[*].bar', byRegex(nonBlank())) //3
        }
        headers {
            contentType(applicationJson())
        }
    }
    response {
        status 200
    }
}
I referenced some lines in the code above:
1: "byEquality()" in the bodyMatchers section means: the input from the consumer must be equal to the value provided in the body for this contract/stub to match, in other words must be "id".
2: I'm not sure how nicely the //1 solution will work when the property is in a list and you want the stub to be flexible about the number of items provided. Therefore I also included this byRegex, which basically means: for any item in the list, the property foo must have exactly the value "foo". However, I don't really know why you would want to do this.
3: This is where bodyMatchers are actually most useful. This line means: match this contract if every property bar in the list of items is a non-blank string. This allows you to have a dynamic stub with flexibly sized lists/arrays.
All the conditions in bodyMatchers need to be met for the stub to match.

Logstash: Renaming nested fields based on some condition

I am trying to rename nested fields in Elasticsearch documents while migrating to Amazon Elasticsearch Service.
In each document, I want to make the following changes:
1. If the value field contains JSON, change the field name to value-keyword and remove "value-whitespace" and "value-standard" if present.
2. If the value field has a size of more than 15, change the field name to value-standard.
"_source": {
"applicationid" : "appid",
"interactionId": "716bf006-7280-44ea-a52f-c79da36af1c5",
"interactionInfo": [
{
"value": """{"edited":false}""",
"value-standard": """{"edited":false}""",
"value-whitespace" : """{"edited":false}"""
"title": "msgMeta"
},
{
"title": "msg",
"value": "hello testing",
},
{
"title": "testing",
"value": "I have a text that can be done and changed only the size exist more than 20 so we applied value-standard ",
}
],
"uniqueIdentifier": "a21ed89c-b634-4c7f-ca2c-8be6f31ae7b3",
}
}
the end result should be
"_source": {
"applicationid" : "appid",
"interactionId": "716bf006-7280-44ea-a52f-c79da36af1c5",
"interactionInfo": [
{
"value-keyword": """{"edited":false}""",
"title": "msgMeta"
},
{
"title": "msg",
"value": "hello testing",
},
{
"title": "testing",
"value-standard": "I have a text that can be done and changed only the size exist more than 20 and so we applied value-standard ",
}
],
"uniqueIdentifier": "a21ed89c-b634-4c7f-ca2c-8be6f31ae7b3",
}
}
For 2), you can do it like this:
filter {
  if [_source][interactionInfo][2][value] =~ /.{15,15}/ {
    mutate {
      rename => ["[_source][interactionInfo][2][value]", "[_source][interactionInfo][2][value-standard]"]
    }
  }
}
The regex .{15,15} matches any run of exactly 15 characters, so the condition is true for any value at least 15 characters long. If the field is shorter than 15 characters, the regex doesn't match and the mutate#rename isn't applied.
For 1), one possible solution would be to try parsing the field with the json filter and, if there's no _jsonparsefailure tag, rename the field.
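A rough sketch of that idea, handling only a single, fixed array element (the first interactionInfo entry here, an assumption made to keep the example short; the accepted ruby-filter solution below handles every element):
filter {
  json {
    # Try to parse the value as JSON into a throwaway metadata field
    source => "[_source][interactionInfo][0][value]"
    target => "[@metadata][parsed_value]"
    tag_on_failure => ["_jsonparsefailure"]
  }
  if "_jsonparsefailure" not in [tags] {
    mutate {
      rename => { "[_source][interactionInfo][0][value]" => "[_source][interactionInfo][0][value-keyword]" }
      remove_field => ["[_source][interactionInfo][0][value-whitespace]", "[_source][interactionInfo][0][value-standard]"]
    }
  }
}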
Found the solution for this one. I used a ruby filter in Logstash to check each document as well as its nested documents.
Here is the Ruby code:
require 'json'

def register(param)
end

def filter(event)
  infoarray = event.get("interactionInfo")
  infoarray.each { |x|
    # First check: values longer than 15 characters
    if x.include?("value")
      value = x["value"]
      if value.length > 15
        apply_only_keyword(x)
      end
    end
    # Second check: values that parse as JSON
    if x.include?("value")
      value = x["value"]
      if validate_json(value)
        apply_only_keyword(x)
      end
    end
  }
  event.set("interactionInfo", infoarray)
  return [event]
end

def validate_json(value)
  if value.nil?
    return false
  end
  JSON.parse(value)
  return true
rescue JSON::ParserError => e
  return false
end

def apply_only_keyword(x)
  # Keep the value under value-keyword and drop the original and the other variants
  x["value-keyword"] = x["value"]
  x.delete("value")
  if x.include?("value-standard")
    x.delete("value-standard")
  end
  if x.include?("value-whitespace")
    x.delete("value-whitespace")
  end
end
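For reference, a file-based script like this (with register/filter functions) is typically wired in via the ruby filter's path option; the file name below is hypothetical:
filter {
  ruby {
    # Hypothetical path; save the register/filter functions above into this file
    path => "/etc/logstash/scripts/rename_interaction_info.rb"
  }
}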

How to access dynamic values in script assertion; which are set at test case level?

I have set a dynamic value at test case level for the below response.
{
"orderDetails": {"serviceOrderNumber": "SO-EUAM-MX-EUAM-16423"},
"transactionDetails": {
"statusMessage": "Success",
"startRow": "1",
"endRow": "400",
"totalRow": "1",
"timeZone": "EST"
},
"customerNodeDetails": {
"startDate": "20180622 06:32:39",
"nodeCreateDate": "20180622 06:32:39",
"customerId": "5562"
}
}
assert context.response, 'Response is empty or null'
def json = new groovy.json.JsonSlurper().parseText(context.response)
context.testCase.setPropertyValue('CustID', json.customerNodeDetails.customerId.toString())
Now, while asserting another API, which is a GET, I am getting the CustID back as customerNumber.
I have used the below code:
assert json.customerNodeDetails.customerNumber == "${#TestCase#CustID}"
and the relevant block of the response was:
"customerNodeDetails": {
"nodeLabel": null,
"customerNumber": "5544",
"customerName": "ABCDEFGHIJ ABCDEFGHIJ LMNOPQRSTUV1234",
"taxIdCity": "",
"taxIdState": "",
"salesSegment": "Government",
"dunsNumber": "",
"mdsId": "",
"accountClassification": "B",
"specialCustomerBillCode": ""
}
But I am getting the below error:
startup failed: Script65.groovy: 26: unexpected char: '#' @ line 26, column 54. eDetails.customerNumber == "${#TestCase# ^ org.codehaus.groovy.syntax.SyntaxException: unexpected char: '#' @ line 26, column 54. at org.codehaus.groovy.antlr.AntlrParserPlugin.transformCSTIntoAST(AntlrParserPlugin.java:138) at
Please let me know how to access that value.
If you're referring to a property within a Groovy script, you can't use it directly as you might in other parts of the UI. You need to expand it:
def custID = context.expand('${#TestCase#CustID}')
Or, the messageExchange variable available in script assertions gives you an alternative way to do the same thing:
def alternative = messageExchange.modelItem.testStep.testCase.getPropertyValue('CustID');
Then, if you need a property that's defined somewhere else other than at the test-case level, you can use:
def projectLevelProperty = messageExchange.modelItem.testStep.testCase
.testSuite.project.getPropertyValue('projectLevelProperty');
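Putting it together, the failing assertion from the question could be written like this (a sketch, assuming the GET response shown above is available as context.response):
import groovy.json.JsonSlurper

assert context.response, 'Response is empty or null'
def json = new JsonSlurper().parseText(context.response)

// Expand the test-case property instead of embedding ${#TestCase#CustID} directly in the script
def custID = context.expand('${#TestCase#CustID}')
assert json.customerNodeDetails.customerNumber == custID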

Using boolean fields in groovy script in elasticsearch - doc['field_name'].value not working

The problem
I am trying to use boolean fields in a script to score. It seems like doc['boolean_field'].value can't be manipulated as a boolean, but _source.boolean_field.value can (even though the Scripting documentation here says "The native value of the field. For example, if its a short type, it will be short.").
What I've tried
I have a field called 'is_new'. This is the mapping:
PUT /test_index/test/_mapping
{
"test": {
"properties": {
"is_new": {
"type": "boolean"
}
}
}
}
I have some documents:
PUT test_index/test/1
{
"is_new": true
}
PUT test_index/test/2
{
"is_new": false
}
I want to do a function_score query that will have a score of 1 if new, and 0 if not:
GET test_index/test/_search
{
"query": {
"function_score": {
"query": {
"match_all": {}
},
"functions": [
{
"script_score": {
"script": "<<my script>>",
"lang": "groovy"
}
}
],
"boost_mode": "replace"
}
}
}
Scripts work when I use the _source.is_new.value field, but don't if I use doc['is_new'].value.
This works:
"if ( _source.is_new) {1} else {0}"
These don't work:
"if ( doc['is_new'].value) {1} else {0}" (always true)
"if ( doc['is_new'].value instanceof Boolean) {1} else {0}" (value isn't a Boolean)
"if ( doc['is_new'].value.toBoolean()) {1} else {0}" (always false)
"if ( doc['is_new']) {1} else {0}" (always true)
I've checked the value, and it thinks it is a string, but I can't do string comparison:
"if ( doc['is_new'].value instanceof String) {1} else {0}" (always true)
"if ( doc['is_new'].value == 'true') {1} else {0}" (always false)
"if ( doc['is_new'].value.equals('true')) {1} else {0}" (always false)
Is this broken, or am I doing it wrong? Apparently it is faster to use doc['field_name'].value, so if possible, it would be nice if this worked.
I am using Elasticsearch v1.4.4.
Thanks!
Isabel
I'm having the same issue on ElasticSearch v1.5.1: Boolean values in the document show up as characters in my script, 'T' for true and 'F' for false:
if ( doc['is_new'].value == 'T') {1} else {0}
I've just got it!
First, it works only with _source.myField, not with doc['myField'].value.
I think there's a bug there because the toBoolean() method should return a boolean depending on the actual value, but it doesn't.
But I also needed to declare the mapping of the field explicitly as boolean and not_analyzed.
I hope it helps!
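For reference, here is the workaround the answers converge on, plugged back into the original function_score query (a sketch using the _source-based script reported to work above):
GET test_index/test/_search
{
  "query": {
    "function_score": {
      "query": {
        "match_all": {}
      },
      "functions": [
        {
          "script_score": {
            "script": "if ( _source.is_new) {1} else {0}",
            "lang": "groovy"
          }
        }
      ],
      "boost_mode": "replace"
    }
  }
}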
