How to print the path for current XML node in Groovy?

I am iterating through an XML file and want to print the GPath for each node that has a value. I spent the day reading the Groovy API docs and trying things, but what I assumed would be simple does not seem to be implemented in any obvious way.
Here is some code showing the different things you can get from a NodeChild.
import groovy.util.XmlSlurper

def myXmlString = '''
<transaction>
    <payment>
        <txID>68246894</txID>
        <customerName>Huey</customerName>
        <accountNo type="Current">15778047</accountNo>
        <txAmount>899</txAmount>
    </payment>
    <receipt>
        <txID>68246895</txID>
        <customerName>Dewey</customerName>
        <accountNo type="Current">16288</accountNo>
        <txAmount>120</txAmount>
    </receipt>
    <payment>
        <txID>68246896</txID>
        <customerName>Louie</customerName>
        <accountNo type="Savings">89257067</accountNo>
        <txAmount>210</txAmount>
    </payment>
    <payment>
        <txID>68246897</txID>
        <customerName>Dewey</customerName>
        <accountNo type="Cheque">123321</accountNo>
        <txAmount>500</txAmount>
    </payment>
</transaction>
'''

def transaction = new XmlSlurper().parseText(myXmlString)
def nodes = transaction.'*'.depthFirst().findAll { it.name() != '' }

nodes.each { node ->
    println node
    println node.getClass()
    println node.text()
    println node.name()
    println node.parent()
    println node.children()
    println node.innerText
    println node.GPath
    println node.getProperties()
    println node.attributes()
    node.iterator().each { println "${it.name()} : ${it}" }
    println node.namespaceURI()
    println node.getProperties().get('body').toString()
    println node.getBody()[0].toString()
    println node.attributes()
}
I found a post, "groovy Print path and value of elements in xml", that comes close to what I need, but it doesn't scale for deep nodes (see the output below).
Example code from the link:
transaction.'**'.inject([]) { acc, val ->
    def localText = val.localText()
    acc << val.name()
    if( localText ) {
        println "${acc.join('.')} : ${localText.join(',')}"
        acc = acc.dropRight(1) // or acc = acc[0..-2]
    }
    acc
}
Output of the example code:
transaction/payment/txID : 68246894
transaction/payment/customerName : Huey
transaction/payment/accountNo : 15778047
transaction/payment/txAmount : 899
transaction/payment/receipt/txID : 68246895
transaction/payment/receipt/customerName : Dewey
transaction/payment/receipt/accountNo : 16288
transaction/payment/receipt/txAmount : 120
transaction/payment/receipt/payment/txID : 68246896
transaction/payment/receipt/payment/customerName : Louie
transaction/payment/receipt/payment/accountNo : 89257067
transaction/payment/receipt/payment/txAmount : 210
transaction/payment/receipt/payment/payment/txID : 68246897
transaction/payment/receipt/payment/payment/customerName : Dewey
transaction/payment/receipt/payment/payment/accountNo : 123321
transaction/payment/receipt/payment/payment/txAmount : 500
Besides help getting this right, I would also like to understand why there isn't a simple method like node.path or node.gpath that returns the absolute path to a node.

You could do this sort of thing:
import groovy.util.XmlSlurper
import groovy.util.slurpersupport.GPathResult

def transaction = new XmlSlurper().parseText(myXmlString)
def leaves = transaction.depthFirst().findAll { it.children().size() == 0 }

def path(GPathResult node) {
    def result = [node.name()]
    def pathWalker = [hasNext: { -> node.parent() != node }, next: { -> node = node.parent() }] as Iterator
    (result + pathWalker.collect { it.name() }).reverse().join('/')
}

leaves.each { node ->
    println "${path(node)} = ${node.text()}"
}
Which gives the output:
transaction/payment/txID = 68246894
transaction/payment/customerName = Huey
transaction/payment/accountNo = 15778047
transaction/payment/txAmount = 899
transaction/receipt/txID = 68246895
transaction/receipt/customerName = Dewey
transaction/receipt/accountNo = 16288
transaction/receipt/txAmount = 120
transaction/payment/txID = 68246896
transaction/payment/customerName = Louie
transaction/payment/accountNo = 89257067
transaction/payment/txAmount = 210
transaction/payment/txID = 68246897
transaction/payment/customerName = Dewey
transaction/payment/accountNo = 123321
transaction/payment/txAmount = 500
Not sure that's what you want though, as you don't say why the other approach "doesn't scale for deep nodes".
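An alternative sketch of the same idea, written recursively rather than with an Iterator coercion; like the helper above, it relies on parent() returning the node itself at the root:
// Recursive variant of the path() helper above; stops when parent() == node,
// mirroring the hasNext check in the iterator version.
def pathOf(node) {
    node.parent() == node ? node.name() : "${pathOf(node.parent())}/${node.name()}"
}

leaves.each { node ->
    println "${pathOf(node)} = ${node.text()}"
}
As to the "why": a GPathResult only keeps a reference to its parent, not a precomputed absolute path, so the path has to be rebuilt by walking parent(), which is all either version does.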

Related

Finding a String from list in a String is not efficient enough

def errorList = readFile WORKSPACE + "/list.txt"
def knownErrorListbyLine = errorList.readLines()
def build_log = new URL(Build_Log_URL).getText()
def found_errors = null

for (knownError in knownErrorListbyLine) {
    if (build_log.contains(knownError)) {
        found_errors = build_log.readLines().findAll { it.contains(knownError) }
        for (error in found_errors) {
            println "FOUND ERROR: " + error
        }
    }
}
I wrote this code to find listed errors in a string, but it takes about 20 seconds.
How can I improve the performance? I would love to learn from this.
Thanks a lot!
list.txt contains a string per line:
Step ... was FAILED
[ERROR] Pod-domainrouter call failed
#type":"ErrorExtender
[postDeploymentSteps] ... does not exist.
etc...
And build logs is where I need to find these errors.
Try this:
def errorList = readFile WORKSPACE + "/list.txt"
def knownErrorListbyLine = errorList.readLines()
def build_log = new URL(Build_Log_URL)
def found_errors = null

for (knownError in knownErrorListbyLine) {
    build_log.eachLine { line ->
        if (line.contains(knownError)) {
            println "FOUND ERROR: " + line
        }
    }
}
This might be even more performant:
def errorList = readFile WORKSPACE + "/list.txt"
def knownErrorListbyLine = errorList.readLines()
def build_log = new URL(Build_Log_URL)
def found_errors = null

build_log.eachLine { line ->
    for (knownError in knownErrorListbyLine) {
        if (line.contains(knownError)) {
            println "FOUND ERROR: " + line
        }
    }
}
An attempt using the last approach, but relying on String.eachLine over the downloaded text instead:
def errorList = readFile WORKSPACE + "/list.txt"
def knownErrorListbyLine = errorList.readLines()
def build_log = new URL(Build_Log_URL).getText()
def found_errors = null

build_log.eachLine { line ->
    for (knownError in knownErrorListbyLine) {
        if (line.contains(knownError)) {
            println "FOUND ERROR: " + line
        }
    }
}
Try moving build_log.readLines() to a variable outside of the loop:
def errorList = readFile WORKSPACE + "/list.txt"
def knownErrorListbyLine = errorList.readLines()
def build_log = new URL(Build_Log_URL).getText()
def found_errors = null
def buildLogByLine = build_log.readLines()

for (knownError in knownErrorListbyLine) {
    if (build_log.contains(knownError)) {
        found_errors = buildLogByLine.findAll { it.contains(knownError) }
        for (error in found_errors) {
            println "FOUND ERROR: " + error
        }
    }
}
Update: using multiple threads
Note: this may help if the error list is large enough, and if the matching errors are distributed fairly evenly.
def sublists = knownErrorListbyLine.collate(x)
// int x - the sublist size; choose it based on the size of knownErrorListbyLine,
// e.g. to get 4 sublists (threads). Do not use more than 2 threads per CPU;
// start with 1 thread per CPU.
def logsWithErrors = [] // list to store results per thread
def lock = new Object()

def threads = sublists.collect { errorSublist ->
    Thread.start {
        def logs = build_log.readLines()
        errorSublist.findAll { build_log.contains(it) }.each { error ->
            def results = logs.findAll { it.contains(error) }
            synchronized (lock) {
                logsWithErrors << results
            }
        }
    }
}

threads*.join() // wait for all threads to finish

logsWithErrors.flatten().each {
    println "FOUND ERROR: $it"
}
Also, as another user suggested earlier, try measuring the log download time; it could be the bottleneck:
def errorList = readFile WORKSPACE + "/list.txt"
def knownErrorListbyLine = errorList.readLines()
def start = Calendar.getInstance().timeInMillis
def build_log = new URL(Build_Log_URL).getText()
def end = Calendar.getInstance().timeInMillis
println "Logs download time: ${(end-start)/1000} ms"
def found_errors = null
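A further sketch, assuming the whole log fits in memory and plain substring matching is sufficient: combine the known errors into one regex alternation so each log line is scanned once rather than once per error.
import java.util.regex.Pattern

def errorList = readFile WORKSPACE + "/list.txt" // Jenkins pipeline step, as in the question
def knownErrorListbyLine = errorList.readLines()
def build_log = new URL(Build_Log_URL).getText()

// Build one alternation pattern from all known error strings; Pattern.quote
// escapes regex metacharacters such as the brackets in "[ERROR]".
def combined = Pattern.compile(knownErrorListbyLine.collect { Pattern.quote(it) }.join('|'))

build_log.eachLine { line ->
    if (combined.matcher(line).find()) {
        println "FOUND ERROR: ${line}"
    }
}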

Nested JSON with duplicate keys

I will have to process 10 billion nested JSON records per day using NiFi (version 1.9). As part of the job, I am trying to convert the nested JSON to CSV using a Groovy script. I referred to the Stack Overflow questions below on the same topic and came up with the code below.
Groovy collect from map and submap
how to convert json into key value pair completely using groovy
But I am not sure how to retrieve the values of duplicate keys. A sample JSON is defined in the variable "json" in the code below. The key "Flag1" occurs in multiple sections (i.e., "OF" and "SF"). I want to get the output as CSV.
Below is the output if I execute the Groovy code below: 2019-10-08 22:33:29.244000,v12,-,36178,0,0/0,10.65.5.56,sf,sf (the Flag1 value is replaced by the last occurrence of that key).
I am not an expert in Groovy. Please also suggest if there is a better approach, so that I can give it a try.
import groovy.json.*

def json = '{"transaction":{"TS":"2019-10-08 22:33:29.244000","CIPG":{"CIP":"10.65.5.56","CP":"0"},"OF":{"Flag1":"of","Flag2":"-"},"SF":{"Flag1":"sf","Flag2":"-"}}}'
def jsonReplace = json.replace('{"transaction":{','{"transaction":[{').replace('}}}','}}]}')
def jsonRecord = new JsonSlurper().parseText(jsonReplace)
def columns = ["TS","V","PID","RS","SR","CnID","CIP","Flag1","Flag1"]

def flatten
flatten = { row ->
    def flattened = [:]
    row.each { k, v ->
        if (v instanceof Map) {
            flattened << flatten(v)
        } else if (v instanceof Collection && v.every { it instanceof Map }) {
            v.each { flattened << flatten(it) }
        } else {
            flattened[k] = v
        }
    }
    flattened
}

print "output: " + jsonRecord.transaction.collect { row -> columns.collect { colName -> flatten(row)[colName] }.join(',') }.join('\n')
Edit: Based on the replies from @cfrick and @stck, I have tried the suggested option and have a follow-up question below.
@cfrick and @stck - thanks for your responses.
The original source JSON record will have more than 100 columns, and I am using "InvokeScriptedProcessor" in NiFi to trigger the Groovy script.
Below is the original Groovy script I am using in "InvokeScriptedProcessor", in which I have used streams (inputstream, outputstream). Is this what you are referring to?
Am I doing anything wrong?
import groovy.json.JsonSlurper
// NiFi imports (assumed here; they were not included in the original snippet)
import org.apache.nifi.components.*
import org.apache.nifi.processor.*
import org.apache.nifi.processor.exception.ProcessException
import org.apache.nifi.processor.io.StreamCallback

class customJSONtoCSV implements Processor {

    def REL_SUCCESS = new Relationship.Builder().name("success").description("FlowFiles that were successfully processed").build();
    def log

    static def flatten(row, prefix = "") {
        def flattened = new HashMap<String, String>()
        row.each { String k, Object v ->
            def key = prefix ? prefix + "_" + k : k;
            if (v instanceof Map) {
                flattened.putAll(flatten(v, k))
            } else {
                flattened.put(key, v.toString())
            }
        }
        return flattened
    }

    static def toCSVRow(HashMap row) {
def columns = ["CIPG_CIP","CIPG_CP","CIPG_SLP","CIPG_SLEP","CIPG_CVID","SIPG_SIP","SIPG_SP","SIPG_InP","SIPG_SVID","TG_T","TG_R","TG_C","TG_SDL","DL","I_R","UAP","EDBL","Ca","A","RQM","RSM","FIT","CSR","OF_Flag1","OF_Flag2","OF_Flag3","OF_Flag4","OF_Flag5","OF_Flag6","OF_Flag7","OF_Flag8","OF_Flag9","OF_Flag10","OF_Flag11","OF_Flag12","OF_Flag13","OF_Flag14","OF_Flag15","OF_Flag16","OF_Flag17","OF_Flag18","OF_Flag19","OF_Flag20","OF_Flag21","OF_Flag22","OF_Flag23","SF_Flag1","SF_Flag2","SF_Flag3","SF_Flag4","SF_Flag5","SF_Flag6","SF_Flag7","SF_Flag8","SF_Flag9","SF_Flag10","SF_Flag11","SF_Flag12","SF_Flag13","SF_Flag14","SF_Flag15","SF_Flag16","SF_Flag17","SF_Flag18","SF_Flag19","SF_Flag20","SF_Flag21","SF_Flag22","SF_Flag23","SF_Flag24","GF_Flag1","GF_Flag2","GF_Flag3","GF_Flag4","GF_Flag5","GF_Flag6","GF_Flag7","GF_Flag8","GF_Flag9","GF_Flag10","GF_Flag11","GF_Flag12","GF_Flag13","GF_Flag14","GF_Flag15","GF_Flag16","GF_Flag17","GF_Flag18","GF_Flag19","GF_Flag20","GF_Flag21","GF_Flag22","GF_Flag23","GF_Flag24","GF_Flag25","GF_Flag26","GF_Flag27","GF_Flag28","GF_Flag29","GF_Flag30","GF_Flag31","GF_Flag32","GF_Flag33","GF_Flag34","GF_Flag35","VSL_VSID","VSL_TC","VSL_MTC","VSL_NRTC","VSL_ET","VSL_HRES","VSL_VRES","VSL_FS","VSL_FR","VSL_VSD","VSL_ACB","VSL_ASB","VSL_VPR","VSL_VSST","HRU_HM","HRU_HD","HRU_HP","HRU_HQ","URLF_CID","URLF_CGID","URLF_CR","URLF_RA","URLF_USM","URLF_USP","URLF_MUS","TCPSt_WS","TCPSt_SE","TCPSt_WSFNS","TCPSt_WSF","TCPSt_EM","TCPSt_RSTE","TCPSt_MSS","NS_OPID","NS_ODID","NS_EPID","NS_TrID","NS_VSN","NS_LSUT","NS_STTS","NS_TCPPR","CQA_NL","CQA_CL","CQA_CLC","CQA_SQ","CQA_SQC","TS","V","PID","RS","SR","CnID","A_S","OS","CPr","CVB","CS","HS","SUNR","SUNS","ML","MT","TCPSL","CT","MS","MSH","SID","SuID","UA","DID","UAG","CID","HR","CRG","CP1","CP2","AIDF","UCB","CLID","CLCL","OPTS","PUAG","SSLIL"]
        return columns.collect { column ->
            return row.containsKey(column) ? row.get(column) : ""
        }.join(',')
    }

    @Override
    void initialize(ProcessorInitializationContext context) {
        log = context.getLogger()
    }

    @Override
    Set<Relationship> getRelationships() {
        return [REL_SUCCESS] as Set
    }

    @Override
    void onTrigger(ProcessContext context, ProcessSessionFactory sessionFactory) throws ProcessException {
        try {
            def session = sessionFactory.createSession()
            def flowFile = session.get()
            if (!flowFile) return
            flowFile = session.write(flowFile,
                { inputStream, outputStream ->
                    def bufferedReader = new BufferedReader(new InputStreamReader(inputStream, 'UTF-8'))
                    def jsonSlurper = new JsonSlurper()
                    def line
def header = "CIPG_CIP,CIPG_CP,CIPG_SLP,CIPG_SLEP,CIPG_CVID,SIPG_SIP,SIPG_SP,SIPG_InP,SIPG_SVID,TG_T,TG_R,TG_C,TG_SDL,DL,I_R,UAP,EDBL,Ca,A,RQM,RSM,FIT,CSR,OF_Flag1,OF_Flag2,OF_Flag3,OF_Flag4,OF_Flag5,OF_Flag6,OF_Flag7,OF_Flag8,OF_Flag9,OF_Flag10,OF_Flag11,OF_Flag12,OF_Flag13,OF_Flag14,OF_Flag15,OF_Flag16,OF_Flag17,OF_Flag18,OF_Flag19,OF_Flag20,OF_Flag21,OF_Flag22,OF_Flag23,SF_Flag1,SF_Flag2,SF_Flag3,SF_Flag4,SF_Flag5,SF_Flag6,SF_Flag7,SF_Flag8,SF_Flag9,SF_Flag10,SF_Flag11,SF_Flag12,SF_Flag13,SF_Flag14,SF_Flag15,SF_Flag16,SF_Flag17,SF_Flag18,SF_Flag19,SF_Flag20,SF_Flag21,SF_Flag22,SF_Flag23,SF_Flag24,GF_Flag1,GF_Flag2,GF_Flag3,GF_Flag4,GF_Flag5,GF_Flag6,GF_Flag7,GF_Flag8,GF_Flag9,GF_Flag10,GF_Flag11,GF_Flag12,GF_Flag13,GF_Flag14,GF_Flag15,GF_Flag16,GF_Flag17,GF_Flag18,GF_Flag19,GF_Flag20,GF_Flag21,GF_Flag22,GF_Flag23,GF_Flag24,GF_Flag25,GF_Flag26,GF_Flag27,GF_Flag28,GF_Flag29,GF_Flag30,GF_Flag31,GF_Flag32,GF_Flag33,GF_Flag34,GF_Flag35,VSL_VSID,VSL_TC,VSL_MTC,VSL_NRTC,VSL_ET,VSL_HRES,VSL_VRES,VSL_FS,VSL_FR,VSL_VSD,VSL_ACB,VSL_ASB,VSL_VPR,VSL_VSST,HRU_HM,HRU_HD,HRU_HP,HRU_HQ,URLF_CID,URLF_CGID,URLF_CR,URLF_RA,URLF_USM,URLF_USP,URLF_MUS,TCPSt_WS,TCPSt_SE,TCPSt_WSFNS,TCPSt_WSF,TCPSt_EM,TCPSt_RSTE,TCPSt_MSS,NS_OPID,NS_ODID,NS_EPID,NS_TrID,NS_VSN,NS_LSUT,NS_STTS,NS_TCPPR,CQA_NL,CQA_CL,CQA_CLC,CQA_SQ,CQA_SQC,TS,V,PID,RS,SR,CnID,A_S,OS,CPr,CVB,CS,HS,SUNR,SUNS,ML,MT,TCPSL,CT,MS,MSH,SID,SuID,UA,DID,UAG,CID,HR,CRG,CP1,CP2,AIDF,UCB,CLID,CLCL,OPTS,PUAG,SSLIL"
outputStream.write("${header}\n".getBytes('UTF-8'))
while (line = bufferedReader.readLine()) {
def jsonReplace = line.replace('{"transaction":{','{"transaction":[{').replace('}}}','}}]}')
def jsonRecord = new JsonSlurper().parseText(jsonReplace)
def a = jsonRecord.transaction.collect { row ->
return flatten(row)
}.collect { row ->
return toCSVRow(row)
}
outputStream.write("${a}\n".getBytes('UTF-8'))
}
} as StreamCallback)
session.transfer(flowFile, REL_SUCCESS)
session.commit()
}
catch (e) {
throw new ProcessException(e)
}
}
#Override
Collection<ValidationResult> validate(ValidationContext context) { return null }
#Override
PropertyDescriptor getPropertyDescriptor(String name) { return null }
#Override
void onPropertyModified(PropertyDescriptor descriptor, String oldValue, String newValue) { }
#Override
List<PropertyDescriptor> getPropertyDescriptors() {
return [] as List
}
#Override
String getIdentifier() { return null }
}
processor = new customJSONtoCSV()
If I should not use "collect", what else do I need to use to create the rows?
In the output flow file, each record is coming wrapped in []. I tried the line below, but it is not working, and I am not sure whether I am doing the right thing. I want the CSV output without the brackets.
return toCSVRow(row).toString()
If you know exactly what you want to extract (and given you want to generate a CSV from it), IMHO you are way better off just shaping the data the way you later want to consume it. E.g.
def data = new groovy.json.JsonSlurper().parseText('[{"TS":"2019-10-08 22:33:29.244000","CIPG":{"CIP":"10.65.5.56","CP":"0"},"OF":{"Flag1":"of","Flag2":"-"},"SF":{"Flag1":"sf","Flag2":"-"}}]')

extractors = [
    { it.TS },
    { it.V },
    { it.PID },
    { it.RS },
    { it.SR },
    { it.CIPG.CIP },
    { it.CIPG.CP },
    { it.OF.Flag1 },
    { it.SF.Flag1 },
]

def extract(row) {
    extractors.collect{ it(row) }
}

println(data.collect{ extract it })
// ⇒ [[2019-10-08 22:33:29.244000, null, null, null, null, 10.65.5.56, 0, of, sf]]
As stated in the other answer, due to the sheer amount of data you are trying to convert:
Make sure to use a library to generate the CSV file, or else you will hit problems with the content you try to write (e.g. line breaks or data containing the separator char).
Don't use collect (it is eager) to create the rows.
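As a minimal sketch of that last point, reusing the extract helper above and just printing instead of writing to the NiFi outputStream:
// Emit each CSV row as soon as it is built instead of collecting a full list first.
data.each { row ->
    println extract(row).join(',')
}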
The idea is to modify the "flatten" method: it should differentiate between identical nested keys by prefixing them with the parent key.
I've simplified the code a bit:
import groovy.json.*

def json = '{"transaction":{"TS":"2019-10-08 22:33:29.244000","CIPG":{"CIP":"10.65.5.56","CP":"0"},"OF":{"Flag1":"of","Flag2":"-"},"SF":{"Flag1":"sf","Flag2":"-"}}}'
def jsonReplace = json.replace('{"transaction":{','{"transaction":[{').replace('}}','}}]')
def jsonRecord = new JsonSlurper().parseText(jsonReplace)

static def flatten(row, prefix = "") {
    def flattened = new HashMap<String, String>()
    row.each { String k, Object v ->
        def key = prefix ? prefix + "." + k : k;
        if (v instanceof Map) {
            flattened.putAll(flatten(v, k))
        } else {
            flattened.put(key, v.toString())
        }
    }
    return flattened
}

static def toCSVRow(HashMap row) {
    def columns = ["TS","V","PID","RS","SR","CnID","CIP","OF.Flag1","SF.Flag1"] // Last 2 keys have changed!
    return columns.collect { column ->
        return row.containsKey(column) ? row.get(column) : ""
    }.join(', ')
}

def a = jsonRecord.transaction.collect { row ->
    return flatten(row)
}.collect { row ->
    return toCSVRow(row)
}.join('\n')

println a
Output would be:
2019-10-08 22:33:29.244000, , , , , , , of, sf
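If deeper nesting is ever expected (the sample JSON is only two levels deep, so this is an assumption beyond what the answer needs), a small tweak is to recurse with the accumulated key instead of the immediate parent key:
static def flatten(row, prefix = "") {
    def flattened = new HashMap<String, String>()
    row.each { String k, Object v ->
        def key = prefix ? prefix + "." + k : k
        if (v instanceof Map) {
            flattened.putAll(flatten(v, key)) // pass 'key' (the accumulated prefix), not 'k'
        } else {
            flattened.put(key, v.toString())
        }
    }
    return flattened
}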

Unable to retrieve data from an xml to place in an array

I am trying to retrieve a number of flight IDs from an XML response and place them in an array, but I keep getting no data displayed. It doesn't seem to find the nodes, and I am not sure why. Is there anything wrong with the code below?
import groovy.xml.XmlUtil

def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
def response = context.expand( '${SOAP Request#Response}' )
def parsedxml = new XmlSlurper().parseText(response)

def tests = parsedxml.'**'.findAll { it.name() == 'b:TestId' }
log.info tests

//Get the rate plan codes
def testId = { option ->
    def res = option.'**'.findAll { it.name() == 'b:TestId' }
    if (res) return option.TestId.text()
    null
}

def testIdArray = []
tests.each { if (testId(it)) testIdArray << testId(it) }

for (int i = 0; i < testIdArray.size(); i++) {
    log.error "TestIds: " + testIdArray[i]
}
log.warn testIdArray.size()
Below is the xml:
<s:Envelope xxx="xxx" xxx="xxx">
    <s:Header>
        <a:Action s:mustUnderstand="1">xxx</a:Action>
    </s:Header>
    <s:Body>
        <XML1 xmlns="xxx">
            <XML2 xmlns:b="xxx" xmlns:i="xxx">
                <XML3>
                    <b:TestId>000000</b:TestId>
                </XML3>
                <XML3>
                    <b:TestId>000000</b:TestId>
                </XML3>
            </XML2>
        </XML1>
    </s:Body>
</s:Envelope>
Pass the XML string response to the response variable below:
def xml = new XmlSlurper().parseText(response)

def getFlightIds = { type = '' ->
    type ? xml.'**'.findAll { it.name() == type }.collect { it.FlightId.text() } : xml.'**'.findAll { it.FlightId.text() }
}

//Get the respective flight ids
//Use the desired one as you have mentioned none
def inFlightIds = getFlightIds('InboundFlightInformation')
def outFlightIds = getFlightIds('OutboundFlightInformation')
def allFlightIds = getFlightIds()

log.info "Inbound flight ids: ${inFlightIds}"
log.info "Outbound flight ids: ${outFlightIds}"
log.info "All flight ids: ${allFlightIds}"
You can quickly try it in an online demo.
it.name(): in your case it is a NodeChild, so it.name() returns just the name without the prefix. That means you should compare it to TestId (without the b:).
If you want to check the namespace (linked to the prefix), then your lookup must look like this:
def tests = parsedxml.'**'.findAll { it.name() == 'TestId' && it.namespaceURI() == 'xxx' }
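A small follow-on sketch filling the array the question asks for with that corrected lookup (the namespace URI is anonymised as 'xxx' in the posted XML):
// Collect the text of every TestId element into an array in one pass.
def testIdArray = parsedxml.'**'.findAll { it.name() == 'TestId' }.collect { it.text() }
log.info "TestIds: ${testIdArray}"
log.warn testIdArray.size()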

Inline Conditional Map Literal in Groovy

Working on some translation / mapping functionality using Maps/JsonBuilder in Groovy.
Is it possible (without creating extra code outside of the map literal creation) to conditionally include/exclude certain key/value pairs? Something along the lines of the following:
def someConditional = true
def mapResult =
    [
        "id":123,
        "somethingElse":[],
        if(someConditional){ return ["onlyIfConditionalTrue":true]}
    ]
Expected results:
If someConditional is false, only 2 key/value pairs will exist in mapResult.
If someConditional is true, all 3 key/value pairs will exist.
Note that I'm sure it could be done if I create methods and split things up, but to keep things concise I want to keep everything inside the map creation.
You can help yourself with the with method:
[a:1, b:2].with {
    if (false) {
        c = 1
    }
    it
}
With a small helper:
Map newMap(m = [:], Closure c) {
    m.with c
    m
}
E.g.:
def m = newMap {
    a = 1
    b = 1
    if (true) {
        c = 1
    }
    if (false) {
        d = 1
    }
}

assert m.a == 1
assert m.b == 1
assert m.c == 1
assert !m.containsKey('d')
Or pass an initial map:
newMap(a:1, b:2) {
    if (true) {
        c = 1
    }
    if (false) {
        d = 1
    }
}
Edit: since Groovy 2.5, there is an alternative to with called tap. It works like with, but returns the delegate rather than the return value of the closure. So this can be written as:
[a:1, b:2].tap {
    if (false) {
        c = 1
    }
}
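A quick usage sketch (the variable name m2 is just for illustration): since tap returns the delegate, the result can be captured and asserted directly.
def m2 = [a: 1, b: 2].tap {
    if (true) { c = 3 }   // property writes inside tap land in the map, as in the with example above
    if (false) { d = 4 }
}
assert m2 == [a: 1, b: 2, c: 3]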
You could potentially map all false conditions to a common key (e.g. "/dev/null", "", etc) and then remove that key afterwards as part of a contract. Consider the following:
def condA = true
def condB = false
def condC = false

def mapResult =
    [
        "id":123,
        "somethingElse":[],
        (condA ? "condA" : "") : "hello",
        (condB ? "condB" : "") : "abc",
        (condC ? "condC" : "") : "ijk",
    ]

// mandatory, arguably reasonable
mapResult.remove("")

assert 3 == mapResult.keySet().size()
assert 123 == mapResult["id"]
assert [] == mapResult["somethingElse"]
assert "hello" == mapResult["condA"]
There is no such syntax; the best you can do is:
def someConditional = true

def mapResult = [
    "id":123,
    "somethingElse":[]
]

if (someConditional) {
    mapResult.onlyIfConditionalTrue = true
}
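If a single expression is preferred, the same effect can be sketched with map addition (just the + operator on maps, not any special literal syntax):
def someConditional = true
def mapResult = [
    "id": 123,
    "somethingElse": []
] + (someConditional ? [onlyIfConditionalTrue: true] : [:])   // adds the extra entry only when the condition holds

assert mapResult.keySet().size() == 3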
I agree with Donal; without code outside of the map creation it is difficult.
At the least you would have to implement your own ConditionalMap. It is a little work, but perfectly doable.
Each element could have its own condition, like:
map["a"] = "A"
map["b"] = "B"
map.put("c", "C", true)
map.put("d", "D", { myCondition })
etc...
Here is an incomplete example (I only implemented put, get, keySet, values and size to illustrate, and left it untyped - but you probably don't need types here?). You will probably have to implement a few others (isEmpty, containsKey, etc...).
class ConditionalMap extends HashMap {

    /** Default condition; can be a closure */
    def defaultCondition = true

    /** Put an element with the default condition */
    def put(key, value) {
        super.put(key, new Tuple(defaultCondition, value))
    }

    /** Put an element with a specific condition */
    def put(key, value, condition) {
        super.put(key, new Tuple(condition, value))
    }

    /** Get visible elements only */
    def get(key) {
        def tuple = super.get(key)
        tuple[0] == true ? tuple[1] : null
    }

    /** Not part of Map, just to know the real size */
    def int realSize() {
        super.keySet().size()
    }

    /** Includes only the "visible" elements' keys */
    def Set keySet() {
        super.keySet().inject(new HashSet()) { result, key ->
            def tuple = super.get(key)
            if (tuple[0])
                result.add(key)
            result
        }
    }

    /** Includes only the "visible" elements' values */
    def Collection values() {
        this.keySet().asCollection().collect({ k -> this[k] })
    }

    /** Counts only the "visible" elements */
    def int size() {
        this.keySet().size()
    }
}

/** Default condition that does not accept elements */
def map = new ConditionalMap(defaultCondition: false)
/** The condition can be a closure too */
// def map = new ConditionalMap(defaultCondition: {-> true == false })

map["a"] = "A"
map["b"] = "B"
map.put("c", "C", true)
map.put("d", "D", false)

assert map.size() == 1
assert map.realSize() == 4

println map["a"]
println map["b"]
println map["c"]
println map["d"]

println "size: ${map.size()}"
println "realSize: ${map.realSize()}"
println "keySet: ${map.keySet()}"
println "values: ${map.values()}"
/** end of script */
You can use the spread operator to do this for both maps and lists:
def t = true
def map = [
    a:5,
    *:(t ? [b:6] : [:])
]
println(map)
[a:5, b:6]
This works in Groovy 3; I haven't tried it in prior versions.
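The same trick sketched for a list literal (spreading an empty list when the condition is false):
def t = true
def list = [
    1,
    *(t ? [2, 3] : [])   // contributes nothing when the condition is false
]
assert list == [1, 2, 3]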

How to use propertyMissing on a class that implements java.util.Map in groovy

I understand that we cannot access Map properties the same way we access them in other classes, because of the ability to get map keys with dot notation in Groovy.
Now, is there a way, for a class that implements java.util.Map, to still benefit from the expando metaclass for using propertyMissing?
Here is what I'm trying:
LinkedHashMap.metaClass.methodMissing = { method, args ->
    println "Invoking ${method}"
    "Invoking ${method}"
}

LinkedHashMap.metaClass.propertyMissing = { method, args ->
    println "Accessing ${method}"
    "Accessing ${method}"
}

def foo = [:]

assert "Invoking bar" == foo.bar() // this works fine
assert "Accessing bar" == foo.bar // this doesn't work, for obvious reasons, but I'd like to be able to do that...
I've been trying through custom DelegatingMetaClasses but didn't succeed...
Not sure it fits your use-case, but you could use Guava (for CaseFormat) together with the withDefault method on Maps...
@Grab( 'com.google.guava:guava:16.0.1' )
import static com.google.common.base.CaseFormat.*

def map
map = [:].withDefault { key ->
    LOWER_UNDERSCORE.to(LOWER_CAMEL, key).with { alternate ->
        map.containsKey(alternate) ? map[alternate] : null
    }
}

map.possibleSolution = 'maybe'

assert map.possible_solution == 'maybe'
One side-effect of this is that after the assert, the map contains two key:value pairs:
assert map == [possibleSolution:'maybe', possible_solution:'maybe']
If I understood correctly, you can provide a custom map:
class CustomMap extends LinkedHashMap {
    def getAt(name) {
        println "getAt($name)"
        def r = super.getAt(name)
        r ? r : this.propertyMissing(name)
    }

    def get(name) {
        println "get($name)"
        def r = super.get(name)
        r ? r : this.propertyMissing(name)
    }

    def methodMissing(method, args) {
        println "methodMissing($method, $args)"
        "Invoking ${method}"
    }

    def propertyMissing(method) {
        println "propertyMissing($method)"
        "Accessing ${method}"
    }
}

def foo = [bar:1] as CustomMap

assert foo.bar == 1
assert foo['bar'] == 1
assert foo.lol == 'Accessing lol'
assert foo['lol'] == 'Accessing lol'
assert foo.bar() == 'Invoking bar'
I reread the Groovy Map javadocs and noticed there are 2 versions of the get method: one that takes a single argument, and one that takes 2.
The version that takes 2 does almost what I describe here: it returns a default value if it doesn't find your key.
I get the desired effect, but not with dot notation, so I'm just posting this as an alternative solution in case anyone comes across this post:
Map.metaClass.customGet = { key ->
    def alternate = key.replaceAll(/_\w/) { it[1].toUpperCase() }
    return delegate.get(key, delegate.get(alternate, 'Sorry...'))
}

def m = [myKey : 'Found your key']

assert 'Found your key' == m.customGet('myKey')
assert 'Found your key' == m.customGet('my_key')
assert 'Sorry...' == m.customGet('another_key')

println m
-Result-
m = [myKey:Found your key, my_key:Found your key, anotherKey:Sorry..., another_key:Sorry...]
As in Tim's solution, this leads to m containing both keys after the second assert, plus 2 keys with the default value (Sorry...) every time we ask for a value not present in the initial map... which could be solved by removing the keys with default values, e.g.:
Map.metaClass.customGet = { key ->
    def alternate = key.replaceAll(/_\w/) { it[1].toUpperCase() }
    def ret = delegate.get(key, delegate.get(alternate, 'Sorry...'))
    if (ret == 'Sorry...') {
        delegate.remove(key)
        delegate.remove(alternate)
    }
    ret
}
Feel free to comment/correct any mistakes this could lead to... just thinking out loud here...
