Add new key and value pair under map using Groovy

{
"map": {
"key1": [3,12,13,11],
"key2": [21,23],
"key3": [31,32,33]
}}
I have this JSON. Similar to key1 or key2, I want to add a new key-value pair to this JSON using Groovy. I am using JsonSlurper().
import groovy.json.JsonSlurper
def mJson = new File(MAPPINGJSON).text;
def parser = new JsonSlurper();
def mJsonObject = parser.parseText(mJson);
def list= mJsonObject.map;
def keyFound= false;
for (item in list)
{
if (item.key == templateKey)
{
def values = item.value;
values.add(<some value>);
item.value= values;
keyFound = true;
break;
}
keyFound = false;
}
if(!keyFound)
{
println "Key not found";
// How to add new key pair?
}

list[templateKey] = [<some value>] (as suggested by daggett) is one way, but you can also use a one-liner to do the trick.
def list= mJsonObject.map;
list.computeIfAbsent(templateKey, { [] }).add(templateValue)
It uses a closure to supply the default value (an empty list) when the key is not yet present in the map.
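For context, here is a minimal end-to-end sketch of that approach; the file name, templateKey and templateValue are placeholder assumptions, and JsonOutput (part of groovy.json) is used to write the result back out:
import groovy.json.JsonSlurper
import groovy.json.JsonOutput

// Hypothetical inputs, for illustration only
def templateKey = 'key4'
def templateValue = 41

def mJsonObject = new JsonSlurper().parseText(new File('mapping.json').text)

// Create the key with an empty list if it is missing, then append the value
mJsonObject.map.computeIfAbsent(templateKey, { [] }).add(templateValue)

// Serialize the updated structure back to JSON text
new File('mapping.json').text = JsonOutput.prettyPrint(JsonOutput.toJson(mJsonObject))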


Nested JSON with duplicate keys

I will have to process 10 billion nested JSON records per day using NiFi (version 1.9). As part of the job, I am trying to convert the nested JSON to CSV using a Groovy script. I referred to the Stack Overflow questions below on the same topic and came up with the code below.
Groovy collect from map and submap
how to convert json into key value pair completely using groovy
But I am not sure how to retrieve the values of duplicate keys. A sample JSON is defined in the variable "json" in the code below. The key "Flag1" appears in multiple sections (i.e., "OF" and "SF"). I want to get the output as CSV.
Below is the output if I execute the Groovy code below:
2019-10-08 22:33:29.244000,v12,-,36178,0,0/0,10.65.5.56,sf,sf
(The Flag1 value is replaced by the value of that key's last occurrence.)
I am not an expert in Groovy. Please also suggest if there is a better approach, so that I can give it a try.
import groovy.json.*
def json = '{"transaction":{"TS":"2019-10-08 22:33:29.244000","CIPG":{"CIP":"10.65.5.56","CP":"0"},"OF":{"Flag1":"of","Flag2":"-"},"SF":{"Flag1":"sf","Flag2":"-"}}}'
def jsonReplace = json.replace('{"transaction":{','{"transaction":[{').replace('}}}','}}]}')
def jsonRecord = new JsonSlurper().parseText(jsonReplace)
def columns = ["TS","V","PID","RS","SR","CnID","CIP","Flag1","Flag1"]
def flatten
flatten = { row ->
def flattened = [:]
row.each { k, v ->
if (v instanceof Map) {
flattened << flatten(v)
} else if (v instanceof Collection && v.every {it instanceof Map}) {
v.each { flattened << flatten(it) }
} else {
flattened[k] = v
}
}
flattened
}
print "output: " + jsonRecord.transaction.collect {row -> columns.collect {colName -> flatten(row)[colName]}.join(',')}.join('\n')
Edit: Based on the replies from @cfrick and @stck, I have tried the suggested option and have a follow-up question below.
@cfrick and @stck - thanks for your responses.
Original source JSON record will have more than 100 columns and I am using "InvokeScriptedProcessor" in NiFi to trigger the Groovy script.
Below is the original Groovy script I am using in "InvokeScriptedProcessor", in which I have used streams (inputStream, outputStream). Is this what you are referring to?
Am I doing anything wrong?
import groovy.json.JsonSlurper
class customJSONtoCSV implements Processor {
def REL_SUCCESS = new Relationship.Builder().name("success").description("FlowFiles that were successfully processed").build();
def log
static def flatten(row, prefix="") {
def flattened = new HashMap<String, String>()
row.each { String k, Object v ->
def key = prefix ? prefix + "_" + k : k;
if (v instanceof Map) {
flattened.putAll(flatten(v, k))
} else {
flattened.put(key, v.toString())
}
}
return flattened
}
static def toCSVRow(HashMap row) {
def columns = ["CIPG_CIP","CIPG_CP","CIPG_SLP","CIPG_SLEP","CIPG_CVID","SIPG_SIP","SIPG_SP","SIPG_InP","SIPG_SVID","TG_T","TG_R","TG_C","TG_SDL","DL","I_R","UAP","EDBL","Ca","A","RQM","RSM","FIT","CSR","OF_Flag1","OF_Flag2","OF_Flag3","OF_Flag4","OF_Flag5","OF_Flag6","OF_Flag7","OF_Flag8","OF_Flag9","OF_Flag10","OF_Flag11","OF_Flag12","OF_Flag13","OF_Flag14","OF_Flag15","OF_Flag16","OF_Flag17","OF_Flag18","OF_Flag19","OF_Flag20","OF_Flag21","OF_Flag22","OF_Flag23","SF_Flag1","SF_Flag2","SF_Flag3","SF_Flag4","SF_Flag5","SF_Flag6","SF_Flag7","SF_Flag8","SF_Flag9","SF_Flag10","SF_Flag11","SF_Flag12","SF_Flag13","SF_Flag14","SF_Flag15","SF_Flag16","SF_Flag17","SF_Flag18","SF_Flag19","SF_Flag20","SF_Flag21","SF_Flag22","SF_Flag23","SF_Flag24","GF_Flag1","GF_Flag2","GF_Flag3","GF_Flag4","GF_Flag5","GF_Flag6","GF_Flag7","GF_Flag8","GF_Flag9","GF_Flag10","GF_Flag11","GF_Flag12","GF_Flag13","GF_Flag14","GF_Flag15","GF_Flag16","GF_Flag17","GF_Flag18","GF_Flag19","GF_Flag20","GF_Flag21","GF_Flag22","GF_Flag23","GF_Flag24","GF_Flag25","GF_Flag26","GF_Flag27","GF_Flag28","GF_Flag29","GF_Flag30","GF_Flag31","GF_Flag32","GF_Flag33","GF_Flag34","GF_Flag35","VSL_VSID","VSL_TC","VSL_MTC","VSL_NRTC","VSL_ET","VSL_HRES","VSL_VRES","VSL_FS","VSL_FR","VSL_VSD","VSL_ACB","VSL_ASB","VSL_VPR","VSL_VSST","HRU_HM","HRU_HD","HRU_HP","HRU_HQ","URLF_CID","URLF_CGID","URLF_CR","URLF_RA","URLF_USM","URLF_USP","URLF_MUS","TCPSt_WS","TCPSt_SE","TCPSt_WSFNS","TCPSt_WSF","TCPSt_EM","TCPSt_RSTE","TCPSt_MSS","NS_OPID","NS_ODID","NS_EPID","NS_TrID","NS_VSN","NS_LSUT","NS_STTS","NS_TCPPR","CQA_NL","CQA_CL","CQA_CLC","CQA_SQ","CQA_SQC","TS","V","PID","RS","SR","CnID","A_S","OS","CPr","CVB","CS","HS","SUNR","SUNS","ML","MT","TCPSL","CT","MS","MSH","SID","SuID","UA","DID","UAG","CID","HR","CRG","CP1","CP2","AIDF","UCB","CLID","CLCL","OPTS","PUAG","SSLIL"]
return columns.collect { column ->
return row.containsKey(column) ? row.get(column) : ""
}.join(',')
}
@Override
void initialize(ProcessorInitializationContext context) {
log = context.getLogger()
}
@Override
Set<Relationship> getRelationships() {
return [REL_SUCCESS] as Set
}
@Override
void onTrigger(ProcessContext context, ProcessSessionFactory sessionFactory) throws ProcessException {
try {
def session = sessionFactory.createSession()
def flowFile = session.get()
if (!flowFile) return
flowFile = session.write(flowFile,
{ inputStream, outputStream ->
def bufferedReader = new BufferedReader(new InputStreamReader(inputStream, 'UTF-8'))
def jsonSlurper = new JsonSlurper()
def line
def header = "CIPG_CIP,CIPG_CP,CIPG_SLP,CIPG_SLEP,CIPG_CVID,SIPG_SIP,SIPG_SP,SIPG_InP,SIPG_SVID,TG_T,TG_R,TG_C,TG_SDL,DL,I_R,UAP,EDBL,Ca,A,RQM,RSM,FIT,CSR,OF_Flag1,OF_Flag2,OF_Flag3,OF_Flag4,OF_Flag5,OF_Flag6,OF_Flag7,OF_Flag8,OF_Flag9,OF_Flag10,OF_Flag11,OF_Flag12,OF_Flag13,OF_Flag14,OF_Flag15,OF_Flag16,OF_Flag17,OF_Flag18,OF_Flag19,OF_Flag20,OF_Flag21,OF_Flag22,OF_Flag23,SF_Flag1,SF_Flag2,SF_Flag3,SF_Flag4,SF_Flag5,SF_Flag6,SF_Flag7,SF_Flag8,SF_Flag9,SF_Flag10,SF_Flag11,SF_Flag12,SF_Flag13,SF_Flag14,SF_Flag15,SF_Flag16,SF_Flag17,SF_Flag18,SF_Flag19,SF_Flag20,SF_Flag21,SF_Flag22,SF_Flag23,SF_Flag24,GF_Flag1,GF_Flag2,GF_Flag3,GF_Flag4,GF_Flag5,GF_Flag6,GF_Flag7,GF_Flag8,GF_Flag9,GF_Flag10,GF_Flag11,GF_Flag12,GF_Flag13,GF_Flag14,GF_Flag15,GF_Flag16,GF_Flag17,GF_Flag18,GF_Flag19,GF_Flag20,GF_Flag21,GF_Flag22,GF_Flag23,GF_Flag24,GF_Flag25,GF_Flag26,GF_Flag27,GF_Flag28,GF_Flag29,GF_Flag30,GF_Flag31,GF_Flag32,GF_Flag33,GF_Flag34,GF_Flag35,VSL_VSID,VSL_TC,VSL_MTC,VSL_NRTC,VSL_ET,VSL_HRES,VSL_VRES,VSL_FS,VSL_FR,VSL_VSD,VSL_ACB,VSL_ASB,VSL_VPR,VSL_VSST,HRU_HM,HRU_HD,HRU_HP,HRU_HQ,URLF_CID,URLF_CGID,URLF_CR,URLF_RA,URLF_USM,URLF_USP,URLF_MUS,TCPSt_WS,TCPSt_SE,TCPSt_WSFNS,TCPSt_WSF,TCPSt_EM,TCPSt_RSTE,TCPSt_MSS,NS_OPID,NS_ODID,NS_EPID,NS_TrID,NS_VSN,NS_LSUT,NS_STTS,NS_TCPPR,CQA_NL,CQA_CL,CQA_CLC,CQA_SQ,CQA_SQC,TS,V,PID,RS,SR,CnID,A_S,OS,CPr,CVB,CS,HS,SUNR,SUNS,ML,MT,TCPSL,CT,MS,MSH,SID,SuID,UA,DID,UAG,CID,HR,CRG,CP1,CP2,AIDF,UCB,CLID,CLCL,OPTS,PUAG,SSLIL"
outputStream.write("${header}\n".getBytes('UTF-8'))
while ((line = bufferedReader.readLine()) != null) {
def jsonReplace = line.replace('{"transaction":{','{"transaction":[{').replace('}}}','}}]}')
def jsonRecord = new JsonSlurper().parseText(jsonReplace)
def a = jsonRecord.transaction.collect { row ->
return flatten(row)
}.collect { row ->
return toCSVRow(row)
}
outputStream.write("${a}\n".getBytes('UTF-8'))
}
} as StreamCallback)
session.transfer(flowFile, REL_SUCCESS)
session.commit()
}
catch (e) {
throw new ProcessException(e)
}
}
@Override
Collection<ValidationResult> validate(ValidationContext context) { return null }
@Override
PropertyDescriptor getPropertyDescriptor(String name) { return null }
@Override
void onPropertyModified(PropertyDescriptor descriptor, String oldValue, String newValue) { }
@Override
List<PropertyDescriptor> getPropertyDescriptors() {
return [] as List
}
@Override
String getIdentifier() { return null }
}
processor = new customJSONtoCSV()
If I should not use "collect", what else should I use to create the rows?
In the output flow file, the record output is coming inside []. I tried the below, but it is not working and I am not sure whether I am doing the right thing. I want CSV output without the [].
return toCSVRow(row).toString()
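A side note on the bracket issue (an editorial suggestion, not from the answers below): a is a List, so interpolating it as "${a}" prints the list's toString, which is what produces the surrounding []. Joining the rows before writing avoids that, using the same variable names as in the script above:
outputStream.write((a.join('\n') + '\n').getBytes('UTF-8'))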
If you know what you want to extract exactly (and given you want to generate a CSV from it), IMHO you are way better off just shaping the data in the way you later want to consume it. E.g.
def data = new groovy.json.JsonSlurper().parseText('[{"TS":"2019-10-08 22:33:29.244000","CIPG":{"CIP":"10.65.5.56","CP":"0"},"OF":{"Flag1":"of","Flag2":"-"},"SF":{"Flag1":"sf","Flag2":"-"}}]')
extractors = [
{ it.TS },
{ it.V },
{ it.PID },
{ it.RS },
{ it.SR },
{ it.CIPG.CIP },
{ it.CIPG.CP },
{ it.OF.Flag1 },
{ it.SF.Flag1 },]
def extract(row) {
extractors.collect{ it(row) }
}
println(data.collect{extract it})
// ⇒ [[2019-10-08 22:33:29.244000, null, null, null, null, 10.65.5.56, 0, of, sf]]
As stated in the other answer, due to the sheer amount of data you are trying to convert:
Make sure to use a library to generate the CSV file from that, or else you will hit problems with the content you try to write (e.g. line breaks or the data containing the separator char).
Don't use collect (it is eager) to create the rows.
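A rough sketch of what that can look like (not the answer author's code; the file names and the quote helper are assumptions): stream the input line by line and write each CSV row as soon as it is extracted, instead of collecting everything first.
import groovy.json.JsonSlurper

// Naive quoting helper; a real CSV library handles this more robustly
def quote = { v -> v == null ? '' : '"' + v.toString().replace('"', '""') + '"' }

def slurper = new JsonSlurper()
new File('transactions.json').withReader('UTF-8') { reader ->
    new File('transactions.csv').withWriter('UTF-8') { writer ->
        reader.eachLine { line ->
            def record = slurper.parseText(line).transaction   // one nested map per line
            // extractors as defined in the snippet above
            writer.write(extractors.collect { quote(it(record)) }.join(',') + '\n')
        }
    }
}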
The idea is to modify the "flatten" method - it should differentiate between identical nested keys by using the parent key as a prefix.
I've simplified code a bit:
import groovy.json.*
def json = '{"transaction":{"TS":"2019-10-08 22:33:29.244000","CIPG":{"CIP":"10.65.5.56","CP":"0"},"OF":{"Flag1":"of","Flag2":"-"},"SF":{"Flag1":"sf","Flag2":"-"}}}'
def jsonReplace = json.replace('{"transaction":{','{"transaction":[{').replace('}}','}}]')
def jsonRecord = new JsonSlurper().parseText(jsonReplace)
static def flatten(row, prefix="") {
def flattened = new HashMap<String, String>()
row.each { String k, Object v ->
def key = prefix ? prefix + "." + k : k;
if (v instanceof Map) {
flattened.putAll(flatten(v, k))
} else {
flattened.put(key, v.toString())
}
}
return flattened
}
static def toCSVRow(HashMap row) {
def columns = ["TS","V","PID","RS","SR","CnID","CIP","OF.Flag1","SF.Flag1"] // Last 2 keys have changed!
return columns.collect { column ->
return row.containsKey(column) ? row.get(column) : ""
}.join(', ')
}
def a = jsonRecord.transaction.collect { row ->
return flatten(row)
}.collect { row ->
return toCSVRow(row)
}.join('\n')
println a
Output would be:
2019-10-08 22:33:29.244000, , , , , , , of, sf
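One detail worth flagging (an editorial observation, not part of the answer above): the recursive call passes the bare key k rather than the accumulated key, so for maps nested more than one level deep the prefix of the outer levels is dropped. A variant that keeps the full path could look like this:
// Sketch: pass the accumulated key down, so {"A":{"B":{"C":1}}} flattens to A.B.C rather than B.C
static def flatten(row, prefix = "") {
    def flattened = new HashMap<String, String>()
    row.each { String k, Object v ->
        def key = prefix ? prefix + "." + k : k
        if (v instanceof Map) {
            flattened.putAll(flatten(v, key))   // was: flatten(v, k)
        } else {
            flattened.put(key, v.toString())
        }
    }
    return flattened
}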

Clone of list still changes the original list

In Groovy, the original values get overwritten when I change values in a cloned list. Does anyone know if I am doing it wrong, or is it a bug in older Groovy versions?
I am doing something like this:
List<Foo> myFooList = fooList.newFoos.findAll { it.type == "Types" }
List<Foo> newFoo = fooList.oldFoos.findAll { it.type == "Types" }.clone()
newFoo.each {
it.value = "neeeew value"
}
Foo fooOne = newFoo.each { foooo ->
fooTwo = fooList.oldFoos.find { it.id == foooo.id}
if(fooTwo.value != foooo.value) {
//Here it should go... but it turns out that fooTwo.value == foooo.value
}
}
The clone method called on a list produces a new list, but with the same objects in it.
You want to build a new list with new objects. Here is an example:
@groovy.transform.ToString
class Foo{
String type
String value
}
def fooList = [
new Foo(type:"Types", value:'old value1'),
new Foo(type:"Not", value:'old value2'),
new Foo(type:"Types", value:'old value3'),
new Foo(type:"Not", value:'old value4'),
]
def newFooList = fooList.
findAll{it.type=='Types'}.
collect{ new Foo(type:it.type, value:"new value") } //build new list with transformed elements
//check the original list
fooList.each{assert it.value!='new value'}
//check new list
newFooList.each{assert it.value=='new value'}
assert newFooList.size()==2
println fooList
println newFooList
I solved the issue by cloning each element as well; anyway, it turned into a bit of a cowboy fix:
List<Foo> myFooList = fooList.newFoos.findAll { it.type == "Types" }
List<Foo> newFoo = fooList.oldFoos.findAll { it.type == "Types" }.collect { it.clone() }
newFoo.each {
it.value = "neeeew value"
}
Foo fooOne = newFoo.each { foooo ->
fooTwo = fooList.oldFoos.find { it.id == foooo.id}
if(fooTwo.value != foooo.value) {
//Here it should go... but it turns out that fooTwo.value == foooo.value
}
}
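To make the difference explicit, here is a small standalone sketch (with a hypothetical Foo that implements Cloneable): List.clone() copies the list container, but both lists still reference the same element objects, which is why mutating an element through the clone shows up in the original.
class Foo implements Cloneable {
    String type
    String value
    Object clone() { new Foo(type: type, value: value) }   // element-level copy
}

def original = [new Foo(type: 'Types', value: 'old')]

def shallow = original.clone()              // new list, same Foo instances
assert shallow[0].is(original[0])           // identical object references
shallow[0].value = 'new'
assert original[0].value == 'new'           // the "original" changed too

def deep = original.collect { it.clone() }  // new list, new Foo instances
assert !deep[0].is(original[0])
deep[0].value = 'newer'
assert original[0].value == 'new'           // original untouched this time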

Groovy object properties in map

Instead of having to declare all the properties in a map from an object like:
prop1: object.prop1
Can't you just drop the object in there like below somehow? Or what would be a proper way to achieve this?
results: [
object,
values: [
test: 'subject'
]
]
Note that object.properties will give you the class entry as well.
You should be able to do:
Given your POGO object:
class User {
String name
String email
}
def object = new User(name:'tim', email:'tim@tim.com')
Write a method to inspect the class and pull the non-synthetic properties from it:
def extractProperties(obj) {
obj.getClass()
.declaredFields
.findAll { !it.synthetic }
.collectEntries { field ->
[field.name, obj."$field.name"]
}
}
Then, map spread that into your result map:
def result = [
value: true,
*:extractProperties(object)
]
To give you:
['value':true, 'name':'tim', 'email':'tim@tim.com']
If you don't mind using a few libraries, here's an option where you convert the object to JSON and then parse it back out as a map. I added mine to a BaseObject which, in your case, object would extend.
class BaseObject {
Map asMap() {
def jsonSlurper = new groovy.json.JsonSlurperClassic()
Map map = jsonSlurper.parseText(this.asJson())
return map
}
String asJson(){
def jsonOutput = new groovy.json.JsonOutput()
String json = jsonOutput.toJson(this)
return json
}
}
I also wrote it without the JSON library originally. This is like the other answers, but it handles cases where the object property is a List.
class BaseObject {
Map asMap() {
Map map = objectToMap(this)
return map
}
def objectToMap(object){
Map map = [:]
for(item in object.class.declaredFields){
if(!item.synthetic){
if (object."$item.name".hasProperty('length')){
map."$item.name" = objectListToMap(object."$item.name")
}else if (object."$item.name".respondsTo('asMap')){
map << [ (item.name):object."$item.name"?.asMap() ]
} else{
map << [ (item.name):object."$item.name" ]
}
}
}
return map
}
def objectListToMap(objectList){
List list = []
for(item in objectList){
if (item.hasProperty('length')){
list << objectListToMap(item)
}else {
list << objectToMap(item)
}
}
return list
}
}
This seems to work well:
*:object.properties
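As noted above, object.properties also contains a class entry, so a hedged variant of this shortcut filters it out before spreading (User here is the same sample POGO as in the earlier answer):
def object = new User(name: 'tim', email: 'tim@tim.com')

def results = [
    values: [test: 'subject'],
    *: object.properties.findAll { it.key != 'class' }   // drop the class entry
]

assert results.name == 'tim' && results.email == 'tim@tim.com'
assert !results.containsKey('class')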

Return Nested Key in Groovy

I am trying to determine the best way to return nested key values using groovy. If I have a map:
def map = [
OrganizationName: 'SampleTest',
Address: [
Street: '123 Sample St',
PostalCode: '00000',
]
]
Is there a way to return all of the keys: OrganizationName, OrganizationURL, Address.Street, Address.PostalCode? If I didn't have a map within a map, I could use map.keySet() as String[]. Should I just loop through each key and check whether it is an instanceof another map?
The Groovy libraries don't provide a method for this, but you can write your own. Here's an example that you can copy-paste into the Groovy console:
List<String> getNestedMapKeys(Map map, String keyPrefix = '') {
def result = []
map.each { key, value ->
if (value instanceof Map) {
result += getNestedMapKeys(value, keyPrefix + "$key.")
} else {
result << "$keyPrefix$key"
}
}
result
}
// test it out
def map = [
OrganizationName: 'SampleTest',
Address: [
Street: '123 Sample St',
PostalCode: '00000',
]
]
assert ['OrganizationName', 'Address.Street', 'Address.PostalCode'] == getNestedMapKeys(map)
Use the following generic recursive method to generate the list of all nested map keys:
def getListOfKeys(def map, String prefix,def listOfKeys){
if(map instanceof Map){
map.each { key, value->
if(prefix.isEmpty()){
listOfKeys<<key
}else{
listOfKeys<< prefix+"."+key
}
if(value instanceof List){
List list = value
list.eachWithIndex { item, index ->
if(prefix.isEmpty()){
getListOfKeys(item, key+"["+index+"]",listOfKeys)
}else{
getListOfKeys(item, prefix+"."+key+"["+index+"]",listOfKeys)
}
}
}else if(value instanceof Map){
if(prefix.isEmpty()){
getListOfKeys(value, key,listOfKeys)
}else{
getListOfKeys(value, prefix+"."+key,listOfKeys)
}
}
}
}
}
Call the above method as follows:
void findAllKeysInMap(){
Map map = [ "fields":[
"project":
[
"key": "TP"
],
"summary": "Api Testing Issue.",
"description": "This issue is only for api testing purpose",
"issuetype": [
"name": ["Bug":["hello":[["saurabh":"gaur","om":"prakash"], ["gaurav":"pandey"], ["mukesh":"mishra"]]]]
]
]
]
def listOfKeys=[]
getListOfKeys(map, '', listOfKeys)
println "listOfKeys>>>"+listOfKeys
}
output:
listOfKeys>>>[fields, fields.project, fields.project.key, fields.summary, fields.description, fields.issuetype, fields.issuetype.name, fields.issuetype.name.Bug, fields.issuetype.name.Bug.hello, fields.issuetype.name.Bug.hello[0].saurabh, fields.issuetype.name.Bug.hello[0].om, fields.issuetype.name.Bug.hello[1].gaurav, fields.issuetype.name.Bug.hello[2].mukesh]
There's no such method in Groovy as the one you're looking for. You need to do it using instanceof and probably a recursive method.
Slightly shorter:
String key = 'Address.Street'
key.split('\\.').inject(yourMap) {map, path -> map[path]}
If you can't guarantee that the path exists and is complete (say, if you tried to access OrganizationName.id), you'll need to add some safeguards (check that the map is not null, and that it's really a Map).
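A guarded variant (a sketch, not from the answer): walk the path with inject, but fall back to null as soon as the current node is not a Map, instead of throwing.
def getNested = { Map map, String path ->
    path.split('\\.').inject(map) { node, key ->
        (node instanceof Map) ? node[key] : null
    }
}

def yourMap = [OrganizationName: 'SampleTest',
               Address: [Street: '123 Sample St', PostalCode: '00000']]

assert getNested(yourMap, 'Address.Street') == '123 Sample St'
assert getNested(yourMap, 'OrganizationName.id') == null   // intermediate value is not a Map
assert getNested(yourMap, 'Address.Missing') == null       // missing key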

Access values of static closure in Groovy

I'd like to store some properties in a static closure and later access them during a method call:
class Person {
static someMap = { key1: "value1", key2: "value2" }
}
So how can I write a method within Person, which retrieves this stored data?
For the simple case, you're better off using a map.
If you really do want to evaluate it as a closure (possibly to create your own DSL), you'll need to change your syntax slightly as John points out. Here's one way to do it using a Builder class to evaluate the "something" closure within whatever is passed to the builder.
It uses Groovy metaprogramming to intercept calls via methodMissing/propertyMissing and save them off:
class SomethingBuilder {
Map valueMap = [:]
SomethingBuilder(object) {
def callable = object.something
callable.delegate = this
callable.resolveStrategy = Closure.DELEGATE_FIRST
callable()
}
def propertyMissing(String name) {
return valueMap[name]
}
def propertyMissing(String name, value) {
valueMap[name] = value
}
def methodMissing(String name, args) {
if (args.size() == 1) {
valueMap[name] = args[0]
} else {
valueMap[name] = args
}
}
}
class Person {
static something = {
key1 "value1" // calls methodMissing("key1", ["value1"])
key2("value2") // calls methodMissing("key2", ["value2"])
key3 = "value3" // calls propertyMissing("key3", "value3")
key4 "foo", "bar", "baz" // calls methodMissing("key4", ["foo","bar","baz"])
}
}
def builder = new SomethingBuilder(new Person())
assert "value1" == builder."key1" // calls propertyMissing("key1")
assert "value2" == builder."key2" // calls propertyMissing("key2")
assert "value3" == builder."key3" // calls propertyMissing("key3")
assert ["foo", "bar", "baz"] == builder."key4" // calls propertyMissing("key4")
If they need to be initialized by a closure, instead of a map, then somebody's got to run the closure in order to pick up and record the values as they're set.
Your syntax there isn't valid. Remember closures are just anonymous methods. Your syntax looks like you're trying to define a map, but closures would need to call methods, set variables or return maps. e.g.
static someClosure = { key1 = "value1"; key2 = "value2" } // set variables
static someClosure = { key1 "value1"; key2 = "value2" } // call methods
static someClosure = { [key1: "value1", key2: "value2"] } // return a map
Then of course whoever's running the closure needs to have the right metaprogramming to record the results.
It sounds like what you really want is just to define a map.
static someMap = [key1: "value1", key2: "value2"]
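To answer the original question with that approach, here is a minimal sketch (the lookup method name is just an illustration) of reading the stored data back from inside the class:
class Person {
    static someMap = [key1: "value1", key2: "value2"]

    static String lookup(String key) {   // hypothetical accessor
        someMap[key]
    }
}

assert Person.lookup('key1') == 'value1'
assert Person.someMap.key2 == 'value2'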
This is what I came up with to extract static closure properties:
class ClosureProps {
Map props = [:]
ClosureProps(Closure c) {
c.delegate = this
c.each{"$it"()}
}
def methodMissing(String name, args) {
props[name] = args.collect{it}
}
def propertyMissing(String name) {
name
}
}
// Example
class Team {
static schema = {
table team
id teamID
roster column:playerID, cascade:[update,delete]
}
}
def c = new ClosureProps(Team.schema)
println c.props.table
