I have an object list:
List<Person> personList = [
{name: "a" , age:20 },
{name: "b" , age:24 },
{name: "c" , age:25 },
{name: "d" , age:26 },
]
Now, what is the shortest way to remove age from each object?
Final list will be:
personList = [
{name: "a" },
{name: "b" },
{name: "c" },
{name: "d" },
]
With a bit of a syntax lift-up, your example works using findAll:
def x = [
[name: "a" , age:20 ],
[name: "b" , age:24 ],
[name: "c" , age:25 ],
[name: "d" , age:26 ]
]
println x.collect {it.findAll {it.key != 'age'}}
[[name:a], [name:b], [name:c], [name:d]]
First of all, you should not declare a List with type Person (an unknown class) and then fill it with Maps without a cast.
With Maps you have at least two simple options.
Option 1 - create a new List:
personList = personList.collect{ [ name:it.name ] }
Option 2 - mutate the existing List:
personList*.remove( 'age' )
This is my situation
I have a string like : ABCDEF
Then an array like : ['ABC','GBC','DE','DEF',...]
I need to find the substrings that compose the string ABCDEF
I did this :
let info = data_.filter( v => { return action_.toLowerCase().includes(v[0].toLowerCase())});
But the result also returns DE. My string is known to be composed of exactly 2 substrings, so the only match must be
ABCDEF
[
[ 'ABC', '0.06172000' ],
[ 'DEF', '675.1805' ]
]
not
ABCDEF
[
[ 'ABC', '0.06172000' ],
[ 'DE', '0.0537598600' ],
[ 'DEF', '675.1805' ]
]
I have to combine the substrings to reconstruct the full string. Can I do it in the filter function, or after the filter result, or is there another solution?
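One way to sketch this: keep the question's filter as a first pass, then test every ordered pair of candidates and keep the pair whose concatenation rebuilds the target exactly. The sample entries below mirror the `[symbol, price]` pairs shown in the question; `data_` and `action_` are the question's variable names.

```javascript
// Entries mirroring the question's [symbol, price] pairs
const data_ = [
  ['ABC', '0.06172000'],
  ['GBC', '1.0'],
  ['DE', '0.0537598600'],
  ['DEF', '675.1805'],
];
const action_ = 'ABCDEF';

// Step 1: candidates that appear in the target (case-insensitive),
// exactly as in the original filter
const candidates = data_.filter(
  v => action_.toLowerCase().includes(v[0].toLowerCase())
);

// Step 2: try every ordered pair of candidates and return the first
// pair whose concatenation equals the target string
function findPair(target, entries) {
  for (const a of entries) {
    for (const b of entries) {
      if (a[0] + b[0] === target) return [a, b];
    }
  }
  return null;
}

const info = findPair(action_, candidates);
console.log(info);
// → [ [ 'ABC', '0.06172000' ], [ 'DEF', '675.1805' ] ]
```

This drops the spurious `DE` match because `'ABC' + 'DE'` does not equal `'ABCDEF'`; only the `ABC`/`DEF` pair survives.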
Array:
regions = [
{name: "region1"},
{name: "region2"},
{name: "region3"},
{name: "region4"},
{name: "region5"},
{name: "region6"}]
Json:
{
"region1" : ["cluster1"],
"region2" : [],
"region3" : ["cluster1"],
"region4" : ["cluster1","cluster2"]
}
resource "type" "name" {
count = length(regionLength)
name = "region-name/cluster-name"
}
I need resource created with such name output like this
region1/cluster1
region2
region3/cluster1
region4/cluster1
region4/cluster2
Can we achieve this with something like the following?
Final = []
For r , cs in arr:
for oc in regions:
if r == oc.name:
for c in cs:
oc['cluster'] = r-c
Final.push(oc)
Thanks in advance.
You can achieve that as follows:
variable "regions" {
default = {
"region1" : ["cluster1"],
"region2" : [],
"region3" : ["cluster1"],
"region4" : ["cluster1","cluster2"]
}
}
locals {
region_list = flatten([for region, clusters in var.regions:
[ for cluster in coalescelist(clusters, [""]):
"${region}/${cluster}"
]
])
}
which gives:
region_list = [
"region1/cluster1",
"region2/",
"region3/cluster1",
"region4/cluster1",
"region4/cluster2",
]
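Note that the empty-cluster region comes out as `region2/` with a trailing slash, not the bare `region2` the question asks for. If that matters, one option (a sketch over the same `var.regions` as above) is to strip it with `trimsuffix`:

```hcl
locals {
  region_list = flatten([for region, clusters in var.regions :
    [for cluster in coalescelist(clusters, [""]) :
      # trimsuffix drops the trailing "/" left behind by empty cluster lists
      trimsuffix("${region}/${cluster}", "/")
    ]
  ])
}
```

With this, `region2/` becomes `region2` while the populated entries like `region4/cluster2` are unaffected, since they do not end in `/`.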
What I have:
An array of strings that I wish to query with. ['a', 'b', 'c']
Data I'm querying against:
A collection of objects of type foo, all with a bar field. The bar field is an array of strings, the same type as the one I'm querying with, potentially with some of the same elements.
foo1 = { bar: ['a'] }
foo2 = { bar: ['d'] }
foo3 = { bar: ['a', 'c']}
What I need:
A query that returns all foo objects whose entire bar array is contained within the query array. In the example above, I'd want foo1 and foo3 to come back
Using aggregation
You might need to use $setIsSubset in the aggregate pipeline:
db.col.aggregate(
[
{$project : { bar : 1 , isSubset: { $setIsSubset : [ "$bar" , ['a','b','c'] ] }}},
{$match : { isSubset : true}}
]
)
collection
> db.col.find()
{ "_id" : ObjectId("5a6420d984eeec7b0b2f767b"), "bar" : [ "a" ] }
{ "_id" : ObjectId("5a6420d984eeec7b0b2f767c"), "bar" : [ "a", "c" ] }
{ "_id" : ObjectId("5a6420d984eeec7b0b2f767d"), "bar" : [ "d" ] }
aggregate
> db.col.aggregate([{$project : { bar : 1 , isSubset: { $setIsSubset : [ "$bar" , ['a','b','c'] ] }}}, {$match : {isSubset : true}}])
{ "_id" : ObjectId("5a6420d984eeec7b0b2f767b"), "bar" : [ "a" ], "isSubset" : true }
{ "_id" : ObjectId("5a6420d984eeec7b0b2f767c"), "bar" : [ "a", "c" ], "isSubset" : true }
>
EDIT
using find with $expr
db.col.find({$expr : { $setIsSubset : [ "$bar" , ['a','b','c'] ] }})
result
> db.col.find({$expr : { $setIsSubset : [ "$bar" , ['a','b','c'] ] }})
{ "_id" : ObjectId("5a6420d984eeec7b0b2f767b"), "bar" : [ "a" ] }
{ "_id" : ObjectId("5a6420d984eeec7b0b2f767c"), "bar" : [ "a", "c" ] }
>
I am trying to add a computed value to my viewModel object, and I am using foreach to create a table of rows. I cannot get this computed function to work.
I am trying to do this.
viewModel =
{
objectName: ko.observable([
{ value: "", triggerValue: "0"},
{ value: "", triggerValue: "1"},
{ value: "", triggerValue: "1"}
]),
};
viewModel.objectName().value= ko.computed(function() {
return this.objectName().triggerValue= "0" ? "Apple" : "Microsoft";
}, this);
I want the viewModel objectName output to look like
{value: "Apple", triggerValue: "0"},
{value: "Microsoft", triggerValue: "1"},
{value: "Microsoft", triggerValue: "1"}
Thanks.
KDK
Several mistakes are going on here:
You are using an observable instead of an observableArray. Technically an observable can store an array, but you are better off using an observableArray.
You are trying to tie a computed to objectName().value, but objectName is supposed to be an array, so it would not have a value; you would ideally access an element like objectName()[1].value.
This is not how to assign properties. ko.computed is not a replacement for a function; computeds are for monitoring existing observables and recalculating when one of the monitored observables changes.
I would do something like this.
viewModel =
{
objectName: ko.observable([
{ value: setType(0), triggerValue: "0"},
{ value: setType(1), triggerValue: "1"},
{ value: setType(1), triggerValue: "1"}
]),
};
function setType(trigger){
return trigger == "0" ? "Apple" : "Microsoft"
}
or better yet
viewModel =
{
objectName: ko.observable([
setVal(0),
setVal(1),
setVal(1),
]),
};
function setVal(trigger){
return {value: (trigger == "0" ? "Apple" : "Microsoft"), triggerValue: trigger };
}
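Note that the comparison must be `==` rather than `=` (a single `=` assigns `"0" ? "Apple" : "Microsoft"` to trigger and always returns "Apple"). A quick plain-JavaScript sanity check, no Knockout needed; the expected strings come from the question:

```javascript
// setVal with the == comparison; loose equality lets 0 match "0"
function setVal(trigger) {
  return { value: (trigger == "0" ? "Apple" : "Microsoft"), triggerValue: trigger };
}

const rows = [setVal(0), setVal(1), setVal(1)];
console.log(rows.map(r => r.value));
// → [ 'Apple', 'Microsoft', 'Microsoft' ]
```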
Does anyone have any sample Groovy code to convert a JSON document to CSV file? I have tried to search on Google but to no avail.
Example input (from comment):
[ company_id: '1',
web_address: 'vodafone.com/',
phone: '+44 11111',
fax: '',
email: '',
addresses: [
[ type: "office",
street_address: "Vodafone House, The Connection",
zip_code: "RG14 2FN",
geo: [ lat: 51.4145, lng: 1.318385 ] ]
],
number_of_employees: 91272,
naics: [
primary: [
"517210": "Wireless Telecommunications Carriers (except Satellite)" ],
secondary: [
"517110": "Wired Telecommunications Carriers",
"517919": "Internet Service Providers",
"518210": "Web Hosting"
]
]
More info from an edit:
def export(){
def exportCsv = [ [ id:'1', color:'red', planet:'mars', description:'Mars, the "red" planet'],
[ id:'2', color:'green', planet:'neptune', description:'Neptune, the "green" planet'],
[ id:'3', color:'blue', planet:'earth', description:'Earth, the "blue" planet'],
]
def out = new File('/home/mandeep/groovy/workspace/FirstGroovyProject/src/test.csv')
exportCsv.each {
def row = [it.id, it.color, it.planet,it.description]
out.append row.join(',')
out.append '\n'
}
return out
}
Ok, how's this:
import groovy.json.*
// Added extra fields and types for testing
def js = '''{"infile": [{"field1": 11,"field2": 12, "field3": 13},
{"field1": 21, "field4": "dave","field3": 23},
{"field1": 31,"field2": 32, "field3": 33}]}'''
def data = new JsonSlurper().parseText( js )
def columns = data.infile*.keySet().flatten().unique()
// Wrap strings in double quotes, and remove nulls
def encode = { e -> e == null ? '' : e instanceof String ? /"$e"/ : "$e" }
// Print all the column names
println columns.collect { c -> encode( c ) }.join( ',' )
// Then create all the rows
println data.infile.collect { row ->
// A row at a time
columns.collect { colName -> encode( row[ colName ] ) }.join( ',' )
}.join( '\n' )
That prints:
"field3","field2","field1","field4"
13,12,11,
23,,21,"dave"
33,32,31,
Which looks correct to me.