Getting data from ConfigObject - Groovy

How do I get data from this construction:
sth {
[
firstname="me"
second="sfdg"
]
[
adress="adfhajkfdh"
]
}
I used ConfigObject, but when I call keySet() on it I get the whole list (firstname, secondname, adress) and I have to separate it myself. Is there any way to get the data from only one block, e.g. only "firstname" and "secondname"?

As I understand it, you used to have config like:
sth {
firstname="me"
second="sfdg"
adress="adfhajkfdh"
}
but you now want to structure that into columns?
One way this could be done is by structuring each column into a separate property like so:
sth {
column1 {
firstname="me"
second="sfdg"
}
column2 {
adress="adfhajkfdh"
}
}
Or, you could declare another columns property which contains a list of columns (each of which is a list of the properties that you want in each column), i.e.:
sth {
firstname="me"
second="sfdg"
adress="adfhajkfdh"
columns = [ column1:[ 'firstname', 'second' ], column2:[ 'adress' ] ]
}
Personally, I prefer the second approach: it should still work with your old code, you don't need to iterate the ConfigObject structure to get all the properties, and a property could appear in multiple columns (if that becomes a future requirement).
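As a sketch of how a consumer could use that columns mapping (in JavaScript for illustration; config stands in for the parsed ConfigObject flattened to a plain object):

```javascript
// The flat config plus the `columns` mapping from the second approach.
const config = {
  firstname: "me",
  second: "sfdg",
  adress: "adfhajkfdh",
  columns: { column1: ["firstname", "second"], column2: ["adress"] }
};

// Build one object per column, picking only that column's properties.
const byColumn = Object.fromEntries(
  Object.entries(config.columns).map(([col, keys]) => [
    col,
    Object.fromEntries(keys.map((k) => [k, config[k]]))
  ])
);

console.log(byColumn);
// → { column1: { firstname: 'me', second: 'sfdg' }, column2: { adress: 'adfhajkfdh' } }
```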

How to query an array inside another array in CosmosDB?

I have something similar to the following JSON structure, and I need to search for a specific fileName in the Azure Cosmos DB Data Explorer (no matter the position). I have been trying different ways, including CROSS APPLY FLATTEN, but I cannot get it to work.
{
"entityId": "f07256a5-0e60-412a-bcc9-2e1aa66b69f5",
"array1": [
{
"array2": [
{
"fileName": "filename1.pdf",
},
{
"fileName": "filename2.pdf",
}
]
}
]
}
Any ideas? Thanks.
You need something like this:
SELECT
c.fileName
FROM d
JOIN f IN d.array1
JOIN c IN f.array2
WHERE c.fileName = "filename1.pdf"
This one works for me:
SELECT c.entityId
FROM c
JOIN one IN c.array1
JOIN two IN one.array2
WHERE two.fileName = 'filename1.pdf'
It uses self-joins to produce one result per fileName, then filters down to the ones with the right fileName.
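What the two JOINs do can be modeled in plain JavaScript: each JOIN flattens one level of nesting, and the WHERE keeps the matching leaves (this is only an illustration of the semantics, not how Cosmos DB executes it):

```javascript
// Sample document from the question.
const doc = {
  entityId: "f07256a5-0e60-412a-bcc9-2e1aa66b69f5",
  array1: [
    { array2: [{ fileName: "filename1.pdf" }, { fileName: "filename2.pdf" }] }
  ]
};

const matches = doc.array1
  .flatMap((one) => one.array2)                      // JOIN two IN one.array2
  .filter((two) => two.fileName === "filename1.pdf") // WHERE two.fileName = ...
  .map(() => doc.entityId);                          // SELECT c.entityId

console.log(matches); // → [ 'f07256a5-0e60-412a-bcc9-2e1aa66b69f5' ]
```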

How to filter Subscribers based on array of tags in LoopBack

I have two models: subscribers and tags.
Sample data:
{
subscribers: [
{
name: "User 1",
tags: ["a","b"]
},
{
name: "User 2",
tags: ["c","d"]
}
]
}
I want to filter subscribers based on their tags.
If I give tags a and b, User 1 should be listed.
If I give tags a and c, both User 1 and User 2 should be listed.
Here is what I tried:
Method 1:
tags is a column in subscribers model with array data type
/subscribers/?filter={"where":{"tags":{"inq":["a","b"]}}} // doesn't work
Method 2:
Created a separate table tags and set subscribers has many tags.
/subscribers/?filter={"where":{"tags":{"inq":["a","b"]}}} // doesn't work
How can I achieve this in LoopBack without writing custom methods?
I'm using PostgreSQL as the connector.
UPDATE
As mentioned in the LoopBack docs, you should use inq, not In.
The inq operator checks whether the value of the specified property matches any of the values provided in an array. The general syntax is:
{where: { property: { inq: [val1, val2, ...]}}}
From this:
/subscribers/?filter={"where":{"tags":{"In":["a","b"]}}}
To this:
/subscribers/?filter={"where":{"tags":{"inq":["a","b"]}}}
I finally found a hack using a regex. It's not a performant solution, but it works:
{ "where": { "tags": { "regexp": "a|b" } } }

How do I get JSON data with RethinkDB

(Sorry, my English is not good.)
I group by date. I want to get the information out in the structure shown below. What do I need to do?
.group(r.row('time_create').dayOfWeek())
json export
[
{
group: 1,
reduction: [
{
detail: "no",
id: "37751c10-97ea-4a3a-b2c9-3e8b39383b79",
order_id: "15",
status: "Not_Delivery",
time_create: "2018-09-23T15:25:13.141Z"
}
]
}
]
I want to change the JSON data to:
{
"date":
{
"Sun": [
{
detail: "no",
order_id: "15",
status: "Not_Delivery",
time_create: "2018-09-28 15:25:13"
}
]
}
}
How do I get the information out the way I want?
Looks like you tried but didn't manage to transform the data from your previous question. ;)
Here is a proposition, this is not the only way of doing it.
First, it seems you want to remove the id field. You may do that in your ReQL using without:
.group(r.row('time_create').dayOfWeek()).without('id')
(You may apply without('id') before group, it should work the same, see this for more details.)
Then, to transform the result array (let's call it queryResult) into an object (let's call it output):
// prepare the skeleton of the output
let output = {
date: {}
};
// walk the result, filling the output in the process
queryResult.forEach((groupData) => {
let key = groupData.group;
if (!output.date[key]) {
output.date[key] = [];
}
output.date[key].push(...groupData.reduction);
})
Now you almost have your desired structure in output, the only thing is that day keys are still numbers and not a short day name. In my opinion, this should be handled by the front-end, since you may want to have different languages implemented for your front-end. But anyway, the idea is always the same: having a translation table that maps Rethink's day numbers with human-readable day names:
const translationTable = {
1: 'Mon',
2: 'Tue',
// ...
7: 'Sun'
};
Now if you do that in your front-end, you just replace the data's keys on the fly when display is needed (or retrieve the key from the day name, depending on how you display things). Otherwise, if you go for a back-end implementation (which, again, is clearly not the best solution), you can change one line in the code above (assuming you declared translationTable already):
let key = groupData.group;
// becomes
let key = translationTable[groupData.group];
Feel free to ask in comments if there's something you don't understand!
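Putting the pieces above together, an end-to-end sketch with the sample reduction from the question (note: RethinkDB's dayOfWeek follows ISO 8601, so 7 is Sunday, which is why group is 7 here rather than the 1 shown in the question's sample):

```javascript
// Full translation table, Mon..Sun.
const translationTable = {
  1: "Mon", 2: "Tue", 3: "Wed", 4: "Thu", 5: "Fri", 6: "Sat", 7: "Sun"
};

// Sample grouped result (2018-09-23 is a Sunday).
const queryResult = [
  {
    group: 7,
    reduction: [
      { detail: "no", order_id: "15", status: "Not_Delivery",
        time_create: "2018-09-23T15:25:13.141Z" }
    ]
  }
];

// Build the output, translating the day key while walking the result.
const output = { date: {} };
queryResult.forEach((groupData) => {
  const key = translationTable[groupData.group];
  if (!output.date[key]) output.date[key] = [];
  output.date[key].push(...groupData.reduction);
});

console.log(Object.keys(output.date)); // → [ 'Sun' ]
```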

How to easily promote a JSON member to the main event level? [duplicate]

This question already has an answer here:
Eliminate the top-level field in Logstash
(1 answer)
Closed 5 years ago.
I'm using an http_poller to hit an API endpoint for some info I want to index with elasticsearch. The result is in JSON and is a list of records, looking like this:
{
"result": [
{...},
{...},
...
]
}
Each result object in the array is what I really want to turn into an event that gets indexed in ElasticSearch, so I tried using the split filter to turn the object into a series of events instead. It worked reasonably well, but now I have a series of events that look like this:
{
result: { ... }
}
My current filter looks like this:
filter {
if [type] == "history" {
split {
field => "result"
}
}
}
Each of those result objects has about 20 fields, most of which I want, so while I know I can transform them by doing something along the lines of
filter {
if [type] == "history" {
split {
field => "result"
}
mutate {
add_field => {
"field1" => "%{[result][field1]}"
#... x15-20 more fields
}
remove_field => [ "result" ]
}
}
}
But with so many fields I was hoping there's a one-liner to just copy all the fields of the 'result' value up to the top level of the event.
This can be done with a ruby filter like this:
ruby {
code => '
if (event.get("result"))
event.get("result").each { |k,v|
event.set(k,v);
}
event.remove("result");
end
'
}
I don't know of any way to do this with any of the built in/publicly available filters.
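For clarity, the effect of that ruby filter, sketched in JavaScript on a plain object standing in for the event: promote every key of result to the top level, then drop result itself.

```javascript
// A split event as produced by the filter in the question.
const event = {
  type: "history",
  result: { field1: "a", field2: "b" }
};

if (event.result) {
  Object.assign(event, event.result); // copy each k/v up one level
  delete event.result;                // same as event.remove("result")
}

console.log(event); // → { type: 'history', field1: 'a', field2: 'b' }
```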

Arangodb removing subitems from document

How does one remove subitems from a document? Say I have a collection called sales, where each sale has a sale.items array whose entries contain {name, price, code}.
I want to remove each item that is not valid, by checking the code for blank or null.
Trying something like the queries below fails with errors; I'm not sure whether I need a sub-query, and how to write it.
FOR sale in sales
FOR item in sale.items
FILTER item.code == ""
REMOVE item IN sale.items
Another attempt
FOR sale in sales
LET invalid = (
FOR item in sale.items
FILTER item.code == ""
RETURN item
)
REMOVE invalid IN sale.items LET removed = OLD RETURN removed
The following query will rebuild the items for each document in sales. It will only keep items whose code is not null and not the empty string:
FOR doc IN sales
LET newItems = (
FOR item IN doc.items
FILTER item.code != null && item.code != ''
RETURN item
)
UPDATE doc WITH { items: newItems } IN sales
Here is the test data I used:
db.sales.insert({
items: [
{ code: null, what: "delete-me" },
{ code: "", what: "delete-me-too" },
{ code: "123", what: "better-keep-me" },
{ code: true, what: "keep-me-too" }
]
});
db.sales.insert({
items: [
{ code: "", what: "i-will-vanish" },
{ code: null, what: "i-will-go-away" },
{ code: "abc", what: "not me!" }
]
});
db.sales.insert({
items: [
{ code: "444", what: "i-will-remain" },
{ code: null, what: "i-will-not" }
]
});
There's a better way to do this, without sub-queries. Instead, a function for removing an array element will be used:
FOR doc IN sales
FOR item IN doc.items
FILTER item.code == ""
UPDATE doc WITH { items: REMOVE_VALUE( doc.items, item ) } IN sales
REMOVE_VALUE takes an array as its first argument and an item of that array as its second, and returns an array containing all the items of the first argument except that specific item.
Example:
REMOVE_VALUE([1, 2, 3], 3) = [1, 2]
Example with subdocuments being the values:
REMOVE_VALUE( [ {name: "cake"}, {name: "pie", taste: "delicious"}, {name: "cheese"} ], {name: "cheese"} ) = [ {name: "cake"}, {name: "pie", taste: "delicious"} ]
You cannot use REMOVE_VALUE on its own the way you use the REMOVE command: it must be part of an UPDATE command, not a REMOVE command. The way it works is to build a copy of the "items" list inside the specific "doc" you are currently handling, with the unwanted subdocument left out; that new copy then replaces the old items list.
There is one more, most efficient way to remove subdocuments from a list: addressing the specific cell of the list, e.g. items[2]. You would have to use array functions even fancier than the one used here to find out which cell it is (whether [2], [3] or [567]), then replace that cell's contents with null via UPDATE, and finally set the option keepNull to false. That is the "most efficient" way, but it would make for a monstrously complicated query, so unless you have a thousand subdocuments in each list, I would honestly suggest the method described above.
