object to Tuple with list and dictionary c# - c#-4.0

{
  "data": [
    {
      "id": 10,
      "title": "Administration",
      "active": true,
      "type": {
        "id": 2,
        "name": "Manager"
      }
    },
    {
      "id": 207,
      "title": "MCO - Exact Match 1",
      "active": true,
      "type": {
        "id": 128,
        "name": "Group"
      }
    },
    {
      "id": 1201,
      "title": "Regression",
      "active": false,
      "type": {
        "id": 2,
        "name": "Manager"
      }
    }
  ]
}
I am trying to create a tuple in the format below using LINQ, but I'm not sure how to start with the group/aggregate part. Any help is appreciated. I went over a few threads and could not find anything similar to this.
var tuple = new List<Tuple<int, List<Dictionary<int, bool>>>>();
The desired result, grouped by type id:
2      10, true
       1201, false
128    207, true

Here is the full working code:
var o = new {
    data = new [] {
        new {
            id = 10,
            title = "Administration",
            active = true,
            type = new {
                id = 2,
                name = "Manager"
            }
        },
        new {
            id = 207,
            title = "MCO - Exact Match 1",
            active = true,
            type = new {
                id = 128,
                name = "Group"
            }
        },
        new {
            id = 1201,
            title = "Regression",
            active = false,
            type = new {
                id = 2,
                name = "Manager"
            }
        }
    }
};

var result = o.data.GroupBy(
    item => item.type.id,                                                          // the group key
    item => new Dictionary<int, bool>() { { item.id, item.active } },              // the transformed elements in the group
    (id, items) => new Tuple<int, List<Dictionary<int, bool>>>(id, items.ToList()) // transformation of each grouping result into the final desired format
).ToList();

// check correctness
foreach (var entry in result) {
    Console.Write(entry.Item1);
    foreach (var dict in entry.Item2) {
        foreach (var kvp in dict)
            Console.WriteLine("\t\t" + kvp.Key + "\t" + kvp.Value);
    }
}
And this is how it works:
o is the data model, represented using anonymous types. You can obviously use a strongly typed model here if you already have one (a sketch of that follows below);
on o.data we apply the overload of GroupBy that takes a key selector, an element selector, and a result selector, described in detail in the official Microsoft docs. Basically:
the first lambda expression selects the group key;
the second lambda defines the elements that are part of each group;
the third lambda transforms each (group key, enumeration of group elements) pair into the Tuple<int, List<Dictionary<int, bool>>> format;
at the end we call ToList() to compute the result and store it as a list of tuples.
The last part prints the result (I did not spend much time prettifying it, but it does its job of validating the code).
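If you already have a strongly typed model, the same query carries over unchanged. Here is a minimal sketch, assuming hypothetical Item and ItemType classes that mirror the JSON above and a data variable holding the deserialized items:
using System;
using System.Collections.Generic;
using System.Linq;

public class ItemType
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Item
{
    public int Id { get; set; }
    public string Title { get; set; }
    public bool Active { get; set; }
    public ItemType Type { get; set; }
}

// data is assumed to be an IEnumerable<Item>, e.g. deserialized from the JSON above
var result = data.GroupBy(
    item => item.Type.Id,
    item => new Dictionary<int, bool> { { item.Id, item.Active } },
    (id, items) => new Tuple<int, List<Dictionary<int, bool>>>(id, items.ToList())
).ToList();
The grouping logic is identical; only the member access changes from the anonymous type fields to the class properties.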

Related

Is it possible to use lowdb to update an existing JSON value?

I'm using lowdb https://github.com/typicode/lowdb.
I have a small database that looks like this
{
  "orders": [
    {
      "id": "0",
      "kit": "not a real order"
    },
    {
      "id": "1",
      "kit": "kit_1"
    }
  ],
  "total orders": 21,
  "216862330724548608": 1
}
Is it possible to change "kit": "x" to "kit": "y"?
x and y are user input, so I can't just use replace because I don't know what the value will be.
I did try to use some kind of replace, but it didn't work.
let updateOrders = (items, id, newValue) => {
  // items is the parsed database object shown above, e.g. { orders: [...], ... }
  const { orders } = items;
  orders.forEach((item) => {
    item.kit = newValue;
    // if you need the id check, uncomment the code below and pass the id when calling
    // if (item.id === id) {
    //   item.kit = newValue;
    // }
  });
  console.log(orders);
};
updateOrders(parsedDb, '1', 'updated'); // parsedDb: the object read from db.json
Hopefully it helps.

Conditional iteration on a parsed JSON object using each in Groovy

I'm trying to create an XML file based on the data from a .json file. My .json file looks something like this:
{
  "fruit1": {
    "name": "apple",
    "quantity": "three",
    "taste": "good",
    "color": { "walmart": "{{red}}", "tj": "{{green}}" }
  },
  "fruit2": {
    "name": "banana",
    "quantity": "five",
    "taste": "okay",
    "color": { "walmart": "{{gmo}}", "tj": "{{organic}}" }
  }
}
I can create the XML just fine from the above JSON with the code below:
import groovy.xml.*
import groovy.json.JsonSlurper

def GenerateXML() {
    def jsonSlurper = new JsonSlurper()
    def fileReader = new BufferedReader(
        new FileReader("/home/workspace/sample.json"))
    def parsedData = jsonSlurper.parse(fileReader)
    def writer = new FileWriter("sample.XML")
    def builder = new StreamingMarkupBuilder()
    builder.encoding = 'UTF-8'
    writer << builder.bind {
        mkp.xmlDeclaration()
        "fruits"(version: '$number', application: "FunApp") {
            delegate.deployables {
                parsedData.each { index, obj ->
                    "fruit"(name: obj.name, quantity: obj.quantity) {
                        delegate.taste(obj.taste)
                        delegate.color {
                            obj.color.each { name, value ->
                                it.entry(key: name, value)
                            }
                        }
                    }
                }
            }
        }
    }
}
I want to extend this code so that it looks for particular keys, and if they are present, the loop is performed for those maps as well, extending the resulting file.
So, if I have JSON like this:
{"fruit1":
{
"name": "apple",
"quantity": "three",
"taste": "good",
"color": { "walmart": "{{red}}","tj": "{{green}}" }
},
"fruit2":
{
"name": "banana",
"quantity": "five",
"taste": "okay",
"color": { "walmart": "{{gmo}}","tj": "{{organic}}" }
},
"chip1":
{
"name": "lays",
"quantity": "one",
"type": "baked"
},
"chip2":
{
"name": "somename",
"quantity": "one",
"type": "fried"
}
}
I want to add an IF so that it checks whether any key(s) like 'chip*' are there. If yes, perform another iteration; if not, just skip that section of logic without throwing any error. Like this:
import groovy.xml.*
import groovy.json.JsonSlurper

def GenerateXML() {
    def jsonSlurper = new JsonSlurper()
    def fileReader = new BufferedReader(
        new FileReader("/home/okram/workspace/objectsRepo/sample.json"))
    def parsedData = jsonSlurper.parse(fileReader)
    def writer = new FileWriter("sample.XML")
    def builder = new StreamingMarkupBuilder()
    builder.encoding = 'UTF-8'
    writer << builder.bind {
        mkp.xmlDeclaration()
        "fruits"(version: '$number', application: "FunApp") {
            deployables {
                parsedData.each { index, obj ->
                    "fruit"(name: obj.name, quantity: obj.quantity) {
                        taste(obj.taste)
                        color {
                            obj.color.each { name, value ->
                                it.entry(key: name, value)
                            }
                        }
                    }
                }
            }
        }
        if (parsedData.containsKey('chip*')) {
            // perform the iteration of the chip* maps
            // to access the corresponding values
            // below code fails, but that is the intent
            parsedData.<onlyTheOnesPassing>.each { index1, obj1 ->
                "Chips"(name: obj1.name, quantity: obj1.quantity) {
                    type(obj1.type)
                }
            }
        }
    }
}
I ran into the same difficulty, but in JavaScript; if the logic helps you, here is what I did.
There are two ways:
You can use the Lodash library, either "get" (Lodash get) or "has" (Lodash has).
With them you can pass the object and the path and check whether it exists without getting any error.
Examples:
_.has(object, 'chip1.name');
// => false
_.has(object, 'fruit1');
// => true
Or you can use the code of the methods directly:
// (intended as static methods of a helper class)

// Walks the nested properties of an object and returns the
// object property in case it exists.
static get(obj, key) {
  return key.split(".").reduce(function (o, x) {
    return (typeof o == "undefined" || o === null) ? o : o[x];
  }, obj);
}

// Walks the nested properties of an object and returns
// true in case the full path exists.
static has(obj, key) {
  return key.split(".").every(function (x) {
    if (typeof obj != "object" || obj === null || !(x in obj))
      return false;
    obj = obj[x];
    return true;
  });
}
I hope it helps! :)

Return object with dynamic keys in AQL

Can I return something like:
{
  "c/12313" = 1,
  "c/24223" = 2,
  "c/43423" = 3,
  ...
}
from an AQL query? The idea is something like (this non-working code):
for c in my_collection
  return { c._id : c.sortOrder }
where sortOrder is some property on my documents.
Yes, it is possible to have dynamic attribute names:
LET key = "foo"
LET value = "bar"
RETURN { [ key ]: value } // { "foo": "bar" }
An expression to compute the attribute key has to be wrapped in [ square brackets ], like in JavaScript.
This doesn't return quite the desired result however:
FOR c IN my_collection
  RETURN { [ c._id ]: c.sortOrder }

[
  { "c/12313": 1 },
  { "c/24223": 2 },
  { "c/43423": 3 },
  ...
]
To not return separate objects for every key, MERGE() and a subquery are required:
RETURN MERGE(
  FOR c IN my_collection
    RETURN { [ c._id ]: c.sortOrder }
)

[
  {
    "c/12313": 1,
    "c/24223": 2,
    "c/43423": 3,
    ...
  }
]

How do I create a query to count occurrences of tag words for each user in a dataset

I am pretty new to CouchDB and am having issues coming up with a query.
This is an example of the data set I am working with:
{
  "_id": "data",
  "_rev": "3-b78ec99614827106f637148c73dbf876",
  "data": [
    {
      "id": 0,
      "tags": [
        "cupidatat",
        "mollit",
        "labore",
        "minim",
        "pariatur",
        "qui",
        "ipsum"
      ]
    },
    {
      "id": 1,
      "tags": [
        "ex",
        "cillum",
        "est",
        "et",
        "mollit",
        "mollit",
        "exercitation"
      ]
    }
  ]
}
This is my map function
function(doc) {
  for (var i in doc.data) {
    var person = doc.data[i];
    for (var tag in person.tags) {
      emit(person.tags, 1);
    }
  }
}
This is the reduce function
function(keys, values) {
  return sum(values);
}
I am trying to produce results that give me the number of occurrences of each tag across all the records, like:
key          value
"cupidatat"  1
"mollit"     3
How do I fix it so I can get the right results?
Looks like you are very close. Using your sample doc I got the results you are looking for using this map function:
function(doc) {
  for (var i = 0; i < doc.data.length; i++) {
    for (var j = 0; j < doc.data[i].tags.length; j++) {
      emit(doc.data[i].tags[j], 1);
    }
  }
}
and used the built-in reduce:
_sum
The following request returns JSON in the format you specify:
curl -X GET "http://host:5984/db/_design/words/_view/count?reduce=true&group_level=1"

What is an effective way to search world-wide location names with ElasticSearch?

I have location information provided by GeoNames.org parsed into a relational database. Using this information, I am attempting to build an ElasticSearch index that contains populated place (city) names, administrative division (state, province, etc.) names, country names and country codes. My goal is to provide a location search that is similar to Google Maps':
I don't need the cool bold highlighting, but I do need the search to return similar results in a similar way. I've tried creating a mapping with a single location field consisting of the entire location name (e.g., "Round Rock, TX, United States") and I've also tried having five separate fields consisting of each piece of a location. I've tried keyword and prefix queries and edgengram analyzers; I have been unsuccessful in finding the correct configuration to get this working properly.
What kinds of analyzers--both index and search--should I be looking at to accomplish my goals? This search doesn't have to be as perfected as Google's but I'd like it to be at least similar.
I do want to support partial-name matches, which is why I've been fiddling with edgengram. For example, a search of "round r" should match Round Rock, TX, United States. Also, I would prefer that results whose populated place (city) names begin with the exact search term be ranked higher than other results. For example, a search of "round ro" should match Round Rock, TX, United States before Round, Some Province, RO (Romania). I hope I've made this clear enough.
Here is my current index configuration (this is an anonymous type in C# that is later serialized to JSON and passed to the ElasticSearch API):
settings = new
{
    index = new
    {
        number_of_shards = 1,
        number_of_replicas = 0,
        refresh_interval = -1,
        analysis = new
        {
            analyzer = new
            {
                edgengram_index_analyzer = new
                {
                    type = "custom",
                    tokenizer = "index_tokenizer",
                    filter = new[] { "lowercase", "asciifolding" },
                    char_filter = new[] { "no_commas_char_filter" },
                    stopwords = new object[0]
                },
                search_analyzer = new
                {
                    type = "custom",
                    tokenizer = "standard",
                    filter = new[] { "lowercase", "asciifolding" },
                    char_filter = new[] { "no_commas_char_filter" },
                    stopwords = new object[0]
                }
            },
            tokenizer = new
            {
                index_tokenizer = new
                {
                    type = "edgeNGram",
                    min_gram = 1,
                    max_gram = 100
                }
            },
            char_filter = new
            {
                no_commas_char_filter = new
                {
                    type = "mapping",
                    mappings = new[] { ",=>" }
                }
            }
        }
    }
},
mappings = new
{
    location = new
    {
        _all = new { enabled = false },
        properties = new
        {
            populatedPlace = new { index_analyzer = "edgengram_index_analyzer", type = "string" },
            administrativeDivision = new { index_analyzer = "edgengram_index_analyzer", type = "string" },
            administrativeDivisionAbbreviation = new { index_analyzer = "edgengram_index_analyzer", type = "string" },
            country = new { index_analyzer = "edgengram_index_analyzer", type = "string" },
            countryCode = new { index_analyzer = "edgengram_index_analyzer", type = "string" },
            population = new { type = "long" }
        }
    }
}
This might be what you are looking for:
"analysis": {
"tokenizer": {
"name_tokenizer": {
"type": "edgeNGram",
"max_gram": 100,
"min_gram": 2,
"side": "front"
}
},
"analyzer": {
"name_analyzer": {
"tokenizer": "whitespace",
"type": "custom",
"filter": ["lowercase", "multi_words", "name_filter"]
},
},
"filter": {
"multi_words": {
"type": "shingle",
"min_shingle_size": 2,
"max_shingle_size": 10
},
"name_filter": {
"type": "edgeNGram",
"max_gram": 100,
"min_gram": 2,
"side": "front"
},
}
}
I think using name_analyzer will replicate the Google-style search you are talking about. You can tweak the configuration a bit to suit your needs.
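Since your index configuration is built as a C# anonymous type, here is a rough sketch of how the suggested analysis block might be expressed in that same style. It simply mirrors the JSON above (the name_tokenizer, name_analyzer, multi_words, and name_filter names come from the snippet, not from a tested setup), so treat it as an illustration rather than a verified configuration:
analysis = new
{
    tokenizer = new
    {
        name_tokenizer = new
        {
            type = "edgeNGram",
            max_gram = 100,
            min_gram = 2,
            side = "front"
        }
    },
    analyzer = new
    {
        name_analyzer = new
        {
            type = "custom",
            tokenizer = "whitespace",
            filter = new[] { "lowercase", "multi_words", "name_filter" }
        }
    },
    filter = new
    {
        multi_words = new
        {
            type = "shingle",
            min_shingle_size = 2,
            max_shingle_size = 10
        },
        name_filter = new
        {
            type = "edgeNGram",
            max_gram = 100,
            min_gram = 2,
            side = "front"
        }
    }
}
You would drop this in place of the existing analysis block in your settings object and point the field mappings at name_analyzer.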