How can I make ObjectGraphBuilder build my class instances from a string? I mean, if I have
String myString = """invoices{
invoice(date: new Date(106,1,2)){
item(count:5){
product(name:'ULC', dollar:1499)
}
item(count:1){
product(name:'Visual Editor', dollar:499)
}
}
invoice(date: new Date(106,1,2)){
item(count:4) {
product(name:'Visual Editor', dollar:499)
}
}
"""
how can I turn this string (myString) into an instance of the invoice class? (I assume I have to use ObjectGraphBuilder, but how?)
Given an instance of the invoice class (with all of its nested properties), how can I turn that instance back into a string like myString?
I also want to be able to serialize and deserialize from a text file, but I assume that works the same as with the string.
You can use GroovyShell to evaluate the string and delegate the methods called in the script to an ObjectGraphBuilder. Note that I repeated the "invoices" method name in the binding; if that is unacceptable, take a look at Going to Mars with Domain-Specific Languages, by Guillaume Laforge, where he shows how to customize the compiler.
I also created an Invoices root class, because of the way ObjectGraphBuilder works. If this needs to be more dynamic for you, take a look at its resolvers.
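For example, if the node names in the script do not map one-to-one onto class names, the builder's classNameResolver can be customized. A minimal sketch, assuming a com.example package that exists only for illustration:

def builder = new ObjectGraphBuilder()
// Turn a node name such as 'invoice' into a fully qualified class name;
// the package prefix here is a placeholder, not part of the answer's code.
builder.classNameResolver = { String name -> "com.example.${name.capitalize()}" }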
import groovy.transform.ToString as TS
@TS class Invoices { List<Invoice> invoices = [] }
@TS class Invoice { List<Item> items = []; Date date }
@TS class Item { Integer count; Product product }
@TS class Product { String name; Integer dollar; Vendor vendor }
@TS class Vendor { Integer id }
String myString = """
invoices {
invoice(date: new Date(106,1,2)){
item(count:5){
product(name:'ULC', dollar:1499)
}
item(count:1){
product(name:'Visual Editor', dollar:499)
}
}
invoice(date: new Date(106,1,2)){
item(count:4) {
product(name:'Visual Editor', dollar:499)
}
}
}
"""
invoicesParser = { Closure c ->
new ObjectGraphBuilder().invoices c
}
binding = new Binding( [invoices: invoicesParser] )
invoices = new GroovyShell(binding).evaluate myString
assert invoices.invoices.size() == 2
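For the text-file case from the question, the same binding can evaluate a script read from disk. A minimal sketch, with an arbitrarily chosen file name:

// Write the DSL text out, read it back, and evaluate it through the same binding.
def dslFile = new File('invoices.groovy')
dslFile.text = myString
def fromFile = new GroovyShell(binding).evaluate(dslFile.text)
assert fromFile.invoices.size() == 2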
Update: as for your second question, I'm not aware of, and couldn't find, any way to go back to the ObjectGraphBuilder representation. You can roll your own, but I think you will be better off with something like JSON. Does your use case permit that?
use( groovy.json.JsonOutput ) {
assert invoices.toJson().prettyPrint() == """{
"invoices": [
{
"date": "2006-02-02T02:00:00+0000",
"items": [
{
"product": {
"vendor": null,
"dollar": 1499,
"name": "ULC"
},
"count": 5
},
{
"product": {
"vendor": null,
"dollar": 499,
"name": "Visual Editor"
},
"count": 1
}
]
},
{
"date": "2006-02-02T02:00:00+0000",
"items": [
{
"product": {
"vendor": null,
"dollar": 499,
"name": "Visual Editor"
},
"count": 4
}
]
}
]
}"""
}
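Reading the JSON back in is straightforward: JsonSlurper parses the text into maps and lists, which the map constructors of the classes above can consume. A rough sketch of the deserialization side (the date comes back as a string and would need explicit conversion, omitted here):

// Round-trip: serialize with JsonOutput, parse with JsonSlurper, rebuild the objects.
def parsed = new groovy.json.JsonSlurper().parseText(groovy.json.JsonOutput.toJson(invoices))
def rebuilt = parsed.invoices.collect { inv ->
    new Invoice(items: inv.items.collect { item ->
        new Item(count: item.count, product: new Product(item.product))
    })
}
assert rebuilt.size() == 2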
Related
I'm using lowdb https://github.com/typicode/lowdb.
I have a small database that looks like this
{
"orders": [
{
"id": "0",
"kit": "not a real order"
},
{
"id": "1",
"kit": "kit_1"
}
],
"total orders": 21,
"216862330724548608": 1
}
Is it possible to change "kit": "x" to "kit": "y"?
x and y are user input, so I can't just use replace because I don't know what the values will be.
I did try to use some kind of replace, but it didn't work.
let updateOrders = (items, id, newValue) => {
    const { orders } = items;
    orders.forEach((item) => {
        // update only the matching order; drop the check to update every order
        if (item.id === id) {
            item.kit = newValue;
        }
    });
    console.log(orders);
};

updateOrders(data, '1', 'updated'); // data is the object that holds the orders array
Hopefully this helps.
I'm trying to create an XML based on the data from a .json. So, my .json file looks something like:
{
"fruit1":
{
"name": "apple",
"quantity": "three",
"taste": "good",
"color": { "walmart": "{{red}}","tj": "{{green}}" }
},
"fruit2":
{
"name": "banana",
"quantity": "five",
"taste": "okay",
"color": { "walmart": "{{gmo}}","tj": "{{organic}}" }
}
}
I can create the XML just fine from the above JSON with the code below:
import groovy.xml.*
import groovy.json.JsonSlurper
def GenerateXML() {
def jsonSlurper = new JsonSlurper();
def fileReader = new BufferedReader(
new FileReader("/home/workspace/sample.json"))
def parsedData = jsonSlurper.parse(fileReader)
def writer = new FileWriter("sample.XML")
def builder = new StreamingMarkupBuilder()
builder.encoding = 'UTF-8'
writer << builder.bind {
mkp.xmlDeclaration()
"friuts"(version:'$number', application: "FunApp"){
delegate.deployables {
parsedData.each { index, obj ->
"fruit"(name:obj.name, quantity:obj.quantity) {
delegate.taste(obj.taste)
delegate.color {
obj.color.each { name, value ->
it.entry(key:name, value)
}
}
}
}
}
}
}
}
I want to extend this code so that it looks for particular keys; if they are present, the loop is performed for those maps as well, extending the resulting file.
So, if I have JSON like this:
{"fruit1":
{
"name": "apple",
"quantity": "three",
"taste": "good",
"color": { "walmart": "{{red}}","tj": "{{green}}" }
},
"fruit2":
{
"name": "banana",
"quantity": "five",
"taste": "okay",
"color": { "walmart": "{{gmo}}","tj": "{{organic}}" }
},
"chip1":
{
"name": "lays",
"quantity": "one",
"type": "baked"
},
"chip2":
{
"name": "somename",
"quantity": "one",
"type": "fried"
}
}
I want to add an IF so that it checks whether any key(s) like 'chip*' are there. If yes, perform another iteration; if not, just skip that section of logic without throwing any error. Like this:
import groovy.xml.*
import groovy.json.JsonSlurper
def GenerateXML() {
def jsonSlurper = new JsonSlurper();
def fileReader = new BufferedReader(
new FileReader("/home/okram/workspace/objectsRepo/sample.json"))
def parsedData = jsonSlurper.parse(fileReader)
def writer = new FileWriter("sample.XML")
def builder = new StreamingMarkupBuilder()
builder.encoding = 'UTF-8'
writer << builder.bind {
mkp.xmlDeclaration()
"fruits"(version:'$number', application: "FunApp"){
deployables {
parsedData.each { index, obj ->
"fruit"(name:obj.name, quantity:obj.quantity) {
taste(obj.taste)
color {
obj.color.each { name, value ->
it.entry(key:name, value)
}
}
}
}
}
}
if (parsedData.containsKey('chip*')){
//perform the iteration of the chip* maps
//to access the corresponding values
//below code fails, but that is the intent
parsedData.<onlyTheOnesPassing>.each { index1, obj1 ->
"Chips"(name:obj1.name, quantity:obj1.quantity) {
type(obj1.type)
}
}
}
}
}
I found the same difficulty, but in JavaScript; if the logic helps you, here is what I did:
There are two ways:
You can use the Lodash library's "get" (see: Lodash get) or "has" (see: Lodash has).
With them you can pass the object and the path and check whether a property exists without getting any error.
Examples:
_.has(object, 'chip1.name');
// => false
_.has(object, 'fruit1');
// => true
Or you can use the code of those methods directly:
// Recursively checks the nested properties of an object and returns the
// object property in case it exists.
static get(obj, key) {
return key.split(".").reduce(function (o, x) {
return (typeof o == "undefined" || o === null) ? o : o[x];
}, obj);
}
// Recursively checks the nested properties of an object and returns
//true in case it exists.
static has(obj, key) {
return key.split(".").every(function (x) {
if (typeof obj != "object" || obj === null || !(x in obj))
return false;
obj = obj[x];
return true;
});
}
I hope it helps! :)
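For completeness, the same idea translates directly to the Groovy of the original script. A small sketch, reusing the parsedData variable from the question, that keeps only the keys starting with "chip" and simply emits nothing when there are none:

def chips = parsedData.findAll { key, value -> key.startsWith('chip') }
chips.each { key, obj ->
    // inside the StreamingMarkupBuilder closure this would become:
    // "Chips"(name: obj.name, quantity: obj.quantity) { type(obj.type) }
    println "${key}: ${obj.name}, ${obj.quantity}, ${obj.type}"
}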
I am trying to convert an object to JSON. The object has a trait which is supposed to do the conversion, but I get a weird JSON result.
import groovy.json.*
trait JsonPackageTrait {
def toJson() {
JsonOutput.prettyPrint(
JsonOutput.toJson(this)
)
}
}
class Item {
def id, from, to, weight
}
def item = new Item()
item.with {
id = 1234512354
from = 'London'
to = 'Liverpool'
weight = 15d.lbs()
}
item = item.withTraits JsonPackageTrait
println item.toJson()
JSON result
{
"from": "London",
"id": 1234512354,
"to": "Liverpool",
"proxyTarget": {
"from": "London",
"id": 1234512354,
"to": "Liverpool",
"weight": 33.069
},
"weight": 33.069
}
So it seems I cannot do it like this?
Well, whatever. Since using withTraits creates a proxy of the original object, I resolved it like this for my current implementation:
trait JsonPackageTrait {
def toJson() {
JsonOutput.prettyPrint(
JsonOutput.toJson(this.$delegate)
)
}
}
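Used the same way as before, the delegate-based trait serializes only the original object's fields. A short usage sketch (using a plain number instead of the custom lbs() extension):

def box = new Item(id: 1234512354, from: 'London', to: 'Liverpool', weight: 15)
box = box.withTraits JsonPackageTrait
// $delegate points at the original Item, so the nested "proxyTarget" entry disappears.
println box.toJson()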
I am new to Elasticsearch.
The data in Elasticsearch follows the parent-child model, and I want to search it using the Java API.
The parent type contains author details and the child type contains book details such as book name, publisher, and category.
While searching on child details I need to get the parent details as well, and vice versa. Sometimes the search conditions will involve both the parent type and the child, e.g. search for books written by author1 of type Fiction.
How can I implement this in Java? I have referred to the Elasticsearch documentation but have not been able to find a solution.
Please help.
First set up your index with the parent/child mapping. In the mapping below I have also added an untokenized field for categories so you can execute filter queries on that field. (For creating the index and documents I'm using the JSON REST API, not the Java API, as that was not part of the question.)
POST /test
{
"mappings": {
"book": {
"_parent": {
"type": "author"
},
"properties":{
"category":{
"type":"string",
"fields":{
"raw":{
"type":"string",
"index": "not_analyzed"
}
}
}
}
}
}
}
Create some author documents:
POST /test/author/1
{
"name": "jon doe"
}
POST /test/author/2
{
"name": "jane smith"
}
Create some book documents, specifying the relationship between book and author in the request.
POST /test/book/12?parent=1
{
"name": "fictional book",
"category": "Fiction",
"publisher": "publisher1"
}
POST /test/book/16?parent=2
{
"name": "book of history",
"category": "historical",
"publisher": "publisher2"
}
POST /test/book/20?parent=2
{
"name": "second fictional book",
"category": "Fiction",
"publisher": "publisher2"
}
The Java class below executes three queries:
1. Search on all books that have the term 'book' in the title and return the authors.
2. Search on all authors that have the terms 'jon doe' in the name and return the books.
3. Search for books written by 'jane smith' that are of type Fiction.
You can run the class from the command line, or import it into Eclipse, right-click the class, and select 'Run As > Java Application'. (You'll need the Elasticsearch library on the classpath.)
import java.util.concurrent.ExecutionException;
import org.elasticsearch.action.search.SearchRequestBuilder;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.index.query.FilterBuilders;
import org.elasticsearch.index.query.HasChildQueryBuilder;
import org.elasticsearch.index.query.HasParentQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.TermFilterBuilder;
public class ParentChildQueryExample {
public static void main(String args[]) throws InterruptedException, ExecutionException {
//Set the Transport client which is used to communicate with your ES cluster. It is also possible to set this up using the Client Node.
Settings settings = ImmutableSettings.settingsBuilder()
.put("cluster.name", "elasticsearch").build();
Client client = new TransportClient(settings)
.addTransportAddress(new InetSocketTransportAddress(
"localhost",
9300));
//create the searchRequestBuilder object.
SearchRequestBuilder searchRequestBuilder = new SearchRequestBuilder(client).setIndices("test");
//Query 1. Search on all books that have the term 'book' in the title and return the 'authors'.
HasChildQueryBuilder bookNameQuery = QueryBuilders.hasChildQuery("book", QueryBuilders.matchQuery("name", "book"));
System.out.println("Exectuing Query 1");
SearchResponse searchResponse1 = searchRequestBuilder.setQuery(bookNameQuery).execute().actionGet();
System.out.println("There were " + searchResponse1.getHits().getTotalHits() + " results found for Query 1.");
System.out.println(searchResponse1.toString());
System.out.println();
//Query 2. Search on all authors that have the terms 'jon doe' in the name and return the 'books'.
HasParentQueryBuilder authorNameQuery = QueryBuilders.hasParentQuery("author", QueryBuilders.matchQuery("name", "jon doe"));
System.out.println("Exectuing Query 2");
SearchResponse searchResponse2 = searchRequestBuilder.setQuery(authorNameQuery).execute().actionGet();
System.out.println("There were " + searchResponse2.getHits().getTotalHits() + " results found for Query 2.");
System.out.println(searchResponse2.toString());
System.out.println();
//Query 3. Search for books written by 'jane smith' and type Fiction.
TermFilterBuilder termFilter = FilterBuilders.termFilter("category.raw", "Fiction");
HasParentQueryBuilder authorNameQuery2 = QueryBuilders.hasParentQuery("author", QueryBuilders.matchQuery("name", "jane smith"));
SearchResponse searchResponse3 = searchRequestBuilder.setQuery(QueryBuilders.filteredQuery(authorNameQuery2, termFilter)).execute().actionGet();
System.out.println("There were " + searchResponse3.getHits().getTotalHits() + " results found for Query 3.");
System.out.println(searchResponse3.toString());
System.out.println();
}
}
You can use Parent-Child documents for this.
Let's create an index bookstore with simple mappings for author documents and book documents. You can add more fields as per your requirements. See this for more information about indexing parent/child documents.
PUT bookstore
{
"mappings": {
"author": {
"properties": {
"authorname": {
"type": "string"
}
}
},
"book": {
"_parent": {
"type": "author"
},
"properties": {
"bookname": {
"type": "string"
}
}
}
}
}
Now let's add two authors:
PUT bookstore/author/1
{
"authorname": "author1"
}
PUT bookstore/author/2
{
"authorname": "author2"
}
Now let's add two books of author author1:
PUT bookstore/book/11?parent=1
{
"bookname": "book11"
}
PUT bookstore/book/12?parent=1
{
"bookname": "book12"
}
Now let's add two books of author author2:
PUT bookstore/book/21?parent=2
{
"bookname": "book21"
}
PUT bookstore/book/22?parent=2
{
"bookname": "book22"
}
We're done indexing documents. Now let's start searching.
Search all books authored by author author1 (Read more about this here)
POST bookstore/book/_search
{
"query": {
"has_parent": {
"type": "author",
"query": {
"term": {
"authorname": "author1"
}
}
}
}
}
Search the author of book book11 (Read more about this here)
POST bookstore/author/_search
{
"query": {
"has_child": {
"type": "book",
"query": {
"term": {
"bookname": "book11"
}
}
}
}
}
Search for books named book12 and authored by author1. You need to use a bool query to achieve this. (A better example for this scenario would have more fields in the documents.)
POST bookstore/book/_search
{
"query": {
"bool": {
"must": [
{
"has_parent": {
"type": "author",
"query": {
"term": {
"authorname": "author1"
}
}
}
},
{
"term": {
"bookname": {
"value": "book12"
}
}
}
]
}
}
}
I've done something similar with the "spring-data-elasticsearch" library. There are heaps of samples available in their test suite.
Follow this link on GitHub: https://github.com/spring-projects/spring-data-elasticsearch/blob/master/src/test/java/org/springframework/data/elasticsearch/NestedObjectTests.java
List<Car> cars = new ArrayList<Car>();
Car saturn = new Car();
saturn.setName("Saturn");
saturn.setModel("SL");
Car subaru = new Car();
subaru.setName("Subaru");
subaru.setModel("Imprezza");
Car ford = new Car();
ford.setName("Ford");
ford.setModel("Focus");
cars.add(saturn);
cars.add(subaru);
cars.add(ford);
Person foo = new Person();
foo.setName("Foo");
foo.setId("1");
foo.setCar(cars);
Car car = new Car();
car.setName("Saturn");
car.setModel("Imprezza");
Person bar = new Person();
bar.setId("2");
bar.setName("Bar");
bar.setCar(Arrays.asList(car));
List<IndexQuery> indexQueries = new ArrayList<IndexQuery>();
IndexQuery indexQuery1 = new IndexQuery();
indexQuery1.setId(foo.getId());
indexQuery1.setObject(foo);
IndexQuery indexQuery2 = new IndexQuery();
indexQuery2.setId(bar.getId());
indexQuery2.setObject(bar);
indexQueries.add(indexQuery1);
indexQueries.add(indexQuery2);
elasticsearchTemplate.putMapping(Person.class);
elasticsearchTemplate.bulkIndex(indexQueries);
elasticsearchTemplate.refresh(Person.class, true);
SearchQuery searchQuery = new NativeSearchQueryBuilder().build();
List<Person> persons = elasticsearchTemplate.queryForList(searchQuery, Person.class);
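For context, the snippet assumes Person and Car entities roughly along these lines (a hedged sketch in Groovy syntax; the index name and annotation details are assumptions, so check the linked test class for the real definitions):

import org.springframework.data.annotation.Id
import org.springframework.data.elasticsearch.annotations.Document
import org.springframework.data.elasticsearch.annotations.Field
import org.springframework.data.elasticsearch.annotations.FieldType

@Document(indexName = 'person-index')   // placeholder index name
class Person {
    @Id String id
    String name
    @Field(type = FieldType.Nested)     // cars are indexed as nested objects
    List<Car> car
}

class Car {
    String name
    String model
}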
I'm transforming a JSON feed into a format that can be consumed by our Endeca instance, and settled on writing the transformation in Groovy because of tools like JsonSlurper and MarkupBuilder. Our JSON feed input looks like this (saved as stage/newObject.json):
[
{
"Name": "Object Name",
"DimID": 0000,
"ObjectID": "Object-0000",
"TypeID": 1,
"Type": "Object Type",
"Description": "A Description",
"DirectorID": "007",
"DirectorName": "Bond, James",
"ParentObjectID": null,
"ObjectLevelID": 1,
"ObjectLevel": "MI-6",
"ProjectNumbers": [
"0001",
"0002"
],
"ManagerIDs": [
"001"
],
"ManagerNames": [
"M"
],
"SubObjectIDs": [
"006",
"005"
],
"TaskNumbers": [
"001"
],
"ProjectNames": [
"Casino Royal",
"Goldfinger",
"License to Kill"
]
}
]
And the code I've got to do the transform is this:
import groovy.json.JsonSlurper
import groovy.xml.MarkupBuilder

def rawJSONFile = new File("stage/newObject.json")
JsonSlurper slurper = new JsonSlurper()
def slurpedJSON = slurper.parseText(rawJSONFile.text)
def xmlOutput = new MarkupBuilder(new FileWriter(new File("stage/ProcessedOutput.xml")))
xmlOutput.RECORDS() {
for (object in slurpedJSON) {
RECORD() {
for (field in object) {
if (field.value != null) {
if (field.value.value.class.equals(java.util.ArrayList)) {
if (field.value.value.size > 0) {
for (subField in field.value.value) {
if (subField != null) {
PROP(NAME: field.key) {
PVAL(subField)
}
}
}
}
} else {
PROP(NAME: field.key) {
PVAL(field.value)
}
}
}
}
}
}
}
The issue we're experiencing is about halfway down the Groovy script, where it deals with the sub-fields (that is, the arrays within the JSON). The closure that creates the "PVAL" node receives the subField variable by reference, and it is treated not as a String but as a character array, so the output ends up containing a memory location rather than the string value. The workaround we have so far is below, but I wanted to know if there is a more elegant solution:
for (subField in field.value.value) {
if (subField != null) {
PROP(NAME: field.key) {
StringBuilder subFieldValue = new StringBuilder();
for(int i =0; i<subField.length; i++){
subFieldValue.append(subField[i])
}
PVAL(subFieldValue.toString())
}
}
}
Change subField in field.value.value to subField in field.value in
for (subField in field.value) {
if (subField != null) {
PROP(NAME: field.key) {
PVAL(subField)
}
}
}
Although this logic can be simplified as
xmlOutput.RECORDS {
slurpedJSON.each { map ->
RECORD {
map.each { k, v ->
if ( v in ArrayList ) {
v.each { val ->
PROP(NAME: k) {
PVAL(val)
}
}
} else {
PROP(NAME: k) {
PVAL(v)
}
}
}
}
}
}