I am new to Elasticsearch.
The data in Elasticsearch follows a parent-child model, and I want to search it using the Java API.
The parent type contains author details, and the child type contains book details such as book name, book publisher and book category.
When performing a search on child details, I need to get the parent details as well, and vice versa. Sometimes the search conditions apply to both the parent and the child type, e.g. search for books written by author1 and of type Fiction.
How can I implement this in Java? I have referred to the Elasticsearch documentation but have not been able to find a solution.
Please help.
First set up your index with the parent/child mapping. In the mapping below I have also added an untokenized field for categories so you can execute filter queries on that field. (For creating the index and documents I'm using the JSON API rather than the Java API, as that was not part of the question.)
POST /test
{
    "mappings": {
        "book": {
            "_parent": {
                "type": "author"
            },
            "properties": {
                "category": {
                    "type": "string",
                    "fields": {
                        "raw": {
                            "type": "string",
                            "index": "not_analyzed"
                        }
                    }
                }
            }
        }
    }
}
Create some author documents:
POST /test/author/1
{
"name": "jon doe"
}
POST /test/author/2
{
"name": "jane smith"
}
Create some book documents, specifying the relationship between book and author in the request.
POST /test/book/12?parent=1
{
"name": "fictional book",
"category": "Fiction",
"publisher": "publisher1"
}
POST /test/book/16?parent=2
{
"name": "book of history",
"category": "historical",
"publisher": "publisher2"
}
POST /test/book/20?parent=2
{
"name": "second fictional book",
"category": "Fiction",
"publisher": "publisher2"
}
The Java class below executes 3 queries:

1. Search on all books that have the term 'book' in the title and return the authors.
2. Search on all authors that have the terms 'jon doe' in the name and return the books.
3. Search for books written by 'jane smith' and that are of type Fiction.
You can run the class from the command line, or import into Eclipse and right click on the class and select 'Run As > Java Application'. (You'll need to have the Elasticsearch library in the classpath.)
import java.util.concurrent.ExecutionException;

import org.elasticsearch.action.search.SearchRequestBuilder;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.index.query.FilterBuilders;
import org.elasticsearch.index.query.HasChildQueryBuilder;
import org.elasticsearch.index.query.HasParentQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.TermFilterBuilder;

public class ParentChildQueryExample {

    public static void main(String args[]) throws InterruptedException, ExecutionException {
        // Set up the TransportClient which is used to communicate with your ES cluster.
        // It is also possible to set this up using the client node.
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", "elasticsearch").build();
        Client client = new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));

        // Create the SearchRequestBuilder object.
        SearchRequestBuilder searchRequestBuilder = new SearchRequestBuilder(client).setIndices("test");

        // Query 1. Search on all books that have the term 'book' in the title and return the 'authors'.
        HasChildQueryBuilder bookNameQuery = QueryBuilders.hasChildQuery("book", QueryBuilders.matchQuery("name", "book"));
        System.out.println("Executing Query 1");
        SearchResponse searchResponse1 = searchRequestBuilder.setQuery(bookNameQuery).execute().actionGet();
        System.out.println("There were " + searchResponse1.getHits().getTotalHits() + " results found for Query 1.");
        System.out.println(searchResponse1.toString());
        System.out.println();

        // Query 2. Search on all authors that have the terms 'jon doe' in the name and return the 'books'.
        HasParentQueryBuilder authorNameQuery = QueryBuilders.hasParentQuery("author", QueryBuilders.matchQuery("name", "jon doe"));
        System.out.println("Executing Query 2");
        SearchResponse searchResponse2 = searchRequestBuilder.setQuery(authorNameQuery).execute().actionGet();
        System.out.println("There were " + searchResponse2.getHits().getTotalHits() + " results found for Query 2.");
        System.out.println(searchResponse2.toString());
        System.out.println();

        // Query 3. Search for books written by 'jane smith' and of type Fiction.
        TermFilterBuilder termFilter = FilterBuilders.termFilter("category.raw", "Fiction");
        HasParentQueryBuilder authorNameQuery2 = QueryBuilders.hasParentQuery("author", QueryBuilders.matchQuery("name", "jane smith"));
        SearchResponse searchResponse3 = searchRequestBuilder.setQuery(QueryBuilders.filteredQuery(authorNameQuery2, termFilter)).execute().actionGet();
        System.out.println("There were " + searchResponse3.getHits().getTotalHits() + " results found for Query 3.");
        System.out.println(searchResponse3.toString());
        System.out.println();
    }
}
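The queries above return only one side of the relationship (just the authors, or just the books). If you also need the matching documents from the other side in the same response, and you are on Elasticsearch 1.5 or later, has_child and has_parent accept an inner_hits block. A minimal sketch against the index above, shown here as a plain JSON request rather than the older Java builders:
POST /test/author/_search
{
    "query": {
        "has_child": {
            "type": "book",
            "query": {
                "match": { "name": "book" }
            },
            "inner_hits": {}
        }
    }
}
Each author hit then carries, under inner_hits, the book documents that caused it to match.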
You can use Parent-Child documents for this.
Let's create an index bookstore with simple mappings for author documents and book documents. You can add more fields as per your requirements. See this for more information about indexing parent/child documents.
PUT bookstore
{
    "mappings": {
        "author": {
            "properties": {
                "authorname": {
                    "type": "string"
                }
            }
        },
        "book": {
            "_parent": {
                "type": "author"
            },
            "properties": {
                "bookname": {
                    "type": "string"
                }
            }
        }
    }
}
Now let's add two authors:
PUT bookstore/author/1
{
"authorname": "author1"
}
PUT bookstore/author/2
{
"authorname": "author2"
}
Now let's add two books of author author1:
PUT bookstore/book/11?parent=1
{
"bookname": "book11"
}
PUT bookstore/book/12?parent=1
{
"bookname": "book12"
}
Now let's add two books of author author2:
PUT bookstore/book/21?parent=2
{
"bookname": "book21"
}
PUT bookstore/book/22?parent=2
{
"bookname": "book22"
}
We're done indexing documents. Now let's start searching.
Search all books authored by author author1 (Read more about this here)
POST bookstore/book/_search
{
    "query": {
        "has_parent": {
            "type": "author",
            "query": {
                "term": {
                    "authorname": "author1"
                }
            }
        }
    }
}
Search the author of book book11 (Read more about this here)
POST bookstore/author/_search
{
    "query": {
        "has_child": {
            "type": "book",
            "query": {
                "term": {
                    "bookname": "book11"
                }
            }
        }
    }
}
Search for books named book12 and authored by author1. You need to use a bool query to achieve this. (A richer example for this scenario would use more fields in the documents.)
POST bookstore/book/_search
{
    "query": {
        "bool": {
            "must": [
                {
                    "has_parent": {
                        "type": "author",
                        "query": {
                            "term": {
                                "authorname": "author1"
                            }
                        }
                    }
                },
                {
                    "term": {
                        "bookname": {
                            "value": "book12"
                        }
                    }
                }
            ]
        }
    }
}
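Since the original question asked about the Java API, here is a rough sketch of how that last search could be expressed with the 1.x-era Java client used in the other answer; it assumes a Client named client set up the same way as there:
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

// Books named 'book12' whose parent author is 'author1'.
QueryBuilder query = QueryBuilders.boolQuery()
        .must(QueryBuilders.hasParentQuery("author",
                QueryBuilders.termQuery("authorname", "author1")))
        .must(QueryBuilders.termQuery("bookname", "book12"));

SearchResponse response = client.prepareSearch("bookstore")
        .setTypes("book")
        .setQuery(query)
        .execute()
        .actionGet();
The has_parent clause restricts the candidate books to those whose parent author matches, and the second must clause filters on the book's own field, mirroring the JSON request above.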
I've done something similar with the "spring-data-elasticsearch" library. There are heaps of samples available in their test suite.
Follow this link on GitHub: https://github.com/spring-projects/spring-data-elasticsearch/blob/master/src/test/java/org/springframework/data/elasticsearch/NestedObjectTests.java
List<Car> cars = new ArrayList<Car>();
Car saturn = new Car();
saturn.setName("Saturn");
saturn.setModel("SL");
Car subaru = new Car();
subaru.setName("Subaru");
subaru.setModel("Imprezza");
Car ford = new Car();
ford.setName("Ford");
ford.setModel("Focus");
cars.add(saturn);
cars.add(subaru);
cars.add(ford);
Person foo = new Person();
foo.setName("Foo");
foo.setId("1");
foo.setCar(cars);
Car car = new Car();
car.setName("Saturn");
car.setModel("Imprezza");
Person bar = new Person();
bar.setId("2");
bar.setName("Bar");
bar.setCar(Arrays.asList(car));
List<IndexQuery> indexQueries = new ArrayList<IndexQuery>();
IndexQuery indexQuery1 = new IndexQuery();
indexQuery1.setId(foo.getId());
indexQuery1.setObject(foo);
IndexQuery indexQuery2 = new IndexQuery();
indexQuery2.setId(bar.getId());
indexQuery2.setObject(bar);
indexQueries.add(indexQuery1);
indexQueries.add(indexQuery2);
elasticsearchTemplate.putMapping(Person.class);
elasticsearchTemplate.bulkIndex(indexQueries);
elasticsearchTemplate.refresh(Person.class, true);
SearchQuery searchQuery = new NativeSearchQueryBuilder().build();
List<Person> persons = elasticsearchTemplate.queryForList(searchQuery, Person.class);
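Note that the linked test exercises nested objects rather than true parent/child documents. For context, here is a rough sketch of what annotated entities behind a sample like that might look like; the class, field and index names are illustrative assumptions, not copied from the spring-data-elasticsearch test suite:
import java.util.List;
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;

// Illustrative sketch only; names and index are assumptions.
@Document(indexName = "person-index")
public class Person {
    @Id
    private String id;
    private String name;

    // Mapped as a nested field so each car's name/model pair stays together
    // and cross-object matches (e.g. name=Saturn with model=Imprezza) are avoided.
    @Field(type = FieldType.Nested)
    private List<Car> car;

    // getters and setters omitted
}

class Car {
    private String name;
    private String model;
    // getters and setters omitted
}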
Related
I have indexed some documents in Elasticsearch.
A short view of a document looks like this:
{
    "tenant_id": "abcd1234",
    "provider_id": "3456",
    ...
    "doctor_summary": ["line1", "line2", "line3"]  (or it could be null)
}
I want each non-null doctor_summary list to be counted as 1. My query:
query_template = {
    "bool": {
        "must": [
            {"term": {"tenant_id.keyword": tenant_id}},
            {"term": {"provider_id.keyword": provider_id}},
            {"range": {"visit_date": {"gte": from_date, "lte": to_date}}},
        ],
        "filter": []
    }
}

aggs_query = {
    "doctor_summary_count": {
        "value_count": {"field": "doctor_summary.keyword"}
    }
}

res = CLIENT.search(index=config['elasticsearch']['abcd'],
                    query=query_template,
                    size=10000,
                    aggs=aggs_query)
After calling this aggregation query, the result is (size of the list) × (number of documents with a doctor_summary field).
For example, the result of the above query on the above document should be 1 (as it is not null),
but it gives 3, because the list contains 3 lines.
You can use an exists query inside a filter aggregation.
Query:
{
    "aggs": {
        "count": {
            "filter": {
                "exists": {
                    "field": "doctor_summary.keyword"
                }
            }
        }
    }
}
Response:
"aggregations" : {
"count" : {
"doc_count" : 1
}
}
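If you want the count and the query in a single request, the filter aggregation can sit next to the bool query from the question. A sketch of the full request body (field names taken from the question; the term and range values are placeholders):
{
    "size": 0,
    "query": {
        "bool": {
            "must": [
                { "term": { "tenant_id.keyword": "abcd1234" } },
                { "term": { "provider_id.keyword": "3456" } },
                { "range": { "visit_date": { "gte": "2021-01-01", "lte": "2021-12-31" } } }
            ]
        }
    },
    "aggs": {
        "doctor_summary_count": {
            "filter": {
                "exists": { "field": "doctor_summary.keyword" }
            }
        }
    }
}
The doc_count of doctor_summary_count then counts each matching document at most once, no matter how many lines its doctor_summary list contains.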
I'm fairly new to Azure Functions.
I've created a C# WebHook / Azure Function (I guess that's the right thing) to take my JSON content and convert it into a collection of simple POCO/DTO objects.
public static class GenericWebHookCSharp
{
    [FunctionName("GenericWebHookCsharpOne")]
    public static async Task<HttpResponseMessage /* object */> Run([HttpTrigger(WebHookType = "genericJson")]HttpRequestMessage req, TraceWriter log)
    {
        try
        {
            log.Info(string.Format("C# GenericWebHookCsharpOne about to process a request. ('{0}')", DateTime.Now.ToLongTimeString()));

            //IUnityContainer container = new UnityContainer();
            //container.RegisterType<IJsonToPersonRequestWrapperConverter, JsonToPersonRequestWrapperConverter>();
            //IJsonToPersonRequestWrapperConverter jsonConvtr = container.Resolve<IJsonToPersonRequestWrapperConverter>();
            //ICollection<Person> emps = await jsonConvtr.ConvertHttpRequestMessageToPersonCollection(req);

            /* above commented code is my "real" code where I take the INPUT request-body-as-json and convert it into a ICollection of Person(s) */
            /* below code, I just fake-creating some persons */
            string jsonContent = await req.Content.ReadAsStringAsync();

            ICollection<Person> persons = new List<Person>();
            for (int i = 0; i < 10; i++)
            {
                persons.Add(new Person() { PersonUuid = Guid.NewGuid(), LastName = "LN" + i.ToString(), FirstName = "FN" + i.ToString(), BirthDate = DateTimeOffset.Now.AddYears(-1 * (20 + i)) });
            }

            string serializedJson = Newtonsoft.Json.JsonConvert.SerializeObject(persons);

            log.Info(string.Format("C# GenericWebHookCsharpOne finished a request. ('{0}')", DateTime.Now.ToLongTimeString()));

            return req.CreateResponse(HttpStatusCode.OK, serializedJson);
        }
        catch (Exception ex)
        {
            string errorMsg = ex.Message; // ExceptionHelper.GenerateFullFlatMessage(ex);
            log.Error(errorMsg);
            return req.CreateResponse(HttpStatusCode.BadRequest, errorMsg);
        }
    }
}
In another .cs file
public class Person
{
    public Guid PersonUuid { get; set; }
    public string LastName { get; set; }
    public string FirstName { get; set; }
    public DateTimeOffset? BirthDate { get; set; }
}
If I debug it in Visual Studio, it works fine.
So I added this as a step in my Logic App, as seen below.
Now I want to add a new "for each" step. Here is what I get (image below): all I see is "body" from the initial trigger and the "convert" webhook function that I have above.
How do I get the "persons" collection to show up and be available (so I can do a for-each-person) in the next step of the Logic App?
EDIT/APPEND:
The end game is to push a service-bus-message for "each" of my Person(s).
As requested, here is the "person json".....
[{
"PersonUuid": "7ec8cc4d-831c-4c89-8516-47424ee2658d",
"LastName": "LN0",
"FirstName": "FN0",
"BirthDate": "1997-08-17T09:46:16.9839382-04:00"
},
{
"PersonUuid": "275264bc-5a86-476d-a189-512afa1e3ce4",
"LastName": "LN1",
"FirstName": "FN1",
"BirthDate": "1996-08-17T09:46:16.9844385-04:00"
},
{
"PersonUuid": "e522b827-2d2e-465d-a30a-c4b619d2e8e4",
"LastName": "LN2",
"FirstName": "FN2",
"BirthDate": "1995-08-17T09:46:16.9844385-04:00"
},
{
"PersonUuid": "f16bce36-3491-4519-bc82-580939f61b2e",
"LastName": "LN3",
"FirstName": "FN3",
"BirthDate": "1994-08-17T09:46:16.9844385-04:00"
},
{
"PersonUuid": "42456057-39ef-45aa-bd7c-ad6a8fa74fd1",
"LastName": "LN4",
"FirstName": "FN4",
"BirthDate": "1993-08-17T09:46:16.9844385-04:00"
},
{
"PersonUuid": "14088a6e-3c44-4cb0-927d-19f5eda279c4",
"LastName": "LN5",
"FirstName": "FN5",
"BirthDate": "1992-08-17T09:46:16.9844385-04:00"
},
{
"PersonUuid": "332a5cde-3cd1-467a-9dfc-2b187d6ae32e",
"LastName": "LN6",
"FirstName": "FN6",
"BirthDate": "1991-08-17T09:46:16.9844385-04:00"
},
{
"PersonUuid": "6debe134-19e6-4b16-a91d-05ded511eff6",
"LastName": "LN7",
"FirstName": "FN7",
"BirthDate": "1990-08-17T09:46:16.9844385-04:00"
},
{
"PersonUuid": "e61ef8a1-09d3-4c5b-b948-df8e0858cd29",
"LastName": "LN8",
"FirstName": "FN8",
"BirthDate": "1989-08-17T09:46:16.9844385-04:00"
},
{
"PersonUuid": "e9b27632-d3a4-4fe8-8745-04edfa8854f7",
"LastName": "LN9",
"FirstName": "FN9",
"BirthDate": "1988-08-17T09:46:16.9844385-04:00"
}]
Ok.
I gotta get this written down before I forget. Wow, what a ride.
First the C# code on the WebHook. Note the "anonymousPersonWrapper" code.
public static class GenericWebHookCSharp
{
    [FunctionName("GenericWebHookCsharpOne")]
    public static async Task<HttpResponseMessage /* object */> Run([HttpTrigger(WebHookType = "genericJson")]HttpRequestMessage req, TraceWriter log)
    {
        try
        {
            log.Info(string.Format("C# GenericWebHookCsharpOne about to process a request. ('{0}')", DateTime.Now.ToLongTimeString()));

            ///////* below code, I just fake-creating some persons */
            string jsonContent = await req.Content.ReadAsStringAsync();

            ICollection<Person> persons = new List<Person>();
            for (int i = 0; i < 3; i++)
            {
                persons.Add(new Person() { PersonUuid = Guid.NewGuid(), LastName = "LN" + i.ToString(), FirstName = "FN" + i.ToString(), BirthDate = DateTimeOffset.Now.AddYears(-1 * (20 + i)) });
            }

            /* the below is the "trick" to allow the for-each to work in the Logic-App. at least in my experience */
            var anonymousPersonWrapper = new
            {
                personWrapper = persons
            };

            string personWrapperJsonString = JsonConvert.SerializeObject(anonymousPersonWrapper);

            log.Info(string.Format("C# GenericWebHookCsharpOne finished a request. ('{0}')", DateTime.Now.ToLongTimeString()));

            HttpResponseMessage returnReq = req.CreateResponse(HttpStatusCode.OK, personWrapperJsonString);
            return returnReq;
        }
        catch (Exception ex)
        {
            string errorMsg = ex.Message;
            log.Error(errorMsg);
            return req.CreateResponse(HttpStatusCode.BadRequest, errorMsg);
        }
    }
}
By putting a breakpoint on "return returnReq;", I was able to see that personWrapperJsonString contained the JSON below:
{
"personWrapper": [{
"PersonUuid": "31fb318d-a9bf-4c2f-ad16-0810ddd73746",
"LastName": "LN0",
"FirstName": "FN0",
"BirthDate": "1997-08-17T15:10:08.9633612-04:00"
},
{
"PersonUuid": "73fdacc7-e1e8-48ff-b161-1bd8b5f4aec1",
"LastName": "LN1",
"FirstName": "FN1",
"BirthDate": "1996-08-17T15:10:08.9633612-04:00"
},
{
"PersonUuid": "d18b4324-2d3e-41ca-9525-fe769af89e9c",
"LastName": "LN2",
"FirstName": "FN2",
"BirthDate": "1995-08-17T15:10:08.9633612-04:00"
}]
}
Ok.
Then I added a "Parse Json" action (below image)
Then I set up the Parse JSON action. Below.
The above Parse JSON setup is not complete yet.
Click on the button "Use sample payload to generate schema" and that will pop a new window. Paste in your "personWrapper" json from earlier. As seen in the below image.
The above will of course create the json-schema that you need (that is for-each friendly). As seen below.
Now we're so close.
Add a For-Each (using the "More" button when you add a new step) (as seen below)
Now we set up the for-each. Look what showed up! The "personWrapper" (below image).
For grins, I made the SessionId be the PersonUuid value, just to show that I can get hold of one of the scalar properties of the object (image below).
And now the json as the Content of the Service Bus message. (below image)
I then published the Azure-Functions and deployed the Logic-App, and sent a request to the trigger.
Back to the Azure portal. The PersonUuid showed up as the SessionId! (image below)
And a quick check in Service Bus Explorer to "peek" the contents of the message (image below)
Ok, a few breadcrumbs:
I got a hint from here about putting the collection inside a "wrapper".
Json.NET validate JSON array against Schema
A few errors I got along the way:
"Invalid type. Expected Object but got Array."
UnsupportedMediaType "The WebHook request must contain an entity body formatted as JSON."
"this output is an array" "a foreach cannot be nested inside of another foreach"
'Json' expects its parameter to be a string or an XML.The provided value is of type 'Array.
As Steven Van Eycken mentioned, we can parse a string to an array with the json function in the Logic App. In your case we could either parse the string to an array in the Logic App or return a JArray directly from the Azure Function. We can choose one of the following ways to do that. I also tested it on my side, and it works correctly.
In the Logic App:
json(body('Your action name'))
Or
Return a JArray directly from the Azure Function:
var jarry = JArray.Parse(Newtonsoft.Json.JsonConvert.SerializeObject(persons));
log.Info(string.Format("C# GenericWebHookCsharpOne finished a request. ('{0}')", DateTime.Now.ToLongTimeString()));
return req.CreateResponse(HttpStatusCode.OK, jarry);
I am trying to convert an object to JSON. The object has a trait which is supposed to do the conversion, but I get a weird JSON result.
import groovy.json.*

trait JsonPackageTrait {
    def toJson() {
        JsonOutput.prettyPrint(
            JsonOutput.toJson(this)
        )
    }
}

class Item {
    def id, from, to, weight
}

def item = new Item()
item.with {
    id = 1234512354
    from = 'London'
    to = 'Liverpool'
    weight = 15d.lbs()
}
item = item.withTraits JsonPackageTrait
println item.toJson()
JSON result:
{
    "from": "London",
    "id": 1234512354,
    "to": "Liverpool",
    "proxyTarget": {
        "from": "London",
        "id": 1234512354,
        "to": "Liverpool",
        "weight": 33.069
    },
    "weight": 33.069
}
So it seems I cannot do it like this?
Well, whatever. Since using withTraits creates a proxy of the original object, I resolved it like this for my current implementation:
trait JsonPackageTrait {
    def toJson() {
        JsonOutput.prettyPrint(
            JsonOutput.toJson(this.$delegate)
        )
    }
}
I have location information provided by GeoNames.org parsed into a relational database. Using this information, I am attempting to build an ElasticSearch index that contains populated place (city) names, administrative division (state, province, etc.) names, country names and country codes. My goal is to provide a location search that is similar to Google Maps':
I don't need the cool bold highlighting, but I do need the search to return similar results in a similar way. I've tried creating a mapping with a single location field consisting of the entire location name (e.g., "Round Rock, TX, United States") and I've also tried having five separate fields consisting of each piece of a location. I've tried keyword and prefix queries and edgengram analyzers; I have been unsuccessful in finding the correct configuration to get this working properly.
What kinds of analyzers--both index and search--should I be looking at to accomplish my goals? This search doesn't have to be as perfected as Google's but I'd like it to be at least similar.
I do want to support partial-name matches, which is why I've been fiddling with edgengram. For example, a search of "round r" should match Round Rock, TX, United States. Also, I would prefer that results whose populated place (city) names begin with the exact search term be ranked higher than other results. For example, a search of "round ro" should match Round Rock, TX, United States before Round, Some Province, RO (Romania). I hope I've made this clear enough.
Here is my current index configuration (this is an anonymous type in C# that is later serialized to JSON and passed to the ElasticSearch API):
settings = new
{
    index = new
    {
        number_of_shards = 1,
        number_of_replicas = 0,
        refresh_interval = -1,
        analysis = new
        {
            analyzer = new
            {
                edgengram_index_analyzer = new
                {
                    type = "custom",
                    tokenizer = "index_tokenizer",
                    filter = new[] { "lowercase", "asciifolding" },
                    char_filter = new[] { "no_commas_char_filter" },
                    stopwords = new object[0]
                },
                search_analyzer = new
                {
                    type = "custom",
                    tokenizer = "standard",
                    filter = new[] { "lowercase", "asciifolding" },
                    char_filter = new[] { "no_commas_char_filter" },
                    stopwords = new object[0]
                }
            },
            tokenizer = new
            {
                index_tokenizer = new
                {
                    type = "edgeNGram",
                    min_gram = 1,
                    max_gram = 100
                }
            },
            char_filter = new
            {
                no_commas_char_filter = new
                {
                    type = "mapping",
                    mappings = new[] { ",=>" }
                }
            }
        }
    }
},
mappings = new
{
    location = new
    {
        _all = new { enabled = false },
        properties = new
        {
            populatedPlace = new { index_analyzer = "edgengram_index_analyzer", type = "string" },
            administrativeDivision = new { index_analyzer = "edgengram_index_analyzer", type = "string" },
            administrativeDivisionAbbreviation = new { index_analyzer = "edgengram_index_analyzer", type = "string" },
            country = new { index_analyzer = "edgengram_index_analyzer", type = "string" },
            countryCode = new { index_analyzer = "edgengram_index_analyzer", type = "string" },
            population = new { type = "long" }
        }
    }
}
This might be what you are looking for:
"analysis": {
"tokenizer": {
"name_tokenizer": {
"type": "edgeNGram",
"max_gram": 100,
"min_gram": 2,
"side": "front"
}
},
"analyzer": {
"name_analyzer": {
"tokenizer": "whitespace",
"type": "custom",
"filter": ["lowercase", "multi_words", "name_filter"]
},
},
"filter": {
"multi_words": {
"type": "shingle",
"min_shingle_size": 2,
"max_shingle_size": 10
},
"name_filter": {
"type": "edgeNGram",
"max_gram": 100,
"min_gram": 2,
"side": "front"
},
}
}
I think using name_analyzer will replicate the google search that you are talking about. You can tweak the configuration a bit to suit your needs.
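One detail worth calling out: apply the edge-n-gram analysis only at index time and keep a plain analyzer at search time, otherwise the query text itself gets expanded into n-grams. A sketch of how the fields from your mapping might be wired to it (this keeps the pre-2.x index_analyzer/search_analyzer style your configuration already uses; the field names are taken from your mapping):
"location": {
    "properties": {
        "populatedPlace": {
            "type": "string",
            "index_analyzer": "name_analyzer",
            "search_analyzer": "standard"
        },
        "administrativeDivision": {
            "type": "string",
            "index_analyzer": "name_analyzer",
            "search_analyzer": "standard"
        },
        "country": {
            "type": "string",
            "index_analyzer": "name_analyzer",
            "search_analyzer": "standard"
        }
    }
}
For the ranking requirement, boosting the city field at query time (for example populatedPlace^2 in a multi_match query) is a simple way to push exact city-name prefix matches above matches on the other fields.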
How can I make ObjectGraphBuilder build my class instance from a string? I mean, if I have
String myString = """invoices{
invoice(date: new Date(106,1,2)){
item(count:5){
product(name:'ULC', dollar:1499)
}
item(count:1){
product(name:'Visual Editor', dollar:499)
}
}
invoice(date: new Date(106,1,2)){
item(count:4) {
product(name:'Visual Editor', dollar:499)
}
}
"""
how can I turn this string (myString) into an instance of the invoice class? (I assume I have to use ObjectGraphBuilder, but how?)
Given an instance of the invoice class (with all of its nested properties), how can I turn that instance into a string like myString?
I also want to be able to serialize and deserialize from a text file, but I assume that works the same way as with the string.
You can work with GroovyShell to evaluate the string and delegate the methods called in the script to an ObjectGraphBuilder. I repeated the "invoices" method. If this is unacceptable, take a look at Going to Mars with Domain-Specific Languages, by Guillaume Laforge, where he teaches how to customize the compiler.
I also created an Invoices class, because of the way ObjectGraphBuilder works. If this will be dynamic for you, take a look at its resolvers.
import groovy.transform.ToString as TS

@TS class Invoices { List<Invoice> invoices = [] }
@TS class Invoice { List<Item> items = []; Date date }
@TS class Item { Integer count; Product product }
@TS class Product { String name; Integer dollar; Vendor vendor }
@TS class Vendor { Integer id }

String myString = """
invoices {
    invoice(date: new Date(106,1,2)) {
        item(count:5) {
            product(name:'ULC', dollar:1499)
        }
        item(count:1) {
            product(name:'Visual Editor', dollar:499)
        }
    }
    invoice(date: new Date(106,1,2)) {
        item(count:4) {
            product(name:'Visual Editor', dollar:499)
        }
    }
}
"""

invoicesParser = { Closure c ->
    new ObjectGraphBuilder().invoices c
}

binding = new Binding( [invoices: invoicesParser] )
invoices = new GroovyShell(binding).evaluate myString

assert invoices.invoices.size() == 2
Update: as for your second question, I'm not aware of, and could not find, any way back to the object graph builder representation. You can roll your own, but I think you will be better off with something like JSON. Does your use case permit you to do so?
use( groovy.json.JsonOutput ) {
    assert invoices.toJson().prettyPrint() == """{
    "invoices": [
        {
            "date": "2006-02-02T02:00:00+0000",
            "items": [
                {
                    "product": {
                        "vendor": null,
                        "dollar": 1499,
                        "name": "ULC"
                    },
                    "count": 5
                },
                {
                    "product": {
                        "vendor": null,
                        "dollar": 499,
                        "name": "Visual Editor"
                    },
                    "count": 1
                }
            ]
        },
        {
            "date": "2006-02-02T02:00:00+0000",
            "items": [
                {
                    "product": {
                        "vendor": null,
                        "dollar": 499,
                        "name": "Visual Editor"
                    },
                    "count": 4
                }
            ]
        }
    ]
}"""
}