Groovy YamlBuilder field case

I'm trying to generate a CloudFormation template with Groovy's YamlBuilder, which is why case matters here. The issue is that, by default, YamlBuilder converts Pascal-case field names to camel case.
Please see the example (runnable via groovyConsole):
import groovy.yaml.YamlBuilder

class Person {
    String Name = 'Mickey Mouse'
}

YamlBuilder builder = new YamlBuilder()
builder {
    Node new Person()
}
builder.toString()
The code above returns "Name" field in lower case:
---
Node:
  name: "Mickey Mouse"
I need:
---
Node:
  Name: "Mickey Mouse"
I've tried plenty of options but haven't found a way to tell YamlBuilder to keep the "Name" field's case. Or maybe I can use some annotation on the field?
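For background: the lowercasing isn't arbitrary. Groovy exposes the field through a generated getter getName(), and the serializer appears to work from JavaBeans property names rather than raw field names, so "Name" becomes the property "name". java.beans.Introspector demonstrates the naming rule (the class name PropertyCase is just for this sketch):

```java
import java.beans.Introspector;

class PropertyCase {
    public static void main(String[] args) {
        // JavaBeans decapitalization turns "Name" into "name" -- the same
        // rule the YAML output above follows.
        System.out.println(Introspector.decapitalize("Name")); // name
        // The rule's documented exception: two leading capitals are kept.
        System.out.println(Introspector.decapitalize("URL"));  // URL
    }
}
```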

Cucumber - DocStringType - Jackson Data Bind UnrecognizedPropertyException - Even If Property Existed

Below is my feature file:
Scenario Outline: CucumberTest
  Given Generate Data Set
    """json
    {
      "tcIdentifier": "TC1"
    }
    """

  Examples:
    | TESTCASEIDENTIFIER |
    | TC1                |
The step definition looks like this:
@Given("Generate Data Set")
public void generateDataSet(DataSetMetaData dataSetMetaData) {
    System.out.println(dataSetMetaData);
}

@DocStringType
public DataSetMetaData createTestDataForSorting(String details) throws JsonProcessingException {
    return new ObjectMapper().readValue(details, DataSetMetaData.class);
}
Details of DataSetMetaData:
@Getter
@Setter
@ToString
@AllArgsConstructor
@Builder
@NoArgsConstructor
public class DataSetMetaData {
    private String tcIdentifier;
}
Expected: the docstring is bound to a DataSetMetaData POJO.
Actual: we are met with the exception
com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "tcIdentifier" not marked as ignorable (0 known properties: ])
Previous answers to similar exceptions suggest annotating the field with @JsonProperty. What I'm failing to understand is: if the variable name matches the JSON key, binding should just work. For some strange reason, even though the attribute exists, I get UnrecognizedPropertyException: Unrecognized field "tcIdentifier".
These are the Gradle coordinates for the Cucumber and logging dependencies:
implementation group: 'io.cucumber', name: 'cucumber-java', version: '7.3.4'
implementation group: 'net.logstash.logback', name: 'logstash-logback-encoder', version: '7.2'
Do let me know if any further information is required.
@M.P. Korstanje and @Gaël J - thank you very much for the insights and for pointing the analysis in the right direction.
Adding the Lombok dependencies with the annotationProcessor scope did the magic - hope it helps.
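For completeness, a typical Gradle declaration that puts Lombok on the annotation-processor path looks like this (a sketch; the version number is illustrative):

```groovy
dependencies {
    compileOnly 'org.projectlombok:lombok:1.18.24'
    annotationProcessor 'org.projectlombok:lombok:1.18.24'
    // Needed again for test sources, if Lombok is used there too:
    testCompileOnly 'org.projectlombok:lombok:1.18.24'
    testAnnotationProcessor 'org.projectlombok:lombok:1.18.24'
}
```

Without the annotationProcessor entry the @Getter/@Setter methods are never generated, so Jackson sees a class with 0 known properties - which matches the exception message above.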

Add strong typing to objects from JsonSlurper

I'm having some trouble getting typing to work with the JsonSlurper in Groovy. I'm fairly new to Groovy, and even newer to adding strong types to it - bear with me.
Right now I've created a trait which defines the general shape of my JSON object, and I'm trying to cast the results of parseText to it.
import groovy.json.JsonSlurper
trait Person {
    String firstname
    String lastname
}

def person = (Person)(new JsonSlurper().parseText('{"firstname": "Lando", "lastname": "Calrissian"}'))
println person.lastname
This throws
Exception in thread "main" org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object '{firstname=Lando, lastname=Calrissian}' with class 'org.apache.groovy.json.internal.LazyMap' to class 'Person' due to: groovy.lang.GroovyRuntimeException: Could not find matching constructor for: Person(org.apache.groovy.json.internal.LazyMap)
...
I can see why my code doesn't make sense, I'm not trying to change the type of the data (casting), I'm just trying to let my IDE know that this is what's inside of my object.
Is it possible to at least add code completion to my JSON objects? I'd love to get runtime type checking, as well, but it's not necessary.
You could try using @Delegate - it lets you wrap a class around the map:
import groovy.json.JsonSlurper

class Person {
    @Delegate Map delegate
    String getFirstname() { delegate.get('firstname') }
    String getLastname() { delegate.get('lastname') }
}

def person = new Person(delegate: new JsonSlurper().parseText('{"firstname": "Lando", "lastname": "Calrissian"}'))
println person.lastname
Or, for example, use Gson for parsing:
@Grab(group='com.google.code.gson', module='gson', version='2.8.5')
import com.google.gson.Gson

class Person {
    String firstname
    String lastname
}

def person = new Gson().fromJson('{"firstname": "Lando", "lastname": "Calrissian"}', Person.class)
assert person instanceof Person
println person.lastname
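In plain Java terms, the delegate-based approach above is just a typed wrapper whose getters read from the underlying map - a minimal sketch (a hand-built map stands in for JsonSlurper's parse result; PersonWrapper is an illustrative name):

```java
import java.util.Map;

class PersonWrapper {
    private final Map<String, Object> delegate;

    PersonWrapper(Map<String, Object> delegate) {
        this.delegate = delegate;
    }

    // Typed accessors over the untyped parsed map.
    String getFirstname() { return (String) delegate.get("firstname"); }
    String getLastname()  { return (String) delegate.get("lastname"); }

    public static void main(String[] args) {
        Map<String, Object> parsed =
            Map.of("firstname", "Lando", "lastname", "Calrissian");
        System.out.println(new PersonWrapper(parsed).getLastname()); // Calrissian
    }
}
```

The IDE now sees real getters, so completion works; the values are still only checked at the point of each cast, not at parse time.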
This actually is a cast, and Groovy will try to turn your Map into said object.
From the docs:
The coercion operator (as) is a variant of casting. Coercion converts object from one type to another without them being compatible for assignment.
The way this works for a POJO is to construct a new object using the Map constructor. This either unrolls into calling setters, or works directly with static compilation.
Be aware that maps with excess keys will lead to errors, so I'd only use this for toy projects. Use a proper JSON mapper, e.g. Jackson, instead.
So the solution here is not to use a trait (which is basically an interface) but a regular class.

GORM setup using Gradle and Groovy

Can anyone please share the steps to set up GORM using Gradle, and how to use it from Groovy?
GORM for Hibernate has excellent documentation, particularly the section Using GORM for Hibernate Outside Grails.
At minimum you need:
compile "org.grails:grails-datastore-gorm-hibernate5:6.1.10.RELEASE"
runtime "com.h2database:h2:1.4.192"
runtime "org.apache.tomcat:tomcat-jdbc:8.5.0"
runtime "org.apache.tomcat.embed:tomcat-embed-logging-log4j:8.5.0"
runtime "org.slf4j:slf4j-api:1.7.10"
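Note that the compile and runtime configurations were removed in Gradle 7; on a current Gradle build the equivalent declarations would be (same artifacts as above, not verified against newer GORM releases):

```groovy
dependencies {
    implementation "org.grails:grails-datastore-gorm-hibernate5:6.1.10.RELEASE"
    runtimeOnly "com.h2database:h2:1.4.192"
    runtimeOnly "org.apache.tomcat:tomcat-jdbc:8.5.0"
    runtimeOnly "org.apache.tomcat.embed:tomcat-embed-logging-log4j:8.5.0"
    runtimeOnly "org.slf4j:slf4j-api:1.7.10"
}
```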
Entities should go under src/main/groovy
import grails.gorm.annotation.Entity
import org.grails.datastore.gorm.GormEntity

@Entity
class Person implements GormEntity<Person> {
    String firstName
    String lastName

    static constraints = {
        firstName blank: false
        lastName blank: false
    }
}
and then finally bootstrap the data store somewhere:
import org.grails.orm.hibernate.HibernateDatastore

Map configuration = [
    'hibernate.hbm2ddl.auto': 'create-drop',
    'dataSource.url': 'jdbc:h2:mem:myDB'
]
HibernateDatastore datastore = new HibernateDatastore(configuration, Person)

SnakeYAML by example

I am trying to read and parse a YAML file with SnakeYAML and turn it into a config POJO for my Java app:
// Groovy pseudo-code
class MyAppConfig {
    List<Widget> widgets
    String uuid
    boolean isActive

    // Ex: MyAppConfig cfg = new MyAppConfig('/opt/myapp/config.yaml')
    MyAppConfig(String configFileUri) {
        this(loadConfig(configFileUri))
    }

    private static HashMap<String, HashMap<String, String>> loadConfig(String configFileUri) {
        Yaml yaml = new Yaml()
        HashMap<String, HashMap<String, String>> values
        try {
            File configFile = Paths.get(ClassLoader.getSystemResource(configFileUri).toURI()).toFile()
            values = (HashMap<String, HashMap<String, String>>) yaml.load(new FileInputStream(configFile))
        } catch (FileNotFoundException | URISyntaxException ex) {
            throw new MyAppException(ex.getMessage(), ex)
        }
        values
    }

    MyAppConfig(HashMap<String, HashMap<String, String>> yamlMap) {
        super()
        // Here I want to extract keys from 'yamlMap' and use their values
        // to populate MyAppConfig's properties (widgets, uuid, isActive, etc.).
    }
}
Example YAML:
widgets:
  - name: blah
    age: 3000
    isSilly: true
  - name: blah meh
    age: 13939
    isSilly: false
uuid: 1938484
isActive: false
Since it appears that SnakeYAML only gives me a HashMap<String,HashMap<String,String>> to represent my config data, it seems as though SnakeYAML supports only 2 levels of nested mappings (the outer map and the inner map of type <String,String>)...
But what if widgets contains a list/sequence (say, fizzes) which contained a list of, say, buzzes, which contained yet another list, etc? Is this simply a limitation of SnakeYAML or am I using the API incorrectly?
To extract values out of this map, I need to iterate its keys/values and (seemingly) need to apply my own custom validation. Does SnakeYAML provide any APIs for doing this extraction + validation? For instance, instead of hand-rolling my own code to check to see if uuid is a property defined inside the map, it would be great if I could do something like yaml.extract('uuid'), etc. And then ditto for the subsequent validation of uuid (and any other property).
YAML itself contains a lot of powerful concepts, such as anchors and references. Does SnakeYAML handle these concepts? What if an end user uses them in the config file - how am I supposed to detect/validate/enforce them?!? Does SnakeYAML provide an API for doing this?
Do you mean like this:
@Grab('org.yaml:snakeyaml:1.17')
import org.yaml.snakeyaml.*
import org.yaml.snakeyaml.constructor.*
import groovy.transform.*

String exampleYaml = '''widgets:
                       | - name: blah
                       |   age: 3000
                       |   silly: true
                       | - name: blah meh
                       |   age: 13939
                       |   silly: false
                       |uuid: 1938484
                       |isActive: false'''.stripMargin()

@ToString(includeNames=true)
class Widget {
    String name
    Integer age
    boolean silly
}

@ToString(includeNames=true)
class MyConfig {
    List<Widget> widgets
    String uuid
    boolean isActive

    static MyConfig fromYaml(yaml) {
        Constructor c = new Constructor(MyConfig)
        TypeDescription t = new TypeDescription(MyConfig)
        t.putListPropertyType('widgets', Widget)
        c.addTypeDescription(t)
        new Yaml(c).load(yaml)
    }
}

println MyConfig.fromYaml(exampleYaml)
Obviously, that's a script to run in the Groovy console; you wouldn't need the @Grab line, as you probably already have the library in your classpath ;-)
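On the nesting question from the post: Yaml.load with the default constructor doesn't stop at two levels - it returns plain Map/List structures of arbitrary depth, and you descend by casting at each level. A plain-Java sketch, with a hand-built map standing in for the load result of the example config:

```java
import java.util.List;
import java.util.Map;

class NestedWalk {
    public static void main(String[] args) {
        // Hand-built stand-in for what yaml.load(...) would return.
        Map<String, Object> config = Map.of(
            "uuid", "1938484",
            "isActive", false,
            "widgets", List.of(
                Map.of("name", "blah", "age", 3000, "silly", true),
                Map.of("name", "blah meh", "age", 13939, "silly", false)
            )
        );

        // Each nested level is reached with a cast, however deep it goes.
        @SuppressWarnings("unchecked")
        List<Map<String, Object>> widgets =
            (List<Map<String, Object>>) config.get("widgets");
        for (Map<String, Object> w : widgets) {
            System.out.println(w.get("name") + " / " + w.get("age"));
        }
    }
}
```

The typed-Constructor route in the answer above avoids this casting entirely, which is why it scales better than hand-walking the maps.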

Cloudera Search Map Widget with iso alpha-2 country codes

I discovered the amazing widget from Cloudera Search called Map. I would like to use it to display the count of records by country, but it only works with ISO alpha-3 country codes, while my records only contain ISO alpha-2 codes (see the difference here: http://www.nationsonline.org/oneworld/country_code_list.htm).
I would like to know how I could obtain an ISO alpha-3 country code. My raw data is in CSV format, with a field called Country that contains the full country name and another called Country_Code that stores the ISO alpha-2 code.
I tried to modify both the SOLR schema.xml and the Morphlines file, but with no positive results. Any idea is highly appreciated.
Thank you!
I was facing the same problem, actually. I managed to solve it by creating a custom Morphlines command, as outlined below.
Build a custom Morphlines command
In Morphlines you can build your own command rather easily (see Implementing your own Custom Command). Here is a sample of the code you could use in your command builder:
// Nested class:
private static final class ConvertCountryCode extends AbstractCommand {

    private final String fieldName;

    public ConvertCountryCode(CommandBuilder builder, Config config, Command parent, Command child, MorphlineContext context) {
        super(builder, config, parent, child, context);
        this.fieldName = getConfigs().getString(config, "field");
    }

    @Override
    @SuppressWarnings("unchecked")
    protected boolean doProcess(Record record) {
        ListIterator iter = record.get(fieldName).listIterator();
        while (iter.hasNext()) {
            Locale locale = new Locale("", iter.next().toString());
            String result = locale.getISO3Country();
            iter.set(result);
        }
        return super.doProcess(record);
    }
}
Once you have your command builder, you can edit your Morphlines conf file to add the command, like this:
commands: [
    {
        convertCountryCode {
            field: Country_Code
        }
    }
]
When used, this command would replace all your ISO Alpha-2 codes with ISO Alpha-3 as you add them to your index. I've tested this solution, and it works! Make sure to add your package to the list of command imports for your Morphline.
Use Java command
Alternatively, if you don't want to build a custom command, you can use the Java command.
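The conversion at the heart of both options can be tried in plain Java - Locale ships the alpha-2 to alpha-3 tables, so no external data is required:

```java
import java.util.Locale;

class CountryCodes {
    // Convert an ISO alpha-2 country code to its alpha-3 equivalent.
    static String toAlpha3(String alpha2) {
        return new Locale("", alpha2).getISO3Country();
    }

    public static void main(String[] args) {
        System.out.println(toAlpha3("US")); // USA
        System.out.println(toAlpha3("DE")); // DEU
        System.out.println(toAlpha3("FR")); // FRA
    }
}
```

One caveat: getISO3Country() throws MissingResourceException for codes it doesn't recognize, so unknown values in Country_Code need a catch or a pre-check.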
