Transform PCollection<KV> to custom class - Cassandra

My goal is to read a file from GCS and write it to Cassandra.
I'm new to Apache Beam/Dataflow, and most of the hands-on tutorials I could find are for Python. Unfortunately, CassandraIO is only available natively for Java in Beam.
I used the word count example as a template and tried to replace the TextIO.write() with a CassandraIO.<Words>write().
Here is my Java class for the Cassandra table:
package org.apache.beam.examples;
import java.io.Serializable;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;
#Table(keyspace = "test", name = "words", readConsistency = "ONE", writeConsistency = "QUORUM",
caseSensitiveKeyspace = false, caseSensitiveTable = false)
public class Words implements Serializable {
// private static final long serialVersionUID = 1L;
@PartitionKey
@Column(name = "word")
public String word;
#Column(name = "count")
public long count;
public Words() {
}
public Words(String word, long count) {
this.word = word;
this.count = count;
}
@Override
public boolean equals(Object obj) {
Words other = (Words) obj;
return this.word.equals(other.word) && this.count == other.count;
}
}
And here is the pipeline part of the main code:
static void runWordCount(WordCount.WordCountOptions options) {
Pipeline p = Pipeline.create(options);
// Concepts #2 and #3: Our pipeline applies the composite CountWords transform, and passes the
// static FormatAsTextFn() to the ParDo transform.
p.apply("ReadLines", TextIO.read().from(options.getInputFile()))
.apply(new WordCountToCassandra.CountWords())
// Here I'm not sure how to transform PCollection<KV<String, Long>> into PCollection<Words>
.apply(MapElements.into(TypeDescriptor.of(Words.class)).via(/* ??? */))
.apply(CassandraIO.<Words>write()
.withHosts(Collections.singletonList("my_ip"))
.withPort(9142)
.withKeyspace("test")
.withEntity(Words.class));
p.run().waitUntilFinish();
}
My understanding is that a PTransform is used to go from a PCollection<T1> to a PCollection<T2>, but I don't know how to write that mapping.

If it's a 1:1 mapping, MapElements.into is the right choice.
You can either specify a class that implements SerializableFunction<FromType, ToType>, or simply use a lambda, for example:
.apply(MapElements.into(TypeDescriptor.of(Words.class)).via(kv -> new Words(kv.getKey(), kv.getValue())));
Please check MapElements for more information.
If the transformation is not one-to-one, there are other available options such as FlatMapElements or ParDo.
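For illustration, here is a minimal ParDo sketch of the same 1:1 mapping (counts is an assumed name for the PCollection<KV<String, Long>> produced by CountWords):
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

PCollection<Words> words = counts.apply(ParDo.of(new DoFn<KV<String, Long>, Words>() {
    @ProcessElement
    public void processElement(@Element KV<String, Long> kv, OutputReceiver<Words> out) {
        // same 1:1 conversion as the MapElements lambda above
        out.output(new Words(kv.getKey(), kv.getValue()));
    }
}));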

Related

How do you call a variable from another class's methods in java?

I'm currently trying to print a variable from one class in another class.
import java.util.HashMap;
import java.util.Random;
class Word{
Random r = new Random();
int low = 0;
int high = 25;
int result = r.nextInt(high-low) + low;
String word;
public void Mapski(){
HashMap<Integer, String> map = new HashMap<>();
map.put(0,"apple");
map.put(1,"banana");
map.put(2,"cantaloupe");
map.put(3,"durian");
map.put(4,"elderberry");
map.put(5,"fig");
map.put(6,"grapefruit");
map.put(7,"hashbrown");
map.put(8,"ice cream");
map.put(9,"jackfruit");
map.put(10,"kale");
map.put(11,"lettuce");
map.put(12,"mango");
map.put(13,"nachos");
map.put(14,"oatmeal");
map.put(15,"plum");
map.put(16,"quesadilla");
map.put(17,"raspberry");
map.put(18,"strawberry");
map.put(19,"tangelo");
map.put(20,"udon noodles");
map.put(21,"venison");
map.put(22,"watermelon");
map.put(23,"xigua");
map.put(24,"yogurt");
map.put(25,"zucchini");
String word = map.get(result);
this.word = word;
}
public void setWord(String word){
this.word = word;
}
public String getWord(){
return word;
}
}
I've tried using setters and getters, but I keep getting back that word is null. I know that the word variable in my Mapski() method is correct, because when I add System.out.println(word) to the method Mapski(), it prints a random food from my list when I call the method in my main class with word.Mapski() after creating Word word = new Word();.
How can I make it so that, when I'm in my main class, I can call Mapski()'s word from class Word with a food as the value instead of null? All help is appreciated. Thank you!
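For reference, a minimal sketch of a main class in which getWord() returns a food rather than null, assuming (as described above) that Mapski() is invoked on the same Word instance before the getter:
public class Main {
    public static void main(String[] args) {
        Word word = new Word();
        word.Mapski();                       // populates the word field
        System.out.println(word.getWord());  // prints a random food, not null
    }
}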

How to add queries by search criteria?

I am new to jhipster.
Is there a guide of steps to follow to implement search criteria?
I want to add to an entity, the possibility of filtering by any of its fields.
I have enabled filtering, but I can't find a way to make it show up on the entity's CRUD page.
I understand that I have to write code in different places in the application ... but I did not find any examples or guides.
Thank you very much in advance for any help.
I don't know of any guide for that, but I also use JHipster and implemented search criteria in my app, and it works fine. I hope this is what you're looking for.
I'm going to use a comment table as an example, where I want to filter by hidden, flagged, or both, with the possibility of pagination on the front end. So I created a CommentRepositoryCustom.java:
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
public interface CommentRepositoryCustom {
Page<Comment> findCommentByCommentHiddenAndFlagged(Boolean hidden, Boolean flagged, Pageable pageable);
}
After that I created the implementation for my findCommentByCommentHiddenAndFlagged in another custom repository I created called CommentRepositoryCustomImpl.java:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageImpl;
import org.springframework.data.domain.Pageable;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;
import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;
@Repository
@Transactional
public class CommentRepositoryCustomImpl implements CommentRepositoryCustom {
private static final String HIDDEN_LABEL = "hidden";
private static final String FLAGGED_LABEL = "flagged";
@Autowired
private EntityManager em;
@Override
public Page<Comment> findCommentByCommentHiddenAndFlagged(Boolean hidden, Boolean flagged, Pageable pageable) {
CriteriaBuilder criteriaBuilder = em.getCriteriaBuilder();
CriteriaQuery<Comment> commentCriteriaQuery = criteriaBuilder.createQuery(Comment.class);
Root<Comment> commentRoot = commentCriteriaQuery.from(Comment.class);
Predicate predicateHidden = criteriaBuilder.equal(commentRoot.get(HIDDEN_LABEL), hidden);
Predicate predicateFlagged = criteriaBuilder.equal(commentRoot.get(FLAGGED_LABEL), flagged);
Predicate concatenate;
TypedQuery<Comment> typedQuery;
CriteriaQuery<Long> countQuery = criteriaBuilder.createQuery(Long.class);
Root<Comment> countCommentRoot = countQuery.from(Comment.class);
Long count;
if (hidden != null && flagged != null) {
concatenate = criteriaBuilder.and(predicateHidden);
concatenate = criteriaBuilder.and(concatenate, predicateFlagged);
countQuery.select(criteriaBuilder.count(countCommentRoot)).where(concatenate);
typedQuery = em.createQuery(commentCriteriaQuery.select(commentRoot).where(concatenate));
} else if(hidden != null) {
concatenate = criteriaBuilder.and(predicateHidden);
countQuery.select(criteriaBuilder.count(countCommentRoot)).where(concatenate);
typedQuery = em.createQuery(commentCriteriaQuery.select(commentRoot).where(concatenate));
} else if (flagged != null) {
concatenate = criteriaBuilder.and(predicateFlagged);
countQuery.select(criteriaBuilder.count(countCommentRoot)).where(concatenate);
typedQuery = em.createQuery(commentCriteriaQuery.select(commentRoot).where(concatenate));
} else {
countQuery.select(criteriaBuilder.count(countCommentRoot));
typedQuery = em.createQuery(commentCriteriaQuery.select(commentRoot));
}
typedQuery.setFirstResult((int) pageable.getOffset()).setMaxResults(pageable.getPageSize());
count = em.createQuery(countQuery).getSingleResult();
return new PageImpl<>(typedQuery.getResultList(), pageable, count);
}
}
Now you just need to call findCommentByCommentHiddenAndFlagged with the right params in your service.
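For illustration, a hypothetical service wrapper (the class and method names here are mine, not generated by JHipster):
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
@Transactional(readOnly = true)
public class CommentQueryService {
    private final CommentRepositoryCustom commentRepository;

    public CommentQueryService(CommentRepositoryCustom commentRepository) {
        this.commentRepository = commentRepository;
    }

    // Passing null for a flag means "don't filter on it", matching the
    // null checks in CommentRepositoryCustomImpl above
    public Page<Comment> findByFlags(Boolean hidden, Boolean flagged, Pageable pageable) {
        return commentRepository.findCommentByCommentHiddenAndFlagged(hidden, flagged, pageable);
    }
}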
Let me know if this is what you're looking for.
Do you have a GET endpoint in package web.rest that looks like this? @GetMapping("/clcomments") public ResponseEntity<List<Entity>> getAllComments(CommentsCriteria criteria) {... }
If you do, search by criteria works!

Generate a script to create a table from the entity definition

Is there a way to generate the CREATE TABLE statement from an entity definition? I know it is possible using Achilles, but I want to use the regular Cassandra entity.
The target is getting the following script from the entity class below.
Statement
CREATE TABLE user (userId uuid PRIMARY KEY, name text);
Entity
#Table(keyspace = "ks", name = "users",
readConsistency = "QUORUM",
writeConsistency = "QUORUM",
caseSensitiveKeyspace = false,
caseSensitiveTable = false)
public static class User {
@PartitionKey
private UUID userId;
private String name;
// ... constructors / getters / setters
}
Create a class named Utility in the package com.datastax.driver.mapping, so that it can access some package-private utility methods from that package.
package com.datastax.driver.mapping;
import com.datastax.driver.core.*;
import com.datastax.driver.core.utils.UUIDs;
import com.datastax.driver.mapping.annotations.ClusteringColumn;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;
import java.net.InetAddress;
import java.nio.ByteBuffer;
import java.util.*;
/**
* Created by Ashraful Islam
*/
public class Utility {
private static final Map<Class, DataType.Name> BUILT_IN_CODECS_MAP = new HashMap<>();
static {
BUILT_IN_CODECS_MAP.put(Long.class, DataType.Name.BIGINT);
BUILT_IN_CODECS_MAP.put(Boolean.class, DataType.Name.BOOLEAN);
BUILT_IN_CODECS_MAP.put(Double.class, DataType.Name.DOUBLE);
BUILT_IN_CODECS_MAP.put(Float.class, DataType.Name.FLOAT);
BUILT_IN_CODECS_MAP.put(Integer.class, DataType.Name.INT);
BUILT_IN_CODECS_MAP.put(Short.class, DataType.Name.SMALLINT);
BUILT_IN_CODECS_MAP.put(Byte.class, DataType.Name.TINYINT);
BUILT_IN_CODECS_MAP.put(long.class, DataType.Name.BIGINT);
BUILT_IN_CODECS_MAP.put(boolean.class, DataType.Name.BOOLEAN);
BUILT_IN_CODECS_MAP.put(double.class, DataType.Name.DOUBLE);
BUILT_IN_CODECS_MAP.put(float.class, DataType.Name.FLOAT);
BUILT_IN_CODECS_MAP.put(int.class, DataType.Name.INT);
BUILT_IN_CODECS_MAP.put(short.class, DataType.Name.SMALLINT);
BUILT_IN_CODECS_MAP.put(byte.class, DataType.Name.TINYINT);
BUILT_IN_CODECS_MAP.put(ByteBuffer.class, DataType.Name.BLOB);
BUILT_IN_CODECS_MAP.put(InetAddress.class, DataType.Name.INET);
BUILT_IN_CODECS_MAP.put(String.class, DataType.Name.TEXT);
BUILT_IN_CODECS_MAP.put(Date.class, DataType.Name.TIMESTAMP);
BUILT_IN_CODECS_MAP.put(UUID.class, DataType.Name.UUID);
BUILT_IN_CODECS_MAP.put(LocalDate.class, DataType.Name.DATE);
BUILT_IN_CODECS_MAP.put(Duration.class, DataType.Name.DURATION);
}
private static final Comparator<MappedProperty<?>> POSITION_COMPARATOR = new Comparator<MappedProperty<?>>() {
@Override
public int compare(MappedProperty<?> o1, MappedProperty<?> o2) {
return o1.getPosition() - o2.getPosition();
}
};
public static String convertEntityToSchema(Class<?> entityClass) {
Table table = AnnotationChecks.getTypeAnnotation(Table.class, entityClass);
String ksName = table.caseSensitiveKeyspace() ? Metadata.quote(table.keyspace()) : table.keyspace().toLowerCase();
String tableName = table.caseSensitiveTable() ? Metadata.quote(table.name()) : table.name().toLowerCase();
List<MappedProperty<?>> pks = new ArrayList<>();
List<MappedProperty<?>> ccs = new ArrayList<>();
List<MappedProperty<?>> rgs = new ArrayList<>();
Set<? extends MappedProperty<?>> properties = MappingConfiguration.builder().build().getPropertyMapper().mapTable(entityClass);
for (MappedProperty<?> mappedProperty : properties) {
if (mappedProperty.isComputed())
continue; //Skip Computed
if (mappedProperty.isPartitionKey())
pks.add(mappedProperty);
else if (mappedProperty.isClusteringColumn())
ccs.add(mappedProperty);
else
rgs.add(mappedProperty);
}
if (pks.isEmpty()) {
throw new IllegalArgumentException("No Partition Key define");
}
Collections.sort(pks, POSITION_COMPARATOR);
Collections.sort(ccs, POSITION_COMPARATOR);
StringBuilder query = new StringBuilder("CREATE TABLE ");
if (!ksName.isEmpty()) {
query.append(ksName).append('.');
}
query.append(tableName).append('(').append(toSchema(pks));
if (!ccs.isEmpty()) {
query.append(',').append(toSchema(ccs));
}
if (!rgs.isEmpty()) {
query.append(',').append(toSchema(rgs));
}
query.append(',').append("PRIMARY KEY(");
query.append('(').append(join(pks, ",")).append(')');
if (!ccs.isEmpty()) {
query.append(',').append(join(ccs, ","));
}
query.append(')').append(");");
return query.toString();
}
private static String toSchema(List<MappedProperty<?>> list) {
StringBuilder sb = new StringBuilder();
if (!list.isEmpty()) {
MappedProperty<?> first = list.get(0);
sb.append(first.getMappedName()).append(' ').append(BUILT_IN_CODECS_MAP.get(first.getPropertyType().getRawType()));
for (int i = 1; i < list.size(); i++) {
MappedProperty<?> field = list.get(i);
sb.append(',').append(field.getMappedName()).append(' ').append(BUILT_IN_CODECS_MAP.get(field.getPropertyType().getRawType()));
}
}
return sb.toString();
}
private static String join(List<MappedProperty<?>> list, String separator) {
StringBuilder sb = new StringBuilder();
if (!list.isEmpty()) {
sb.append(list.get(0).getMappedName());
for (int i = 1; i < list.size(); i++) {
sb.append(separator).append(list.get(i).getMappedName());
}
}
return sb.toString();
}
}
How to use it?
System.out.println(convertEntityToSchema(User.class));
Output:
CREATE TABLE ks.users(userid uuid,name text,PRIMARY KEY((userid)));
Limitations:
UDTs and collections are not supported.
Only these data types are supported and distinguished: long, boolean, double, float, int, short, byte, ByteBuffer, InetAddress, String, Date, UUID, LocalDate, Duration.
Based on the answer of Ashraful Islam, I have made a functional version in case someone is interested (@Ashraful Islam, please feel free to add it to your answer if you prefer).
I have also added support for ZonedDateTime, following the recommendations of Datastax to use the type tuple<timestamp,varchar> (see their documentation).
import com.datastax.driver.core.*;
import com.datastax.driver.mapping.MappedProperty;
import com.datastax.driver.mapping.MappingConfiguration;
import com.datastax.driver.mapping.annotations.Table;
import com.google.common.collect.ImmutableMap;
import java.net.InetAddress;
import java.nio.ByteBuffer;
import java.time.ZonedDateTime;
import java.util.*;
import java.util.function.Predicate;
import java.util.stream.Collectors;
/**
* Inspired by Ashraful Islam
* https://stackoverflow.com/questions/44950245/generate-a-script-to-create-a-table-from-the-entity-definition/45039182#45039182
*/
public class CassandraScriptGeneratorFromEntities {
private static final Map<Class, DataType> BUILT_IN_CODECS_MAP = ImmutableMap.<Class, DataType>builder()
.put(Long.class, DataType.bigint())
.put(Boolean.class, DataType.cboolean())
.put(Double.class, DataType.cdouble())
.put(Float.class, DataType.cfloat())
.put(Integer.class, DataType.cint())
.put(Short.class, DataType.smallint())
.put(Byte.class, DataType.tinyint())
.put(long.class, DataType.bigint())
.put(boolean.class, DataType.cboolean())
.put(double.class, DataType.cdouble())
.put(float.class, DataType.cfloat())
.put(int.class, DataType.cint())
.put(short.class, DataType.smallint())
.put(byte.class, DataType.tinyint())
.put(ByteBuffer.class, DataType.blob())
.put(InetAddress.class, DataType.inet())
.put(String.class, DataType.text())
.put(Date.class, DataType.timestamp())
.put(UUID.class, DataType.uuid())
.put(LocalDate.class, DataType.date())
.put(Duration.class, DataType.duration())
.put(ZonedDateTime.class, TupleType.of(ProtocolVersion.NEWEST_SUPPORTED, CodecRegistry.DEFAULT_INSTANCE, DataType.timestamp(), DataType.text()))
.build();
private static final Predicate<List<?>> IS_NOT_EMPTY = ((Predicate<List<?>>) List::isEmpty).negate();
public static StringBuilder convertEntityToSchema(final Class<?> entityClass, final String defaultKeyspace, final long ttl) {
final Table table = Objects.requireNonNull(entityClass.getAnnotation(Table.class), () -> "The given entity " + entityClass + " is not annotated with @Table");
final String keyspace = Optional.of(table.keyspace())
.filter(((Predicate<String>) String::isEmpty).negate())
.orElse(defaultKeyspace);
final String ksName = table.caseSensitiveKeyspace() ? Metadata.quote(keyspace) : keyspace.toLowerCase(Locale.ROOT);
final String tableName = table.caseSensitiveTable() ? Metadata.quote(table.name()) : table.name().toLowerCase(Locale.ROOT);
final Set<? extends MappedProperty<?>> properties = MappingConfiguration.builder().build().getPropertyMapper().mapTable(entityClass);
final List<? extends MappedProperty<?>> partitionKeys = Optional.of(
properties.stream()
.filter(((Predicate<MappedProperty<?>>) MappedProperty::isComputed).negate())
.filter(MappedProperty::isPartitionKey)
.sorted(Comparator.comparingInt(MappedProperty::getPosition))
.collect(Collectors.toList())
).filter(IS_NOT_EMPTY).orElseThrow(() -> new IllegalArgumentException("No Partition Key defined in the given entity"));
final List<MappedProperty<?>> clusteringColumns = properties.stream()
.filter(((Predicate<MappedProperty<?>>) MappedProperty::isComputed).negate())
.filter(MappedProperty::isClusteringColumn)
.sorted(Comparator.comparingInt(MappedProperty::getPosition))
.collect(Collectors.toList());
final List<MappedProperty<?>> otherColumns = properties.stream()
.filter(((Predicate<MappedProperty<?>>) MappedProperty::isComputed).negate())
.filter(((Predicate<MappedProperty<?>>) MappedProperty::isPartitionKey).negate())
.filter(((Predicate<MappedProperty<?>>) MappedProperty::isClusteringColumn).negate())
.sorted(Comparator.comparing(MappedProperty::getPropertyName))
.collect(Collectors.toList());
final StringBuilder query = new StringBuilder("CREATE TABLE IF NOT EXISTS ");
Optional.of(ksName).filter(((Predicate<String>) String::isEmpty).negate()).ifPresent(ks -> query.append(ks).append('.'));
query.append(tableName).append("(\n").append(toSchema(partitionKeys));
Optional.of(clusteringColumns).filter(IS_NOT_EMPTY).ifPresent(list -> query.append(",\n").append(toSchema(list)));
Optional.of(otherColumns).filter(IS_NOT_EMPTY).ifPresent(list -> query.append(",\n").append(toSchema(list)));
query.append(',').append("\nPRIMARY KEY(");
query.append('(').append(join(partitionKeys)).append(')');
Optional.of(clusteringColumns).filter(IS_NOT_EMPTY).ifPresent(list -> query.append(", ").append(join(list)));
query.append(')').append(") with default_time_to_live = ").append(ttl);
return query;
}
private static String toSchema(final List<? extends MappedProperty<?>> list) {
return list.stream()
.map(property -> property.getMappedName() + ' ' + BUILT_IN_CODECS_MAP.getOrDefault(property.getPropertyType().getRawType(), DataType.text()))
.collect(Collectors.joining(",\n"));
}
private static String join(final List<? extends MappedProperty<?>> list) {
return list.stream().map(MappedProperty::getMappedName).collect(Collectors.joining(", "));
}
}
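A hypothetical usage, mirroring the original answer's example (the keyspace name and TTL value are illustrative):
// User is the annotated entity from the question
StringBuilder ddl = CassandraScriptGeneratorFromEntities.convertEntityToSchema(User.class, "ks", 0L);
System.out.println(ddl);
// Expected shape of the output:
// CREATE TABLE IF NOT EXISTS ks.users(
// userid uuid,
// name text,
// PRIMARY KEY((userid))) with default_time_to_live = 0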

Lucene query with numeric field does not find anything

I'm trying to understand how the Lucene query syntax works, so I wrote this small program.
When using a NumericRangeQuery I can find the documents I want, but when parsing a search condition I get no hits, although I'm using the same conditions.
I understand the difference could be explained by the analyzer, but the StandardAnalyzer is used, which does not remove numeric values.
Can someone tell me what I'm doing wrong?
Thanks.
package org.burre.lucene.matching;
import java.io.IOException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.*;
import org.apache.lucene.index.*;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.*;
import org.apache.lucene.util.Version;
public class SmallestEngine {
private static final Version VERSION=Version.LUCENE_48;
private StandardAnalyzer analyzer = new StandardAnalyzer(VERSION);
private Directory index = new RAMDirectory();
private Document buildDoc(String name, int beds) {
Document doc = new Document();
doc.add(new StringField("name", name, Field.Store.YES));
doc.add(new IntField("beds", beds, Field.Store.YES));
return doc;
}
public void buildSearchEngine() throws IOException {
IndexWriterConfig config = new IndexWriterConfig(VERSION,
analyzer);
IndexWriter w = new IndexWriter(index, config);
// Generate 10 houses with 0 to 3 beds
for (int i=0;i<10;i++)
w.addDocument(buildDoc("house"+(100+i),i % 4));
w.close();
}
/**
* Execute the query and show the result
*/
public void search(Query q) throws IOException {
System.out.println("executing query\""+q+"\"");
IndexReader reader = DirectoryReader.open(index);
try {
IndexSearcher searcher = new IndexSearcher(reader);
ScoreDoc[] hits = searcher.search(q, 10).scoreDocs;
System.out.println("Found " + hits.length + " hits.");
for (int i = 0; i < hits.length; ++i) {
int docId = hits[i].doc;
Document d = searcher.doc(docId);
System.out.println(""+(i+1)+". " + d.get("name") + ", beds:"
+ d.get("beds"));
}
} finally {
if (reader != null)
reader.close();
}
}
public static void main(String[] args) throws IOException, ParseException {
SmallestEngine me = new SmallestEngine();
me.buildSearchEngine();
System.out.println("SearchByRange");
me.search(NumericRangeQuery.newIntRange("beds", 3, 3,true,true));
System.out.println("-----------------");
System.out.println("SearchName");
me.search(new QueryParser(VERSION,"name",me.analyzer).parse("house107"));
System.out.println("-----------------");
System.out.println("Search3Beds");
me.search(new QueryParser(VERSION,"beds",me.analyzer).parse("3"));
System.out.println("-----------------");
System.out.println("Search3BedsInRange");
me.search(new QueryParser(VERSION,"name",me.analyzer).parse("beds:[3 TO 3]"));
}
}
The output of this program is:
SearchByRange
executing query"beds:[3 TO 3]"
Found 2 hits.
1. house103, beds:3
2. house107, beds:3
-----------------
SearchName
executing query"name:house107"
Found 1 hits.
1. house107, beds:3
-----------------
Search3Beds
executing query"beds:3"
Found 0 hits.
-----------------
Search3BedsInRange
executing query"beds:[3 TO 3]"
Found 0 hits.
You need to use NumericRangeQuery to perform a search on the numeric field.
The answer here could give you some insight.
Also the answer here says
for numeric values (longs, dates, floats, etc.) you need to have NumericRangeQuery. Otherwise Lucene has no idea how do you want to define similarity.
What you need to do is to write your own QueryParser:
public class CustomQueryParser extends QueryParser {
// ctor omitted
@Override
protected Query newTermQuery(Term term) {
if (term.field().equals("beds")) {
// manually construct and return non-range query for numeric value
} else {
return super.newTermQuery(term);
}
}
@Override
protected Query newRangeQuery(String field, String part1, String part2, boolean startInclusive, boolean endInclusive) {
if (field.equals("beds")) {
// manually construct and return range query for numeric value
} else {
return super.newRangeQuery(field, part1, part2, startInclusive, endInclusive);
}
}
}
It seems you always have to use NumericRangeQuery for numeric conditions (thanks to Mindas). So, as he suggested, I created my own, more intelligent QueryParser.
Using the Apache commons-lang function StringUtils.isNumeric(), I can create a more generic QueryParser:
public class IntelligentQueryParser extends QueryParser {
// take over super constructors
@Override
protected org.apache.lucene.search.Query newRangeQuery(String field,
String part1, String part2, boolean part1Inclusive, boolean part2Inclusive) {
if(StringUtils.isNumeric(part1))
{
return NumericRangeQuery.newIntRange(field, Integer.parseInt(part1),Integer.parseInt(part2),part1Inclusive,part2Inclusive);
}
return super.newRangeQuery(field, part1, part2, part1Inclusive, part2Inclusive);
}
@Override
protected org.apache.lucene.search.Query newTermQuery(
org.apache.lucene.index.Term term) {
if(StringUtils.isNumeric(term.text()))
{
return NumericRangeQuery.newIntRange(term.field(), Integer.parseInt(term.text()),Integer.parseInt(term.text()),true,true);
}
return super.newTermQuery(term);
}
}
Just wanted to share this.
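For completeness, a hypothetical usage in the program above, assuming the subclass exposes the same (Version, String, Analyzer) constructor as QueryParser:
// Replaces the stock parser in the Search3Beds case; the term "3" is now
// rewritten into a NumericRangeQuery and matches the IntField documents
me.search(new IntelligentQueryParser(VERSION, "beds", me.analyzer).parse("3"));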

spring 3.0 type conversion from string in predefined format to a Map having key as the Enum type and value as any type

I am trying to create a generic converter using Spring 3.0's Type Conversion feature for converting Strings in the format "<KEY1>:<VAL1>,<KEY2>:<VAL2>,<KEY3>:<VAL3>"
to a Map holding the key-value pairs, where the key can be an Enum type and the value can be of any user-defined or built-in Java type.
Below is the code I tried out.
Note: I am not good at using generics, so please bear with me if I have used them in a wrong way.
import java.lang.reflect.GenericDeclaration;
import java.lang.reflect.TypeVariable;
import java.util.HashMap;
import java.util.Map;
import org.springframework.core.convert.converter.Converter;
import org.springframework.core.convert.converter.ConverterFactory;
/**
* Converts a string in the format
* "<KEY1>:<VAL1>,<KEY2>:<VAL2>,<KEY3>:<VAL3>"
* to a {@link Map} instance holding the key-value pairs <i>where the key
* can be an Enum type and the value can be of any type</i>.
*
* @author jigneshg
*
* @param <K>
* @param <V>
*/
public class StringToMapConvertorFactory<V> implements ConverterFactory<String, Map<Enum<?>, V>> {
private static final String COMMA = ",";
private static final String COLON = ":";
@Override
public <T extends Map<Enum<?>, V>> Converter<String, T> getConverter(
Class<T> targetType) {
return new StringToMapConverter<T>(targetType);
}
private final class StringToMapConverter<T extends Map<Enum<?>, V>> implements Converter<String,T> {
public StringToMapConverter(Class<T> targetType) {
}
@SuppressWarnings("unchecked")
@Override
public T convert(String source) {
checkArg(source);
String[] keyValuePairs = source.split(COMMA);
// the value at index 0 is assumed to be the key
// the value at index 1 is assumed to be the value
Map<Enum<?>,V> resultMap = new HashMap<Enum<?>, V>();
String[] keyValueArr = null;
String key = null;
String value = null;
for (String keyValuePair : keyValuePairs) {
keyValueArr = keyValuePair.split(COLON);
key = keyValueArr[0];
value = keyValueArr[1];
// TODO: How to specify the enumType here ??
resultMap.put(Enum.valueOf(enumType, key.trim()), (V) value);
}
return resultMap;
}
private void checkArg(String source) {
// In the spec, null input is not allowed
if (source == null) {
throw new IllegalArgumentException("null source is in allowed");
}
}
}
}
I am stuck on how to specify the enum type when putting the key-value pair into resultMap in my code.
Also, have I taken the correct approach to implement my requirement, or is there a better way in which this can be achieved?
Thanks,
Jignesh
You can implement GenericConverter instead of Converter, and access the key type via TypeDescriptor.getMapKeyType().
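A minimal sketch of that approach, assuming the same parsing as the question's code (values are left as Strings here; a full implementation would also convert V via the ConversionService):
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import org.springframework.core.convert.TypeDescriptor;
import org.springframework.core.convert.converter.GenericConverter;

public class StringToEnumKeyedMapConverter implements GenericConverter {

    @Override
    public Set<ConvertiblePair> getConvertibleTypes() {
        // declares String -> Map; the concrete key type arrives at convert() time
        return Collections.singleton(new ConvertiblePair(String.class, Map.class));
    }

    @Override
    @SuppressWarnings({"unchecked", "rawtypes"})
    public Object convert(Object source, TypeDescriptor sourceType, TypeDescriptor targetType) {
        if (source == null) {
            throw new IllegalArgumentException("null source is not allowed");
        }
        Class enumType = targetType.getMapKeyType(); // the Enum class of the map's keys
        Map<Enum, Object> result = new HashMap<Enum, Object>();
        for (String pair : ((String) source).split(",")) {
            String[] kv = pair.split(":");
            result.put(Enum.valueOf(enumType, kv[0].trim()), kv[1]);
        }
        return result;
    }
}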
