Lucene query with numeric field does not find anything

I'm trying to understand how the Lucene query syntax works, so I wrote this small program.
When I use a NumericRangeQuery I can find the documents I want, but when I parse a search condition instead, no hits are found, even though I'm using the same conditions.
I understand the difference could be explained by the analyzer, but the StandardAnalyzer is used, which does not remove numeric values.
Can someone tell me what I'm doing wrong?
Thanks.
package org.burre.lucene.matching;

import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.*;
import org.apache.lucene.index.*;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.*;
import org.apache.lucene.util.Version;

public class SmallestEngine {
    private static final Version VERSION = Version.LUCENE_48;

    private StandardAnalyzer analyzer = new StandardAnalyzer(VERSION);
    private Directory index = new RAMDirectory();

    private Document buildDoc(String name, int beds) {
        Document doc = new Document();
        doc.add(new StringField("name", name, Field.Store.YES));
        doc.add(new IntField("beds", beds, Field.Store.YES));
        return doc;
    }

    public void buildSearchEngine() throws IOException {
        IndexWriterConfig config = new IndexWriterConfig(VERSION, analyzer);
        IndexWriter w = new IndexWriter(index, config);
        // Generate 10 houses with 0 to 3 beds
        for (int i = 0; i < 10; i++)
            w.addDocument(buildDoc("house" + (100 + i), i % 4));
        w.close();
    }

    /**
     * Execute the query and show the result
     */
    public void search(Query q) throws IOException {
        System.out.println("executing query\"" + q + "\"");
        IndexReader reader = DirectoryReader.open(index);
        try {
            IndexSearcher searcher = new IndexSearcher(reader);
            ScoreDoc[] hits = searcher.search(q, 10).scoreDocs;
            System.out.println("Found " + hits.length + " hits.");
            for (int i = 0; i < hits.length; ++i) {
                int docId = hits[i].doc;
                Document d = searcher.doc(docId);
                System.out.println("" + (i + 1) + ". " + d.get("name") + ", beds:"
                        + d.get("beds"));
            }
        } finally {
            if (reader != null)
                reader.close();
        }
    }

    public static void main(String[] args) throws IOException, ParseException {
        SmallestEngine me = new SmallestEngine();
        me.buildSearchEngine();
        System.out.println("SearchByRange");
        me.search(NumericRangeQuery.newIntRange("beds", 3, 3, true, true));
        System.out.println("-----------------");
        System.out.println("SearchName");
        me.search(new QueryParser(VERSION, "name", me.analyzer).parse("house107"));
        System.out.println("-----------------");
        System.out.println("Search3Beds");
        me.search(new QueryParser(VERSION, "beds", me.analyzer).parse("3"));
        System.out.println("-----------------");
        System.out.println("Search3BedsInRange");
        me.search(new QueryParser(VERSION, "name", me.analyzer).parse("beds:[3 TO 3]"));
    }
}
The output of this program is:
SearchByRange
executing query"beds:[3 TO 3]"
Found 2 hits.
1. house103, beds:3
2. house107, beds:3
-----------------
SearchName
executing query"name:house107"
Found 1 hits.
1. house107, beds:3
-----------------
Search3Beds
executing query"beds:3"
Found 0 hits.
-----------------
Search3BedsInRange
executing query"beds:[3 TO 3]"
Found 0 hits.

You need to use NumericRangeQuery to perform a search on a numeric field.
The answer here could give you some insight.
Also, the answer here says:
for numeric values (longs, dates, floats, etc.) you need to have NumericRangeQuery. Otherwise Lucene has no idea how you want to define similarity.
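To make the mismatch concrete, here is a small sketch of my own (standard Lucene 4.x classes, field name taken from the question): the classic QueryParser turns "beds:3" into a plain term query over the literal text "3", while IntField stores trie-encoded numeric terms that only a numeric query can match.
// Needs org.apache.lucene.index.Term and org.apache.lucene.search.TermQuery
// in addition to the imports shown in the question.
Query parsedLike = new TermQuery(new Term("beds", "3"));   // what the parser builds; matches nothing against an IntField
Query numeric = NumericRangeQuery.newIntRange("beds", 3, 3, true, true); // matches the trie-encoded numeric terms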

What you need to do is to write your own QueryParser:
public class CustomQueryParser extends QueryParser {
    // ctor omitted

    @Override
    public Query newTermQuery(Term term) {
        if (term.field().equals("beds")) {
            // manually construct and return non-range query for numeric value
        } else {
            return super.newTermQuery(term);
        }
    }

    @Override
    public Query newRangeQuery(String field, String part1, String part2,
            boolean startInclusive, boolean endInclusive) {
        if (field.equals("beds")) {
            // manually construct and return range query for numeric value
        } else {
            return super.newRangeQuery(field, part1, part2, startInclusive, endInclusive);
        }
    }
}

It seems you always have to use a NumericRangeQuery for numeric conditions (thanks to Mindas), so as he suggested I created my own, more intelligent QueryParser.
Using the Apache commons-lang function StringUtils.isNumeric() I can create a more generic QueryParser:
public class IntelligentQueryParser extends QueryParser {
    // take over super constructors

    @Override
    protected org.apache.lucene.search.Query newRangeQuery(String field,
            String part1, String part2, boolean part1Inclusive, boolean part2Inclusive) {
        if (StringUtils.isNumeric(part1)) {
            return NumericRangeQuery.newIntRange(field,
                    Integer.parseInt(part1), Integer.parseInt(part2),
                    part1Inclusive, part2Inclusive);
        }
        return super.newRangeQuery(field, part1, part2, part1Inclusive, part2Inclusive);
    }

    @Override
    protected org.apache.lucene.search.Query newTermQuery(
            org.apache.lucene.index.Term term) {
        if (StringUtils.isNumeric(term.text())) {
            return NumericRangeQuery.newIntRange(term.field(),
                    Integer.parseInt(term.text()), Integer.parseInt(term.text()),
                    true, true);
        }
        return super.newTermQuery(term);
    }
}
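A quick usage sketch (assuming IntelligentQueryParser simply passes the (Version, String, Analyzer) constructor through to QueryParser, as the comment above implies), mirroring main() from the question:
// Hypothetical usage with the SmallestEngine instance "me" from the question:
me.search(new IntelligentQueryParser(VERSION, "beds", me.analyzer).parse("3"));
me.search(new IntelligentQueryParser(VERSION, "name", me.analyzer).parse("beds:[3 TO 3]"));
// Both should now be rewritten to a NumericRangeQuery and find the two 3-bed houses.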
Just wanted to share this.


Generate a script to create a table from the entity definition

Is there a way to generate the CREATE TABLE statement from an entity definition? I know it is possible using Achilles, but I want to use the regular Cassandra entity.
The goal is to get the following script from the entity class below.
Statement
CREATE TABLE user (userId uuid PRIMARY KEY, name text);
Entity
@Table(keyspace = "ks", name = "users",
        readConsistency = "QUORUM",
        writeConsistency = "QUORUM",
        caseSensitiveKeyspace = false,
        caseSensitiveTable = false)
public static class User {
    @PartitionKey
    private UUID userId;
    private String name;
    // ... constructors / getters / setters
}
Create a class named Utility in the package com.datastax.driver.mapping so that it can access some package-private utility methods from that package.
package com.datastax.driver.mapping;
import com.datastax.driver.core.*;
import com.datastax.driver.core.utils.UUIDs;
import com.datastax.driver.mapping.annotations.ClusteringColumn;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;
import java.net.InetAddress;
import java.nio.ByteBuffer;
import java.util.*;
/**
* Created by Ashraful Islam
*/
public class Utility {
private static final Map<Class, DataType.Name> BUILT_IN_CODECS_MAP = new HashMap<>();
static {
BUILT_IN_CODECS_MAP.put(Long.class, DataType.Name.BIGINT);
BUILT_IN_CODECS_MAP.put(Boolean.class, DataType.Name.BOOLEAN);
BUILT_IN_CODECS_MAP.put(Double.class, DataType.Name.DOUBLE);
BUILT_IN_CODECS_MAP.put(Float.class, DataType.Name.FLOAT);
BUILT_IN_CODECS_MAP.put(Integer.class, DataType.Name.INT);
BUILT_IN_CODECS_MAP.put(Short.class, DataType.Name.SMALLINT);
BUILT_IN_CODECS_MAP.put(Byte.class, DataType.Name.TINYINT);
BUILT_IN_CODECS_MAP.put(long.class, DataType.Name.BIGINT);
BUILT_IN_CODECS_MAP.put(boolean.class, DataType.Name.BOOLEAN);
BUILT_IN_CODECS_MAP.put(double.class, DataType.Name.DOUBLE);
BUILT_IN_CODECS_MAP.put(float.class, DataType.Name.FLOAT);
BUILT_IN_CODECS_MAP.put(int.class, DataType.Name.INT);
BUILT_IN_CODECS_MAP.put(short.class, DataType.Name.SMALLINT);
BUILT_IN_CODECS_MAP.put(byte.class, DataType.Name.TINYINT);
BUILT_IN_CODECS_MAP.put(ByteBuffer.class, DataType.Name.BLOB);
BUILT_IN_CODECS_MAP.put(InetAddress.class, DataType.Name.INET);
BUILT_IN_CODECS_MAP.put(String.class, DataType.Name.TEXT);
BUILT_IN_CODECS_MAP.put(Date.class, DataType.Name.TIMESTAMP);
BUILT_IN_CODECS_MAP.put(UUID.class, DataType.Name.UUID);
BUILT_IN_CODECS_MAP.put(LocalDate.class, DataType.Name.DATE);
BUILT_IN_CODECS_MAP.put(Duration.class, DataType.Name.DURATION);
}
private static final Comparator<MappedProperty<?>> POSITION_COMPARATOR = new Comparator<MappedProperty<?>>() {
@Override
public int compare(MappedProperty<?> o1, MappedProperty<?> o2) {
return o1.getPosition() - o2.getPosition();
}
};
public static String convertEntityToSchema(Class<?> entityClass) {
Table table = AnnotationChecks.getTypeAnnotation(Table.class, entityClass);
String ksName = table.caseSensitiveKeyspace() ? Metadata.quote(table.keyspace()) : table.keyspace().toLowerCase();
String tableName = table.caseSensitiveTable() ? Metadata.quote(table.name()) : table.name().toLowerCase();
List<MappedProperty<?>> pks = new ArrayList<>();
List<MappedProperty<?>> ccs = new ArrayList<>();
List<MappedProperty<?>> rgs = new ArrayList<>();
Set<? extends MappedProperty<?>> properties = MappingConfiguration.builder().build().getPropertyMapper().mapTable(entityClass);
for (MappedProperty<?> mappedProperty : properties) {
if (mappedProperty.isComputed())
continue; //Skip Computed
if (mappedProperty.isPartitionKey())
pks.add(mappedProperty);
else if (mappedProperty.isClusteringColumn())
ccs.add(mappedProperty);
else
rgs.add(mappedProperty);
}
if (pks.isEmpty()) {
throw new IllegalArgumentException("No Partition Key defined");
}
Collections.sort(pks, POSITION_COMPARATOR);
Collections.sort(ccs, POSITION_COMPARATOR);
StringBuilder query = new StringBuilder("CREATE TABLE ");
if (!ksName.isEmpty()) {
query.append(ksName).append('.');
}
query.append(tableName).append('(').append(toSchema(pks));
if (!ccs.isEmpty()) {
query.append(',').append(toSchema(ccs));
}
if (!rgs.isEmpty()) {
query.append(',').append(toSchema(rgs));
}
query.append(',').append("PRIMARY KEY(");
query.append('(').append(join(pks, ",")).append(')');
if (!ccs.isEmpty()) {
query.append(',').append(join(ccs, ","));
}
query.append(')').append(");");
return query.toString();
}
private static String toSchema(List<MappedProperty<?>> list) {
StringBuilder sb = new StringBuilder();
if (!list.isEmpty()) {
MappedProperty<?> first = list.get(0);
sb.append(first.getMappedName()).append(' ').append(BUILT_IN_CODECS_MAP.get(first.getPropertyType().getRawType()));
for (int i = 1; i < list.size(); i++) {
MappedProperty<?> field = list.get(i);
sb.append(',').append(field.getMappedName()).append(' ').append(BUILT_IN_CODECS_MAP.get(field.getPropertyType().getRawType()));
}
}
return sb.toString();
}
private static String join(List<MappedProperty<?>> list, String separator) {
StringBuilder sb = new StringBuilder();
if (!list.isEmpty()) {
sb.append(list.get(0).getMappedName());
for (int i = 1; i < list.size(); i++) {
sb.append(separator).append(list.get(i).getMappedName());
}
}
return sb.toString();
}
}
How to use it?
System.out.println(convertEntityToSchema(User.class));
Output:
CREATE TABLE ks.users(userid uuid,name text,PRIMARY KEY((userid)));
Limitations:
UDTs and collections are not supported.
Only these data types are supported and distinguished: long, boolean, double, float, int, short, byte, ByteBuffer, InetAddress, String, Date, UUID, LocalDate, Duration.
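If you want to sanity-check the generated statement end to end, here is a minimal sketch with the regular driver API (my own addition; it assumes a Cassandra node is reachable on 127.0.0.1 and that the keyspace ks already exists):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

// Sketch: run the generated CQL against a local node.
try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
     Session session = cluster.connect()) {
    session.execute(Utility.convertEntityToSchema(User.class));
}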
Based on the answer of Ashraful Islam, I have made a functional version in case someone is interested (@Ashraful Islam, please feel free to add it to your answer if you prefer).
I have also added support for ZonedDateTime, following the recommendation of DataStax to use a tuple<timestamp,varchar> type (see their documentation).
import com.datastax.driver.core.*;
import com.datastax.driver.mapping.MappedProperty;
import com.datastax.driver.mapping.MappingConfiguration;
import com.datastax.driver.mapping.annotations.Table;
import com.google.common.collect.ImmutableMap;
import java.net.InetAddress;
import java.nio.ByteBuffer;
import java.time.ZonedDateTime;
import java.util.*;
import java.util.function.Predicate;
import java.util.stream.Collectors;
/**
* Inspired by Ashraful Islam
* https://stackoverflow.com/questions/44950245/generate-a-script-to-create-a-table-from-the-entity-definition/45039182#45039182
*/
public class CassandraScriptGeneratorFromEntities {
private static final Map<Class, DataType> BUILT_IN_CODECS_MAP = ImmutableMap.<Class, DataType>builder()
.put(Long.class, DataType.bigint())
.put(Boolean.class, DataType.cboolean())
.put(Double.class, DataType.cdouble())
.put(Float.class, DataType.cfloat())
.put(Integer.class, DataType.cint())
.put(Short.class, DataType.smallint())
.put(Byte.class, DataType.tinyint())
.put(long.class, DataType.bigint())
.put(boolean.class, DataType.cboolean())
.put(double.class, DataType.cdouble())
.put(float.class, DataType.cfloat())
.put(int.class, DataType.cint())
.put(short.class, DataType.smallint())
.put(byte.class, DataType.tinyint())
.put(ByteBuffer.class, DataType.blob())
.put(InetAddress.class, DataType.inet())
.put(String.class, DataType.text())
.put(Date.class, DataType.timestamp())
.put(UUID.class, DataType.uuid())
.put(LocalDate.class, DataType.date())
.put(Duration.class, DataType.duration())
.put(ZonedDateTime.class, TupleType.of(ProtocolVersion.NEWEST_SUPPORTED, CodecRegistry.DEFAULT_INSTANCE, DataType.timestamp(), DataType.text()))
.build();
private static final Predicate<List<?>> IS_NOT_EMPTY = ((Predicate<List<?>>) List::isEmpty).negate();
public static StringBuilder convertEntityToSchema(final Class<?> entityClass, final String defaultKeyspace, final long ttl) {
final Table table = Objects.requireNonNull(entityClass.getAnnotation(Table.class), () -> "The given entity " + entityClass + " is not annotated with @Table");
final String keyspace = Optional.of(table.keyspace())
.filter(((Predicate<String>) String::isEmpty).negate())
.orElse(defaultKeyspace);
final String ksName = table.caseSensitiveKeyspace() ? Metadata.quote(keyspace) : keyspace.toLowerCase(Locale.ROOT);
final String tableName = table.caseSensitiveTable() ? Metadata.quote(table.name()) : table.name().toLowerCase(Locale.ROOT);
final Set<? extends MappedProperty<?>> properties = MappingConfiguration.builder().build().getPropertyMapper().mapTable(entityClass);
final List<? extends MappedProperty<?>> partitionKeys = Optional.of(
properties.stream()
.filter(((Predicate<MappedProperty<?>>) MappedProperty::isComputed).negate())
.filter(MappedProperty::isPartitionKey)
.sorted(Comparator.comparingInt(MappedProperty::getPosition))
.collect(Collectors.toList())
).filter(IS_NOT_EMPTY).orElseThrow(() -> new IllegalArgumentException("No Partition Key defined in the given entity"));
final List<MappedProperty<?>> clusteringColumns = properties.stream()
.filter(((Predicate<MappedProperty<?>>) MappedProperty::isComputed).negate())
.filter(MappedProperty::isClusteringColumn)
.sorted(Comparator.comparingInt(MappedProperty::getPosition))
.collect(Collectors.toList());
final List<MappedProperty<?>> otherColumns = properties.stream()
.filter(((Predicate<MappedProperty<?>>) MappedProperty::isComputed).negate())
.filter(((Predicate<MappedProperty<?>>) MappedProperty::isPartitionKey).negate())
.filter(((Predicate<MappedProperty<?>>) MappedProperty::isClusteringColumn).negate())
.sorted(Comparator.comparing(MappedProperty::getPropertyName))
.collect(Collectors.toList());
final StringBuilder query = new StringBuilder("CREATE TABLE IF NOT EXISTS ");
Optional.of(ksName).filter(((Predicate<String>) String::isEmpty).negate()).ifPresent(ks -> query.append(ks).append('.'));
query.append(tableName).append("(\n").append(toSchema(partitionKeys));
Optional.of(clusteringColumns).filter(IS_NOT_EMPTY).ifPresent(list -> query.append(",\n").append(toSchema(list)));
Optional.of(otherColumns).filter(IS_NOT_EMPTY).ifPresent(list -> query.append(",\n").append(toSchema(list)));
query.append(',').append("\nPRIMARY KEY(");
query.append('(').append(join(partitionKeys)).append(')');
Optional.of(clusteringColumns).filter(IS_NOT_EMPTY).ifPresent(list -> query.append(", ").append(join(list)));
query.append(')').append(") with default_time_to_live = ").append(ttl);
return query;
}
private static String toSchema(final List<? extends MappedProperty<?>> list) {
return list.stream()
.map(property -> property.getMappedName() + ' ' + BUILT_IN_CODECS_MAP.getOrDefault(property.getPropertyType().getRawType(), DataType.text()))
.collect(Collectors.joining(",\n"));
}
private static String join(final List<? extends MappedProperty<?>> list) {
return list.stream().map(MappedProperty::getMappedName).collect(Collectors.joining(", "));
}
}
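Usage is analogous to the original answer, for example (a sketch; the defaultKeyspace and ttl arguments are arbitrary values for illustration):
System.out.println(CassandraScriptGeneratorFromEntities.convertEntityToSchema(User.class, "ks", 0));
// Roughly produces:
// CREATE TABLE IF NOT EXISTS ks.users(
// userid uuid,
// name text,
// PRIMARY KEY((userid))) with default_time_to_live = 0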

How do I create Enumerable<Func<>> out of method instances

I am creating a rule set engine that looks kinda like a unit test framework.
[RuleSet(ContextA)]
public class RuleSet1
{
    [Rule(TargetingA)]
    public Conclusion Rule1(SubjectA subject)
    { /* create conclusion */ }

    [Rule(TargetingA)]
    public Conclusion Rule2(SubjectA subject)
    { /* create conclusion */ }

    [Rule(TargetingB)]
    public Conclusion Rule3(SubjectB subject)
    { /* create conclusion */ }
}

[RuleSet(ContextB)]
public class RuleSet2
{
    [Rule(TargetingB)]
    public Conclusion Rule1(SubjectB subject)
    { /* create conclusion */ }

    [Rule(TargetingA)]
    public Conclusion Rule2(SubjectA subject)
    { /* create conclusion */ }

    [Rule(TargetingB)]
    public Conclusion Rule3(SubjectB subject)
    { /* create conclusion */ }
}

public class Conclusion
{
    // Errorcode, Description and such
}

// contexts and targeting info are enums.
The goal is to create an extensible rule set that doesn't alter the API from the consumer's point of view while keeping good separation of concerns within the code files. Again: like a unit test framework.
I am trying to create a library of these that exposes the following API:
public static class RuleEngine
{
    public static IEnumerable<IRuleSet> RuleSets(contextFlags contexts)
    {
        return from type in Assembly.GetExecutingAssembly().GetTypes()
               let attribute =
                   type.GetCustomAttributes(typeof(RuleSetAttribute), true)
                       .OfType<RuleSetAttribute>()
                       .FirstOrDefault()
               where attribute != null
               select ?? // I don't know how to convert the individual methods to Funcs.
    }
}
internal interface IRuleSet
{
    IEnumerable<Func<SubjectA, Conclusion>> SubjectARules { get; }
    IEnumerable<Func<SubjectB, Conclusion>> SubjectBRules { get; }
}
...which allows consumers to simply use it like this (using foreach instead of LINQ for readability in this example):
foreach (var ruleset in RuleEngine.RuleSets(context))
{
    foreach (var rule in ruleset.SubjectARules)
    {
        var conclusion = rule(myContextA);
        // handle the conclusion
    }
}
Also, it would be very helpful if you could tell me how to get rid of "TargetingA" and "TargetingB" as RuleAttribute parameters and instead use reflection to inspect the parameter type of the decorated method directly. All the while maintaining the same simple external API.
You can use Delegate.CreateDelegate and the GetParameters method to do what you want.
public class RuleSet : IRuleSet
{
    public IEnumerable<Func<SubjectA, Conclusion>> SubjectARules { get; set; }
    public IEnumerable<Func<SubjectB, Conclusion>> SubjectBRules { get; set; }
}

public static class RuleEngine
{
    public static IEnumerable<IRuleSet> RuleSets() // removed contexts parameter for brevity
    {
        var result = from t in Assembly.GetExecutingAssembly().GetTypes()
                     where t.GetCustomAttributes(typeof(RuleSetAttribute), true).Any()
                     let m = t.GetMethods().Where(m => m.GetCustomAttributes(typeof(RuleAttribute)).Any()).ToArray()
                     select new RuleSet
                     {
                         SubjectARules = CreateFuncs<SubjectA>(m).ToList(),
                         SubjectBRules = CreateFuncs<SubjectB>(m).ToList()
                     };
        return result;
    }

    // no error checking for brevity
    // TODO: use better variable names
    public static IEnumerable<Func<T, Conclusion>> CreateFuncs<T>(MethodInfo[] m)
    {
        return from x in m
               where x.GetParameters()[0].ParameterType == typeof(T)
               select (Func<T, Conclusion>)Delegate.CreateDelegate(typeof(Func<T, Conclusion>), null, x);
    }
}
Then you can use it like this:
var sa = new SubjectA();
foreach (var ruleset in RuleEngine.RuleSets())
{
    foreach (var rule in ruleset.SubjectARules)
    {
        var conclusion = rule(sa);
        // do something with conclusion
    }
}
In your LINQ query you headed straight for RuleSetAttribute, and so lost the other information. If you break the query into several lines of code, you can get the methods from the type with GetMethods(), and then call GetCustomAttribute<RuleAttribute>() on each of them.

Island Solution with ANTLR4

I'd like to share with you an island-grammar solution I had to implement in ANTLR4.
Structure of the language: the language I had to write the grammar for is derived from PL/SQL with some additional constructs. I won't go into more detail here as this is off topic.
The language defines a special command PUT with the following structure:
PUT [<SPECIALISED LANGUAGE>].
My solution was:
Override Lexer's nextToken method:
public Token nextToken() {
    if (f_current_idx != -1) {
        _input.seek(f_current_idx);
        f_current_idx = -1;
    }
    Token l_token = super.nextToken();
    return l_token;
}
Add some code in the Lexer:
PUT :
    'PUT'
    {
        f_current_idx = _input.index();
        ((ANTLRStringStream) _input).rewind();
        SRC_PUTLexer l_put_lexer = new SRC_PUTLexer(_input);
        UnbufferedTokenStream<Token> l_tokenStream = new UnbufferedTokenStream<Token>(l_put_lexer);
        if (l_tokenStream.LA(2) == SRC_PUTLexer.LBRACK) {
            new SRC_PUTParser(l_tokenStream).start_rule();
            f_current_idx = _input.index();
        }
    };
Furthermore, the class ANTLRStringStream, which has disappeared in ANTLR 4, had to be defined:
public class ANTLRStringStream extends ANTLRInputStream {
    protected int markDepth = 0;
    protected int lastMarker;
    protected ArrayList<Integer> markers;

    public ANTLRStringStream() {
        super();
    }

    public ANTLRStringStream(String input) {
        super(input);
    }

    public int mark() {
        if (markers == null) {
            markers = new ArrayList<Integer>();
        }
        markers.add(markDepth, index());
        markDepth++;
        lastMarker = markDepth;
        return markDepth;
    }

    public void rewind(int m) {
        int state = (int) markers.get(m);
        seek(state);
        release(m);
    }

    public void rewind() {
        rewind(lastMarker);
    }

    public void release(int marker) {
        markDepth = marker;
        markDepth--;
    }
}
Any feedback would be very welcome!
Kind regards, Wolfgang Hämmer
This should really be a community wiki.
My first major comment is that you need to get rid of the ANTLRStringStream class. The ANTLRInputStream class provided by ANTLR 4 already provides the functionality of ANTLR 3's ANTLRStringStream. The IntStream and CharStream interfaces were revised and extensively documented in ANTLR 4 to get rid of the problematic rewind methods and other undefined behavior. You should not reintroduce them.
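To illustrate that point, here is a sketch only (my own, reusing the f_current_idx idea from the question): the position bookkeeping the custom rewind() provides can usually be expressed with the index(), seek(), mark() and release() members of the standard CharStream/IntStream API, with no custom stream subclass.
// Inside the PUT action: remember and restore the position with the
// standard CharStream/IntStream calls instead of a custom rewind().
int l_start = _input.index();   // position right after 'PUT'
int l_marker = _input.mark();   // keep the buffered range alive
try {
    _input.seek(l_start);       // go back to where the island starts
    // ... run SRC_PUTLexer / SRC_PUTParser on _input as in the rule above ...
} finally {
    _input.release(l_marker);   // release the mark when done
}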

Iterating through a HashMap

Okay, so I'm currently working on a search method. The search terms are run through the database and the matching products are added to a HashMap with two Integer fields.
Then, after the HashMap is built, the items are to be shown; however, I'm having trouble getting the HashMap to print out the details.
Here's my code:
public HashMap<Integer, Integer> bankSearch = new HashMap<Integer, Integer>();
and here is where it's used:
Iterator it = bankSearch.entrySet().iterator();
while (it.hasNext()) {
    HashMap.Entry pairs = (HashMap.Entry)it.next();
    System.out.println(pairs.getKey() + " = " + pairs.getValue());
    if (bankItemsN[i] > 254) {
        outStream.writeByte(255);
        outStream.writeDWord_v2(pairs.getValue());
    } else {
        outStream.writeByte(pairs.getValue()); // amount
    }
    if (bankItemsN[i] < 1) {
        bankItems[i] = 0;
    }
    outStream.writeWordBigEndianA(pairs.getKey()); // itemID
}
Current errors:
.\src\client.java:75: cannot find symbol
symbol : class Iterator
location: class client
Iterator it = bankSearch.entrySet().iterator();
^
.\src\client.java:77: java.util.HashMap.Entry is not public in java.util.HashMap
; cannot be accessed from outside package
HashMap.Entry pairs = (HashMap.Entry)it.next();
^
.\src\client.java:77: java.util.HashMap.Entry is not public in java.util.HashMap
; cannot be accessed from outside package
HashMap.Entry pairs = (HashMap.Entry)it.next();
^
3 errors
Press any key to continue . . .
The errors you are getting are due to:
You did not import java.util.Iterator.
HashMap.Entry is not accessible from outside the java.util package; you should use Map.Entry instead.
Also you should, as templatetypedef says, use the generic version of Iterator, or use a for-each construct.
ADDENDUM
Here is some actual code, demonstrating both approaches:
import java.util.Map;
import java.util.HashMap;
import java.util.Iterator;

public class MapExample {
    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<String, Integer>();
        m.put("One", 1);
        m.put("Two", 2);
        m.put("Three", 3);

        // Using a for-each
        for (Map.Entry<String, Integer> e : m.entrySet()) {
            System.out.println(e.getKey() + " => " + e.getValue());
        }

        // Using an iterator
        Iterator<Map.Entry<String, Integer>> it = m.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Integer> e = it.next();
            System.out.println(e.getKey() + " => " + e.getValue());
        }
    }
}

EF Code First - Include(x => x.Properties.Entity) a 1 : Many association

Given an EF Code First CTP5 entity layout like:
public class Person { ... }
which has a collection of:
public class Address { ... }
which has a single association of:
public class Mailbox { ... }
I want to do:
PersonQuery.Include(x => x.Addresses).Include("Addresses.Mailbox")
WITHOUT using a magic string. I want to do it using a lambda expression.
I am aware that what I typed above will compile and will bring back all Persons matching the search criteria, with their Addresses and each Address's Mailbox eager-loaded, but it's in a string, which irritates me.
How do I do it without a string?
Thanks Stack!
For that you can use the Select method:
PersonQuery.Include(x => x.Addresses.Select(a => a.Mailbox));
You can find other examples here and here.
For anyone that's still looking for a solution to this, the lambda Include is part of EF 4+ and it is in the System.Data.Entity namespace; examples here:
http://romiller.com/2010/07/14/ef-ctp4-tips-tricks-include-with-lambda/
It is described in this post: http://www.thomaslevesque.com/2010/10/03/entity-framework-using-include-with-lambda-expressions/
Edit (By Asker for readability):
The part you are looking for is below:
public static class ObjectQueryExtensions
{
    public static ObjectQuery<T> Include<T>(this ObjectQuery<T> query, Expression<Func<T, object>> selector)
    {
        string path = new PropertyPathVisitor().GetPropertyPath(selector);
        return query.Include(path);
    }

    class PropertyPathVisitor : ExpressionVisitor
    {
        private Stack<string> _stack;

        public string GetPropertyPath(Expression expression)
        {
            _stack = new Stack<string>();
            Visit(expression);
            return _stack
                .Aggregate(
                    new StringBuilder(),
                    (sb, name) =>
                        (sb.Length > 0 ? sb.Append(".") : sb).Append(name))
                .ToString();
        }

        protected override Expression VisitMember(MemberExpression expression)
        {
            if (_stack != null)
                _stack.Push(expression.Member.Name);
            return base.VisitMember(expression);
        }

        protected override Expression VisitMethodCall(MethodCallExpression expression)
        {
            if (IsLinqOperator(expression.Method))
            {
                for (int i = 1; i < expression.Arguments.Count; i++)
                {
                    Visit(expression.Arguments[i]);
                }
                Visit(expression.Arguments[0]);
                return expression;
            }
            return base.VisitMethodCall(expression);
        }

        private static bool IsLinqOperator(MethodInfo method)
        {
            if (method.DeclaringType != typeof(Queryable) && method.DeclaringType != typeof(Enumerable))
                return false;
            return Attribute.GetCustomAttribute(method, typeof(ExtensionAttribute)) != null;
        }
    }
}
