After reading the "Asynchronous queries with the Java driver" article on the DataStax blog, I tried to implement a solution similar to the one in the section called 'Case study: multi-partition query, a.k.a. "client-side SELECT...IN"'.
I currently have code that looks something like this:
public ListenableFuture<List<ResultSet>> executeMultipleAsync(final BoundStatement statement, final Object... partitionKeys) {
    List<ListenableFuture<ResultSet>> futures = Lists.newArrayListWithExpectedSize(partitionKeys.length);
    for (Object partitionKey : partitionKeys) {
        Statement bs = statement.bind(partitionKey);
        futures.add(executeWithRetry(bs));
    }
    return Futures.successfulAsList(futures);
}
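For context, executeWithRetry isn't shown here; a minimal sketch of what it might look like, assuming a session field and Guava's pre-20 Futures.withFallback for a single retry (the name, the field, and the retry policy are all illustrative):
private ListenableFuture<ResultSet> executeWithRetry(final Statement statement) {
    // Illustrative sketch only: one blind retry on any failure.
    ListenableFuture<ResultSet> attempt = session.executeAsync(statement);
    return Futures.withFallback(attempt, new FutureFallback<ResultSet>() {
        @Override
        public ListenableFuture<ResultSet> create(Throwable t) {
            // A real policy would inspect t and bound the number of retries.
            return session.executeAsync(statement);
        }
    });
}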
But I'd like to improve on that. In the CQL query this BoundStatement holds, I'd like to have something like this:
SELECT * FROM <column_family_name> WHERE <param1> = :p1_name AND <param2> = :p2_name AND <partition_key_name> = ?;
I'd like the clients of this method to give me a BoundStatement with the first two parameters already bound, plus a list of partition keys. In that case, all I'd need to do is bind the partition keys and execute the queries. Unfortunately, when I bind the key to this statement it fails with an error: com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for value 0 of CQL type varchar, expecting class java.lang.String but class java.lang.Long provided. The problem is that the driver tries to bind the key to the first parameter rather than the last, and the first is a string, not a long.
I could solve this either by giving the partition parameter a name, but then I'd have to receive that name via a method parameter, or by specifying its index, which again requires an additional method parameter. Either way, whether I use the name or the index, I have to bind it with a specific type, for instance bs.setLong("<key_name>", partitionKey). For some reason, I can't leave it to the BoundStatement to interpret the type of the last parameter.
I'd like to avoid passing the parameter name explicitly and bypass the type problem. Is there anything that can be done?
Thanks!
I've posted the same question on the 'DataStax Java Driver for Apache Cassandra User Mailing List' and got an answer saying that the functionality I'm missing may be added in the next version (2.2) of the DataStax Java driver:
In JAVA-721 (to be introduced in 2.2) we are tentatively planning on adding the following methods with the signature to BoundStatement:
public <V> BoundStatement setObject(int i, V v)
public <V> BoundStatement setObject(String name, V v)
and
You can emulate setObject in 2.1:
void setObject(BoundStatement bs, int position, Object object,
               ProtocolVersion protocolVersion) {
    DataType type = bs.preparedStatement().getVariables().getType(position);
    ByteBuffer buffer = type.serialize(object, protocolVersion);
    bs.setBytesUnsafe(position, buffer);
}
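A hypothetical usage of that helper, binding a sample Long key at the last position (ProtocolVersion.NEWEST_SUPPORTED is a simplification; ideally use the version actually negotiated with your cluster):
// Hypothetical: bind the partition key at the last position without naming it.
int lastPosition = bs.preparedStatement().getVariables().size() - 1;
setObject(bs, lastPosition, 42L, ProtocolVersion.NEWEST_SUPPORTED);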
To avoid passing the parameter name, one thing you could do is look for a position that isn't bound yet:
int findUnsetPosition(BoundStatement bs) {
    int size = bs.preparedStatement().getVariables().size();
    for (int i = 0; i < size; i++) {
        if (!bs.isSet(i)) {
            return i;
        }
    }
    throw new IllegalArgumentException("found no unset position");
}
I don't recommend it though, because it's ugly and unpredictable if the user forgot to bind one of the non-PK variables.
The way I would do it is require the user to pass a callback that sets the PK:
interface PKBinder<T> {
    void bind(BoundStatement bs, T pk);
}
public <T> ListenableFuture<List<ResultSet>> executeMultipleAsync(final BoundStatement statement, PKBinder<T> pkBinder, final T... partitionKeys)
As a bonus, this will also work with composite partition keys.
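For illustration, a caller that knows the key's name and type could pass something like this (the key name "partition_key" and the Long type are assumptions, not part of the original answer):
PKBinder<Long> byKey = new PKBinder<Long>() {
    @Override
    public void bind(BoundStatement bs, Long pk) {
        bs.setLong("partition_key", pk); // hypothetical key name
    }
};
executeMultipleAsync(statement, byKey, 1L, 2L, 3L);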
Suppose I have two tables, USER_GROUP and USER_GROUP_DATASOURCE. I have a classic relation where one userGroup can have multiple dataSources, and a DataSource is simply a String.
For various reasons, I have a custom RecordMapper creating a Java UserGroup POJO (mainly for compatibility with the other code in the codebase and for always being explicit about what's happening). This mapper sometimes creates plain POJOs containing only data from the USER_GROUP table, and sometimes also the left-joined dataSources.
Currently, I am trying to write the multiset query along with the custom record mapper. My query so far looks like this:
List<UserGroup> userGroups = ctx
    .select(
        asterisk(),
        multiset(select(USER_GROUP_DATASOURCE.DATASOURCE_ID)
            .from(USER_GROUP_DATASOURCE)
            .where(USER_GROUP.ID.eq(USER_GROUP_DATASOURCE.USER_GROUP_ID))
        ).as("datasources").convertFrom(r -> r.map(Record1::value1))
    )
    .from(USER_GROUP)
    .where(condition)
    .fetch(new UserGroupMapper());
Now my question is: how do I create the UserGroupMapper? I am stuck right here:
public class UserGroupMapper implements RecordMapper<Record, UserGroup> {
    @Override
    public UserGroup map(Record rec) {
        UserGroup grp = new UserGroup(rec.getValue(USER_GROUP.ID),
                rec.getValue(USER_GROUP.NAME),
                rec.getValue(USER_GROUP.DESCRIPTION),
                javaParseTags(rec.getValue(USER_GROUP.TAGS))
        );
        // Convention: if we have an additional field "datasources", we assume it is a list of dataSources to be filled in
        if (rec.indexOf("datasources") >= 0) {
            // How to make `rec.getValue` return my List<String>????
            List<String> dataSources = ?????
            grp.dataSources.addAll(dataSources);
        }
        return grp;
    }
}
My guess is to have something like List<String> dataSources = rec.getValue(..) where I pass in a Field<List<String>>, but I have no clue how I could create such a Field<List<String>> with something like DSL.field().
How to get a type-safe reference to your field from your RecordMapper
There are mostly two ways to do this:
1. Keep a reference to your multiset() field definition somewhere, and reuse that. Keep in mind that every jOOQ query is a dynamic SQL query, so you can use this feature of jOOQ to assign arbitrary query fragments to local variables (or return them from methods) in order to improve code reuse; see the sketch after this list.
2. You can just raw type cast the value, and not care about type safety. It's always an option, even if not the cleanest one.
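As a sketch of the first option, the multiset field from the question's own query can be extracted into a variable that both the query and the mapper reference (names are illustrative):
// Define the field once, reuse it for projection and for reading the record.
Field<List<String>> datasources =
    multiset(select(USER_GROUP_DATASOURCE.DATASOURCE_ID)
        .from(USER_GROUP_DATASOURCE)
        .where(USER_GROUP.ID.eq(USER_GROUP_DATASOURCE.USER_GROUP_ID)))
    .as("datasources")
    .convertFrom(r -> r.map(Record1::value1));

// In the query:  .select(asterisk(), datasources)
// In the mapper: List<String> list = rec.get(datasources);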
How to improve your query
Unless you're reusing that RecordMapper several times for different types of queries, why not use Java's type inference instead? The main reason you're not getting type information in your output is your asterisk() usage. But what if you did this instead:
List<UserGroup> userGroups = ctx
    .select(
        USER_GROUP, // Instead of asterisk()
        multiset(
            select(USER_GROUP_DATASOURCE.DATASOURCE_ID)
            .from(USER_GROUP_DATASOURCE)
            .where(USER_GROUP.ID.eq(USER_GROUP_DATASOURCE.USER_GROUP_ID))
        ).as("datasources").convertFrom(r -> r.map(Record1::value1))
    )
    .from(USER_GROUP)
    .where(condition)
    .fetch(r -> {
        UserGroupRecord ug = r.value1();
        List<String> list = r.value2(); // Type information available now
        // ...
    });
There are other ways than the above, which uses jOOQ 3.17+'s support for Table as SelectField; e.g. in jOOQ 3.16+, you can use row(USER_GROUP.fields()).
The important part is that you avoid the asterisk() expression, which removes type safety. You could even convert the USER_GROUP to your UserGroup type using USER_GROUP.convertFrom(r -> ...) when you project it:
List<UserGroup> userGroups = ctx
    .select(
        USER_GROUP.convertFrom(r -> ...),
        // ...
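The elided lambda could, for instance, build the POJO directly. A sketch assuming jOOQ's generated getters on UserGroupRecord and the constructor from the question (this completion is not part of the original answer):
USER_GROUP.convertFrom(r -> new UserGroup(
    r.getId(), r.getName(), r.getDescription(), javaParseTags(r.getTags())))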
I am trying to replicate the following type of SQL query that you can perform in SQL Server. The important part here is the WHERE clause:
SELECT InventoryCD FROM InventoryItem WHERE InventoryCD IN ('123123', '154677', '445899', '998766')
It works perfectly using the In3<> operator and a series of string constants, i.e.:
And<InventoryItem.inventoryCD, In3<constantA, constantB, constantC>>,
However, I need to be able to do this with an arbitrarily long list of values in an array, and I need to be able to set the values dynamically at runtime.
I'm not sure what type I need to pass in to the In<> statement in my PXProjection query. I have been playing around with the following approach, but it throws a compiler error.
public class SOSiteStatusFilterExt : PXCacheExtension<SOSiteStatusFilter>
{
    public static bool IsActive()
    {
        return true;
    }

    public abstract class searchitemsarray : PX.Data.IBqlField
    {
    }

    [PXUnboundDefault()]
    public virtual string[] Searchitemsarray { get; set; }
}
I think maybe I need an array of PXString objects? I'm really not sure, and there isn't any documentation that is helpful. Can anyone help?
This shows how to do it with a regular PXSelect: https://asiablog.acumatica.com/2017/11/sql-in-operator-in-bql.html
But I need to be able to pass in the correct type using Select2...
For reference, I will post here the example mentioned by Hugues in the comments.
If you need to generate a query with an arbitrary list of values generated at runtime like this:
Select * from InventoryItem InventoryItem
Where InventoryItem.InventoryCD IN ('123123', '154677', '445899', '998766')
Order by InventoryItem.InventoryCD
You would write something like this:
Object[] values = new String[] { "123123", "154677", "445899", "998766" };

InventoryItem item = PXSelect<InventoryItem,
    Where<InventoryItem.inventoryCD,
        In<Required<InventoryItem.inventoryCD>>>>.Select(Base, values);
Please note that the In<> operator is available only with a Required<> parameter, and you need to pass the array of possible values manually to the Select(…) method parameters. So you need to fill this array with your list before calling the Select method.
Also, the Required<> parameter should be used only in the BQL statements that are directly executed in the application code. The data views that are queried from the UI will not work if they contain Required<> parameters.
I ended up creating 100 variables and using the BQL OR operator, i.e.
And2<Where<InventoryItem.inventoryCD, Equal<CurrentValue<SOSiteStatusFilterExt.Pagefilter1>>,
Or<InventoryItem.inventoryCD, Equal<CurrentValue<SOSiteStatusFilterExt.Pagefilter2>>,
Or<InventoryItem.inventoryCD, Equal<CurrentValue<SOSiteStatusFilterExt.Pagefilter3>>,
Or<InventoryItem.inventoryCD, Equal<CurrentValue<SOSiteStatusFilterExt.Pagefilter4>>,
etc...etc...
You can then set the value of Pagefilter1, 2, etc. inside the FieldSelecting event for SOSiteStatusFilter.inventory, as an example. The key insight here, which isn't obvious to the uninitiated in Acumatica, is that all variables parameterized in SQL Server via BQL are nullable. If a variable is null when the query is run, it is disabled in the SQL procedure call using a "bit flipping" approach. In normal T-SQL, this would throw an error, but Acumatica's framework handles a NULL field equality by disabling that variable inside the SQL procedure before the equality is evaluated.
Performance with this approach was very good, especially because we are querying on an indexed key field (InventoryCD isn't technically the primary key, but it is basically a close second).
I don't know how to describe the problem, it is so weird. I have a function like this:
long getPersonId(...){
//...
}
The above function returns the Id of a person based on some arguments.
So I logged the return value of the function and it is 1.
Then I have code like this:
person = myMap.get(getPersonId(..))
which returns a null object, but this returns a valid Person object. Why?
person = myMap.get(1)
But as I described before, getPersonId(..) returns 1, which basically means
myMap.get(getPersonId(..)) == myMap.get(1)
myMap is typed as Map<Long, Person>.
What is happening here?
In Groovy, as in Java, 1 is an int literal, not a long, so
myMap.get(1)
is attempting to look up the key Integer.valueOf(1), whereas
myMap.get(getPersonId(..))
is looking up the key Long.valueOf(getPersonId(...)). You need to make sure that when you populate the map you are definitely using Long keys rather than Integer ones, e.g.
myMap.put(1L, somePerson)
In your original version of this question you were calling the GORM get method on a domain class rather than the java.util.Map.get method, and that should work as required as the GORM method call converts the ID to the appropriate type for you before passing it on to Hibernate.
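To see the mismatch in isolation, here is a minimal Java sketch (hypothetical map and values):
import java.util.HashMap;
import java.util.Map;

public class KeyBoxingDemo {
    public static void main(String[] args) {
        Map<Long, String> byId = new HashMap<>();
        byId.put(1L, "Alice");            // key stored as Long.valueOf(1)

        System.out.println(byId.get(1));  // null: looks up Integer.valueOf(1)
        System.out.println(byId.get(1L)); // Alice: looks up Long.valueOf(1)
    }
}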
I am so sorry, the problem was in how I initialized the map myMap:
Map<Long, Person> myMap = [1: new Person()]
When you write something like this, the key is an Integer, not a Long, and Groovy still does not complain.
So the problem was that my method returned a long value (1L) but the actual key in the map was the integer value (1).
Changing my map initialization to Map<Long, Person> myMap = [1L: new Person()] solved the problem.
This is probably due to the dynamic nature of Groovy, but it is irritating unless you know how dynamic languages behave.
I have a cache that looks like this:
Key: UUID
Value: Array[Long]
I want to get the key corresponding to a specific value. What does the where-part look like?
I have tried "value = ?" and "_value = ?" but these obviously don't work.
Yo!
What do you mean by Array[Long]? Is it just long[], or maybe you are using Scala?
The simplest (but not the most effective) way would be to use a scan query with a predicate (docs here).
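As a sketch of that approach, a scan query with a predicate could look roughly like this in the Apache Ignite API (ScanQuery and IgniteBiPredicate; the original thread may predate this API, so treat the names as assumptions):
import java.util.Arrays;
import java.util.UUID;
import javax.cache.Cache;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

class FindKeyByValue {
    // Returns the first key whose long[] value equals `wanted`, or null.
    static UUID findKey(IgniteCache<UUID, long[]> cache, long[] wanted) {
        ScanQuery<UUID, long[]> qry =
            new ScanQuery<>((key, value) -> Arrays.equals(value, wanted));
        try (QueryCursor<Cache.Entry<UUID, long[]>> cursor = cache.query(qry)) {
            for (Cache.Entry<UUID, long[]> e : cursor)
                return e.getKey();
        }
        return null;
    }
}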
To achieve the best performance it is preferable to have your value indexed, so I'd recommend having a special wrapper class for your array like this:
class MyArray implements Comparable<MyArray>, Serializable {
    private long[] arr;

    // implement compareTo, hashCode, equals

    @GridCacheQuerySqlField(index = true)
    public MyArray indexedArray() {
        return this;
    }
}
then the query will look like
"indexedArray = ?"
where you pass a MyArray instance as the parameter. It is a bit of a dirty way to index a value, but this is the only way right now.
By the way, the column names for the cache key and value in SQL are _key and _val, not _value, so you can write queries like
"_val = ? and _key > ?"
Of course, you have to make sure that your key and value classes at least implement hashCode and equals correctly; implementing Comparable is preferable.
I am trying to pass a string as a value in the mapper, but I am getting an error that it is not Writable. How do I resolve this?
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
    String TempString = value.toString();
    String[] SingleRecord = TempString.split("\t");
    // using Integer.parseInt to calculate profit
    int Amount = Integer.parseInt(SingleRecord[7]);
    int Asset = Integer.parseInt(SingleRecord[8]);
    int SalesPrice = Integer.parseInt(SingleRecord[9]);
    int Profit = Amount * (SalesPrice - Asset);
    String ValueProfit = String.valueOf(Profit);
    String ValueOne = String.valueOf(one);
    custID.set(SingleRecord[2]);
    data.set(ValueOne + ValueProfit);
    context.write(custID, data);
}
Yahoo's tutorial says:
Objects which can be marshaled to or from files and across the network must obey a particular interface, called Writable, which allows Hadoop to read and write the data in a serialized form for transmission.
From the Cloudera site:
The key and value classes must be serializable by the framework and hence must implement the Writable interface. Additionally, the key classes must implement the WritableComparable interface to facilitate sorting.
So you need an implementation of Writable to write it as a value in the context. Hadoop ships with a few stock classes such as IntWritable. The String counterpart you are looking for is the Text class. It can be used as:
context.write(custID, new Text(data));
OR
Text outValue = new Text();
outValue.set(data);
context.write(custID, outValue);
In case you need specialized functionality in the value class, you may implement Writable yourself (not a big deal after all). However, it seems like Text is just enough for you.
Also note that you haven't set data as a Text in the map function, in line with the Text import above; TextWritable is wrong, just use Text there as well.
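Putting the advice together, a minimal self-contained version of the mapper might look like this (the class name, the `one` field, and the surrounding job setup are assumptions, not from the original question):
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ProfitMapper extends Mapper<LongWritable, Text, Text, Text> {
    private static final IntWritable one = new IntWritable(1); // assumed, mirrors `one` above
    private final Text custID = new Text();
    private final Text data = new Text(); // Text, not String (and not "TextWritable")

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] record = value.toString().split("\t");
        int amount = Integer.parseInt(record[7]);
        int asset = Integer.parseInt(record[8]);
        int salesPrice = Integer.parseInt(record[9]);
        int profit = amount * (salesPrice - asset);

        custID.set(record[2]);
        data.set(String.valueOf(one.get()) + String.valueOf(profit));
        context.write(custID, data);
    }
}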