Hazelcast IMap LRU Eviction Policy always evicting the latest entry

We were exploring the different eviction policy options and found that the LRU eviction policy evicts the newly added entry instead of the least recently used entry. It behaves the same way with the LFU option as well.
Ideally, the first entry should be evicted before the second or third one.
Is this a bug in Hazelcast or am I missing any option/configuration?
Hazelcast Version - 3.6.2
Here is the code sample to replicate the issue (run with -Xmx512m):
import com.hazelcast.config.Config;
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.MaxSizeConfig;
import com.hazelcast.core.EntryEvent;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.listener.EntryEvictedListener;
import java.util.concurrent.ConcurrentHashMap;
public class HazelcastMaxSizeTest {
private static final String GROUP_NAME = "TEST";
private static final String MAP_NAME = "test";
private static final ConcurrentHashMap<Long, byte[]> theDataMap = new ConcurrentHashMap<Long, byte[]>();
private static MaxSizeConfig USED_HEAP_PERCENTAGE = new MaxSizeConfig(5, MaxSizeConfig.MaxSizePolicy.USED_HEAP_PERCENTAGE);
private static MaxSizeConfig USED_HEAP_SIZE = new MaxSizeConfig(128, MaxSizeConfig.MaxSizePolicy.USED_HEAP_SIZE);
private static MaxSizeConfig FREE_HEAP_SIZE = new MaxSizeConfig(128, MaxSizeConfig.MaxSizePolicy.FREE_HEAP_SIZE);
private static MaxSizeConfig FREE_HEAP_PERCENTAGE = new MaxSizeConfig(30, MaxSizeConfig.MaxSizePolicy.FREE_HEAP_PERCENTAGE);
public static void main(String[] args) throws Exception {
boolean storeOutSideHazelcast = true;
HazelcastInstance instance = startHazelcast("hazelcast1", FREE_HEAP_PERCENTAGE);
System.out.println("started " + instance.getName());
IMap<Long, byte[]> map = createMap(instance, MAP_NAME, storeOutSideHazelcast);
System.out.println("map size: " + map.size());
instance.shutdown();
System.exit(0);
}
private static HazelcastInstance startHazelcast(String instanceName, MaxSizeConfig maxSizeConfig) {
MapConfig mapConfig = new MapConfig(MAP_NAME);
mapConfig.setMaxSizeConfig(maxSizeConfig);
mapConfig.setStatisticsEnabled(true);
mapConfig.setEvictionPolicy(EvictionPolicy.LRU);
mapConfig.setEvictionPercentage(10);
mapConfig.setMinEvictionCheckMillis(0L);
mapConfig.setBackupCount(0);
mapConfig.setTimeToLiveSeconds(600);
Config config = new Config(instanceName);
config.addMapConfig(mapConfig);
config.getManagementCenterConfig().setEnabled(true);
config.getManagementCenterConfig().setUrl("http://localhost:8080/mancenter-3.6.2.war");
config.getGroupConfig().setName(GROUP_NAME).setPassword(GROUP_NAME);
return Hazelcast.getOrCreateHazelcastInstance(config);
}
private static IMap<Long, byte[]> createMap(HazelcastInstance instance,
String mapname
, boolean storeOutsideHZMap ) {
IMap<Long, byte[]> map = instance.getMap(mapname);
map.addEntryListener(new EntryEvictedListener<Long, byte[]>() {
@Override
public void entryEvicted(EntryEvent<Long, byte[]> event) {
System.out.println("Evicted Key" + ": " + event.getKey());
theDataMap.remove(event.getKey());
}
}, false);
for (long i = 1; i <= 1000; i++) {
if(storeOutsideHZMap == true) {
theDataMap.put(i, new byte[50 * 50000]);
map.set(i, new byte[10]);
System.out.println("Adding Key " + ": " + i);
map.get(i);
}else {
map.set(i, new byte[50*50000]);
}
if (i % 100 == 0) {
// System.out.println("set " + map.getName() + ": " + i);
try {
Thread.sleep(5000L);
} catch (Exception e) {}
}
}
return map;
}
}
Sample Output:
Adding Key : 147
Evicted Key: 147
Adding Key : 148
Evicted Key: 148
Adding Key : 149
Evicted Key: 149
Adding Key : 150
Evicted Key: 150
Adding Key : 151
Evicted Key: 151
Adding Key : 152
Evicted Key: 152
Adding Key : 153
Evicted Key: 153
Adding Key : 154
Evicted Key: 154
Adding Key : 155
Evicted Key: 155
Adding Key : 156
Evicted Key: 156
Adding Key : 157
Evicted Key: 157

Related

Cassandra 3.6.0 bug: StackOverflowError thrown by HashedWheelTimer on Connection.release

When running some inserts and updates on a Cassandra database via the Java driver version 3.6.0, I get the following StackOverflowError, of which I am showing just the top here; the last 10 rows repeat endlessly.
There is no mention of any line in my code, so I don't know which specific operation invoked this.
2018-09-03 00:19:58,294 WARN {cluster1-timeouter-0} [c.d.s.n.u.HashedWheelTimer] : An exception was thrown by TimerTask.
java.lang.StackOverflowError: null
at java.util.regex.Pattern$Branch.match(Pattern.java:4604)
at java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)
at java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)
at java.util.regex.Pattern$Curly.match0(Pattern.java:4279)
at java.util.regex.Pattern$Curly.match(Pattern.java:4234)
at java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)
at java.util.regex.Pattern$Branch.match(Pattern.java:4604)
at java.util.regex.Pattern$Branch.match(Pattern.java:4602)
at java.util.regex.Pattern$BmpCharProperty.match(Pattern.java:3798)
at java.util.regex.Pattern$Start.match(Pattern.java:3461)
at java.util.regex.Matcher.search(Matcher.java:1248)
at java.util.regex.Matcher.find(Matcher.java:664)
at java.util.Formatter.parse(Formatter.java:2549)
at java.util.Formatter.format(Formatter.java:2501)
at java.util.Formatter.format(Formatter.java:2455)
at java.lang.String.format(String.java:2940)
at com.datastax.driver.core.exceptions.BusyConnectionException.<init>(BusyConnectionException.java:29)
at com.datastax.driver.core.Connection$ResponseHandler.<init>(Connection.java:1538)
at com.datastax.driver.core.Connection.write(Connection.java:711)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.write(RequestHandler.java:451)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.access$1600(RequestHandler.java:307)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onSuccess(RequestHandler.java:397)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onSuccess(RequestHandler.java:384)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1355)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:398)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1024)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:866)
at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:689)
at com.google.common.util.concurrent.SettableFuture.set(SettableFuture.java:48)
at com.datastax.driver.core.HostConnectionPool$PendingBorrow.set(HostConnectionPool.java:755)
at com.datastax.driver.core.HostConnectionPool.dequeue(HostConnectionPool.java:407)
at com.datastax.driver.core.HostConnectionPool.returnConnection(HostConnectionPool.java:366)
at com.datastax.driver.core.Connection.release(Connection.java:810)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onSuccess(RequestHandler.java:407)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onSuccess(RequestHandler.java:384)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1355)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:398)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1024)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:866)
at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:689)
at com.google.common.util.concurrent.SettableFuture.set(SettableFuture.java:48)
at com.datastax.driver.core.HostConnectionPool$PendingBorrow.set(HostConnectionPool.java:755)
at com.datastax.driver.core.HostConnectionPool.dequeue(HostConnectionPool.java:407)
at com.datastax.driver.core.HostConnectionPool.returnConnection(HostConnectionPool.java:366)
at com.datastax.driver.core.Connection.release(Connection.java:810)
I do not use any UDTs.
Here is the keyspace and table creation code:
session.execute(session.prepare(
"CREATE KEYSPACE IF NOT EXISTS myspace WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'dc1': '3'} AND DURABLE_WRITES = true;").bind());
session.execute(session.prepare("CREATE TABLE IF NOT EXISTS myspace.tasks (myId TEXT PRIMARY KEY, pointer BIGINT)").bind());
session.execute(session.prepare("CREATE TABLE IF NOT EXISTS myspace.counters (key TEXT PRIMARY KEY, cnt COUNTER)").bind());
This is the prepared statement that I use:
PreparedStatement quickSearchTasksInsert = session.prepare("INSERT INTO myspace.tasks (myId, pointer) VALUES (:oid,:loc)");
The code that reproduces the issue does the following:
1. Runs the writeTask() method about 10,000 times with different values, such as the following example rows selected from a SQL database:
05043FA57ECEAABC3E096B281A55356B, 1678192046
5DE661E77D19C157C31EB7309494EA89, 3959390363
85D6211384E6E190299093E501169625, 3146521416
0327817F8BD59039069C13D581E8EBBE, 2907072247
D913FA0F306D6516D8DF87EB0CB1EE9B, 2507147331
DC946B409CD1E59F560A0ED75559CB16, 2810148057
2A24B1DC71D395938BA77C6CA822A5F7, 1182061065
F70705303980DA40D125CC3497174A5D, 1735385855
2. Runs the setLocNum() method with some Long number.
3. Loops back to (1) above.
public void writeTask(String myId, long pointer) {
try {
session.executeAsync(quickSearchTasksInsert.bind().setString("oid",myId).setLong("loc", pointer));
incrementCounter("tasks_count", 1);
} catch (OperationTimedOutException | NoHostAvailableException e) {
// some error handling omitted from post
}
}
public synchronized void setLocNum(long num) {
setCounter("loc_num", num);
}
public void incrementCounter(String key, long incVal) {
try {
session.executeAsync(
"UPDATE myspace.counters SET cnt = cnt + " + incVal + " WHERE key = '" + key.toLowerCase() + "'");
} catch (OperationTimedOutException | NoHostAvailableException e) {
// some error handling omitted from post
}
}
public void decrementCounter(String key, long decVal) {
try {
session.executeAsync(
"UPDATE myspace.counters SET cnt = cnt - " + decVal + " WHERE key = '" + key.toLowerCase() + "'");
} catch (OperationTimedOutException | NoHostAvailableException e) {
// some error handling omitted from post
}
}
public synchronized void setCounter(String key, long newVal) {
try {
Long prevCounterValue = countersCache.get(key);
long oldCounter = prevCounterValue == null ? readCounter(key) : prevCounterValue.longValue();
decrementCounter(key, oldCounter);
incrementCounter(key, newVal);
countersCache.put(key, newVal);
} catch (OperationTimedOutException | NoHostAvailableException e) {
// some error handling omitted from post
}
}

How can I generate a valid ECDSA EC key pair?

I am trying to generate an ECDSA key pair using SpongyCastle on Android.
This is the code:
static {
Security.insertProviderAt(new org.spongycastle.jce.provider.BouncyCastleProvider(), 1);
}
public static KeyPair generate() throws GeneralSecurityException {
ECParameterSpec ecSpec = ECNamedCurveTable.getParameterSpec("prime256v1");
KeyPairGenerator generator = KeyPairGenerator.getInstance("ECDSA", "SC");
generator.initialize(ecSpec, new SecureRandom());
KeyPair keyPair = generator.generateKeyPair();
Log.i(TAG, "EC Pub Key generated: " + utils.bytesToHex(keyPair.getPublic().getEncoded()));
Log.i(TAG, "EC Private Key generated: " + utils.bytesToHex(keyPair.getPrivate().getEncoded()));
return keyPair;
}
Something is wrong, since I always get something like this example of a
Public Key:
3059301306072A8648CE3D020106082A8648CE3D03010703420004483ABA9F322240010ECF00E818C041A60FE71A2BD64C64CD5A60519985F110AEDE6308027D2730303F5E2478F083C7F5BB683DCAC22BFEB62F3A48BD01009F40
and Private Key:
308193020100301306072A8648CE3D020106082A8648CE3D030107047930770201010420219AB4B3701630973A4B2917D53F69A4BE6DAD61F48016BFEF147B2999575CB2A00A06082A8648CE3D030107A14403420004483ABA9F322240010ECF00E818C041A60FE71A2BD64C64CD5A60519985F110AEDE6308027D2730303F5E2478F083C7F5BB683DCAC22BFEB62F3A48BD01009F40
The ECDSA sample site gives me an "Invalid ECDSA signature message" error, and my keys look very different from the ones generated on that same site, which have a much shorter private key and a public key that always starts with "04".
Also, my backend verification gives me the error "Invalid point encoding 0x30".
The backend Java verification method is:
public ECPublicKey getPublicKeyFromHex(String publicKeyHex)
throws NoSuchAlgorithmException, DecoderException, ApplicationGenericException {
byte[] rawPublicKey = Hex.decodeHex(publicKeyHex.toCharArray());
ECPublicKey ecPublicKey = null;
KeyFactory kf = null;
ECNamedCurveParameterSpec ecNamedCurveParameterSpec = ECNamedCurveTable.getParameterSpec("prime256v1");
ECCurve curve = ecNamedCurveParameterSpec.getCurve();
EllipticCurve ellipticCurve = EC5Util.convertCurve(curve, ecNamedCurveParameterSpec.getSeed());
java.security.spec.ECPoint ecPoint = ECPointUtil.decodePoint(ellipticCurve, rawPublicKey);
java.security.spec.ECParameterSpec ecParameterSpec = EC5Util.convertSpec(ellipticCurve,
ecNamedCurveParameterSpec);
java.security.spec.ECPublicKeySpec publicKeySpec = new java.security.spec.ECPublicKeySpec(ecPoint,
ecParameterSpec);
kf = KeyFactory.getInstance("ECDSA", new BouncyCastleProvider());
try {
ecPublicKey = (ECPublicKey) kf.generatePublic(publicKeySpec);
} catch (Exception e) {
throw new ApplicationGenericException(e.getMessage(), e.getCause());
}
return ecPublicKey;
}
Java's default encoding for a PublicKey is "X.509" which is not just the EC point; it is an ASN.1 structure identifying the algorithm (EC) and parameters (here prime256v1) PLUS a BIT STRING wrapping the point; see rfc5280 section 4.2.1.7 and rfc3279 section 2.3.5.
Similarly the default encoding for PrivateKey is "PKCS#8" (unencrypted) which is a structure containing an AlgorithmIdentifier plus an OCTET STRING wrapping the data which in this case contains both the private key value and a copy of the public key, see rfc5208 section 5 and C.4 of document SEC 1 at http://www.secg.org with tag [0] omitted but tag [1] present.
To read either or both of them back into Java, get a KeyFactory.getInstance("EC") and use generate{Public,Private} on an X509EncodedKeySpec or PKCS8EncodedKeySpec respectively.
ECDSA and ECDH (and ECMQV etc) use the same key structures, unlike classic integer DSA and DH which use the same mathematical structure ($Z_p^*$) but slightly different representations.
PS: the javadoc for java.security.Key tells you most of this.
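For illustration, here is a minimal sketch of that round trip (my own example using the plain JCE "EC" provider rather than SpongyCastle):
import java.security.KeyFactory;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.spec.ECGenParameterSpec;
import java.security.spec.PKCS8EncodedKeySpec;
import java.security.spec.X509EncodedKeySpec;
public class EcKeyRoundTrip {
    public static void main(String[] args) throws Exception {
        // Generate a P-256 (prime256v1/secp256r1) key pair with the default provider.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(new ECGenParameterSpec("secp256r1"));
        KeyPair kp = kpg.generateKeyPair();
        // getEncoded() returns X.509 SubjectPublicKeyInfo / PKCS#8 PrivateKeyInfo DER.
        byte[] pubX509 = kp.getPublic().getEncoded();
        byte[] privPkcs8 = kp.getPrivate().getEncoded();
        // Read both back in through a KeyFactory and the matching EncodedKeySpec.
        KeyFactory kf = KeyFactory.getInstance("EC");
        PublicKey pub = kf.generatePublic(new X509EncodedKeySpec(pubX509));
        PrivateKey priv = kf.generatePrivate(new PKCS8EncodedKeySpec(privPkcs8));
        System.out.println(pub.getFormat() + " / " + priv.getFormat()); // X.509 / PKCS#8
    }
}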
A more practical example: convert a generated public key to a raw byte array or hex string:
public String getPublicKeyAsHex(PublicKey publicKey){
ECPublicKey ecPublicKey = (ECPublicKey)publicKey;
ECPoint ecPoint = ecPublicKey.getW();
byte[] publicKeyBytes = new byte[PUBLIC_KEY_LENGTH];
writeToStream(publicKeyBytes, 0, ecPoint.getAffineX(), PRIVATE_KEY_LENGTH);
writeToStream(publicKeyBytes, PRIVATE_KEY_LENGTH, ecPoint.getAffineY(), PRIVATE_KEY_LENGTH);
String hex = Hex.toHexString(publicKeyBytes);
logger.debug("Public key bytes: " + Arrays.toString(publicKeyBytes));
logger.debug("Public key hex: " + hex);
return hex;
}
private void writeToStream(byte[] stream, int start, BigInteger value, int size) {
byte[] data = value.toByteArray();
int length = Math.min(size, data.length);
int writeStart = start + size - length;
int readStart = data.length - length;
System.arraycopy(data, readStart, stream, writeStart, length);
}
Convert the decoded byte array back to a PublicKey:
KeyFactory factory = KeyFactory.getInstance(ALGORITHM, ALGORITHM_PROVIDER);
ECNamedCurveParameterSpec spec = ECNamedCurveTable.getParameterSpec(CURVE);
ECNamedCurveSpec params = new ECNamedCurveSpec(CURVE, spec.getCurve(), spec.getG(), spec.getN());
BigInteger xCoordinate = new BigInteger(1, Arrays.copyOfRange(decodedPublicKey, 0, PRIVATE_KEY_LENGTH));
BigInteger yCoordinate = new BigInteger(1, Arrays.copyOfRange(decodedPublicKey, PRIVATE_KEY_LENGTH, PUBLIC_KEY_LENGTH));
java.security.spec.ECPoint w = new java.security.spec.ECPoint(xCoordinate, yCoordinate);
PublicKey encodedPublicKey = factory.generatePublic(new java.security.spec.ECPublicKeySpec(w, params));
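The two snippets above rely on a few constants that are not shown; for prime256v1 they would presumably look something like this (my assumption, not from the original answer):
// Assumed values (not shown above) for the prime256v1 / P-256 curve:
static final String ALGORITHM = "ECDSA";
static final String ALGORITHM_PROVIDER = "BC";  // or "SC" with SpongyCastle on Android
static final String CURVE = "prime256v1";
static final int PRIVATE_KEY_LENGTH = 32;       // one 256-bit coordinate
static final int PUBLIC_KEY_LENGTH = 64;        // X || Y of the uncompressed point, without the 0x04 prefix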

Haxe – Proper way to implement Map with Int64 keys that can be serialized (native target)

I need to know what the proper way is to implement Maps with 64-bit keys. There will not be many items in them; I just need to use various bits of the key for various things within a large enough address space, and it needs to be very fast, so String keys would probably be too slow. So far I have tried:
import haxe.Int64;
import haxe.Unserializer;
import haxe.Serializer;
class Test {
static function main () {
var key:Int64 = 1 << 63 | 0x00000001;
var omap:Map<Int64, String> = new Map<Int64, String>();
omap.set(key, "test");
var smap:Map<Int64, String> = Unserializer.run(Serializer.run(omap));
var key2:Int64 = 1 << 63 | 0x00000001;
trace(key+" "+smap.get(key2));
}
}
http://try.haxe.org/#7CDb2
which obviously doesn't work, because haxe.Int64 creates an object instance. Using cpp.Int64 "works", but only because it for some reason falls back to a 32-bit integer in my cpp code, and I don't know what I am doing wrong. How can I force it to "stay" 64-bit, or should I do it some other way?
EDIT: This is currently not working on native targets due to a bug / the current implementation in hxcpp: https://github.com/HaxeFoundation/hxcpp/issues/523
I figured out this workaround / wrapper, which may not be the most efficient solution possible, but it seems to work.
import haxe.Int64;
import haxe.Unserializer;
import haxe.Serializer;
class Test {
static function main () {
var key:Int64 = Int64.make(1000,1);
var omap:Int64Map<String> = new Int64Map();
omap.set(key, "test");
var smap:Int64Map<String> = Unserializer.run(Serializer.run(omap));
var key2:Int64 = Int64.make(1000,1);
trace(key+" "+smap.get(key2));
}
}
class Int64Map<V> {
private var map:Map<Int64,V>;
public function new() : Void {
this.map = new Map<Int64,V>();
}
public function set(key:Int64, value:V):Void {
this.map.set(key, value);
}
public inline function get(key:Int64):Null<V> {
var skey:Null<Int64> = getMapKey(key);
if (skey != null) return this.map.get(skey);
return null;
}
public inline function exists(key:Int64):Bool {
return (getMapKey(key) != null);
}
public function remove( key : Int64 ) : Bool {
var skey:Null<Int64> = getMapKey(key);
if (skey != null) return this.map.remove(skey);
return false;
}
public function keys() : Iterator<Int64> {
return this.map.keys();
}
public function toString() : String {
return this.map.toString();
}
public function iterator() : Iterator<V> {
return this.map.iterator();
}
private function getMapKey(key:Int64):Null<Int64> {
for (ikey in this.map.keys()){
if (Int64.eq(key, ikey)){
return ikey;
}
}
return null;
}
}
http://try.haxe.org/#57686

cassandra trigger on composite blob key

I use Cassandra 2.1.9 and have a table like
create table "Keyspace1"."Standard4" ( id blob, user_name blob, data blob, primary key(id, user_name));
and I followed the post Cassandra Sample Trigger Code to get the inserted values, with trigger code like
public class InvertedIndex implements ITrigger
{
private static final Logger logger = LoggerFactory.getLogger(InvertedIndex.class);
public Collection augment(ByteBuffer key, ColumnFamily update)
{
CFMetaData cfm = update.metadata();
ByteBuffer id_bb = key;
String id_Value = new String(id_bb.array());
Iterator col_itr=update.iterator();
Cell username_col=(Cell)col_itr.next();
ByteBuffer username_bb=CompositeType.extractComponent(username_col.name().collectionElement(),0);
String username_Value = new String(username_bb.array());
Cell data_col=(Cell)col_itr.next();
ByteBuffer data_bb=BytesType.instance.compose(data_col.value());
String data_Value = new String(data_bb.array());
logger.info(" id --> "+id_Value);
logger.info(" username-->"+username_Value);
logger.info(" data ---> "+data_Value);
return null;
}
}
I tried insert into "Keyspace1"."Standard4" (id, user_name, data) values (textAsBlob('id1'), textAsBlob('user_name1'), textAsBlob('data1'));
and got a runtime exception at ByteBuffer username_bb=CompositeType.extractComponent(username_col.name().collectionElement(),0);
Caused by: java.lang.NullPointerException: null
at org.apache.cassandra.db.marshal.CompositeType.extractComponent(CompositeType.java:191) ~[apache-cassandra-2.1.9.jar:2.1.9]
at org.apache.cassandra.triggers.InvertedIndex.augment(InvertedIndex.java:52) ~[na:na]
at org.apache.cassandra.triggers.TriggerExecutor.executeInternal(TriggerExecutor.java:223) ~[apache-cassandra-2.1.9.jar:2.1.9]
... 17 common frames omitted
Can anybody tell me how to correct this?
You are trying to show all the inserted column names and values, right?
Here is the code:
@Override
public Collection<Mutation> augment(ByteBuffer key, ColumnFamily update) {
CFMetaData cfm = update.metadata();
System.out.println("key => " + ByteBufferUtil.toInt(key));
for (Cell cell : update) {
if (cell.value().remaining() > 0) {
try {
String name = cfm.comparator.getString(cell.name());
String value = cfm.getValueValidator(cell.name()).getString(cell.value());
System.out.println("Column Name => " + name + " Value => " + value);
} catch (Exception e) {
System.out.println("Exception : " + e.getMessage());
}
}
}
return null;
}
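For reference, this snippet presumably lives inside an ITrigger implementation; for Cassandra 2.1 the imports it relies on would look roughly like this (my assumption, not part of the original answer):
import java.nio.ByteBuffer;
import java.util.Collection;
import org.apache.cassandra.config.CFMetaData;
import org.apache.cassandra.db.Cell;
import org.apache.cassandra.db.ColumnFamily;
import org.apache.cassandra.db.Mutation;
import org.apache.cassandra.triggers.ITrigger;
import org.apache.cassandra.utils.ByteBufferUtil;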

Spring data Cassandra 2.0 Select BLOB column returns incorrect ByteBuffer data

Context: Spring Data Cassandra official 1.0.2.RELEASE from the Maven Central repo, CQL3, Cassandra 2.0, DataStax driver 2.0.4
Background: The cassandra blob data type is mapped to a Java ByteBuffer.
The sample code below demonstrates that a select does not retrieve the correct bytes right after an equivalent insert. The data actually retrieved is prefixed by numerous garbage bytes that look like a serialization of the entire row.
This older post relating to Cassandra 1.2 suggested that we may have to start at ByteBuffer.arrayOffset() for a length of ByteBuffer.remaining(), but the arrayOffset value is actually 0.
I discovered a spring-data-cassandra 2.0.0.SNAPSHOT, but its CassandraOperations API is much different, and so is its package name: org.springdata... versus org.springframework...
Help in fixing this would be much appreciated.
In the meantime, it looks like I have to Base64 encode/decode my binary data to/from a text data type column.
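A minimal sketch of that interim workaround (my own illustration, assuming Java 8's java.util.Base64; commons-codec offers the same on older JVMs):
import java.util.Base64;
public class PictCodec {
    // Encode the picture bytes as Base64 text so they can live in a TEXT column.
    static String toText(byte[] pict) {
        return Base64.getEncoder().encodeToString(pict);
    }
    // Decode the Base64 text back to the original raw bytes after a select.
    static byte[] fromText(String pictText) {
        return Base64.getDecoder().decode(pictText);
    }
}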
--- here is the simple table CQL meta data I use -------------
CREATE TABLE person (
id text,
age int,
name text,
pict blob,
PRIMARY KEY (id)
) ;
--- follows the simple data object mapped to a CQL table ---
package org.spring.cassandra.example;
import java.nio.ByteBuffer;
import org.springframework.data.cassandra.mapping.PrimaryKey;
import org.springframework.data.cassandra.mapping.Table;
@Table
public class Person {
@PrimaryKey
private String id;
private int age;
private String name;
private ByteBuffer pict;
public Person(String id, int age, String name, ByteBuffer pict) {
this.id = id; this.name = name; this.age = age; this.pict = pict;
}
public String getId() { return id; }
public String getName() { return name; }
public int getAge() { return age; }
public ByteBuffer getPict() { return pict; }
}
--- and the plain java application code that simply inserts and retrieves a person object --
package org.spring.cassandra.example;
import java.nio.ByteBuffer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.data.cassandra.core.CassandraOperations;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.querybuilder.QueryBuilder;
import com.datastax.driver.core.querybuilder.Select;
public class CassandraApp {
private static final Logger logger = LoggerFactory
.getLogger(CassandraApp.class);
public static String hexDump(ByteBuffer bb) {
char[] hexArray = "0123456789ABCDEF".toCharArray();
bb.rewind();
char[] hexChars = new char[bb.limit() * 2];
for ( int j = 0; j < bb.limit(); j++ ) {
int v = bb.get() & 0xFF;
hexChars[j * 2] = hexArray[v >>> 4];
hexChars[j * 2 + 1] = hexArray[v & 0x0F];
}
bb.rewind();
return new String(hexChars);
}
public static void main(String[] args) {
ApplicationContext applicationContext = new ClassPathXmlApplicationContext("app-context.xml");
try {
CassandraOperations cassandraOps = applicationContext.getBean(
"cassandraTemplate", CassandraOperations.class);
cassandraOps.truncate("person");
// prepare data
byte[] ba = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x11, 0x22, 0x33, 0x44, 0x55, (byte) 0xAA, (byte) 0xCC, (byte) 0xFF };
ByteBuffer myPict = ByteBuffer.wrap(ba);
String myId = "1234567890";
String myName = "mickey";
int myAge = 50;
logger.info("We try id=" + myId + ", name=" + myName + ", age=" + myAge +", pict=" + hexDump(myPict));
cassandraOps.insert(new Person(myId, myAge, myName, myPict ));
Select s = QueryBuilder.select("id","name","age","pict").from("person");
s.where(QueryBuilder.eq("id", myId));
ResultSet rs = cassandraOps.query(s);
Row r = rs.one();
logger.info("We got id=" + r.getString(0) + ", name=" + r.getString(1) + ", age=" + r.getInt(2) +", pict=" + hexDump(r.getBytes(3)));
} catch (Exception e) {
e.printStackTrace();
}
}
}
--- assuming you have configured a simple Spring project for cassandra
as explained at http://projects.spring.io/spring-data-cassandra/
The actual execution yields:
[main] INFO org.spring.cassandra.example.CassandraApp - We try id=1234567890, name=mickey, age=50, pict= 0001020304051122334455AACCFF
[main] INFO org.spring.cassandra.example.CassandraApp - We got id=1234567890, name=mickey, age=50, pict=8200000800000073000000020000000100000004000A6D796B657973706163650006706572736F6E00026964000D00046E616D65000D000361676500090004706963740003000000010000000A31323334353637383930000000066D69636B657900000004000000320000000E 0001020304051122334455AACCFF
although the insert looks correct in the database itself, as seen from cqlsh command line:
cqlsh:mykeyspace> select * from person;
id | age | name | pict
------------+-----+--------+--------------------------------
1234567890 | 50 | mickey | 0x0001020304051122334455aaccff
(1 rows)
I had exactly the same problem but fortunately found a solution.
The problem is that ByteBuffer use is confusing. Try doing something like:
ByteBuffer bb = resultSet.one().getBytes("column_name");
byte[] data = new byte[bb.remaining()];
bb.get(data);
Thanks to Sylvain for this suggestion here: http://grokbase.com/t/cassandra/user/134brvqzd3/blobs-in-cql
Take a look at the Bytes class of the DataStax Java Driver; it provides what you need to encode/decode your data.
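For instance, something along these lines, reusing the rs ResultSet from the question (a rough sketch of the com.datastax.driver.core.utils.Bytes helpers, not the example from the post linked below):
Row r = rs.one();
ByteBuffer pict = r.getBytes("pict");
byte[] raw = Bytes.getArray(pict);           // extracts exactly the remaining bytes as a byte[]
String hex = Bytes.toHexString(pict);        // e.g. "0x0001020304051122334455aaccff"
ByteBuffer back = Bytes.fromHexString(hex);  // round trip back to a ByteBuffer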
In this post I wrote a usage example.
HTH,
Carlo
