I have the following question regarding Hazelcast Jet.
The use case is as follows:
One application (Application A, deployed as a cluster) uses Hazelcast IMDG and puts millions of records/transactions into a Hazelcast IMap.
The Event Journal has been configured for this IMap.
Another application (Application B, also deployed as a cluster) instantiates a JetInstance and runs the job individually on each node to process the records.
Currently, this job reads data from the event journal and adds the entries to an IList (reference: hazelcast-jet-0.5.1\code-samples\streaming\map-journal-source\src\main\java\RemoteMapJournalSource.java).
As the job runs on multiple nodes, the records from the event journal are processed by multiple nodes, which results in duplicate entries in the IList.
Is it possible to ensure that a record is processed by only one node of Application B and not by the other nodes, to avoid duplicates?
If not, does this mean the job should be run by a single node of the Application B cluster?
Here is sample code (Application B):
Pipeline p = Pipeline.create();
p.drawFrom(Sources.<Integer, Integer, Integer>remoteMapJournal(MAP_NAME, clientConfig,
e -> e.getType() == EntryEventType.ADDED, EventJournalMapEvent::getNewValue, true))
.peek()
.drainTo(Sinks.list(SINK_NAME));
JobConfig jc= new JobConfig();
jc.setProcessingGuarantee(ProcessingGuarantee.EXACTLY_ONCE);
localJet.newJob(p,jc);
Here is the complete code.
Application A Source Code.
public class RemoteMapJournalSourceSrv1 {
private static final String MAP_NAME = "map";
private static final String SINK_NAME = "list";
public static void main(String[] args) throws Exception {
System.setProperty("remoteHz.logging.type", "log4j");
Config hzConfig = getConfig();
HazelcastInstance remoteHz = startRemoteHzCluster(hzConfig);
try {
IMap<Integer, Integer> map = remoteHz.getMap(MAP_NAME);
System.out.println("*************** Initial map size " + map.size());
while(true) {
System.out.println("***************map size "+map.size());
TimeUnit.SECONDS.sleep(20);
}
} finally {
Hazelcast.shutdownAll();
}
}
private static HazelcastInstance startRemoteHzCluster(Config config) {
HazelcastInstance remoteHz = Hazelcast.newHazelcastInstance(config);
return remoteHz;
}
private static Config getConfig() {
Config config = new Config();
// Add an event journal config for the map with a capacity of 10_000 (the default)
// and a time-to-live of 100 seconds (default 0, which means infinite)
config.addEventJournalConfig(new EventJournalConfig().setEnabled(true)
.setMapName(MAP_NAME)
.setCapacity(10000)
.setTimeToLiveSeconds(100));
return config;
    }
}
Here is Application B - Node 1 Sample Code
public class RemoteMapJournalSourceCL1 {
private static final String MAP_NAME = "map";
private static final String SINK_NAME = "list";
public static void main(String[] args) throws Exception {
System.setProperty("remoteHz.logging.type", "log4j");
JetInstance localJet = startLocalJetCluster();
try {
ClientConfig clientConfig = new ClientConfig();
GroupConfig groupConfig = new GroupConfig();
clientConfig.getNetworkConfig().addAddress("localhost:5701");
clientConfig.setGroupConfig(groupConfig);
IList list1 = localJet.getList(SINK_NAME);
int size1 = list1.size();
System.out.println("***************List Initial size "+size1);
Pipeline p = Pipeline.create();
p.drawFrom(Sources.<Integer, Integer, Integer>remoteMapJournal(MAP_NAME, clientConfig,
e -> e.getType() == EntryEventType.ADDED, EventJournalMapEvent::getNewValue, false))
.peek()
.drainTo(Sinks.list(SINK_NAME));
JobConfig jc= new JobConfig();
jc.setProcessingGuarantee(ProcessingGuarantee.EXACTLY_ONCE);
localJet.newJob(p,jc);
while(true){
TimeUnit.SECONDS.sleep(10);
System.out.println("***************Read " + list1.size() + " entries from remote map journal.");
}
} finally {
Hazelcast.shutdownAll();
Jet.shutdownAll();
}
}
private static String getAddress(HazelcastInstance remoteHz) {
Address address = remoteHz.getCluster().getLocalMember().getAddress();
System.out.println("***************Remote address " + address.getHost() + ":" + address.getPort() );
return address.getHost() + ":" + address.getPort();
}
private static JetInstance startLocalJetCluster() {
JetInstance localJet = Jet.newJetInstance();
return localJet;
    }
}
Here is Application B - Node 2 Sample code
public class RemoteMapJournalSourceCL2 {
private static final String MAP_NAME = "map";
private static final String SINK_NAME = "list";
public static void main(String[] args) throws Exception {
System.setProperty("remoteHz.logging.type", "log4j");
JetInstance localJet = startLocalJetCluster();
try {
ClientConfig clientConfig = new ClientConfig();
GroupConfig groupConfig = new GroupConfig();
clientConfig.getNetworkConfig().addAddress("localhost:5701");
clientConfig.setGroupConfig(groupConfig);
IList list1 = localJet.getList(SINK_NAME);
int size1 = list1.size();
System.out.println("***************List Initial size "+size1);
Pipeline p = Pipeline.create();
p.drawFrom(Sources.<Integer, Integer, Integer>remoteMapJournal(MAP_NAME, clientConfig,
e -> e.getType() == EntryEventType.ADDED, EventJournalMapEvent::getNewValue, true))
.peek()
.drainTo(Sinks.list(SINK_NAME));
JobConfig jc= new JobConfig();
jc.setProcessingGuarantee(ProcessingGuarantee.EXACTLY_ONCE);
localJet.newJob(p,jc);
while(true){
TimeUnit.SECONDS.sleep(10);
System.out.println("***************Read " + list1.size() + " entries from remote map journal.");
}
} finally {
Hazelcast.shutdownAll();
Jet.shutdownAll();
}
}
private static JetInstance startLocalJetCluster() {
JetInstance localJet = Jet.newJetInstance();
return localJet;
    }
}
Hazelcast Client - Puts entries in Hazelcast Map (Application A)
public class HZClient {
public static void main(String[] args) {
ClientConfig clientConfig = new ClientConfig();
GroupConfig groupConfig = new GroupConfig();
clientConfig.getNetworkConfig().addAddress("localhost:5701");
clientConfig.setGroupConfig(groupConfig);
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
IMap<Integer, Integer> map = client.getMap("map");
Scanner in = new Scanner(System.in);
int startIndex= 0;
int endIndex= 0;
while(true) {
if(args !=null && args.length > 0 && args[0].equals("BATCH")) {
System.out.println("Please input the batch size");
int b = in.nextInt();
startIndex= endIndex + 1;
endIndex+= b;
System.out.println("Batch starts from " + startIndex + " ends at " + endIndex);
putBatch(map,startIndex,endIndex);
}
else {
System.out.println("Please input the map entry");
int a = in.nextInt();
System.out.println("You entered integer "+a);
put(map,a,a);
}
}
}
public static void putBatch(IMap map,int startIndex, int endIndex) {
int index= startIndex;
System.out.println("Start Index " + startIndex + ", End Index " + endIndex);
while(index<=endIndex){
System.out.println("Map value " + index);
put(map,index,index);
index+=1;
}
}
public static void put(IMap map,int key,int value) {
map.set(key, value);
    }
}
Here are the steps to execute this.
Run Application A - Java program RemoteMapJournalSourceSrv1
Run Application B Node 1 - Java program RemoteMapJournalSourceCL1
Run Application B Node 2 - Java program RemoteMapJournalSourceCL2
Run Hazelcast Client for Application A - Java program HZClient
This client program puts entries into the map based on console input. Please provide integer input.
Observations
On execution, .peek() logs the values on both nodes of Application B, and the IList count becomes 2 after a single entry is inserted into the Application A map.
It appears that you are submitting two independent jobs from two Jet clients. Each job receives all the IMap event journal items and pushes them to the same IList, therefore the expected outcome is for the IList to contain two instances of each item.
Remember that you only submit the job from a Jet client, but it actually runs inside the Jet cluster, on all its members simultaneously. Do not submit the same job twice if you want just one copy of the data in the sink.
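As an illustration, here is a minimal sketch of submitting the job exactly once through a Jet client, assuming both Application B nodes join the same Jet cluster. The class name and the addresses are placeholders, and imports are omitted as in the snippets above; the pipeline itself is copied from the question.
public class RemoteMapJournalJobSubmitter {
    private static final String MAP_NAME = "map";
    private static final String SINK_NAME = "list";

    public static void main(String[] args) {
        // Connect to the Application B Jet cluster as a client.
        ClientConfig jetClientConfig = new ClientConfig();
        jetClientConfig.getNetworkConfig().addAddress("localhost:5702"); // placeholder Jet member address
        JetInstance jetClient = Jet.newJetClient(jetClientConfig);

        // Client config pointing at the remote IMDG cluster of Application A.
        ClientConfig remoteImdgConfig = new ClientConfig();
        remoteImdgConfig.getNetworkConfig().addAddress("localhost:5701"); // placeholder IMDG member address

        Pipeline p = Pipeline.create();
        p.drawFrom(Sources.<Integer, Integer, Integer>remoteMapJournal(MAP_NAME, remoteImdgConfig,
                e -> e.getType() == EntryEventType.ADDED, EventJournalMapEvent::getNewValue, true))
         .drainTo(Sinks.list(SINK_NAME));

        JobConfig jc = new JobConfig();
        jc.setProcessingGuarantee(ProcessingGuarantee.EXACTLY_ONCE);

        // One submission: Jet distributes the processing across all members of the
        // Application B cluster, and each journal event reaches the sink only once.
        jetClient.newJob(p, jc);
    }
}
With this setup, RemoteMapJournalSourceCL1 and RemoteMapJournalSourceCL2 only start Jet members (they no longer call newJob themselves), and the single submitted job is executed in parallel on both of them.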
Related
I have written a jar that provides a Jedis connection-pool feature, and I use it from a Groovy script in NiFi to do a Redis location search. But it is behaving strangely: sometimes it works and sometimes it does not.
Redis.java
public class Redis {
private static Object staticLock = new Object();
private static JedisPool pool;
private static String host;
private static int port;
private static int connectTimeout;
private static int operationTimeout;
private static String password;
private static JedisPoolConfig config;
public static void initializeSettings(String host, int port, String password, int connectTimeout, int operationTimeout) {
Redis.host = host;
Redis.port = port;
Redis.password = password;
Redis.connectTimeout = connectTimeout;
Redis.operationTimeout = operationTimeout;
}
public static JedisPool getPoolInstance() {
if (pool == null) { // avoid synchronization lock if initialization has already happened
synchronized(staticLock) {
if (pool == null) { // don't re-initialize if another thread beat us to it.
JedisPoolConfig poolConfig = getPoolConfig();
boolean useSsl = port == 6380 ? true : false;
int db = 0;
String clientName = "MyClientName"; // null means use default
SSLSocketFactory sslSocketFactory = null; // null means use default
SSLParameters sslParameters = null; // null means use default
HostnameVerifier hostnameVerifier = new SimpleHostNameVerifier(host);
pool = new JedisPool(poolConfig, host, port);
//(poolConfig, host, port, connectTimeout,operationTimeout,password, db,
// clientName, useSsl, sslSocketFactory, sslParameters, hostnameVerifier);
}
}
}
return pool;
}
public static JedisPoolConfig getPoolConfig() {
if (config == null) {
JedisPoolConfig poolConfig = new JedisPoolConfig();
int maxConnections = 200;
poolConfig.setMaxTotal(maxConnections);
poolConfig.setMaxIdle(maxConnections);
poolConfig.setBlockWhenExhausted(true);
poolConfig.setMaxWaitMillis(operationTimeout);
poolConfig.setMinIdle(50);
Redis.config = poolConfig;
}
return config;
}
public static String getPoolCurrentUsage()
{
JedisPool jedisPool = getPoolInstance();
JedisPoolConfig poolConfig = getPoolConfig();
int active = jedisPool.getNumActive();
int idle = jedisPool.getNumIdle();
int total = active + idle;
String log = String.format(
"JedisPool: Active=%d, Idle=%d, Waiters=%d, total=%d, maxTotal=%d, minIdle=%d, maxIdle=%d",
active,
idle,
jedisPool.getNumWaiters(),
total,
poolConfig.getMaxTotal(),
poolConfig.getMinIdle(),
poolConfig.getMaxIdle()
);
return log;
}
private static class SimpleHostNameVerifier implements HostnameVerifier {
private String exactCN;
private String wildCardCN;
public SimpleHostNameVerifier(String cacheHostname)
{
exactCN = "CN=" + cacheHostname;
wildCardCN = "CN=*" + cacheHostname.substring(cacheHostname.indexOf('.'));
}
public boolean verify(String s, SSLSession sslSession) {
try {
String cn = sslSession.getPeerPrincipal().getName();
return cn.equalsIgnoreCase(wildCardCN) || cn.equalsIgnoreCase(exactCN);
} catch (SSLPeerUnverifiedException ex) {
return false;
}
}
}
}
CustomFunction:
public class Functions {
SecureRandom rand = new SecureRandom();
private static final String UTF8= "UTF-8";
public static JedisPool jedisPool=null;
public static String searchPlace(double lattitude, double longitude) {
    try (Jedis jedis = jedisPool.getResource()) {
        // Redis lookup logic omitted in the original post
    }
    catch (Exception e) {
        log.error("exception", e);
    }
    return null;
}
}
Groovyscript:
import org.apache.nifi.processor.ProcessContext;
import com.customlib.functions.*;
def flowFile = session.get();
if (flowFile == null) {
return;
}
def flowFiles = [] as List<FlowFile>
def failflowFiles = [] as List<FlowFile>
def input=null;
def data=null;
static onStart(ProcessContext context){
Redis.initializeSettings("host", 6379, null,0,0);
Functions.jedisPool= Redis.getPoolInstance();
}
static onStop(ProcessContext context){
Functions.jedisPool.destroy();
}
try{
log.warn('is jedispool connected::::'+Functions.jedisPool.isClosed());
def inputStream = session.read(flowFile)
def writer = new StringWriter();
IOUtils.copy(inputStream, writer, "UTF-8");
data=writer.toString();
input = new JsonSlurper().parseText( data );
log.warn('place is::::' + Functions.getLocationByLatLong(input["data"]["lat"], input["data"]["longi"]));
.......
...........
}
catch(Exception e){
}
newFlowFile = session.write(newFlowFile, { outputStream ->
outputStream.write( data.getBytes(StandardCharsets.UTF_8) )
} as OutputStreamCallback)
failflowFiles<< newFlowFile;
}
session.transfer(flowFiles, REL_SUCCESS)
session.transfer(failflowFiles, REL_FAILURE)
session.remove(flowFile)
NiFi runs in a 3-node cluster. The function lib is configured in the Groovy script's module directory. In the above Groovy script processor, the 'is jedispool connected::::' log statement sometimes prints false and sometimes true; right after the jar is deployed for the first time it always works, but later it becomes unpredictable, and I cannot see what is wrong in the code. How does the Groovy script load the jar? How can I achieve the lib-based search using a Groovy script?
Redis.pool never becomes null after initialization. You are calling pool.destroy() but never setting the field back to null.
getPoolInstance() creates a new pool only when pool is null.
I also don't see any reason to keep two variables referencing the same pool, one in Redis and one in the Functions class.
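For example, a minimal sketch of the pool-management part. Only the changed members of the Redis class are shown; destroyPool() is a hypothetical helper that the Groovy onStop would call instead of Functions.jedisPool.destroy(), and isClosed() is the same method your script already calls on the pool.
public class Redis {

    private static final Object staticLock = new Object();
    private static volatile JedisPool pool;   // volatile so double-checked locking publishes safely
    private static String host;
    private static int port;
    // initializeSettings(...) and getPoolConfig() stay as in the original class

    public static JedisPool getPoolInstance() {
        if (pool == null || pool.isClosed()) {          // also recover from a destroyed pool
            synchronized (staticLock) {
                if (pool == null || pool.isClosed()) {
                    pool = new JedisPool(getPoolConfig(), host, port);
                }
            }
        }
        return pool;
    }

    // Hypothetical helper: call this from onStop instead of destroying the pool directly,
    // so the next getPoolInstance() call can re-create it.
    public static void destroyPool() {
        synchronized (staticLock) {
            if (pool != null) {
                pool.destroy();
                pool = null;
            }
        }
    }
}
Callers (including Functions.searchPlace) would then always fetch the pool through Redis.getPoolInstance() instead of caching it in Functions.jedisPool, so every node and every restart of the script sees a live pool.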
I'm trying to implement a producer-consumer model using Hazelcast.
The producer puts an item into a queue and the consumer consumes it with the take() method.
When I close the consumer application and start it again, the consumer retrieves the previously consumed items from the queue.
I tried the Hazelcast Ringbuffer and I see the same behavior.
Is there a way to force removal of the consumed item from the queue in Hazelcast?
Thanks in advance
Producer.java:
public class Producer implements MembershipListener {
private HazelcastInstance hzInstance;
private Cluster cluster;
private IAtomicLong counter;
private IQueue<Data> dataQueue;
private IMap<String, List<Data>> dataByConsumerId;
public static void main(String[] args) {
Producer producer = new Producer();
Scanner scanIn = new Scanner(System.in);
while (true) {
String cmd = scanIn.nextLine();
if (cmd.equals("QUIT")) {
break;
} else if (cmd.equals("ADD")) {
long x = producer.counter.addAndGet(1);
producer.dataQueue.add(new Data(x, x + 1));
}
}
scanIn.close();
}
public Producer() {
hzInstance = Hazelcast.newHazelcastInstance(configuration());
counter = hzInstance.getCPSubsystem().getAtomicLong("COUNTER");
dataByConsumerId = hzInstance.getMap("CONSUMER_DATA");
dataQueue = hzInstance.getQueue("DATA_QUEUE");
cluster = hzInstance.getCluster();
cluster.addMembershipListener(this);
}
public Config configuration() {
Config config = new Config();
config.setInstanceName("hazelcast-instance");
MapConfig mapConfig = new MapConfig();
mapConfig.setName("configuration");
mapConfig.setTimeToLiveSeconds(-1);
config.addMapConfig(mapConfig);
return config;
}
@Override
public void memberAdded(MembershipEvent membershipEvent) {
}
@Override
public void memberRemoved(MembershipEvent membershipEvent) {
String removedConsumerId = membershipEvent.getMember().getUuid().toString();
List<Data> items = dataByConsumerId.remove(removedConsumerId);
if (items == null)
return;
items.forEach(item -> {
System.out.println("Push data to recover :" + item.toString());
dataQueue.add(item);
});
}
}
Consumer.java:
public class Consumer {
private String id;
private HazelcastInstance hzInstance;
private IMap<String, List<Data>> dataByConsumerId;
private IQueue<Data> dataQueue;
public Consumer() {
hzInstance = Hazelcast.newHazelcastInstance(configuration());
id = hzInstance.getLocalEndpoint().getUuid().toString();
dataByConsumerId = hzInstance.getMap("CONSUMER_DATA");
dataByConsumerId.put(id, new ArrayList<Data>());
dataQueue = hzInstance.getQueue("DATA_QUEUE");
}
public Config configuration() {
Config config = new Config();
config.setInstanceName("hazelcast-instance");
MapConfig mapConfig = new MapConfig();
mapConfig.setName("configuration");
mapConfig.setTimeToLiveSeconds(-1);
config.addMapConfig(mapConfig);
return config;
}
public static void main(String[] args) {
Consumer consumer = new Consumer();
try {
consumer.run();
System.in.read();
} catch (IOException e) {
e.printStackTrace();
}
}
private void run() {
while (true) {
System.out.println("Take queue item...");
try {
var item = dataQueue.take();
System.out.println("New item taken:" + item.toString());
var dataInCluster = dataByConsumerId.get(id);
dataInCluster.add(item);
dataByConsumerId.put(id, dataInCluster);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
Data.java:
public class Data implements Serializable {
private static final long serialVersionUID = -975095628505008933L;
private long x, y;
public Data(long x, long y) {
super();
this.x = x;
this.y = y;
}
public long getX() {
return x;
}
public void setX(long x) {
this.x = x;
}
public long getY() {
return y;
}
public void setY(long y) {
this.y = y;
}
@Override
public String toString() {
return "Data [x=" + x + ", y=" + y + "]";
}
}
How can I configure Atomikos for a Hazelcast instance? According to Mastering Hazelcast, we can only do it in Java. How can I configure it the way I do for databases? If configuring it in Java is the way, then how can I make use of TransactionalTask to remove the boilerplate code for starting and committing the transactions? I have tried the following:
public void insertIntoGridJTA( final List<String> list)
throws NotSupportedException, SystemException,
IllegalStateException, RollbackException {
HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();
HazelcastXAResource xaResource = hazelcast.getXAResource();
TransactionContext context = xaResource.getTransactionContext();
hazelcast.executeTransaction(new TransactionalTask<Object>() {
public Object execute(TransactionalTaskContext context)
throws TransactionException {
// TODO Auto-generated method stub
TransactionalMap<Integer, String> map = context.getMap("demo");
System.out.println("map"+map.getName());
for (int i = 0; i < list.size(); i++) {
map.put(i, list.get(i));
}
return null;
}
});
}
But the transaction does not start if I use TransactionalTask.
Did you have a look at the Atomikos example in our examples repo? https://github.com/hazelcast/hazelcast-code-samples/blob/master/transactions/xa-transactions/src/main/java/XATransaction.java
In addition to @noctarius, I have done it like this in the past:
@Autowired
private JdbcTemplate jdbcTemplate;
@Autowired
private HazelcastXAResource hzXAResource;
@Autowired
private UserTransactionManager userTransactionManager;
@Transactional
public void insert() throws SystemException, RollbackException {
final Transaction transaction = userTransactionManager.getTransaction();
transaction.enlistResource(hzXAResource);
final TransactionContext hzTransactionContext = hzXAResource.getTransactionContext();
final TransactionalMap<Long, String> hzCustomerMap = hzTransactionContext.getMap("hzCustomerMap");
// Put each customer of the test data list into the transactional map, logging each record
CUSTOMERS_TEST_DATA.forEach(customer -> {
log.info("Inserting customer record for {}", customer);
hzCustomerMap.put(customer.getId(), customer.toString());
});
jdbcTemplate.batchUpdate(Sql.SQL_INSERT.getSql(), new BatchPreparedStatementSetter() {
@Override
public void setValues(PreparedStatement ps, int i) throws SQLException {
ps.setString(1, CUSTOMERS_TEST_DATA.get(i).getFirstName());
ps.setString(2, CUSTOMERS_TEST_DATA.get(i).getLastName());
}
@Override
public int getBatchSize() {
return CUSTOMERS_TEST_DATA.size();
}
});
// Uncomment this to test the failure of the transaction
// hzCustomerMap.values((Predicate) entry -> {
// throw new RuntimeException();
// });
transaction.delistResource(hzXAResource, XAResource.TMSUCCESS);
}
Context:
I am running a JUnit test in Eclipse using embedded Cassandra to test my DAO class, which uses an Astyanax client configured over the Java driver. When the DAO inserts an object instance into Cassandra, I get this exception: com.datastax.driver.core.exceptions.InvalidQueryException: Multiple definitions found for column ..columnname
TestClass
public class LeaderBoardDaoTest {
private static LeaderBoardDao dao;
public static CassandraCQLUnit cassandraCQLUnit;
private String hostIp = "127.0.0.1";
private int port = 9142;
public Session session;
public Cluster cluster;
@BeforeClass
public static void startCassandra() throws IOException, TTransportException, ConfigurationException, InterruptedException {
System.setProperty("archaius.deployment.applicationId", "leaderboardapi");
System.setProperty("archaius.deployment.environment", "test");
EmbeddedCassandraServerHelper.startEmbeddedCassandra("cassandra.yaml");
// cassandraCQLUnit = new CassandraCQLUnit(new
// ClassPathCQLDataSet("simple.cql", "lbapi"), "cassandra.yaml");
Injector injector = Guice.createInjector(new TestModule());
dao = injector.getInstance(LeaderBoardDao.class);
}
@Before
public void load() {
cluster = new Cluster.Builder().withClusterName("leaderboardcassandra").addContactPoints(hostIp).withPort(port).build();
session = cluster.connect();
CQLDataLoader dataLoader = new CQLDataLoader(session);
dataLoader.load(new ClassPathCQLDataSet("simple.cql", "lbapi"));
session = dataLoader.getSession();
}
@Test
public void test() {
ResultSet result = session.execute("select * from mytable WHERE id='myKey01'");
Assert.assertEquals(result.iterator().next().getString("value"), "myValue01");
}
@Test
public void testInsert() {
LeaderBoard lb = new LeaderBoard();
lb.setName("name-1");
lb.setDescription("description-1");
lb.setActivityType(ActivityType.FUEL);
lb.setImage("http:/");
lb.setLbId(UUID.fromString("3F2504E0-4F89-41D3-9A0C-0305E82C3301"));
lb.setStartTime(new Date());
lb.setEndTime(new Date());
dao.insert(lb);
ResultSet resultSet = session.execute("select * from leaderboards WHERE leaderboardid='3F2504E0-4F89-41D3-9A0C-0305E82C3301'");
}
@After
public void clearCassandra() {
EmbeddedCassandraServerHelper.cleanEmbeddedCassandra();
}
@AfterClass
public static void stopCassandra() {
EmbeddedCassandraServerHelper.stopEmbeddedCassandra();
}
}
Class under test
@Singleton
public class LeaderBoardDao {
private static final Logger log = LoggerFactory.getLogger(LeaderBoardDao.class);
@Inject
private AstyanaxMutationsJavaDriverClient client;
private static final String END_TIME = "end_time";
private static final String START_TIME = "start_time";
private static final String IMAGE = "image";
private static final String ACTIVITY_TYPE = "activity_type";
private static final String DESCRIPTION = "description";
private static final String NAME = "name";
private static final String LEADERBOARD_ID = "leaderboardID";
private static final String COLUMN_FAMILY_NAME = "leaderboards";
private ColumnFamily<UUID, String> cf;
public LeaderBoardDao() throws ConnectionException {
cf = ColumnFamily.newColumnFamily(COLUMN_FAMILY_NAME, UUIDSerializer.get(), StringSerializer.get());
}
/**
* Writes the Leaderboard to the database.
*
* @param lb
*/
public void insert(LeaderBoard lb) {
try {
MutationBatch m = client.getKeyspace().prepareMutationBatch();
cf.describe(client.getKeyspace());
m.withRow(cf, lb.getLbId()).putColumn(LEADERBOARD_ID, UUIDUtil.asByteArray(lb.getLbId()), null).putColumn(NAME, lb.getName(), null).putColumn(DESCRIPTION, lb.getDescription(), null)
.putColumn(ACTIVITY_TYPE, lb.getActivityType().name(), null).putColumn(IMAGE, lb.getImage()).putColumn(START_TIME, lb.getStartTime()).putColumn(END_TIME, lb.getEndTime());
m.execute();
} catch (ConnectionException e) {
Throwables.propagate(e);
}
}
/**
* Reads leaderboard from database
*
* @param id
* @return {@link LeaderBoard}
*/
public LeaderBoard read(UUID id) {
OperationResult<ColumnList<String>> result;
LeaderBoard lb = null;
try {
result = client.getKeyspace().prepareQuery(cf).getKey(id).execute();
ColumnList<String> cols = result.getResult();
if (!cols.isEmpty()) {
lb = new LeaderBoard();
lb.setLbId(cols.getUUIDValue(LEADERBOARD_ID, null));
lb.setName(cols.getStringValue(NAME, null));
lb.setActivityType(ActivityType.valueOf(cols.getStringValue(ACTIVITY_TYPE, null)));
lb.setDescription(cols.getStringValue(DESCRIPTION, null));
lb.setEndTime(cols.getDateValue(END_TIME, null));
lb.setStartTime(cols.getDateValue(START_TIME, null));
lb.setImage(cols.getStringValue(IMAGE, null));
} else {
log.warn("read: is empty: no record found for " + id);
}
return lb;
} catch (ConnectionException e) {
log.error("failed to read from C*", e);
throw new RuntimeException("failed to read from C*", e);
}
}
}
When the Java driver throws an InvalidQueryException, it's rethrowing an error from Cassandra. The error "Multiple definitions found for column..." indicates that a column is mentioned more than once in an update statement. You can simulate it in cqlsh:
cqlsh> create table test(i int primary key);
cqlsh> insert into test (i, i) values (1, 2);
code=2200 [Invalid query] message="Multiple definitions found for column i"
I'm not familiar with Astyanax, but my guess is that it already adds the id to the query when you call withRow, so you don't need to add it again with putColumn. Try removing that call (second line in reformatted sample below):
m.withRow(cf, lb.getLbId())
.putColumn(LEADERBOARD_ID, UUIDUtil.asByteArray(lb.getLbId()), null)
... // other putColumn calls
I am using entlib 5.0.1. I have created a cache using code-based configuration:
IContainerConfigurator configurator =
new UnityContainerConfigurator(_unityContainer);
configurator.ConfigureCache(item.PartitionName, item.MaxItemNumber);
CacheBlockImp block = new CacheBlockImp(
_unityContainer.Resolve<ICacheManager>(item.PartitionName),
item.PartitionType);
I saw strange behavior with the size limit. I configured the cache to hold 15 items and then added 18 items in a loop. I would expect to have only 15 items after the additions, but I get 8. When I added a refresh action, to be notified when an item is evicted, I indeed saw that 7 of them were evicted. All the items have the same priority.
class Program
{
static void Main(string[] args)
{
var unityContainer = new UnityContainer();
IContainerConfigurator configurator = new UnityContainerConfigurator(unityContainer);
configurator.ConfigureCache("Test", 10);
var cache = unityContainer.Resolve<ICacheManager>("Test");
for (int i = 0; i < 18; i++)
{
var dummy = new Dummy()
{
ID = i,
Data = "hello " + i.ToString()
};
cache.Add(i.ToString(), dummy, CacheItemPriority.Normal, null, null);
}
Thread.Sleep(1000);
int count = cache.CachedItems().Count;
}
public class Dummy
{
public int ID { get; set; }
public string Data { get; set; }
}
}
public static class CacheBlockExtension
{
public static void ConfigureCache(this IContainerConfigurator configurator, string configKey,
int maxNumOfItems)
{
ConfigurationSourceBuilder builder = new ConfigurationSourceBuilder();
DictionaryConfigurationSource configSource = new DictionaryConfigurationSource();
// simple inmemory cache configuration
builder.ConfigureCaching().ForCacheManagerNamed(configKey).WithOptions
.StartScavengingAfterItemCount(maxNumOfItems)
.StoreInMemory();
builder.UpdateConfigurationWithReplace(configSource);
EnterpriseLibraryContainer.ConfigureContainer(configurator, configSource);
}
public static List<object> CachedItems(this ICacheManager cachemanger)
{
Microsoft.Practices.EnterpriseLibrary.Caching.Cache cache =
(Microsoft.Practices.EnterpriseLibrary.Caching.Cache)cachemanger.GetType().GetField("realCache", System.Reflection.BindingFlags.Instance |
System.Reflection.BindingFlags.NonPublic).GetValue(cachemanger);
List<object> tmpret = new List<object>();
foreach (DictionaryEntry Item in cache.CurrentCacheState)
{
Object key = Item.Key;
CacheItem cacheItem = (CacheItem)Item.Value;
tmpret.Add(cacheItem.Value);
}
return tmpret;
    }
}