How can I configure Atomikos for a Hazelcast instance? According to Mastering Hazelcast, it can only be done in Java. How can I configure it the way I do for databases? And if configuring it in Java is the way, how can I use TransactionalTask to remove the boilerplate code for starting and committing transactions? I have tried the following:
public void insertIntoGridJTA(final List<String> list)
        throws NotSupportedException, SystemException,
        IllegalStateException, RollbackException {
    HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();
    HazelcastXAResource xaResource = hazelcast.getXAResource();
    TransactionContext context = xaResource.getTransactionContext();
    hazelcast.executeTransaction(new TransactionalTask<Object>() {
        public Object execute(TransactionalTaskContext context)
                throws TransactionException {
            TransactionalMap<Integer, String> map = context.getMap("demo");
            System.out.println("map" + map.getName());
            for (int i = 0; i < list.size(); i++) {
                map.put(i, list.get(i));
            }
            return null;
        }
    });
}
But the transaction does not start when I use TransactionalTask.
Did you have a look at the Atomikos example in our examples repo? https://github.com/hazelcast/hazelcast-code-samples/blob/master/transactions/xa-transactions/src/main/java/XATransaction.java
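The core pattern from that sample looks roughly like the sketch below. Note that executeTransaction(TransactionalTask) manages its own local Hazelcast transaction, so it does not participate in a JTA transaction coordinated by Atomikos; with XA you work through the TransactionContext obtained from the enlisted HazelcastXAResource instead. (A minimal sketch, assuming Atomikos' UserTransactionManager and a Hazelcast version exposing getXAResource(); error handling omitted.)

UserTransactionManager tm = new UserTransactionManager();
tm.begin();

HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();
HazelcastXAResource xaResource = hazelcast.getXAResource();
Transaction transaction = tm.getTransaction();
// Enlist Hazelcast in the JTA transaction; Atomikos now coordinates it
transaction.enlistResource(xaResource);

// The TransactionContext is only valid while the resource is enlisted
TransactionContext context = xaResource.getTransactionContext();
TransactionalMap<Integer, String> map = context.getMap("demo");
map.put(1, "value");

transaction.delistResource(xaResource, XAResource.TMSUCCESS);
tm.commit(); // or tm.rollback() on failure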
In addition to @noctarius' answer, I have done it like this in the past:
@Autowired
private JdbcTemplate jdbcTemplate;

@Autowired
private HazelcastXAResource hzXAResource;

@Autowired
private UserTransactionManager userTransactionManager;

@Transactional
public void insert() throws SystemException, RollbackException {
    final Transaction transaction = userTransactionManager.getTransaction();
    transaction.enlistResource(hzXAResource);

    final TransactionContext hzTransactionContext = hzXAResource.getTransactionContext();
    final TransactionalMap<Long, String> hzCustomerMap = hzTransactionContext.getMap("hzCustomerMap");

    // Log and insert each customer of the test data into the transactional map
    CUSTOMERS_TEST_DATA.forEach(customer -> {
        log.info("Inserting customer record for {}", customer);
        hzCustomerMap.put(customer.getId(), customer.toString());
    });

    jdbcTemplate.batchUpdate(Sql.SQL_INSERT.getSql(), new BatchPreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement ps, int i) throws SQLException {
            ps.setString(1, CUSTOMERS_TEST_DATA.get(i).getFirstName());
            ps.setString(2, CUSTOMERS_TEST_DATA.get(i).getLastName());
        }

        @Override
        public int getBatchSize() {
            return CUSTOMERS_TEST_DATA.size();
        }
    });

    // Uncomment this to test the failure of the transaction
    // hzCustomerMap.values((Predicate) entry -> {
    //     throw new RuntimeException();
    // });

    transaction.delistResource(hzXAResource, XAResource.TMSUCCESS);
}
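Note that there is no explicit commit here: with Spring's @Transactional, commit or rollback is driven by the Atomikos transaction manager when the method returns, so the Hazelcast resource only needs to be delisted before the transaction completes.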
This is the main fragment:
private void getStock() {
    dialog.show();
    Retrofit retrofit = RetrofitClient.getRetrofitInstance();
    apiInterface api = retrofit.create(apiInterface.class);
    Call<List<Blocks>> call = api.getVaccineBlocks();
    call.enqueue(new Callback<List<Blocks>>() {
        @Override
        public void onResponse(Call<List<Blocks>> call, Response<List<Blocks>> response) {
            if (response.code() == 200) {
                block = response.body();
                spinnerada();
                dialog.cancel();
            } else {
                dialog.cancel();
            }
        }

        @Override
        public void onFailure(Call<List<Blocks>> call, Throwable t) {
            dialog.cancel();
        }
    });
}
private void spinnerada() {
    String[] s = new String[block.size()];
    for (int i = 0; i < block.size(); i++) {
        s[i] = block.get(i).getBlockName();
    }
    final ArrayAdapter<String> a = new ArrayAdapter<>(getContext(), android.R.layout.simple_spinner_item, s);
    a.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
    // Setting the ArrayAdapter data on the Spinner
    spinner.setAdapter(a);
}
This is the Blocks model:
package com.smmtn.book.models;

import java.io.Serializable;

public class Blocks implements Serializable {

    public String id;
    public String blockName;
    public String blockSlug;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getBlockName() {
        return blockName;
    }

    public void setBlockName(String blockName) {
        this.blockName = blockName;
    }

    public String getBlockSlug() {
        return blockSlug;
    }

    public void setBlockSlug(String blockSlug) {
        this.blockSlug = blockSlug;
    }
}
Here I need an item-click handler that gives me the blockSlug. Can anyone help? I am new to Android, so I need an example. When an item is selected, I want to take its blockSlug and load another method with it, which will fetch data from "http://example.com/block/" + blockSlug. In short: I want to get the blockSlug of the selected block.
First of all, you need to implement setOnItemSelectedListener. Refer to this answer: https://stackoverflow.com/a/20151596/9346054
Once an item is selected, you can handle it in a new method, for example like below:
@Override
public void onItemSelected(AdapterView<?> parent, View view, int pos, long id) {
    Toast.makeText(parent.getContext(),
            "OnItemSelectedListener : " + parent.getItemAtPosition(pos).toString(),
            Toast.LENGTH_SHORT).show();
    final String itemSelected = parent.getItemAtPosition(pos).toString();
    showBlockSlug(itemSelected);
}
Then, in the method showBlockSlug(), you can call Retrofit:

private void showBlockSlug(final String blockslug) {
    final String url = "http://example.com/block/" + blockslug;
    // Do your stuff...
}
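One caveat: the adapter above is filled with the block names, so getItemAtPosition() returns the name rather than the slug. Since the spinner positions match the indices of the block list, you can look the slug up by position instead. A minimal sketch, assuming the block list and spinner fields from the question:

spinner.setOnItemSelectedListener(new AdapterView.OnItemSelectedListener() {
    @Override
    public void onItemSelected(AdapterView<?> parent, View view, int pos, long id) {
        // The spinner was populated from `block` in the same order,
        // so the selected position indexes the same list
        final String slug = block.get(pos).getBlockSlug();
        showBlockSlug(slug);
    }

    @Override
    public void onNothingSelected(AdapterView<?> parent) {
        // No selection, nothing to load
    }
});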
I have done some tests on these two classes. Could someone please help determine whether they are thread-safe? In particular, could using a plain HashMap instead of ConcurrentHashMap cause any concurrency issues, and how can I make the classes more thread-safe? What is the best approach to testing them concurrently?
I tested with HashMap only and it works fine; however, my test scale is only around 20 req/s for 2 minutes.
Can anyone suggest whether I should increase the request rate and try again, or point out anything that must be fixed.
@Component
public class TestLonggersImpl implements TestSLongger {

    @Autowired
    YamlReader yamlReader;

    @Autowired
    TestSCatalog gSCatalog;

    @Autowired
    ApplicationContext applicationContext;

    private static HashMap<String, TestLonggerImpl> gImplHashMap = new HashMap<>();

    private static final Longger LONGER = LonggerFactory.getLongger(AbstractSLongger.class);

    @PostConstruct
    public void init() {
        final String[] sts = yamlReader.getTestStreamNames();
        for (String st : sts) {
            System.out.println(st);
            LONGER.info(st);
        }

        HashMap<String, BSCatalog> statsCatalogHashMap = gSCatalog.getCatalogHashMap();
        for (Map.Entry<String, BSCatalog> entry : statsCatalogHashMap.entrySet()) {
            BSCatalog bCatalog = statsCatalogHashMap.get(entry.getKey());
            // Issue on creating the basicCategory
            SProperties sProperties = yamlReader.getTestMap().get(entry.getKey());
            Category category = new BasicCategory(sProperties.getSDefinitions(),
                    bCatalog.getVersion(),
                    bCatalog.getDescription(), new HashSet<>());
            final int version = statsCatalogHashMap.get(entry.getKey()).getVersion();
            getTestImplHashMap().put(entry.getKey(),
                    applicationContext.getBean(TestLonggerImpl.class, category,
                            entry.getKey(),
                            version));
        }
    }

    @Override
    public void logMessage(String st, String message) {
        if (getTestImplHashMap() != null && getTestImplHashMap().get(st) != null) {
            getTestImplHashMap().get(st).log(message);
        }
    }

    @VisibleForTesting
    static HashMap<String, TestLonggerImpl> getTestImplHashMap() {
        return gImplHashMap;
    }
}
The second class:
@Component
public class GStatsCatalog {

    @Autowired
    YamlReader yamlReader;

    private static HashMap<String, BStatsCatalog> stCatalogHashMap = new HashMap<>();

    @PostConstruct
    public void init() {
        String[] streams = yamlReader.getGSNames();
        for (String stream : streams) {
            BStatsCatalog bCatalog = new BStatsCatalog();
            SProperties streamProperties = yamlReader.getGMap().get(stream);
            bCatalog.setSName(stream);
            int version = VERSION;
            try {
                version = Integer.parseInt(streamProperties.getVersion());
            } catch (Exception e) {
                System.out.println(e.getMessage());
            }
            bCatalog.setVersion(version);
            bCatalog.setDescription(streamProperties.getDescription());
            stCatalogHashMap.put(stream, bCatalog);
        }
    }

    public static HashMap<String, BStatsCatalog> getCatalogHashMap() {
        return stCatalogHashMap;
    }

    public void setYamlReader(YamlReader yamlReader) {
        this.yamlReader = yamlReader;
    }
}
I think the methods annotated with @PostConstruct are thread-safe: they run only once, right after the bean is created, in the whole lifecycle of the bean.
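That said, the static maps populated in @PostConstruct are later read from request threads, and anything else mutating them concurrently would race on a plain HashMap. A defensive sketch, keeping everything else from the question unchanged, is to publish through a ConcurrentHashMap (the getter's return type would relax to Map accordingly):

// Thread-safe publication instead of a bare static HashMap
private static final Map<String, TestLonggerImpl> gImplHashMap = new ConcurrentHashMap<>();

And a simple way to probe logMessage under contention is to hammer it from a thread pool and fail the test on any propagated exception. A JUnit-style sketch; the thread counts and the testLonggers bean reference are illustrative:

@Test
public void logMessageSurvivesConcurrentCalls() throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(16);
    CountDownLatch start = new CountDownLatch(1);
    List<Future<?>> futures = new ArrayList<>();
    for (int i = 0; i < 16; i++) {
        futures.add(pool.submit(() -> {
            start.await(); // line all threads up before the burst
            for (int j = 0; j < 10_000; j++) {
                testLonggers.logMessage("someStream", "message-" + j);
            }
            return null;
        }));
    }
    start.countDown();
    for (Future<?> f : futures) {
        f.get(); // rethrows anything a worker thread threw
    }
    pool.shutdown();
}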
This is my source. How can I print a message if one of the members is shut down for some reason? I think I need some event or some kind of listener, but how?
import com.hazelcast.core.*;
import com.hazelcast.config.*;

import java.util.Map;

/**
 * @author alvel
 */
public class ShutDown {

    public static void main(String[] args) {
        Config cfg = new Config();
        HazelcastInstance memberOne = Hazelcast.newHazelcastInstance(cfg);
        HazelcastInstance memberTwo = Hazelcast.newHazelcastInstance(cfg);

        Map<Integer, String> customerMap = memberOne.getMap("customers");
        customerMap.put(1, "google");
        customerMap.put(2, "apple");
        customerMap.put(3, "yahoo");
        customerMap.put(4, "microsoft");
        System.out.println("Hazelcast Nodes in this cluster" + Hazelcast.getAllHazelcastInstances().size());

        memberOne.shutdown();
        System.out.println("Hazelcast Nodes in this cluster After shutdown" + Hazelcast.getAllHazelcastInstances().size());

        Map<Integer, String> customerRestored = memberTwo.getMap("customers");
        for (String val : customerRestored.values()) {
            System.out.println("-" + val);
        }
    }
}
Try this; it adds a few lines to your code and a new class:
public class ShutDown {

    static {
        // ONLY TEMPORARY
        System.setProperty("hazelcast.logging.type", "none");
    }

    public static void main(String[] args) {
        Config cfg = new Config();
        HazelcastInstance memberOne = Hazelcast.newHazelcastInstance(cfg);
        // ADDED TO MEMBER ONE
        memberOne.getCluster().addMembershipListener(new ShutDownMembershipListener());
        HazelcastInstance memberTwo = Hazelcast.newHazelcastInstance(cfg);
        // ADDED TO MEMBER TWO
        memberTwo.getCluster().addMembershipListener(new ShutDownMembershipListener());

        Map<Integer, String> customerMap = memberOne.getMap("customers");
        customerMap.put(1, "google");
        customerMap.put(2, "apple");
        customerMap.put(3, "yahoo");
        customerMap.put(4, "microsoft");
        System.out.println("Hazelcast Nodes in this cluster" + Hazelcast.getAllHazelcastInstances().size());

        memberOne.shutdown();
        System.out.println("Hazelcast Nodes in this cluster After shutdown" + Hazelcast.getAllHazelcastInstances().size());

        Map<Integer, String> customerRestored = memberTwo.getMap("customers");
        for (String val : customerRestored.values()) {
            System.out.println("-" + val);
        }
    }

    static class ShutDownMembershipListener implements MembershipListener {

        @Override
        public void memberAdded(MembershipEvent membershipEvent) {
            System.out.println(this + membershipEvent.toString());
        }

        @Override
        public void memberAttributeChanged(MemberAttributeEvent arg0) {
        }

        @Override
        public void memberRemoved(MembershipEvent membershipEvent) {
            System.out.println(this + membershipEvent.toString());
        }
    }
}
The line System.setProperty("hazelcast.logging.type", "none") is just for testing, to make it simpler to see what is happening.
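If you prefer configuration over wiring the listener in code, it can also be registered on the Config before the instances start, so every member created from that config gets it automatically. A sketch reusing the listener class above:

Config cfg = new Config();
// Registered declaratively: each instance created from this config
// gets the listener without an explicit addMembershipListener() call
cfg.addListenerConfig(new ListenerConfig(new ShutDownMembershipListener()));
HazelcastInstance memberOne = Hazelcast.newHazelcastInstance(cfg);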
It seems that the MapStore (write-behind mode) does not work properly when some map items are replaced.
I expected that map items which have been replaced are not processed anymore.
I am using Hazelcast 3.2.5.
Did I miss something here?
Please see the server test class, client test class, and output below, which demonstrate the problem.
Server Class:
public class HazelcastInstanceTest {

    public static void main(String[] args) {
        Config cfg = new Config();
        MapConfig mapConfig = new MapConfig();
        MapStoreConfig mapStoreConfig = new MapStoreConfig();
        mapStoreConfig.setEnabled(true);
        mapStoreConfig.setClassName("com.test.TestMapStore");
        mapStoreConfig.setWriteDelaySeconds(15);
        mapConfig.setMapStoreConfig(mapStoreConfig);
        mapConfig.setName("customers");
        cfg.addMapConfig(mapConfig);
        HazelcastInstance instance = Hazelcast.newHazelcastInstance(cfg);
    }
}
MapStore implementation class:
public class TestMapStore implements MapStore {

    @Override
    public Object load(Object arg0) {
        System.out.println("--> LOAD");
        return null;
    }

    @Override
    public Map loadAll(Collection arg0) {
        System.out.println("--> LOAD ALL");
        return null;
    }

    @Override
    public Set loadAllKeys() {
        System.out.println("--> LOAD ALL KEYS");
        return null;
    }

    @Override
    public void delete(Object arg0) {
        System.out.println("--> DELETE");
    }

    @Override
    public void deleteAll(Collection arg0) {
        System.out.println("--> DELETE ALL");
    }

    @Override
    public void store(Object arg0, Object arg1) {
        System.out.println("--> STORE " + arg1.toString());
    }

    @Override
    public void storeAll(Map arg0) {
        System.out.println("--> STORE ALL");
    }
}
Client class:
public class HazelcastClientTest {

    public static void main(String[] args) throws Exception {
        ClientConfig clientConfig = new ClientConfig();
        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
        IMap mapCustomers = client.getMap("customers");
        System.out.println("Map Size:" + mapCustomers.size());
        mapCustomers.put(1, "Item A");
        mapCustomers.replace(1, "Item B");
        mapCustomers.replace(1, "Item C");
        System.out.println("Map Size:" + mapCustomers.size());
    }
}
Client Output (which is ok):
Map Size:0
Map Size:1
Server output (which is not OK, I suppose; I expected only Item C):
--> LOAD ALL KEYS
--> LOAD
--> STORE Item A
--> STORE ALL
--> STORE Item B
--> STORE Item C
Any help is appreciated. Many thanks.
Context:
I am running a JUnit test in Eclipse using embedded Cassandra to test my DAO class, which uses an Astyanax client configured for the Java driver. When the DAO instance inserts into Cassandra, I get this exception: com.datastax.driver.core.exceptions.InvalidQueryException: Multiple definitions found for column ..columnname
Test class:
public class LeaderBoardDaoTest {

    private static LeaderBoardDao dao;
    public static CassandraCQLUnit cassandraCQLUnit;
    private String hostIp = "127.0.0.1";
    private int port = 9142;
    public Session session;
    public Cluster cluster;

    @BeforeClass
    public static void startCassandra() throws IOException, TTransportException, ConfigurationException, InterruptedException {
        System.setProperty("archaius.deployment.applicationId", "leaderboardapi");
        System.setProperty("archaius.deployment.environment", "test");
        EmbeddedCassandraServerHelper.startEmbeddedCassandra("cassandra.yaml");
        // cassandraCQLUnit = new CassandraCQLUnit(new
        // ClassPathCQLDataSet("simple.cql", "lbapi"), "cassandra.yaml");
        Injector injector = Guice.createInjector(new TestModule());
        dao = injector.getInstance(LeaderBoardDao.class);
    }

    @Before
    public void load() {
        cluster = new Cluster.Builder().withClusterName("leaderboardcassandra").addContactPoints(hostIp).withPort(port).build();
        session = cluster.connect();
        CQLDataLoader dataLoader = new CQLDataLoader(session);
        dataLoader.load(new ClassPathCQLDataSet("simple.cql", "lbapi"));
        session = dataLoader.getSession();
    }

    @Test
    public void test() {
        ResultSet result = session.execute("select * from mytable WHERE id='myKey01'");
        Assert.assertEquals(result.iterator().next().getString("value"), "myValue01");
    }

    @Test
    public void testInsert() {
        LeaderBoard lb = new LeaderBoard();
        lb.setName("name-1");
        lb.setDescription("description-1");
        lb.setActivityType(ActivityType.FUEL);
        lb.setImage("http:/");
        lb.setLbId(UUID.fromString("3F2504E0-4F89-41D3-9A0C-0305E82C3301"));
        lb.setStartTime(new Date());
        lb.setEndTime(new Date());
        dao.insert(lb);
        ResultSet resultSet = session.execute("select * from leaderboards WHERE leaderboardid='3F2504E0-4F89-41D3-9A0C-0305E82C3301'");
    }

    @After
    public void clearCassandra() {
        EmbeddedCassandraServerHelper.cleanEmbeddedCassandra();
    }

    @AfterClass
    public static void stopCassandra() {
        EmbeddedCassandraServerHelper.stopEmbeddedCassandra();
    }
}
Class under test:
@Singleton
public class LeaderBoardDao {

    private static final Logger log = LoggerFactory.getLogger(LeaderBoardDao.class);

    @Inject
    private AstyanaxMutationsJavaDriverClient client;

    private static final String END_TIME = "end_time";
    private static final String START_TIME = "start_time";
    private static final String IMAGE = "image";
    private static final String ACTIVITY_TYPE = "activity_type";
    private static final String DESCRIPTION = "description";
    private static final String NAME = "name";
    private static final String LEADERBOARD_ID = "leaderboardID";
    private static final String COLUMN_FAMILY_NAME = "leaderboards";

    private ColumnFamily<UUID, String> cf;

    public LeaderBoardDao() throws ConnectionException {
        cf = ColumnFamily.newColumnFamily(COLUMN_FAMILY_NAME, UUIDSerializer.get(), StringSerializer.get());
    }

    /**
     * Writes the leaderboard to the database.
     *
     * @param lb
     */
    public void insert(LeaderBoard lb) {
        try {
            MutationBatch m = client.getKeyspace().prepareMutationBatch();
            cf.describe(client.getKeyspace());
            m.withRow(cf, lb.getLbId()).putColumn(LEADERBOARD_ID, UUIDUtil.asByteArray(lb.getLbId()), null).putColumn(NAME, lb.getName(), null).putColumn(DESCRIPTION, lb.getDescription(), null)
                    .putColumn(ACTIVITY_TYPE, lb.getActivityType().name(), null).putColumn(IMAGE, lb.getImage()).putColumn(START_TIME, lb.getStartTime()).putColumn(END_TIME, lb.getEndTime());
            m.execute();
        } catch (ConnectionException e) {
            Throwables.propagate(e);
        }
    }

    /**
     * Reads a leaderboard from the database.
     *
     * @param id
     * @return {@link LeaderBoard}
     */
    public LeaderBoard read(UUID id) {
        OperationResult<ColumnList<String>> result;
        LeaderBoard lb = null;
        try {
            result = client.getKeyspace().prepareQuery(cf).getKey(id).execute();
            ColumnList<String> cols = result.getResult();
            if (!cols.isEmpty()) {
                lb = new LeaderBoard();
                lb.setLbId(cols.getUUIDValue(LEADERBOARD_ID, null));
                lb.setName(cols.getStringValue(NAME, null));
                lb.setActivityType(ActivityType.valueOf(cols.getStringValue(ACTIVITY_TYPE, null)));
                lb.setDescription(cols.getStringValue(DESCRIPTION, null));
                lb.setEndTime(cols.getDateValue(END_TIME, null));
                lb.setStartTime(cols.getDateValue(START_TIME, null));
                lb.setImage(cols.getStringValue(IMAGE, null));
            } else {
                log.warn("read: is empty: no record found for " + id);
            }
            return lb;
        } catch (ConnectionException e) {
            log.error("failed to read from C*", e);
            throw new RuntimeException("failed to read from C*", e);
        }
    }
}
When the Java driver throws an InvalidQueryException, it's rethrowing an error from Cassandra. The error "Multiple definitions found for column..." indicates that a column is mentioned more than once in an update statement. You can simulate it in cqlsh:
cqlsh> create table test(i int primary key);
cqlsh> insert into test (i, i) values (1, 2);
code=2200 [Invalid query] message="Multiple definitions found for column i"
I'm not familiar with Astyanax, but my guess is that it already adds the id to the query when you call withRow, so you don't need to add it again with putColumn. Try removing that call (second line in reformatted sample below):
m.withRow(cf, lb.getLbId())
    .putColumn(LEADERBOARD_ID, UUIDUtil.asByteArray(lb.getLbId()), null)
    ... // other putColumn calls
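If that guess is right, the corrected mutation would look something like this (a sketch based on the insert() method above, with only the LEADERBOARD_ID column dropped, since the row key already carries that value):

MutationBatch m = client.getKeyspace().prepareMutationBatch();
m.withRow(cf, lb.getLbId())
        .putColumn(NAME, lb.getName(), null)
        .putColumn(DESCRIPTION, lb.getDescription(), null)
        .putColumn(ACTIVITY_TYPE, lb.getActivityType().name(), null)
        .putColumn(IMAGE, lb.getImage())
        .putColumn(START_TIME, lb.getStartTime())
        .putColumn(END_TIME, lb.getEndTime());
m.execute();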