How to print column values in Mutator before execute? - cassandra

Below is the code I'm going to use to insert into Cassandra:
Set<String> keys = MY_KEYS;
Map<String, String> pairsOfNameValues = MY_MUTATION_BY_NAME_AND_VALUE;

Set<HColumn<String, String>> columns = new HashSet<HColumn<String, String>>();
for (Entry<String, String> pair : pairsOfNameValues.entrySet()) {
    columns.add(HFactory.createStringColumn(pair.getKey(), pair.getValue()));
}

Mutator<String> mutator = template.createMutator();
String column_family_name = template.getColumnFamily();
for (String key : keys) {
    for (HColumn<String, String> column : columns) {
        mutator.addInsertion(key, column_family_name, column);
    }
}
mutator.execute();
mutator.execute();
There are some cases where I don't know how many columns are inserted into the mutator. Is there any way to print the data before/after calling the execute method?
I tried MutationResult.toString(); it gives the following response:
MutationResult took (3750us) for query (n/a) on host:
localhost(127.0.0.1):9160
Mutator.toString() didn't give me the desired result either.
Please help.

Yep, try mutator.getPendingMutationCount() before executing the query.
Other than that, you'll have to track which columns you add to the mutator yourself. toString() doesn't give you what you want because you are supposed to bind mutator.execute() to a MutationResult, e.g.:
MutationResult mr = mutator.execute();
But the mutation result doesn't give you much more either. You can know these three things (two, really):
// the execution time
long getExecutionTimeMicro();
long getExecutionTimeNano();
// host used for the exec.
CassandraHost getHostUsed();
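If you need to see the actual column data rather than just the count, one option is to mirror each insertion into your own structure as you add it and print that before executing. A minimal sketch (the pending map and the logging are my own additions for illustration, not part of Hector's API):
Map<String, List<HColumn<String, String>>> pending = new HashMap<String, List<HColumn<String, String>>>();
for (String key : keys) {
    for (HColumn<String, String> column : columns) {
        mutator.addInsertion(key, column_family_name, column);
        // record the insertion so it can be printed later
        List<HColumn<String, String>> forKey = pending.get(key);
        if (forKey == null) {
            forKey = new ArrayList<HColumn<String, String>>();
            pending.put(key, forKey);
        }
        forKey.add(column);
    }
}
System.out.println("Pending mutations: " + mutator.getPendingMutationCount());
for (Map.Entry<String, List<HColumn<String, String>>> entry : pending.entrySet()) {
    for (HColumn<String, String> column : entry.getValue()) {
        System.out.println(entry.getKey() + " -> " + column.getName() + " = " + column.getValue());
    }
}
MutationResult mr = mutator.execute();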

Related

HashMap loop is printing values twice

I'm very new to coding. I'm using simple code to print values from a Cucumber feature file using a HashMap:
public void verifyIconText(List<MobileElement> element, List<List<String>> data) {
    HashMap<String, String> key = new LinkedHashMap<String, String>();
    List<List<String>> datavalue = data;
    for (List<String> param : datavalue) {
        String paramvalue = param.get(1).trim();
        System.out.println("param is " + param);
        System.out.println("datavalue is " + datavalue);
        System.out.println("paramvalue is " + paramvalue);
    }
}
The output prints twice, and I can't work out why. Thanks in advance.
Feature file values:
Then verify Enter your Mobile Number screen
|MobileNumber_Header_Text|Enter your Mobile Number|
|MobileNumber_Body_Text|You'll receive an access code through text. Standard message & data rates apply.|
|ContinueWithEmail_Button|TRUE|
|Mobilenumber_Field|TRUE|

Datastax Java Driver failed to scan an entire table

I iterated over the entire table and received fewer partitions than expected.
Initially I thought it must be something wrong on my end, but after checking the existence of every row with a simple WHERE query (I have a list of the billions of keys I used), and also verifying the expected count with the Spark connector, I concluded that it can't be anything other than the driver.
I have billions of data rows, yet I received half a billion fewer.
Has anyone else encountered this issue and been able to resolve it?
Adding a code snippet.
The table is a simple counter table:
CREATE TABLE counter_data (
    id text,
    name text,
    count_val counter,
    PRIMARY KEY (id, name)
);
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.QueryLogger;
import com.datastax.driver.core.QueryOptions;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.querybuilder.QueryBuilder;

public class CountTable {
    private Session session;
    private Statement countQuery;

    public void initSession(String table) {
        QueryOptions queryOptions = new QueryOptions();
        queryOptions.setConsistencyLevel(ConsistencyLevel.ONE);
        queryOptions.setFetchSize(100);
        QueryLogger queryLogger = QueryLogger.builder().build();
        Cluster cluster = Cluster.builder().addContactPoints("ip").withPort(9042)
                .withQueryOptions(queryOptions) // apply the consistency level and fetch size
                .build();
        cluster.register(queryLogger);
        this.session = cluster.connect("ks");
        this.countQuery = QueryBuilder.select("id").from(table);
    }

    public void performCount() {
        ResultSet results = session.execute(countQuery);
        int count = 0;
        String lastKey = "";
        // count distinct partition keys as the rows stream in
        for (Row row : results) {
            String key = row.getString(0);
            if (!key.equals(lastKey)) {
                lastKey = key;
                count++;
            }
        }
        session.close();
        System.out.println("count is " + count);
    }

    public static void main(String[] args) {
        CountTable countTable = new CountTable();
        countTable.initSession("counter_data");
        countTable.performCount();
    }
}
Upon checking your code, the consistency level requested is ONE, which is comparable to a dirty read in the RDBMS world.
queryOptions.setConsistencyLevel(ConsistencyLevel.ONE);
For stronger consistency, that is, to get back all records, use LOCAL_QUORUM. Update your code as follows:
queryOptions.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
LOCAL_QUORUM guarantees that a majority of the replica nodes (in your case 2 out of 3) respond to the read request, which gives stronger consistency and an accurate number of rows. See the documentation reference on consistency.
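If you only need the stronger consistency for this particular scan, you can also set it on the statement itself instead of cluster-wide; a minimal sketch reusing the countQuery from the code above:
// override the cluster-wide default just for this full-table scan
countQuery.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
ResultSet results = session.execute(countQuery);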

Losing properties when converting a Cassandra column to a Java object

I use spring-data-cassandra-1.2.1.RELEASE to operate a Cassandra database. Everything went well until recently, when I hit a problem using this code to get data:
public UserInfoCassandra selectUserInfo(String passport) {
    Select select = QueryBuilder.select().from("userinfo");
    select.setConsistencyLevel(ConsistencyLevel.QUORUM);
    select.where(QueryBuilder.eq("passport", passport));
    UserInfoCassandra userinfo = operations.selectOne(select,
            UserInfoCassandra.class);
    return userinfo;
}
There are many properties in userinfo, but I only get two of them: passport and uid.
I debugged into the method and found that the data coming from the DB is right, with all properties present, but when converting to a Java object some of them disappear. The converting code:
protected <T> T selectOne(Select query, CassandraConverterRowCallback<T> readRowCallback) {
    ResultSet resultSet = query(query);
    Iterator<Row> iterator = resultSet.iterator();
    if (iterator.hasNext()) {
        Row row = iterator.next();
        T result = readRowCallback.doWith(row);
        if (iterator.hasNext()) {
            throw new DuplicateKeyException("found two or more results in query " + query);
        }
        return result;
    }
    return null;
}
The row data is right, but the result is wrong. Who can help?
Most probably your entity class and its corresponding relational model are mismatched.
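For example, the converter can only populate fields it can match against columns of the userinfo table. A minimal sketch of an annotated entity for spring-data-cassandra 1.2.x (the user_name field is a hypothetical example, not taken from the question):
import org.springframework.data.cassandra.mapping.Column;
import org.springframework.data.cassandra.mapping.PrimaryKey;
import org.springframework.data.cassandra.mapping.Table;

@Table("userinfo")
public class UserInfoCassandra {

    @PrimaryKey
    private String passport;

    private String uid;

    // hypothetical: a field whose name differs from its column must be
    // mapped explicitly, otherwise the converter leaves it unpopulated
    @Column("user_name")
    private String userName;

    // getters and setters omitted
}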

j2me - How to store custom objects using RMS

In the applications I'm developing I need to store data for Customers, Products and their Prices.
To persist that data I use RMS. Since RMS doesn't support object serialization directly, and the data I read already comes in JSON format, I store every JSONObject as its string version, like this:
rs = RecordStore.openRecordStore(mRecordStoreName, true);
JSONArray jsArray = new JSONArray(data);
for (int i = 0; i < jsArray.length(); i++) {
    JSONObject jsObj = jsArray.getJSONObject(i);
    stringJSON = jsObj.toString();
    addRecord(stringJSON, rs);
}
The addRecord method:
public int addRecord(String stringJSON, RecordStore rs) throws JSONException, RecordStoreException {
    int id = -1;
    byte[] raw = stringJSON.getBytes();
    id = rs.addRecord(raw, 0, raw.length);
    return id;
}
So I have three RecordStores (Customers, Products and their Prices), and for each of them I save its data as shown above.
I know this might be a possible solution, but I'm sure there's got to be a better implementation, especially considering that over those three "tables" I'm going to perform searching, sorting, etc.
In those cases, having to deserialize before searching or sorting doesn't seem like a very good idea.
That's why I want to ask you guys: in your experience, how do you store custom objects in RMS in a way that makes them easy to work with later?
I really appreciate all your comments and suggestions.
EDIT
It seems that it's easier to work with records when you define a fixed max length for each field. So here's what I tried:
1) First of all, this is the class I use to retrieve the values from the record store:
public class Customer {
    public int idCust;
    public String name;
    public String IDNumber;
    public String address;
}
2) This is the code I use to save every JSONObject to the record store:
RecordStore rs = null;
try {
    rs = RecordStore.openRecordStore(mRecordStoreName, true);
    JSONArray js = new JSONArray(data);
    for (int i = 0; i < js.length(); i++) {
        JSONObject jsObj = js.getJSONObject(i);
        byte[] record = packRecord(jsObj);
        rs.addRecord(record, 0, record.length);
    }
} finally {
    if (rs != null) {
        rs.closeRecordStore();
    }
}
The packRecord method:
private byte[] packRecord(JSONObject jsonObj) throws IOException, JSONException {
    ByteArrayOutputStream raw = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(raw);
    out.writeInt(jsonObj.getInt("idCust"));
    out.writeUTF(jsonObj.getString("name"));
    out.writeUTF(jsonObj.getString("IDNumber"));
    out.writeUTF(jsonObj.getString("address"));
    return raw.toByteArray();
}
3) This is how I pull all the records from the record store:
RecordStore rs = null;
RecordEnumeration re = null;
try {
    rs = RecordStore.openRecordStore(mRecordStoreName, true);
    re = rs.enumerateRecords(null, null, false);
    while (re.hasNextElement()) {
        Customer c;
        int idRecord = re.nextRecordId();
        byte[] record = rs.getRecord(idRecord);
        c = parseRecord(record);
        // Do something with the parsed object (Customer)
    }
} finally {
    if (re != null) {
        re.destroy();
    }
    if (rs != null) {
        rs.closeRecordStore();
    }
}
The parseRecord method:
private Customer parseRecord(byte[] record) throws IOException {
    Customer cust = new Customer();
    ByteArrayInputStream raw = new ByteArrayInputStream(record);
    DataInputStream in = new DataInputStream(raw);
    cust.idCust = in.readInt();
    cust.name = in.readUTF();
    cust.IDNumber = in.readUTF();
    cust.address = in.readUTF();
    return cust;
}
This is how I implemented what Mister Smith suggested (I hope it's what he had in mind). However, I'm still not very sure about how to implement the searches.
I almost forgot to mention that before I made these changes to my code, the size of my RecordStore was 229048 bytes; now it is only 158872 bytes :)
RMS is nothing like a database. You have to think of it as a record set, where each record is a byte array.
Because of this, it is easier to work with it when you define a fixed max length for each field in the record. For instance, a record could be some info about a player in a game (max level reached, score, player name, etc.). You could define the level field as 4 bytes long (an int), then a score field of 8 bytes (a long), then the name as a 100-byte field (a string). This is tricky because strings usually will be of variable length, but you would probably like a fixed max length for this field, and if some string is shorter than that, you'd use a string terminator char to delimit it. (This example is actually bad because the string is the last field, so it would have been easier to keep it variable length. Just imagine you have several consecutive fields of type string.)
To help you with serialization/deserialization, you can use DataOutputStream and DataInputStream. With these classes you can read/write strings in UTF and they will insert the string delimiters for you. But this means that when you need a field, since you don't know exactly where it is located, you'll have to read the array up to that position first.
The advantage of fixed lengths is that you could later use a RecordFilter, and if you wanted to retrieve records of players that have reached a score greater than 10000, you could look at the "score" field at exactly the same position (an offset of 4 bytes from the start of the byte array).
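For instance, a minimal sketch of such a filter, assuming the player record layout described above (a 4-byte int level followed by an 8-byte long score):
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import javax.microedition.rms.RecordFilter;

public class ScoreFilter implements RecordFilter {
    private final long minScore;

    public ScoreFilter(long minScore) {
        this.minScore = minScore;
    }

    public boolean matches(byte[] candidate) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(candidate));
            in.skipBytes(4);                   // skip the 4-byte level field
            return in.readLong() >= minScore;  // compare the 8-byte score field
        } catch (IOException e) {
            return false;
        }
    }
}
You would then pass it to the enumeration, e.g. rs.enumerateRecords(new ScoreFilter(10000), null, false).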
So it's a tradeoff. Fixed lengths mean faster access to fields (faster searches) but potentially wasted space. Variable lengths mean minimal storage space but slower searches. What is best for your case will depend on the number of records and the kinds of searches you need.
You have a good collection of tutorials on the net. Just to name a few:
http://developer.samsung.com/java/technical-docs/Java-ME-Record-Management-System
http://developer.nokia.com/community/wiki/Persistent_Data_in_Java_ME

SqlDataReader and method scope

Why is data read from SqlDataReader not available to a method call?
I have a table with an 'id' column in it.
When I query the database, it returns rows.
This code doesn't work (it says the 'id' column doesn't exist):
con.Open();
SqlDataReader requestReader = cmd.ExecuteReader();
if (requestReader.HasRows)
{
    DataTable requestTable = requestReader.GetSchemaTable();
    request = ReadRequest(requestTable.Rows[0]);
}
con.Close();
while this one works:
con.Open();
SqlDataReader requestReader = cmd.ExecuteReader();
if (requestReader.HasRows)
{
    DataTable requestTable = requestReader.GetSchemaTable();
    var requestRow = requestTable.Rows[0];
    request = new Request();
    request.UniqueId = (string)requestRow["id"];
}
con.Close();
You are using DataReader.GetSchemaTable, which returns a DataTable with the schema information for a given table.
It has the following columns:
ColumnName
ColumnOrdinal
ColumnSize
NumericPrecision
// .. 26 others
So you won't find your id column, which belongs to your table; that's why you get the error "'id' column doesn't exist". I doubt that your second approach works. I don't see why you need GetSchemaTable at all. You just have to advance the reader to the next record:
if (requestReader.HasRows && requestReader.Read())
{
    int id = requestReader.GetInt32(requestReader.GetOrdinal("id"));
    // ...
}
