How to create a Hibernate configuration file for MySQL Cluster? - mysql-cluster

I need a Hibernate configuration file (hibernate.cfg.xml) for MySQL Cluster.
[Hibernate] Auto-generate POJO classes and *.hbm.xml files.
I am able to access a MySQL database using the following configuration.
I am also able to access the MySQL NDB Cluster database using plain JDBC connectivity.
The problem is that when I use the MySQL NDB Cluster database credentials, I am not able to access the database through Hibernate.
Please suggest any other configuration for connecting to a MySQL NDB Cluster database using the Hibernate configuration file (hibernate.cfg.xml).
I think the solution is that a new dialect is required for MySQL NDB clustered table types (see the sketch after the configuration below).
Otherwise, are any changes needed in the configuration file?
<property name="hibernate.bytecode.use_reflection_optimizer">false</property>
<property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
<property name="hibernate.connection.password">HAZE#rt!f!c!aldb</property>
<property name="hibernate.connection.pool_size">10</property>
<property name="hibernate.connection.url">jdbc:mysql://192.168.1.187:3306/haze_videocon_v0.8</property>
<property name="hibernate.connection.username">haze</property>
<property name="hibernate.current_session_context_class">thread</property>
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
<property name="hibernate.search.autoregister_listeners">false</property>
<property name="hibernate.show_sql">true</property>
<property name="hibernate.validator.apply_to_ddl">false</property>
</session-factory>
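If a dedicated dialect really is needed for the NDB table type, a minimal sketch could look like the class below. This is only an assumption modelled on how Hibernate's own MySQL5InnoDBDialect works (it changes nothing but the storage engine that Hibernate puts into generated DDL); the package name is hypothetical.
MySQL5NDBDialect.java (sketch):
package com.example.hibernate; // hypothetical package

import org.hibernate.dialect.MySQL5Dialect;

// Sketch of a dialect for MySQL Cluster (NDB) tables, modelled on
// Hibernate's MySQL5InnoDBDialect. It only affects the engine clause
// that Hibernate appends when it generates DDL; ordinary reads and
// writes already work with the standard MySQL dialects.
public class MySQL5NDBDialect extends MySQL5Dialect {

    @Override
    public String getTableTypeString() {
        // Ask MySQL to create tables in the NDBCLUSTER storage engine
        return " ENGINE=ndbcluster";
    }
}
To try it, point hibernate.dialect at this class instead of org.hibernate.dialect.MySQLDialect in hibernate.cfg.xml.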

It is necessary to have MySQL Cluster up and running. For simplicity, all of the nodes (processes) making up the Cluster are run on the same physical host, along with the application.
These are the MySQL Cluster configuration files being used:
config.ini:
[ndbd default]
noofreplicas=2
datadir=/home/billy/mysql/my_cluster/data
[ndbd]
hostname=localhost
id=3
[ndbd]
hostname=localhost
id=4
[ndb_mgmd]
id = 1
hostname=localhost
datadir=/home/billy/mysql/my_cluster/data
[mysqld]
hostname=localhost
id=101
[api]
hostname=localhost
my.cnf:
[mysqld]
ndbcluster
datadir=/home/billy/mysql/my_cluster/data
basedir=/usr/local/mysql
This focuses on ClusterJ rather than on running MySQL Cluster; if you are new to MySQL Cluster then refer to running a simple Cluster before trying this.
ClusterJ needs to be told how to connect to our MySQL Cluster database, including the connect string (the address/port for the management node), the database to use, the user to log in as and attributes for the connection such as the timeout values. If these parameters aren't defined then ClusterJ will fail with run-time exceptions. This information makes up the configuration properties of the connection. These parameters can be hard-coded in the application code but it is more maintainable to create a clusterj.properties file that will be imported by the application. This file should be stored in the same directory as your application source code.
clusterj.properties:
com.mysql.clusterj.connectstring=localhost:1186
com.mysql.clusterj.database=clusterdb
com.mysql.clusterj.connect.retries=4
com.mysql.clusterj.connect.delay=5
com.mysql.clusterj.connect.verbose=1
com.mysql.clusterj.connect.timeout.before=30
com.mysql.clusterj.connect.timeout.after=20
com.mysql.clusterj.max.transactions=1024
As ClusterJ will not create tables automatically, the next step is to create the 'clusterdb' database (referred to in clusterj.properties) and the 'employee' table:
[dadaso@ubuntu14-lts-server ~]$ mysql -u root -h 127.0.0.1 -P 3306
mysql> create database clusterdb;use clusterdb;
mysql> CREATE TABLE employee (
-> id INT NOT NULL PRIMARY KEY,
-> first VARCHAR(64) DEFAULT NULL,
-> last VARCHAR(64) DEFAULT NULL,
-> municipality VARCHAR(64) DEFAULT NULL,
-> started VARCHAR(64) DEFAULT NULL,
-> ended VARCHAR(64) DEFAULT NULL,
-> department INT NOT NULL DEFAULT 1,
-> UNIQUE KEY idx_u_hash (first,last) USING HASH,
-> KEY idx_municipality (municipality)
-> ) ENGINE=NDBCLUSTER;
The next step is to create the annotated interface:
Employee.java:
import com.mysql.clusterj.annotation.Column;
import com.mysql.clusterj.annotation.Index;
import com.mysql.clusterj.annotation.PersistenceCapable;
import com.mysql.clusterj.annotation.PrimaryKey;
@PersistenceCapable(table="employee")
@Index(name="idx_uhash")
public interface Employee {
@PrimaryKey
int getId();
void setId(int id);
String getFirst();
void setFirst(String first);
String getLast();
void setLast(String last);
@Column(name="municipality")
@Index(name="idx_municipality")
String getCity();
void setCity(String city);
String getStarted();
void setStarted(String date);
String getEnded();
void setEnded(String date);
Integer getDepartment();
void setDepartment(Integer department);
}
The name of the table is specified in the annotation @PersistenceCapable(table="employee") and then each column from the employee table has an associated getter and setter method defined in the interface. By default, the property name in the interface is the same as the column name in the table; the column name has been overridden for the City property by explicitly including the @Column(name="municipality") annotation just before the associated getter method. The @PrimaryKey annotation is used to identify the property whose associated column is the Primary Key in the table. ClusterJ is made aware of the existence of indexes in the database using the @Index annotation.
The next step is to write the application code which we step through here block by block; the first of which simply contains the import statements and then loads the contents of the clusterj.properties defined above:
Main.java (part 1):
import com.mysql.clusterj.ClusterJHelper;
import com.mysql.clusterj.SessionFactory;
import com.mysql.clusterj.Session;
import com.mysql.clusterj.Query;
import com.mysql.clusterj.query.QueryBuilder;
import com.mysql.clusterj.query.QueryDomainType;
import java.io.File;
import java.io.InputStream;
import java.io.FileInputStream;
import java.io.*;
import java.util.Properties;
import java.util.List;
public class Main {
public static void main (String[] args) throws java.io.FileNotFoundException,java.io.IOException {
// Load the properties from the clusterj.properties file
File propsFile = new File("clusterj.properties");
InputStream inStream = new FileInputStream(propsFile);
Properties props = new Properties();
props.load(inStream);
//Used later to get userinput
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
The next step is to get a handle for a SessionFactory from the ClusterJHelper class and then use that factory to create a session (based on the properties imported from the clusterj.properties file).
Main.java (part 2):
// Create a session (connection to the database)
SessionFactory factory = ClusterJHelper.getSessionFactory(props);
Session session = factory.getSession();
Now that we have a session, it is possible to instantiate new Employee objects and then persist them to the database. Where there are no transaction begin() or commit() statements, each operation involving the database is treated as a separate transaction.
Main.java (part 3):
// Create and initialise an Employee
Employee newEmployee = session.newInstance(Employee.class);
newEmployee.setId(988);
newEmployee.setFirst("John");
newEmployee.setLast("Jones");
newEmployee.setStarted("1 February 2009");
newEmployee.setDepartment(666);
// Write the Employee to the database
session.persist(newEmployee);
At this point, a row will have been added to the 'employee' table. To verify this, a new Employee object is created and used to read the data back from the 'employee' table using the primary key (Id) value of 988:
Main.java (part 4):
// Fetch the Employee from the database
Employee theEmployee = session.find(Employee.class, 988);
if (theEmployee == null)
{System.out.println("Could not find employee");}
else
{System.out.println ("ID: " + theEmployee.getId() + "; Name: " +
theEmployee.getFirst() + " " + theEmployee.getLast());
System.out.println ("Location: " + theEmployee.getCity());
System.out.println ("Department: " + theEmployee.getDepartment());
System.out.println ("Started: " + theEmployee.getStarted());
System.out.println ("Left: " + theEmployee.getEnded());
}
This is the output seen at this point:
ID: 988; Name: John Jones
Location: null
Department: 666
Started: 1 February 2009
Left: null
Check the database before I change the Employee - hit return when you are done
The next step is to modify this data, but it is not written back to the database yet:
Main.java (part 5):
// Make some changes to the Employee & write back to the database
theEmployee.setDepartment(777);
theEmployee.setCity("London");
System.out.println("Check the database before I change the Employee -
hit return when you are done");
String ignore = br.readLine();
The application will pause at this point and give you a chance to check the database to confirm that the original data has been added as a new row but the changes have not been written back yet:
mysql> select * from clusterdb.employee;
+-----+-------+-------+--------------+-----------------+-------+------------+
| id | first | last | municipality | started | ended | department |
+-----+-------+-------+--------------+-----------------+-------+------------+
| 988 | John | Jones | NULL | 1 February 2009 | NULL | 666 |
+-----+-------+-------+--------------+-----------------+-------+------------+
After hitting return, the application will continue and write the changes to the table, using an automatic transaction to perform the update.
Main.java (part 6):
session.updatePersistent(theEmployee);
System.out.println("Check the change in the table before I bulk add
Employees - hit return when you are done");
ignore = br.readLine();
The application will again pause so that we can now check that the change has been written back (persisted) to the database:
mysql> select * from clusterdb.employee;
+-----+-------+-------+--------------+-----------------+-------+------------+
| id | first | last | municipality | started | ended | department |
+-----+-------+-------+--------------+-----------------+-------+------------+
| 988 | John | Jones | London | 1 February 2009 | NULL | 777 |
+-----+-------+-------+--------------+-----------------+-------+------------+
The application then goes on to create and persist 100 new employees. To improve performance, a single transaction is used so that all of the changes can be written to the database at once when the commit() statement is run:
Main.java (part 7):
// Add 100 new Employees - all as part of a single transaction
newEmployee.setFirst("Billy");
newEmployee.setStarted("28 February 2009");
session.currentTransaction().begin();
for (int i=700;i<800;i++) {
newEmployee.setLast("No-Mates"+i);
newEmployee.setId(i+1000);
newEmployee.setDepartment(i);
session.persist(newEmployee);
}
session.currentTransaction().commit();
The 100 new employees will now have been persisted to the database. The next step is to create and execute a query that will search the database for all employees in department 777, by using a QueryBuilder to build a QueryDomainType that compares the 'department' column with a parameter. After creating the query, the department parameter is set to 777 (the query could subsequently be reused with different department numbers). The application then runs the query, iterates through the result set and displays each of the employees:
Main.java (part 8):
// Retrieve the set all of Employees in department 777
QueryBuilder builder = session.getQueryBuilder();
QueryDomainType<Employee> domain = builder.createQueryDefinition(Employee.class);
domain.where(domain.get("department").equal(domain.param("department")));
Query<Employee> query = session.createQuery(domain);
query.setParameter("department",777);
List<Employee> results = query.getResultList();
for (Employee deptEmployee: results) {
System.out.println ("ID: " + deptEmployee.getId() + "; Name: " +
deptEmployee.getFirst() + " " + deptEmployee.getLast());
System.out.println ("Location: " + deptEmployee.getCity());
System.out.println ("Department: " + deptEmployee.getDepartment());
System.out.println ("Started: " + deptEmployee.getStarted());
System.out.println ("Left: " + deptEmployee.getEnded());
}
System.out.println("Last chance to check database before emptying table
- hit return when you are done");
ignore = br.readLine();
At this point, the application will display the following and prompt the user to allow it to continue:
ID: 988; Name: John Jones
Location: London
Department: 777
Started: 1 February 2009
Left: null
ID: 1777; Name: Billy No-Mates777
Location: null
Department: 777
Started: 28 February 2009
Left: null
We can compare that output with an SQL query performed on the database:
mysql> select * from employee where department=777;
+------+-------+-------------+--------------+------------------+-------+------------+
| id | first | last | municipality | started | ended | department |
+------+-------+-------------+--------------+------------------+-------+------------+
| 988 | John | Jones | London | 1 February 2009 | NULL | 777 |
| 1777 | Billy | No-Mates777 | NULL | 28 February 2009 | NULL | 777 |
+------+-------+-------------+--------------+------------------+-------+------------+
Finally, after pressing return again, the application will remove all employees:
Main.java (part 9):
session.deletePersistentAll(Employee.class);
As a final check, an SQL query confirms that all of the rows have been deleted from the ‘employee’ table.
mysql> select * from employee;
Empty set (0.00 sec)
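To compile and run the application, the ClusterJ API jar needs to be on the compile-time classpath, and the ClusterJ runtime jar plus the NDB client native library need to be available at run time. The paths below are assumptions for a tarball install under /usr/local/mysql; adjust them to match your installation:
javac -classpath /usr/local/mysql/share/mysql/java/clusterj-api.jar:. Main.java Employee.java
java -classpath /usr/local/mysql/share/mysql/java/clusterj.jar:. -Djava.library.path=/usr/local/mysql/lib Main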

Related

How to check backfill property of materialized view after its creation in database?

I have created a materialized view in ADX with the backfill property set. How can I check the backfill property after its creation using a Kusto command?
Example:
.create async ifnotexists materialized-view with (backfill=true, docString="Asset Trends", effectiveDateTime=datetime(2022-06-08)) AssetTrend on table Variables {
Variables | summarize Normal = countif(value<=1), CheckSUM = countif(value>1 and value<=250), OutofSpecification = countif(value>250 and value<=500), MaintenanceRequired = countif(value>500 and value<=750), Failure = countif(value>750 and value<=1000) by bin(timestamp,1s) , model, objectId, tenantId, variable }
Update (following Yifat's comment):
.show materialized-view MyMV
| project EffectiveDateTime
EffectiveDateTime
2022-08-29T11:25:48.2667521Z
Search for the relevant ClientActivityId and then run this:
.show commands
| where ClientActivityId == ...
| project ResourcesUtilization.ScannedExtentsStatistics
| evaluate bag_unpack(ResourcesUtilization_ScannedExtentsStatistics)
| distinct *
MaxDataScannedTime:   2022-08-29T11:25:52.0952791Z
MinDataScannedTime:   2022-08-29T11:25:48.2667522Z
ScannedExtentsCount:  8
ScannedRowsCount:     120000000
TotalExtentsCount:    8
TotalRowsCount:       120000000
Here is the information from the table the MV was based on:
.show table r100k details
| project TotalExtents, TotalRowCount, MinExtentsCreationTime, MaxExtentsCreationTime
TotalExtents:            8
TotalRowCount:           120000000
MinExtentsCreationTime:  2022-08-29T11:25:48.2667522Z
MaxExtentsCreationTime:  2022-08-29T11:25:52.0952791Z

How to implement pagination for cassandra by using keys?

I'm trying to implement some kind of pagination feature for my app that uses Cassandra in the backend.
CREATE TABLE sample (
some_pk int,
some_id int,
name1 text,
name2 text,
value text,
PRIMARY KEY (some_pk, some_id, name1, name2)
)
WITH CLUSTERING ORDER BY(some_id DESC)
I want to query 100 records, then store the last records keys in memory to use them later.
+---------+---------+-------+-------+-------+
| sample_pk| some_id | name1 | name2 | value |
+---------+---------+-------+-------+-------+
| 1 | 125 | x | '' | '' |
+---------+---------+-------+-------+-------+
| 1 | 124 | a | '' | '' |
+---------+---------+-------+-------+-------+
| 1 | 124 | b | '' | '' |
+---------+---------+-------+-------+-------+
| 1 | 123 | y | '' | '' |
+---------+---------+-------+-------+-------+
(For simplicity, I left some columns empty. The partition key (sample_pk) is not important.)
Let's assume my page size is 2.
select * from sample where sample_pk=1 limit 2;
This returns the first 2 rows. Now I store the last record of the query result and run the query again to get the next 2 rows.
This query does not work because of the restriction to a single non-EQ relation:
select * from sample where sample_pk=1 and some_id <= 124 and name1>='a' and name2>='' limit 2;
And this one returns wrong results because some_id is in descending order while the name columns are in ascending order:
select * from sample where sample_pk=1 and (some_id, name1, name2) <= (124, 'a', '') limit 2;
So I'm stuck. How can I implement pagination?
You can run your second query like this:
select * from sample where some_pk =1 and some_id <= 124 limit x;
Now, after fetching the records, ignore the record(s) which you have already read (this is possible because you are storing the last record from the previous select query).
If, after ignoring those records, you end up with an empty list of rows, you have iterated over all the records; otherwise, continue doing this for your pagination task. A sketch of this approach is shown below.
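A minimal sketch of that approach with the DataStax 3.x Java driver is shown here; it assumes you saved the clustering key of the last row returned on the previous page (lastId, lastName1, lastName2), and it simply over-fetches and drops the rows that were already returned.
KeyBasedPager.java (sketch):
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;
import java.util.ArrayList;
import java.util.List;

public class KeyBasedPager {

    // Returns the page that follows the row identified by (lastId, lastName1, lastName2).
    public static List<Row> nextPage(Session session, int pageSize,
                                     int lastId, String lastName1, String lastName2) {
        // Over-fetch: rows from the tail of the previous page can come back again,
        // because some_id <= lastId is the only restriction we are allowed to use.
        Statement stmt = new SimpleStatement(
                "SELECT * FROM sample WHERE some_pk = 1 AND some_id <= ? LIMIT ?",
                lastId, pageSize * 2);
        List<Row> page = new ArrayList<>();
        for (Row row : session.execute(stmt)) {
            int id = row.getInt("some_id");
            String n1 = row.getString("name1");
            String n2 = row.getString("name2");
            // Clustering order is (some_id DESC, name1 ASC, name2 ASC), so a row was
            // already returned if it shares some_id with the last row and sorts at or
            // before it on (name1, name2).
            boolean alreadyRead = id == lastId
                    && (n1.compareTo(lastName1) < 0
                        || (n1.equals(lastName1) && n2.compareTo(lastName2) <= 0));
            if (alreadyRead) {
                continue;
            }
            page.add(row);
            if (page.size() == pageSize) {
                break;
            }
        }
        return page;
    }
}
If the over-fetch of twice the page size is not enough to cover the repeated rows (many rows sharing the same some_id), the limit would have to be raised or the query repeated; the driver-based paging in the next answer avoids this bookkeeping entirely.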
You don't have to store any keys in memory, and you don't need to use LIMIT in your CQL query. Just use the paging capabilities of the DataStax driver in your application code, as in the following code:
public Response getFromCassandra(Integer itemsPerPage, String pageIndex) {
Response response = new Response();
String query = "select * from sample where sample_pk=1";
Statement statement = new SimpleStatement(query).setFetchSize(itemsPerPage); // set the number of items we want per page (fetch size)
// imagine page '0' indicates the first page, so if pageIndex = '0' then there is no paging state
if (!pageIndex.equals("0")) {
statement.setPagingState(PagingState.fromString(pageIndex));
}
ResultSet rows = session.execute(statement); // execute the query
Integer numberOfRows = rows.getAvailableWithoutFetching(); // this should get only number of rows = fetchSize (itemsPerPage)
Iterator<Row> iterator = rows.iterator();
while (numberOfRows-- != 0) {
response.getRows().add(iterator.next());
}
PagingState pagingState = rows.getExecutionInfo().getPagingState();
if(pagingState != null) { // there is still remaining pages
response.setNextPageIndex(pagingState.toString());
}
return response;
}
Note that if you write the while loop like the following:
while(iterator.hasNext()) {
response.getRows().add(iterator.next());
}
it will first fetch a number of rows equal to the fetch size we set; then, as long as the query still matches rows in Cassandra, it will keep fetching from Cassandra until it has retrieved all rows matching the query, which may not be what you intend if you want to implement a pagination feature.
source: https://docs.datastax.com/en/developer/java-driver/3.2/manual/paging/
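For completeness, a hypothetical usage of the method above could look like this; the dao object and the getNextPageIndex() accessor on the returned Response are assumptions mirroring the setNextPageIndex() call in the code above:
// Fetch the first page ("0" marks the first page in the method above).
Response firstPage = dao.getFromCassandra(100, "0");
// The opaque paging state; null when there are no more pages.
String nextIndex = firstPage.getNextPageIndex();
if (nextIndex != null) {
    // Pass the state back to resume where the previous page stopped.
    Response secondPage = dao.getFromCassandra(100, nextIndex);
}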

Hive auto increment UDF doesn't give desired results

I am trying to create a UDF in Hive. This UDF has to auto-increment a Hive table column called id.
Now the following is the Java code to create the UDF.
package myudf;
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.hive.ql.udf.UDFType;
@UDFType(deterministic = false, stateful = true)
public class autoincrement extends UDF{
int lastValue;
public int evaluate() {
lastValue++;
return lastValue;
}
}
Now I am able to create a jar file and add the jar file to Hive as below:
add jar /home/cloudera/Desktop/increment.jar;
Then create a temporary function
create temporary function inc as 'myudf.autoincrement';
Create table like below.
Create table abc(id int, name string)
Insert values:
INSERT into TABLE abc SELECT inc() as id, 'Tim';
Do select statement:
select * from abc;
Output:
1 Tim
Insert values:
INSERT into TABLE abc SELECT inc() as id, 'John';
Do select statement:
select * from abc
Output:
1 Tim
1 John
But this is not what I expected when I inserted values for the 2nd time.
My expected output was:
1 Tim
2 John
How do I get the expected output? What should I change in the Java code to get the desired result?
And can I use the same function in Spark as well?
In Spark, when I do
sqlContext.sql("show functions")
It shows the list of all functions available in Hive
But when I do
sqlContext.sql("INSERT into TABLE abc SELECT inc() as id, 'Jim'")
I get the error below:
pyspark.sql.utils.AnalysisException: u'undefined function inc; line 1 pos 29'
How can I create the same UDF in PySpark and get the desired output?
What happens when the insert statements are executed at the same time?
Follow one of the approaches below:
Change your insert to INSERT into TABLE abc SELECT max(id)+1 as id, 'Tim' from abc;
or
Modify the UDF to take an int column as input and return input + 1,
and modify your insert to INSERT into TABLE abc SELECT inc(max(id)) as id, 'Tim' from abc;
You will have to verify the correctness of the SQL in Hive; I have checked it and it works in MySQL. A sketch of the modified UDF is shown below.
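Here is a minimal sketch of that modified UDF, using the same reflection-based Hive UDF API as the original question; the class name is hypothetical, and the null check covers the case where max(id) is NULL on an empty table.
IncrementById.java (sketch):
package myudf;

import org.apache.hadoop.hive.ql.exec.UDF;

// Takes the current maximum id as input and returns input + 1.
public class IncrementById extends UDF {
    public int evaluate(Integer currentMax) {
        // max(id) is NULL when the table is empty, so start at 1 in that case.
        if (currentMax == null) {
            return 1;
        }
        return currentMax + 1;
    }
}
It is registered the same way as the original (add jar, create temporary function) and used as INSERT into TABLE abc SELECT inc(max(id)) as id, 'John' from abc;. Note that concurrent inserts can still race and produce duplicate ids, which is what the question "What happens when the insert statements are executed at the same time?" hints at.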

Cassandra query timestamp column

Using Cassandra 2.28, Driver 3, Spark 2.
I have a timestamp column in Cassandra that I need to query by the date portion only. If I query by date, .where("TRAN_DATE= ?", "2012-01-21"), it does not return any result. If I include the time portion, it says Invalid Date. My data (as I can read it in cqlsh) is: 2012-01-21 08:01:00+0000
param: "2012-01-21" > No error but no result
param: "2012-01-21 08:01:00" > Error : Invalid Date
param: "2012-01-21 08:01:00+0000" > Error : Invalid Date
SimpleDateFormat DATE_FORMAT = new SimpleDateFormat("yyyy/mm/dd");
TRAN_DATE = DATE_FORMAT.parse("1/19/2012");
I have used the bulk loader/SSLoader to load the table.
Data in table:
tran_date | id
--------------------------+-------
2012-01-14 08:01:00+0000 | ABC
2012-01-24 08:01:00+0000 | ABC
2012-01-23 08:01:00+0000 | ALM
2012-01-29 08:01:00+0000 | ALM
2012-01-13 08:01:00+0000 | ATC
2012-01-15 08:01:00+0000 | ATI
2012-01-18 08:01:00+0000 | FKT
2012-01-05 08:01:00+0000 | NYC
2012-01-11 08:01:00+0000 | JDU
2012-01-04 08:01:00+0000 | LST
How do I solve this?
Thanks
If you insert data into a timestamp column without providing a timezone, like this:
INSERT INTO timestamp_test (tran_date , id ) VALUES ('2016-12-19','TMP')
Cassandra will choose the coordinator timezone:
If no time zone is specified, the time zone of the Cassandra coordinator node handling the write request is used. For accuracy, DataStax recommends specifying the time zone rather than relying on the time zone configured on the Cassandra nodes.
If you execute a select with the DataStax driver, you need to convert the String date into java.util.Date and set the time zone of the coordinator node (in my case it was GMT+6). Note that the time zone must be set before parsing for it to take effect:
DateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
dateFormat.setTimeZone(TimeZone.getTimeZone("GMT+6")); //Change this time zone
Date date = dateFormat.parse("2012-01-21");
Now you can query with:
QueryBuilder.eq("TRAN_DATE", date)
Here is a complete demo:
try (Cluster cluster = Cluster.builder().addContactPoints("127.0.0.1").withCredentials("username", "password").build(); Session session = cluster.connect("tests")) {
session.execute("INSERT INTO test_trans(tran_date , id ) VALUES ('2016-12-19','TMP')");
DateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
dateFormat.setTimeZone(TimeZone.getTimeZone("GMT+6"));
Date date = dateFormat.parse("2016-12-19");
System.out.println(date);
for (Row row : session.execute(QueryBuilder.select().from("timestamp_test").where(QueryBuilder.eq("tran_date", date)))) {
System.out.println(row);
}
}
Source : https://docs.datastax.com/en/cql/3.0/cql/cql_reference/timestamp_type_r.html

cassandra 1.1.x get by composite key

Is it possible, using Hector or Astyanax, to get rows by composite keys (in multiple columns, not the ones serialized into one column)?
In cqlsh I created a simple column family:
CREATE COLUMNFAMILY kkvv (x int, y int, val1 varchar, val2 varchar, PRIMARY KEY (x,y));
According to the Cassandra Developer Center, the rows are stored with x as the key and the rest is stored in columns.
I can't figure out how to get a column slice for given x and y.
Executing that CQL in Hector,
cqlQuery.setQuery("select * from kkvv")
gives me rows:
Row(2,ColumnSlice([HColumn(x=2)]))
Row(10,ColumnSlice([HColumn(x=10)]))
and the cqlsh console gives:
x | y | val1 | val2
----+-----+-------+-----------
2 | 1 | v1_1 | v2_1
10 | 27 | v1_4b | v2_4b
10 | 91 | v1_4a | v2_4a
Has anyone managed to do that in any Cassandra client for Java?
Can I use Thrift for that, or is it a CQL-only feature?
There are two somewhat-different syntaxes at work here: CQL 2 and CQL 3. By default, a Cassandra connection expects CQL 2. CQL 2, though, doesn't understand composite key columnfamilies of the sort you've made here.
So you are apparently correctly using CQL 3 with cqlsh, since it's displaying your columns in a sane way, but you're not using it with Hector. I'm not sure whether Hector or Astyanax even support that yet. The latest release of the cassandra-jdbc driver does, so, if Hector and/or Astyanax use that, then they should work too.
There isn't (and probably won't be) any support in Thrift for treating composite-comparator columnfamilies as tables with multi-component primary keys, the way CQL 3 does it. Use CQL 3 if you want this.
Did you try the CompositeQuery.java example provided in the cassandra-tutorial project?
Also, have you read Introduction to Composite Columns by DataStax?
A good explanation of how rows with composite keys are stored in Cassandra is here.
In Astyanax and Hector I noticed a funny thing - when I tried to connect, they used CQL 2. When I connected to Cassandra with CQL 3 through the Cassandra API (code in the example below), this setting was stored somewhere, and after that Astyanax and Hector used CQL 3 instead of CQL 2. The connections were made as separate executions, so it couldn't have been stored on the client side... Does anyone have any thoughts about this?
CQL version can be set on org.apache.cassandra.thrift.Cassandra.Client with set_cql_version method.
If someone is interested in a working example using the pure Cassandra API:
import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;
import java.util.List;
import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.Column;
import org.apache.cassandra.thrift.Compression;
import org.apache.cassandra.thrift.CqlResult;
import org.apache.cassandra.thrift.CqlRow;
import org.apache.cassandra.thrift.InvalidRequestException;
import org.apache.cassandra.thrift.SchemaDisagreementException;
import org.apache.cassandra.thrift.TimedOutException;
import org.apache.cassandra.thrift.UnavailableException;
import org.apache.thrift.TException;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;
public class KKVVGetter {
private static Cassandra.Client client;
private static TTransport transport;
public static void main(String[] args) throws UnsupportedEncodingException, InvalidRequestException,
UnavailableException, TimedOutException, SchemaDisagreementException, TException {
transport = new TFramedTransport(new TSocket("localhost", 9160));
TProtocol protocol = new TBinaryProtocol(transport);
client = new Cassandra.Client(protocol);
transport.open();
client.set_cql_version("3.0.0");
executeQuery("USE ks_test3");
show("select x,y,val1,val2 from kkvv where x > 1 and x < 11 and y < 100 and y > 2");
System.out.println("\n\n*****************************\n\n");
show("select x,y,val1,val2 from kkvv");
transport.close();
}
private static int toInt(byte[] bytes) {
// Convert the first 4 bytes (big-endian) into an int
int result = 0;
for (int i = 0; i < 4; i++) {
result = (result << 8) + (bytes[i] & 0xFF);
}
return result;
}
private static CqlResult executeQuery(String query) throws UnsupportedEncodingException, InvalidRequestException,
UnavailableException, TimedOutException, SchemaDisagreementException, TException {
return client.execute_cql_query(ByteBuffer.wrap(query.getBytes("UTF-8")), Compression.NONE);
}
private static void show(String query) throws UnsupportedEncodingException, InvalidRequestException,
UnavailableException, TimedOutException, SchemaDisagreementException, TException {
CqlResult result = executeQuery(query);
List<CqlRow> rows = result.getRows();
System.out.println("rows: " + rows.size());
for (CqlRow row : rows) {
System.out.println("columns: " + row.getColumnsSize());
for (Column c : row.getColumns()) {
System.out.print(" " + new String(c.getName()));
switch (new String(c.getName())) {
case "x":
case "y":
System.out.print(" " + toInt(c.getValue()));
break;
case "val1":
case "val2":
System.out.print(" " + new String(c.getValue()));
break;
default:
break;
}
System.out.println();
}
}
}
}
This example uses the schema from the question.
