I am developing an application where I create a table if it does not exist and then query it. This works fine in normal cases, but the first time the table is created, querying it immediately afterwards throws:
IllegalArgumentException: Table xyz does not exist in keyspace my_ks
The same happens if I drop the table and my code recreates it.
In all other cases, when the table already exists, everything works fine. Is this some kind of replication issue, or should I wait for some time after the table is created for the first time?
Following is the code snippet:
// Order 1: this is called first
public boolean isSchemaExists() {
    boolean isSchemaExists = false;
    Statement statement = QueryBuilder
            .select()
            .countAll()
            .from(keyspace_name, table_name);
    statement.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
    try {
        Session session = cassandraClient.getSession(someSessionKey);
        ResultSet resultSet = session.execute(statement);
        if (resultSet.one() != null) {
            isSchemaExists = true;
        }
    } catch (Exception e) {
        // exception handling (logging etc.)
    }
    return isSchemaExists;
}
// Order 2: if the previous method returns false, this one gets called
public void createSchema(String createTableScript) {
    Session session = cassandraClient.getSession(someSessionKey);
    if (isKeySpaceExists(keyspaceName, session)) {
        session.execute("USE " + keyspaceName);
    }
    session.execute(createTableScript);
}
// Order 3: now read the table; this is throwing the exception when the table
// is created for the first time
public int readTable() {
    Session session = cassandraClient.getSession(someSessionKey);
    MappingManager manager = new MappingManager(session);
    Mapper<MyPojo> mapper = manager.mapper(MyPojo.class);
    Statement statement = QueryBuilder
            .select()
            .from(keyspaceName, tableName)
            .where(eq("col_1", someValue)).and(eq("col_2", someValue));
    statement.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
    ResultSet resultSet = session.execute(statement);
    Result<MyPojo> result = mapper.map(resultSet);
    for (MyPojo myPojo : result) {
        return myPojo.getCol1();
    }
    return -1; // nothing found
}
In the isSchemaExists function, query the schema tables instead (system_schema.tables in Cassandra 3.x):
SELECT * FROM system_schema.tables WHERE keyspace_name='YOUR KEYSPACE' AND table_name='YOUR TABLE';
Corresponding Java Code:
Statement statement = QueryBuilder
        .select()
        .from("system_schema", "tables")
        .where(eq("keyspace_name", keyspace)).and(eq("table_name", table));
It seems like in isSchemaExists you are querying the actual table and keyspace, which will not exist when the table has been dropped or not yet created. That is why it throws the "table does not exist" error.
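Alternatively, the driver's cluster metadata can be used to check for the table without issuing a query at all, and you can wait for schema agreement right after the CREATE TABLE so that the first read does not race the schema change. A minimal sketch along those lines, assuming the DataStax Java driver 3.x and the same cassandraClient/session fields as in your code (the method names here are illustrative, not from your code):
// Sketch: check table existence via driver metadata instead of querying the table.
public boolean tableExists(String keyspaceName, String tableName) {
    Session session = cassandraClient.getSession(someSessionKey);
    KeyspaceMetadata ks = session.getCluster().getMetadata().getKeyspace(keyspaceName);
    return ks != null && ks.getTable(tableName) != null;
}

// Sketch: create the table, then block until reachable nodes agree on the new schema.
public void createSchemaAndWait(String createTableScript) {
    Session session = cassandraClient.getSession(someSessionKey);
    session.execute(createTableScript);
    session.getCluster().getMetadata().checkSchemaAgreement();
}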
I discovered an interesting new library and I'm trying to understand how it works. How can I connect to my jOOQ database from another program?
MockDataProvider provider = new MyProvider();
MockConnection connection = new MockConnection(provider);
DSLContext create = DSL.using(connection, SQLDialect.H2);
Field<Integer> id = field(name("BOOK", "ID"), SQLDataType.INTEGER);
Field<String> book = field(name("BOOK", "NAME"), SQLDataType.VARCHAR);
So I can create it, but how can I connect to it?
Here I have added your code, Lukas.
try (Statement s = connection.createStatement();
ResultSet rs = s.executeQuery("SELECT ...")
) {
while (rs.next())
System.out.println(rs.getString(1));
}
This example was found here
https://www.jooq.org/doc/3.7/manual/tools/jdbc-mocking/
public class MyProvider implements MockDataProvider {

    @Override
    public MockResult[] execute(MockExecuteContext ctx) throws SQLException {
        // You might need a DSLContext to create org.jooq.Result and org.jooq.Record objects
        // DSLContext create = DSL.using(SQLDialect.ORACLE);
        DSLContext create = DSL.using(SQLDialect.H2);
        MockResult[] mock = new MockResult[1];

        // The execute context contains SQL string(s), bind values, and other meta-data
        String sql = ctx.sql();

        // Dynamic field creation
        Field<Integer> id = field(name("AUTHOR", "ID"), SQLDataType.INTEGER);
        Field<String> lastName = field(name("AUTHOR", "LAST_NAME"), SQLDataType.VARCHAR);

        // Exceptions are propagated through the JDBC and jOOQ APIs
        if (sql.toUpperCase().startsWith("DROP")) {
            throw new SQLException("Statement not supported: " + sql);
        }
        // You decide whether any given statement returns results, and how many
        else if (sql.toUpperCase().startsWith("SELECT")) {
            // Always return one record
            Result<Record2<Integer, String>> result = create.newResult(id, lastName);
            result.add(create
                    .newRecord(id, lastName)
                    .values(1, "Orwell"));
            mock[0] = new MockResult(1, result);
        }
        // You can detect batch statements easily
        else if (ctx.batch()) {
            // [...]
        }

        return mock;
    }
}
I'm not sure what lines 3-5 of your example are supposed to do, but if you implement your MockDataProvider and put it into a MockConnection, you just use that connection like any other JDBC connection, e.g.
try (Statement s = connection.createStatement();
ResultSet rs = s.executeQuery("SELECT ...")
) {
while (rs.next())
System.out.println(rs.getString(1));
}
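If you would rather stay in jOOQ than drop down to raw JDBC, the same MockConnection can also back a DSLContext. A minimal sketch, assuming the MyProvider class above and the H2 dialect (the query string is only an example):
// Sketch: using the mock connection through jOOQ instead of plain JDBC.
MockDataProvider provider = new MyProvider();
MockConnection connection = new MockConnection(provider);
DSLContext ctx = DSL.using(connection, SQLDialect.H2);

// Any SELECT is answered by MyProvider with the single "Orwell" record.
Result<Record> result = ctx.fetch("SELECT ID, LAST_NAME FROM AUTHOR");
System.out.println(result);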
I iterated over an entire table and received fewer partitions than expected.
Initially I thought something must be wrong on my end, but after checking the existence of every row with a simple WHERE query (I have the list of billions of keys that I used), and also verifying the expected number with the Spark connector, I concluded that it can't be anything other than the driver.
I have billions of data rows, yet I receive about half a billion fewer.
Has anyone else encountered this issue and managed to resolve it?
Adding a code snippet. The table is a simple counter table:
CREATE TABLE counter_data (
id text,
name text,
count_val counter,
PRIMARY KEY (id, name)
) ;
public class CountTable {
    private Session session;
    private Statement countQuery;

    public void initSession(String table) {
        QueryOptions queryOptions = new QueryOptions();
        queryOptions.setConsistencyLevel(ConsistencyLevel.ONE);
        queryOptions.setFetchSize(100);
        QueryLogger queryLogger = QueryLogger.builder().build();
        Cluster cluster = Cluster.builder().addContactPoints("ip").withPort(9042)
                .withQueryOptions(queryOptions)
                .build();
        cluster.register(queryLogger);
        this.session = cluster.connect("ks");
        this.countQuery = QueryBuilder.select("id").from(table);
    }

    public void performCount() {
        ResultSet results = session.execute(countQuery);
        int count = 0;
        String lastKey = "";
        for (Row row : results) {
            String key = row.getString(0);
            if (!key.equals(lastKey)) {
                lastKey = key;
                count++;
            }
        }
        session.close();
        System.out.println("count is " + count);
    }

    public static void main(String[] args) {
        CountTable countTable = new CountTable();
        countTable.initSession("counter_data");
        countTable.performCount();
    }
}
Upon checking your code: the consistency level requested is ONE, which is comparable to a dirty read in the RDBMS world.
queryOptions.setConsistencyLevel(ConsistencyLevel.ONE);
For stronger consistency, that is, to get back all records, use LOCAL_QUORUM. Update your code as follows:
queryOptions.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
LOCAL_QUORUM guarantees that a majority of the replicas (in your case 2 out of 3) respond to the read request, which gives stronger consistency and hence an accurate number of rows. See the Cassandra documentation on consistency levels for reference.
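If changing the cluster-wide QueryOptions is inconvenient, the consistency level can also be set on the statement itself. A minimal sketch, assuming the same session and counter_data table as in your code:
// Sketch: request LOCAL_QUORUM just for this statement instead of for the whole cluster.
Statement countQuery = QueryBuilder.select("id").from("counter_data");
countQuery.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
countQuery.setFetchSize(100);
ResultSet results = session.execute(countQuery);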
I use spring-data-cassandra-1.2.1.RELEASE to work with a Cassandra database. Everything went well until recently, when I ran into a problem using this code to get data:
public UserInfoCassandra selectUserInfo(String passport) {
    Select select = QueryBuilder.select().from("userinfo");
    select.setConsistencyLevel(ConsistencyLevel.QUORUM);
    select.where(QueryBuilder.eq("passport", passport));
    UserInfoCassandra userinfo = operations.selectOne(select,
            UserInfoCassandra.class);
    return userinfo;
}
There are many properties in UserInfoCassandra, but I only get two of them: passport and uid.
I debugged into the method and saw that the data coming from the database is correct, all properties are present; but when it is converted to a Java object, some of them disappear. The converting code:
protected <T> T selectOne(Select query, CassandraConverterRowCallback<T> readRowCallback) {
    ResultSet resultSet = query(query);
    Iterator<Row> iterator = resultSet.iterator();
    if (iterator.hasNext()) {
        Row row = iterator.next();
        T result = readRowCallback.doWith(row);
        if (iterator.hasNext()) {
            throw new DuplicateKeyException("found two or more results in query " + query);
        }
        return result;
    }
    return null;
}
The row data is right, but the mapped result is wrong. Who can help?
Most probably your entity class and its corresponding table schema are mismatched.
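In spring-data-cassandra 1.x the mapping is driven by field names and annotations, so any column whose name does not match a field (and is not mapped explicitly with @Column) simply stays null after conversion. A minimal sketch of what the entity might need to look like; only passport and uid come from the question, the other column name is a hypothetical example:
// Sketch of an explicitly mapped entity (spring-data-cassandra 1.x annotations).
@Table("userinfo")
public class UserInfoCassandra {

    @PrimaryKey
    private String passport;

    @Column("uid")
    private String uid;

    // A column whose name differs from the field must be mapped explicitly,
    // otherwise it will be null after conversion (hypothetical example column).
    @Column("nick_name")
    private String nickName;

    // getters and setters omitted
}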
Why is data read from SqlDataReader not available to a method call?
I have a table with an 'id' column in it.
When I query the database, it returns rows.
This code doesn't work (it says the 'id' column doesn't exist):
con.Open();
SqlDataReader requestReader = cmd.ExecuteReader();
if (requestReader.HasRows)
{
    DataTable requestTable = requestReader.GetSchemaTable();
    request = ReadRequest(requestTable.Rows[0]);
}
con.Close();
while this one works:
con.Open();
SqlDataReader requestReader = cmd.ExecuteReader();
if (requestReader.HasRows)
{
    DataTable requestTable = requestReader.GetSchemaTable();
    var requestRow = requestTable.Rows[0];
    request = new Request();
    request.UniqueId = (string)requestRow["id"];
}
con.Close();
You are using DataReader.GetSchemaTable, which returns a DataTable with the schema information (column metadata) for the result set.
It has the following columns:
ColumnName
ColumnOrdinal
ColumnSize
NumericPrecision
// .. 26 others
So you won't find your id column, which belongs to your table; that's why you get the error "'id' column doesn't exist". I doubt that your second approach works either. I don't see why you need GetSchemaTable at all. You just have to advance the reader to the next record:
if (requestReader.HasRows && requestReader.Read())
{
    int id = requestReader.GetInt32(requestReader.GetOrdinal("id"));
    // ...
}
So I'm working on an app that will select data from one database and update an identical database based on information contained in a 'Publication' Table in the Authoring database. I need to get a single object that is not connected to the 'Authoring' context so I can add it to my 'Delivery' context.
Currently I am using
object authoringRecordVersion = PublishingFactory.AuthoringContext.Set(recordType.Entity.GetType()).Find(record.RecordId);
object deliveryRecordVersion = PublishingFactory.DeliveryContext.Set(recordType.Entity.GetType()).Find(record.RecordId);
to return my records. Then if the 'deliveryRecordVersion' is null, I need to do an Insert of 'authoringRecordVersion' into 'PublishingFactory.DeliveryContext'. However, that object is already connected to the 'PublishingFactory.AuthoringContext' so it won't allow the Add() method to be called on the 'PublishingFactory.DeliveryContext'.
I have access to PublishingFactory.AuthoringContext.Set(recordType.Entity.GetType()).AsNoTracking()
but there is no way to get at the specific record I need from here.
Any suggestions?
UPDATE:
I believe I found the solution. It didn't work the first time because I was referencing the wrong object when setting .State = EntityState.Detached;
Here is the full corrected method that works as expected:
private void PushToDelivery(IEnumerable<Mkl.WebTeam.UWManual.Model.Publication> recordsToPublish)
{
    string recordEntity = string.Empty;
    DbEntityEntry recordType = null;

    // Loop through recordsToPublish and see if the record exists in Delivery. If so then 'Update' the record
    // else 'Add' the record.
    foreach (var record in recordsToPublish)
    {
        if (recordEntity != record.Entity)
        {
            recordType = PublishingFactory.DeliveryContext.Entry(ObjectExt.GetEntityOfType(record.Entity));
        }

        if (recordType == null)
        {
            continue;
            ////throw new NullReferenceException(
            ////    string.Format("Couldn't identify the object type stored in record.Entity : {0}", record.Entity));
        }

        // get the record from the Authoring context from the appropriate type table
        object authoringRecordVersion = PublishingFactory.AuthoringContext.Set(recordType.Entity.GetType()).Find(record.RecordId);

        // get the record from the Delivery context from the appropriate type table
        object deliveryRecordVersion = PublishingFactory.DeliveryContext.Set(recordType.Entity.GetType()).Find(record.RecordId);

        // something happened and no records were found matching the Id and Type from the
        // Publication table in the Authoring database
        if (authoringRecordVersion == null)
        {
            continue;
        }

        if (deliveryRecordVersion != null)
        {
            // update record
            PublishingFactory.DeliveryContext.Entry(deliveryRecordVersion).CurrentValues.SetValues(authoringRecordVersion);
            PublishingFactory.DeliveryContext.Entry(deliveryRecordVersion).State = EntityState.Modified;
            PublishingFactory.DeliveryContext.SaveChanges();
        }
        else
        {
            // insert new record
            PublishingFactory.AuthoringContext.Entry(authoringRecordVersion).State = EntityState.Detached;
            PublishingFactory.DeliveryContext.Entry(authoringRecordVersion).State = EntityState.Added;
            PublishingFactory.DeliveryContext.SaveChanges();
        }

        recordEntity = record.Entity;
    }
}
As you say in your comment, the reason you can't use .Single(a => a.ID == record.RecordId) is that the ID property is not known at design time. So what you can do is get the entity with the Find method and then detach it from the context:
PublishingFactory.AuthoringContext
.Entry(authoringRecordVersion).State = EntityState.Detached;