Setting a NULL value in a BoundStatement - cassandra

I'm using Cassandra Driver 2.0.0-beta2 with Cassandra 2.0.1.
I want to set a NULL value to a column of type 'int', in a BoundStatement. I don't think I can with setInt.
This is the code I'm using:
String insertStatementString = "insert into subscribers(subscriber,start_date,subscriber_id) values (?,?,?)";
PreparedStatement insertStatement = session.prepare(insertStatementString);
BoundStatement bs = new BoundStatement(insertStatement);
bs.setString("subscriber",s.getSubscriberName());
bs.setDate("start_date",startDate);
bs.setInt("subscriber_id",s.getSubscriberID());
The last line throws a NullPointerException, which can be explained: s.getSubscriberID() returns an Integer, while setInt accepts only a primitive int. When the ID is null it cannot be unboxed, hence the exception.
The definition in my opinion should change to:
BoundStatement.setInt(String name, Integer v);
The way it is right now, I can't set NULL values for numbers.
Or am I missing something?
Is there other way to achieve this?
In cqlsh, setting null to a column of type 'int' is possible.
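The NPE described above comes from Java auto-unboxing rather than from the driver itself; a minimal sketch that reproduces it without Cassandra (class and method names are illustrative):

```java
public class UnboxingNpe {
    // setInt takes a primitive int, so a null Integer is auto-unboxed
    // via Integer.intValue(), which throws NullPointerException.
    static int acceptInt(int v) {
        return v;
    }

    public static void main(String[] args) {
        Integer subscriberId = null; // e.g. s.getSubscriberID() returning null
        boolean threw = false;
        try {
            acceptInt(subscriberId);
        } catch (NullPointerException e) {
            threw = true;
        }
        System.out.println(threw);
    }
}
```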

There is no need to bind a value when it will be empty or null. Therefore a null check might be useful, e.g.,
if (null != s.getSubscriberID()) {
    bs.setInt("subscriber_id", s.getSubscriberID());
}
As to the question of multiple instantiations of BoundStatement: creating multiple BoundStatement instances is cheap compared with preparing statements (see the CQL documentation on prepared statements). The benefit becomes clearer when you reuse the PreparedStatement, e.g., in a loop:
String insertStatementString = "insert into subscribers(subscriber,start_date,subscriber_id) values (?,?,?)";
PreparedStatement insertStatement = session.prepare(insertStatementString);
// Inside a loop for example
for (Subscriber s : subscribersCollection) {
    BoundStatement bs = new BoundStatement(insertStatement);
    bs.setString("subscriber", s.getSubscriberName());
    bs.setDate("start_date", startDate);
    if (null != s.getSubscriberID()) {
        bs.setInt("subscriber_id", s.getSubscriberID());
    }
    session.execute(bs);
}

I decided not to set the value at all; by default it is null. It's a weird workaround.
But now I have to instantiate the BoundStatement before every call, because otherwise I risk carrying over a non-null value from a previous call.
It would be great if they added more comprehensive 'null' support.

Related

Is it possible to insert a record that populates a column with type set<text> using a single prepared statement?

Is it possible to insert a record using prepared statements when that record contains a Set and is intended to be applied to a field with a type of 'set'?
I can see how to do it with QueryBuilder.update -
Update.Where setInsertionQuery = QueryBuilder.update(keyspaceName, tableName)
.with(QueryBuilder.addAll("set_column", QueryBuilder.bindMarker()))
.where(QueryBuilder.eq("id_column", QueryBuilder.bindMarker()));
PreparedStatement preparedStatement = keyspace.prepare(setInsertionQuery.toString());
Set<String> set = new HashSet<>(Collections.singleton("value"));
BoundStatement boundStatement = preparedStatement.bind(set,"id-value");
However, that QueryBuilder.addAll() method returns an Assignment, which appears to be usable only with QueryBuilder.update(), not with QueryBuilder.insertInto(). Is there any way to insert that record in one step, or do I have to first call QueryBuilder.insertInto() while leaving the set column blank, and then populate it with a subsequent call to QueryBuilder.update() that uses addAll()?
I figured out how to do this. The key thing is to call the no-arg PreparedStatement.bind(), and then chain that with calls to setXXX methods, using setSet() for the set values. So something like this -
Insert insertSet = QueryBuilder.insertInto(tableName)
    .value("set_column", QueryBuilder.bindMarker("set_column"))
    .value("id_column", QueryBuilder.bindMarker("id_column"));
PreparedStatement preparedStatement = keyspace.prepare(insertSet.toString());
Set<String> set = new HashSet<>(Collections.singleton("value"));
BoundStatement boundStatement = preparedStatement.bind()
    .setSet("set_column", set, String.class)
    .setString("id_column", "id value");

Error while passing multiple parameters as array in voltdb Adhoc stored procedure

I have a scenario where I am getting a SQL query and SQL arguments (to avoid SQL injection) as input, and I am running that SQL using VoltDB's AdHoc stored procedure with the code below.
private static final String voltdbServer = "localhost";
private static final int voltdbPort = 21212;

public ClientResponse runAdHoc(String sql, Object... sqlArgs) throws IOException, ProcCallException
{
    ClientConfig clientConfig = new ClientConfig();
    Client voltdbClient = ClientFactory.createClient(clientConfig);
    voltdbClient.createConnection(voltdbServer, voltdbPort);
    return voltdbClient.callProcedure("#AdHoc", sql, sqlArgs);
}
But I get the error org.voltdb.client.ProcCallException: SQL error while compiling query: Incorrect number of parameters passed: expected 2, passed 1
for runAdHoc("select * from table where column1 = ? and column2 = ?", "column1", "column2"), when there are two or more parameters.
And I get the error org.voltdb.client.ProcCallException: Unable to execute adhoc sql statement(s): Array / Scalar parameter mismatch ([Ljava.lang.String; to java.lang.String)
for runAdHoc("select * from table where column1 = ?", "column1"), when there is only one parameter.
But I do not face this problem when I directly call voltdbClient.callProcedure("#AdHoc", "select * from table where column1 = ? and column2 = ?", "column1", "column2")
I think VoltDB is not treating sqlArgs as separate parameters; instead, it is treating them as one array.
One way to solve this problem is to parse the SQL string myself and pass the parameters separately, but I am posting this to learn a more efficient way to solve it.
Note: the SQL used above is just a test query.
The #AdHoc system procedure is recognizing the array as one parameter. This kind of thing happens with #AdHoc because there is no planning of the procedure during which one could explicitly state what each parameter is.
You have the right idea about parsing the sqlArgs array into the actual parameters to pass in separately. You could also concatenate these separate parameters into the SQL statement itself. That way, your adhoc statement will simply be:
voltdbClient.callProcedure("#AdHoc", sql)
Full disclosure: I work at VoltDB.
I posted the same question on the VoltDB public Slack channel and got one response which solved the problem, as follows:
The short explanation is that your parameters to #Adhoc are being turned into [sql, sqlArgs] when they need to be [sql, sqlArg1, sqlArg2, …]. You’ll need to create a new array that is sqlArgs.length + 1, put sql at position 0, and copy sqlArgs into the new array starting at position 1. then pass that newly constructed array in the call to client.callProcedure("#AdHoc", newArray)
So I modified my runAdHoc method as below and it solved this problem
public ClientResponse runAdHoc(String sql, Object... sqlArgs) throws IOException, ProcCallException
{
    ClientConfig clientConfig = new ClientConfig();
    Client voltdbClient = ClientFactory.createClient(clientConfig);
    voltdbClient.createConnection(voltdbServer, voltdbPort);
    Object[] procArgs;
    if (sqlArgs == null || sqlArgs.length == 0)
    {
        procArgs = new Object[1];
    }
    else
    {
        procArgs = new Object[sqlArgs.length + 1];
        System.arraycopy(sqlArgs, 0, procArgs, 1, sqlArgs.length);
    }
    procArgs[0] = sql;
    return voltdbClient.callProcedure("#AdHoc", procArgs);
}
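The array-construction step can also be exercised on its own, without a VoltDB connection; a minimal standalone sketch (class and method names are illustrative):

```java
import java.util.Arrays;

public class AdHocArgs {
    // Prepend the SQL string so each argument becomes a separate
    // varargs parameter when later passed to callProcedure("#AdHoc", procArgs).
    static Object[] buildProcArgs(String sql, Object... sqlArgs) {
        if (sqlArgs == null || sqlArgs.length == 0) {
            return new Object[] { sql };
        }
        Object[] procArgs = new Object[sqlArgs.length + 1];
        procArgs[0] = sql;
        System.arraycopy(sqlArgs, 0, procArgs, 1, sqlArgs.length);
        return procArgs;
    }

    public static void main(String[] args) {
        Object[] out = buildProcArgs("select * from t where c1 = ? and c2 = ?", "v1", "v2");
        System.out.println(Arrays.toString(out));
    }
}
```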

Cassandra Java Driver BoundStatement.setList works for set as well

I have a table userset
create table IF NOT EXISTS userset (id int primary key, name set<text>, phone set<text>, emails list<text>);
Now I am executing an insert statement through the DataStax Java driver (cassandra-driver-core-3.1.0.jar). I have a java.util.List of String, say listString:
List<String> listString = new ArrayList<String>();
listString.add("NewName");
boundStatement.setList(1, listString);
Here boundStatement is an instance of com.datastax.driver.core.BoundStatement. At index 1 I am setting the value of name in userset.
Even though the underlying type of name is a set and I am using BoundStatement.setList, it still executes without any errors and inserts the value into name correctly. Is this intended behavior of BoundStatement in the DataStax driver?
Why doesn't it throw an error when I try setList for a parameter that is a set on the backend server?
You can say it's a bug in the DataStax driver.
When you bind data with boundStatement.setList or boundStatement.setSet, both methods use the lookupCodec method to find the codec from the column type and don't validate the value.
But if you use statement.bind to bind data, it uses the findCodec method, which finds the codec from the column type and validates it against the given value:
if ((cqlType == null || cqlType.getName() == LIST) && value instanceof List) {
    // ...
}
if ((cqlType == null || cqlType.getName() == SET) && value instanceof Set) {
    // ...
}
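A plausible reason the server accepts it anyway: in the native protocol, lists and sets share the same wire shape, an element count followed by length-prefixed elements, so once the codec has serialized the value the server cannot tell which client-side collection type produced it. A toy sketch of that idea (not the driver's actual code; real protocol framing differs by version):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class CollectionWire {
    // Toy encoding: element count, then each element's byte length and bytes.
    static byte[] encode(Collection<String> elems) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeInt(out, elems.size());
        for (String e : elems) {
            byte[] b = e.getBytes(StandardCharsets.UTF_8);
            writeInt(out, b.length);
            out.write(b);
        }
        return out.toByteArray();
    }

    static void writeInt(ByteArrayOutputStream out, int v) {
        out.write(v >>> 24); out.write(v >>> 16); out.write(v >>> 8); out.write(v);
    }

    public static void main(String[] args) throws IOException {
        List<String> asList = Collections.singletonList("NewName");
        Set<String> asSet = new LinkedHashSet<>(asList);
        // Identical bytes: the server cannot distinguish list from set here.
        System.out.println(Arrays.equals(encode(asList), encode(asSet)));
    }
}
```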

Cassandra Hector: how to insert null as a column value?

A frequent use case with Cassandra is storing data in the column names of a dynamically created column family. In this situation the row values themselves are not needed, and a common practice is to store nulls there.
However, when dealing with Hector, it seems like there is no way to insert null value, because Hector HColumnImpl does an explicit null-check in the column's constructor:
public HColumnImpl(N name, V value, long clock, Serializer<N> nameSerializer,
                   Serializer<V> valueSerializer) {
    this(nameSerializer, valueSerializer);
    notNull(name, "name is null");
    notNull(value, "value is null");
    this.column = new Column(nameSerializer.toByteBuffer(name));
    this.column.setValue(valueSerializer.toByteBuffer(value));
    this.column.setTimestamp(clock);
}
Are there any ways to insert nulls via Hector? If not, what is the best practice in the situation when you don't care about column values and need only their names?
Try using an empty byte[], i.e. new byte[0];

PostgreSQL JDBC Null String taken as a bytea

If entity.getHistory() is null, the following code snippet (getEntityManager() returns a Spring-injected EntityManager; the database column history is of type text or varchar2(2000)):
Query query = getEntityManager().createNativeQuery("insert into table_name(..., history, ....) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)")
[...]
.setParameter(6, entity.getHistory())
[...]
query.executeUpdate();
gives a strange exception:
17/11/11 06:26:09:009 [pool-2-thread-1] WARN util.JDBCExceptionReporter:100 - SQL Error: 0, SQLState: 42804
17/11/11 06:26:09:009 [pool-2-thread-1] ERROR util.JDBCExceptionReporter:101 - ERROR: column "history" is of type text but expression is of type bytea
Hint: You will need to rewrite or cast the expression.
Problem occurred only in this configuration:
OS: CentOS release 5.6 (Final)
Java: 1.6.0_26
DB: PostgreSQL 8.1
JDBC Driver: postgresql-9.1-901.jdbc4
Application Server: apache-tomcat-6.0.28
Everything works fine on a few other configurations, or when history is an empty string.
The same statement executed from pgAdmin works fine.
I guess the problem is in the PostgreSQL JDBC driver; is there some sensible reason to treat a null string as a bytea value? Maybe some strange change between Postgres 8 and 9?
There's a simple fix: when you're building the query string, if a parameter is going to be null -- use "null" instead of "?".
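That fix can be sketched as a small helper that rewrites the query string, inlining the SQL literal null wherever the corresponding parameter is null (a hypothetical helper, and a toy one: it does not skip '?' characters inside quoted string literals):

```java
public class NullInliner {
    // Replaces the i-th '?' placeholder with the literal "null"
    // when params[i] is null; non-null placeholders are left intact.
    static String inlineNulls(String sql, Object... params) {
        StringBuilder out = new StringBuilder();
        int p = 0;
        for (char c : sql.toCharArray()) {
            if (c == '?' && p < params.length && params[p++] == null) {
                out.append("null");
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(inlineNulls(
                "insert into t(a, history) values (?, ?)", "x", null));
    }
}
```

The remaining non-null parameters are then bound as usual; only the null ones disappear from the bind list.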
As @araqnid says, Hibernate is incorrectly casting null values into type bytea because it doesn't know any better.
Since you're calling setParameter(int, Object) with a null value, at a guess the entity manager has no idea which persistence type to use to bind the parameter. I seem to recall Hibernate having a predilection for SerializableType, which would equate to a bytea.
Can you use the setParameter method that takes a Parameter object instead, so you can specify the type? I haven't really used JPA, so I can't give a detailed pointer.
(IMHO this is your just deserts for abusing EntityManager's native-SQL query interface to do inserts, rather than actually mapping the entity, or just dropping down to JDBC.)
Using Hibernate specific Session API works as a workaround:
String sql = "INSERT INTO person (id, name) VALUES (:id, :name)";
Session session = em.unwrap(Session.class);
SQLQuery insert = session.createSQLQuery(sql);
insert.setInteger("id", 123);
insert.setString("name", null);
insert.executeUpdate();
I've also filed HHH-9165 to report this issue, if it is indeed a bug.
Most of the above answers are helpful and helped me narrow down the issue.
I fixed the issue for nulls like this:
setParameter(8, getDate(), TemporalType.DATE);
You can cast the parameter to a proper type in the query. I think this solution should work for other databases, too.
Before:
@Query("SELECT person FROM people person"
    + " WHERE (?1 IS NULL OR person.name = ?1)")
List<Person> getPeopleWithName(final String name);
After:
@Query("SELECT person FROM people person"
    + " WHERE (?1 IS NULL OR person.name = cast(?1 as string))")
List<Person> getPeopleWithName(final String name);
Also works for LIKE statements:
person.name LIKE concat('%', cast(?1 as string), '%')
and with numbers (here of type Double):
person.height >= cast(cast(?1 as string) as double)
If you are willing to use the PreparedStatement class instead of Query:
if (entity.getHistory() == null)
    stmt.setNull(6, Types.VARCHAR);
else
    stmt.setString(6, entity.getHistory());
(It is possible that using ?::text in your query string would also work, but I've never done it that way myself.)
The JDBC setNull with an SQL type parameter is a fallback for legacy JDBC drivers that do not pass the JDBC Driver Test Suite.
One can guard against null errors within the query as follows:
SELECT a FROM Author a WHERE :lastName IS NULL OR LOWER(a.lastName) = :lastName
This should work in Postgresql after the fix of this issue.
Had this issue in my springboot application, with postgres and hibernate.
I needed to supply date from/to parameters, with possible nulls in every combination.
Excerpt of the query :
...
WHERE f.idf = :idCat
AND f.supplier = TRUE
AND COALESCE (pc.datefrom, CAST('2000-1-1' AS DATE) ) <= COALESCE (:datefrom, now())
AND COALESCE (pc.dateto, CAST('2200-12-31' AS DATE) ) >= COALESCE (:dateto, now())
Dao call :
@Transactional(readOnly = true)
public List<PriceListDto> ReportCatPriceList(Long idk, Date dateFrom, Date dateTo) {
    Query query = EM.createNativeQuery(queryReport, PersonProductsByDay.class)
        .setParameter("idCat", idk)
        .setParameter("datefrom", dateFrom, TemporalType.DATE)
        .setParameter("dateto", dateTo, TemporalType.DATE);
    return query.getResultList();
}
I had the same problem executing an UPDATE with Oracle Database.
Sometimes a few of the updated fields are null and this issue occurs. Apparently JPA is casting my null values to bytea, as already discussed in this topic.
I gave up on using the Query object to update my table and used the EntityManager.merge(Entity) method instead.
I simply create the Entity object with all its information, and when I execute merge(Entity), JPA runs a SELECT, compares the information already persisted in the database with my Entity object, and executes the UPDATE correctly. That is, JPA generates a "field = NULL" clause for the Entity fields that are null.
