--Update: it seems it was a glitch with the Virtual Machine. I restarted the Cassandra service and it works as expected.
--Update: It seems the problem is not in the code. I tried executing the insert statement in a Cassandra client and got the same behavior: no error is displayed and nothing is inserted.
The column that causes this behavior is of type timestamp; when I set this column to certain values (e.g. '2015-08-25 22:15:12'), the insert silently fails.
The table is:
create table player (
    msisdn varchar primary key,
    game int, keyword varchar, inserted timestamp,
    lang int, mo int, mt int, qid int, score int
);
I am new to Cassandra and downloaded the VirtualBox snapshot to test it.
I tried the batch example code and it did nothing, so, as people suggested, I tried executing the prepared statement directly.
var addPlayer = casDemoSession.Prepare(
    "INSERT INTO player (msisdn,qid,keyword,mo,mt,game,score,lang,inserted) VALUES (?,?,?,?,?,?,?,?,?)");

for (int i = 0; i < 20; i++) {
    // Bind values in the same order as the columns listed in the INSERT
    var bs = addPlayer.Bind(getRandMSISDN(), 1, "", 1, 0, 0, 10, 0, DateTime.Now);
    bs.EnableTracing(true);
    casDemoSession.Execute(bs);
}
The code above does not throw any exceptions, nor does it insert any data. I tried to trace the query, but the trace does not show the actual CQL query.
PlanetCassandra V0.1 VM running Cassandra 2.0.1
DataStax C# driver 2.6: https://github.com/datastax/csharp-driver
One thing that might be missing is the keyspace name for your player table. Usually you would write "INSERT INTO <keyspace>.<table> (...".
If you're able to run cqlsh, could you add the output of "DESCRIBE TABLE <keyspace>.player" to your question, and show what happens when you attempt the insert in cqlsh?
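For illustration, here is a minimal sketch of a keyspace-qualified prepared insert using the DataStax Java driver (the question uses the C# driver, but the shape is the same); the keyspace name "demo", the contact point, and the sample values are assumptions:

import com.datastax.driver.core.*;
import java.util.Date;

public class QualifiedInsert {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        // Qualify the table with its keyspace so the statement does not depend
        // on the session having been switched to the right keyspace with USE
        PreparedStatement ps = session.prepare(
            "INSERT INTO demo.player (msisdn, qid, keyword, mo, mt, game, score, lang, inserted) " +
            "VALUES (?,?,?,?,?,?,?,?,?)");

        // Sample values are placeholders; the timestamp column maps to java.util.Date
        session.execute(ps.bind("31612345678", 1, "", 1, 0, 0, 10, 0, new Date()));

        cluster.close();
    }
}

With the table qualified this way, the insert works regardless of which keyspace the session is currently connected to.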
This is in Cassandra 3.11.4.
I'm running a modified version of a query that previously worked fine in my app. The original query, which works fine, was:
SELECT SerializedRecord FROM SxRecord WHERE Mark=?
I modified the query to add a range on a timestamp column (which I also added an index for, though I don't think that is relevant):
SELECT SerializedRecord FROM SxRecord WHERE Mark=? AND Timestamp>=? AND Timestamp<=?
This results in:
ResponseError {reHost = datacenter1:rack1:127.0.0.1:9042, reTrace = Nothing, reWarn = [], reCause = ServerError "java.lang.IndexOutOfBoundsException: Index: 1, Size: 1"}
When this occurs, I don't see the query CQL being logged in system_traces.sessions, which is interesting, because if I put a syntax error into the query, it is still logged there.
Additionally, when I run an (as far as I know) identical query, up to the timestamps, in cqlsh, there doesn't seem to be a problem:
cqlsh> SELECT SerializedRecord FROM test_fds.SxRecord WHERE Mark=8391 AND Timestamp >= '2021-03-06 00:00:00.000+0000' AND Timestamp <= '2021-03-09 00:00:00.000+0000';
serializedrecord
------------------
This results in the following query trace:
cqlsh> select parameters from system_traces.sessions;
parameters
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
{'consistency_level': 'ONE', 'page_size': '100', 'query': 'SELECT SerializedRecord FROM test_fds.SxRecord WHERE Mark=8391 AND Timestamp >= ''2021-03-06 00:00:00.000+0000'' AND Timestamp <= ''2021-03-09 00:00:00.000+0000'';', 'serial_consistency_level': 'SERIAL'}
null
It seems that the query, executed inside a prepared/bound statement, is not receiving all the parameters it needs, OR is receiving too many (something bound by previous code).
The fact that you don't see the query traced comes from the driver not even executing the query when it has unbound parameters.
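For illustration, a minimal sketch of binding all three parameters, written with the DataStax Java driver (the question appears to use a different client; the column types and values below are assumptions):

import com.datastax.driver.core.*;
import java.util.Date;

public class RangeQuerySketch {
    // Assumes an already-connected Session; Mark is assumed to be a bigint here
    static void query(Session session) {
        PreparedStatement ps = session.prepare(
            "SELECT SerializedRecord FROM test_fds.SxRecord " +
            "WHERE Mark=? AND Timestamp>=? AND Timestamp<=?");

        // Three bind markers, so exactly three values must be supplied;
        // leaving one unbound means no traceable query ever reaches the server
        Date from = new Date(1614988800000L); // 2021-03-06 00:00:00 UTC
        Date to   = new Date(1615248000000L); // 2021-03-09 00:00:00 UTC

        for (Row row : session.execute(ps.bind(8391L, from, to))) {
            System.out.println(row);
        }
    }
}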
We have an HDInsight cluster running HBase (Ambari)
We have created a table using Phoenix:
CREATE TABLE IF NOT EXISTS Results (
    Col1 VARCHAR(255) NOT NULL,
    Col2 INTEGER NOT NULL,
    Col3 INTEGER NOT NULL,
    Destination VARCHAR(255) NOT NULL
    CONSTRAINT pk PRIMARY KEY (Col1, Col2, Col3)
) IMMUTABLE_ROWS=true
We have loaded some data into this table (using some Java code).
Later, we decided to create a local index on the Destination column, as follows:
CREATE LOCAL INDEX DESTINATION_IDX ON RESULTS (destination) ASYNC
We ran the IndexTool to fill the index, as follows:
hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table RESULTS --index-table DESTINATION_IDX --output-path DESTINATION_IDX_HFILES
When we run queries and filter using the Destination column, everything is OK. For example:
select /*+ NO_CACHE, SKIP_SCAN */ COL1,COL2,COL3,DESTINATION from
Results where COL1='data' AND DESTINATION='some value';
But if we do not use DESTINATION in the WHERE clause, we get a NullPointerException in BaseResultIterators.class
(from phoenix-core-4.7.0-HBase-1.1.jar).
This exception is thrown only when we use the new local index. If we query ignoring the index, like this:
select /*+ NO_CACHE, SKIP_SCAN ,NO_INDEX */ COL1,COL2,COL3,DESTINATION from
Results where COL1='data' AND DESTINATION='some value';
we do not get the exception.
Here is some relevant code from the area where we get the exception:
...
catch (StaleRegionBoundaryCacheException e2) {
    // Catch only to try to recover from region boundary cache being out of date
    if (!clearedCache) { // Clear cache once so that we rejigger job based on new boundaries
        services.clearTableRegionCache(physicalTableName);
        context.getOverallQueryMetrics().cacheRefreshedDueToSplits();
    }
    // Resubmit just this portion of work again
    Scan oldScan = scanPair.getFirst();
    byte[] startKey = oldScan.getAttribute(SCAN_ACTUAL_START_ROW);
    byte[] endKey = oldScan.getStopRow();
    // ==================== Note: isLocalIndex is true ====================
    if (isLocalIndex) {
        endKey = oldScan.getAttribute(EXPECTED_UPPER_REGION_KEY);
        // endKey is null for some reason at this point, and the next function
        // will fail inside it with an NPE
    }
    List<List<Scan>> newNestedScans = this.getParallelScans(startKey, endKey);
We must use this version of the jar since we run inside Azure HDInsight and cannot select a newer jar version.
Any ideas how to solve this?
What does "recover from region boundary cache being out of date" mean? It seems to be related to the problem.
It appears that the version of phoenix-core that Azure HDInsight ships (phoenix-core-4.7.0.2.6.5.3004-13.jar) has the bug, but if I use a slightly newer version (phoenix-core-4.7.0.2.6.5.8-2.jar, from http://nexus-private.hortonworks.com:8081/nexus/content/repositories/hwxreleases/org/apache/phoenix/phoenix-core/4.7.0.2.6.5.8-2/) I do not see the bug any more.
Note that it is not possible to take a much newer version like 4.8.0, since in that case the server throws a version mismatch error.
I create a table with a counter column using the com.datastax.driver.core package, and a function in my class:
public void doStartingJob(){
session.execute("CREATE KEYSPACE myks WITH replication "
+ "= {'class':'SimpleStrategy', 'replication_factor':1};");
session.execute("CREATE TABLE myks.clients_count(ip_address text PRIMARY KEY,"
+ "request_count counter);");
}
After this, I deleted a table entry from cqlsh like:
jay#jayesh-380:~$ cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh:myks> DELETE FROM clients_count WHERE ip_address='127.0.0.1';
Then, to insert a row with the same primary key, I used the following statement (via cqlsh):
UPDATE myks.clients_count SET request_count = 1 WHERE ip_address ='127.0.0.1';
And it is not allowed:
InvalidRequest: Error from server: code=2200 [Invalid query] message="Cannot set the value of counter column request_count (counters can only be incremented/decremented, not set)"
But I want the record's counter column to be set to 1, for the same primary key (a functional requirement).
How can I do this?
The usage of counters is a bit strange, but you'll get used to it. The main thing, however, is that counters cannot be reused: once you delete a counter value for a particular primary key, that counter is lost forever. This is by design and I don't think it is going to change.
Back to your question, the first of your problems is the initial DELETE. Don't do it.
Second, if the counter value for a primary key doesn't exist, C* will treat it as zero by default. Following the documentation, to load data into the counter for the first time you have to issue:
UPDATE myks.clients_count SET request_count = request_count + 1 WHERE ip_address ='127.0.0.1';
And a SELECT will return the correct answer: 1
Again, beware of deletes! Don't! If you do, any subsequent query:
UPDATE myks.clients_count SET request_count = request_count + 1 WHERE ip_address ='127.0.0.1';
will NOT fail, but the counter will NOT be updated.
Another thing to note is that C* does not support an atomic read-and-update (or update-and-read) on counter columns. That is, you cannot issue an update and, within the same query, get the new (or the old) value of the counter. You need to perform two distinct queries, one SELECT and one UPDATE, but in a multi-client environment the value you SELECT may not reflect the counter value at the time of the UPDATE.
Your app will definitely fail if you underestimate this.
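For illustration, a minimal sketch of that two-step pattern with the same Java driver used in the question (the value read back is only a snapshot; another client may increment the counter between the two statements):

import com.datastax.driver.core.*;

public class CounterSketch {
    // Assumes a Session connected as in the question's code
    static long incrementAndRead(Session session, String ip) {
        // Step 1: increment; counters can only be incremented/decremented, never set
        session.execute(
            "UPDATE myks.clients_count SET request_count = request_count + 1 WHERE ip_address = ?",
            ip);

        // Step 2: read back; this is a separate query, so the value is only a
        // snapshot and may already be stale in a multi-client environment
        Row row = session.execute(
            "SELECT request_count FROM myks.clients_count WHERE ip_address = ?",
            ip).one();
        return row == null ? 0L : row.getLong("request_count");
    }
}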
I am seeing some interesting behavior with Cassandra lightweight transactions in a standalone Cassandra instance. I am using DataStax Enterprise 5.0.2 for my testing. The issue is that a table updated using a lightweight transaction returns true, which means it was updated, but a subsequent query on the same table shows that the row is NOT updated. Please note that I tried the same in a clustered environment and it worked absolutely fine! So I am just trying to understand what's going wrong in my environment.
Here is a simple example of what I am doing.
I create a simple table as provided below:
CREATE TABLE smart.TOPICSCONSUMERASSNSTATUS (
TOPNM text,
PARTID int,
STATUS text,
PRIMARY KEY (TOPNM,PARTID)
);
I put in the following set of preload data for testing purposes:
insert into smart.topicsconsumerassnstatus (topnm, partid, status) values ('ESP', 0, 'UNASSIGNED');
insert into smart.topicsconsumerassnstatus (topnm, partid, status) values ('ESP', 1, 'UNASSIGNED');
insert into smart.topicsconsumerassnstatus (topnm, partid, status) values ('ESP', 2, 'UNASSIGNED');
insert into smart.topicsconsumerassnstatus (topnm, partid, status) values ('ESP', 3, 'UNASSIGNED');
insert into smart.topicsconsumerassnstatus (topnm, partid, status) values ('ESP', 4, 'UNASSIGNED');
Now I run the first select statement to get the details from the table:
select * from smart.topicsconsumerassnstatus where topnm='ESP';
It lists all partids with status UNASSIGNED. To assign partid 0, I then fire the following update statement:
update smart.topicsconsumerassnstatus set status='ASSIGNED' where topnm='ESP' and partid=0 if status='UNASSIGNED';
It returns true. And now, when I fire the above select query again, it lists all 5 rows with status UNASSIGNED. Interestingly, repeated execution of the update statement keeps returning true every time, which clearly means the data is not actually getting updated in the table.
I have seen the query trace as well, and the update seems to be working fine, as the CAS is returned successful.
Also note that this behavior is seen specifically after a query with "allow filtering" has been used at least once, and from then on.
Can anyone please shed some light on what could be the issue? Is it something to do with the "allow filtering" clause?
I have an issue with my CQL, and Cassandra is giving me a "no viable alternative at input '(' (...WHERE id = ? if [(]...)" error message. I think there is a problem with my statement.
UPDATE <TABLE> USING TTL 300
SET <attribute1> = 13381990-735b-11e5-9bed-2ae6d3dfc201
WHERE <attribute2> = dfa2efb0-7247-11e5-a9e5-0242ac110003
IF (<attribute1> = null OR <attribute1> = 13381990-735b-11e5-9bed-2ae6d3dfc201) AND <attribute3> = 0;
Any idea where the problem is in the statement above?
It would help to have your complete table structure, so to test your statement I made a couple of educated guesses.
With this table:
CREATE TABLE lwtTest (attribute1 timeuuid, attribute2 timeuuid PRIMARY KEY, attribute3 int);
This statement works, as long as I don't add the lightweight transaction on the end:
UPDATE lwttest USING TTL 300 SET attribute1=13381990-735b-11e5-9bed-2ae6d3dfc201
WHERE attribute2=dfa2efb0-7247-11e5-a9e5-0242ac110003;
Your lightweight transaction...
IF (attribute1=null OR attribute1=13381990-735b-11e5-9bed-2ae6d3dfc201) AND attribute3 = 0;
...has a few issues.
"null" in Cassandra is not similar (at all) to its RDBMS counterpart. Not every row needs to have a value for every column. Those CQL rows without values for certain column values in a table will show "null." But you cannot query by "null" since it isn't really there.
The OR keyword does not exist in CQL.
You cannot use extra parenthesis to separate conditions in your WHERE clause or your lightweight transaction.
Bearing those points in mind, the following UPDATE and lightweight transaction runs without error:
UPDATE lwttest USING TTL 300 SET attribute1=13381990-735b-11e5-9bed-2ae6d3dfc201
WHERE attribute2=dfa2efb0-7247-11e5-a9e5-0242ac110003
IF attribute1=13381990-735b-11e5-9bed-2ae6d3dfc201 AND attribute3=0;
[applied]
-----------
False
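As a side note, if you run this from the DataStax Java driver (for example) instead of cqlsh, the same [applied] flag is exposed through wasApplied(); a minimal sketch, reusing the statement from the answer above:

import com.datastax.driver.core.*;

public class LwtSketch {
    // Assumes a Session connected to the keyspace containing lwttest
    static boolean conditionalUpdate(Session session) {
        ResultSet rs = session.execute(
            "UPDATE lwttest USING TTL 300 " +
            "SET attribute1=13381990-735b-11e5-9bed-2ae6d3dfc201 " +
            "WHERE attribute2=dfa2efb0-7247-11e5-a9e5-0242ac110003 " +
            "IF attribute1=13381990-735b-11e5-9bed-2ae6d3dfc201 AND attribute3=0;");

        // wasApplied() mirrors the [applied] column cqlsh prints for a
        // lightweight transaction: false means the IF condition did not hold
        return rs.wasApplied();
    }
}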