I am trying to read particular data from a database by executing a query inside a while (true) loop, so the same operation runs again and again.
In some iterations I get the result, but in others it throws com.datastax.driver.core.exceptions.TraceRetrievalException and I cannot get the execution time for the query.
Cassandra version: 3.11.4, DataStax Java driver version: 3.7
Exception thrown:
com.datastax.driver.core.exceptions.TraceRetrievalException: Unable to retrieve complete query trace for id bf7902b0-5d26-11e9-befd-d97cd54dc732 after 5 tries
at com.datastax.driver.core.QueryTrace.doFetchTrace(QueryTrace.java:242)
at com.datastax.driver.core.QueryTrace.maybeFetchTrace(QueryTrace.java:176)
at com.datastax.driver.core.QueryTrace.getDurationMicros(QueryTrace.java:105)
at com.cassandra.datastax.DatastaxTestBootstrapper.lambda$0(DatastaxTestBootstrapper.java:227)
at java.util.Collections$SingletonList.forEach(Collections.java:4822)
at com.cassandra.datastax.DatastaxTestBootstrapper.readData(DatastaxTestBootstrapper.java:227)
at com.cassandra.datastax.MultipleThread.run(MultipleThread.java:13)
at java.lang.Thread.run(Thread.java:748)
Is there any way I can overcome this exception, or can I increase the number of retries used to retrieve the complete query trace?
Statement statement1 = new SimpleStatement(
"SELECT * FROM keyspace.table where key='405861500500033'").enableTracing();
ResultSet resultSet = session.execute(statement1);
resultSet.getAllExecutionInfo()
.forEach(e -> System.out.println("time : " + e.getQueryTrace().getDurationMicros()));
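As far as I can tell, the five fetch attempts are hardcoded in QueryTrace in the 3.x driver, so there is no configuration option to raise them. A workaround sketch (the retry count and sleep below are my own illustrative values, not driver settings): trace rows are written to system_traces asynchronously, so catching the exception and retrying the fetch a moment later usually succeeds.
import com.datastax.driver.core.exceptions.TraceRetrievalException;

// Assumes the resultSet from the snippet above, inside a method that declares
// InterruptedException. A failed fetch leaves the trace duration unset, so the
// next getDurationMicros() call triggers a fresh round of fetch attempts.
boolean traced = false;
for (int attempt = 0; attempt < 10 && !traced; attempt++) {
    try {
        System.out.println("time : "
                + resultSet.getExecutionInfo().getQueryTrace().getDurationMicros());
        traced = true;
    } catch (TraceRetrievalException e) {
        Thread.sleep(200); // give the asynchronous trace writes time to land
    }
}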
I am trying to run a COPY INTO statement for about 10 files of roughly 100 MB each stored in the data lake, and it keeps throwing this error:
Msg 110806, Level 11, State 0, Line 7
110806;A distributed query failed: A severe error occurred on the current command. The results, if any, should be discarded.
Operation cancelled by user.
The command I used is:
COPY INTO AAA.AAAA
FROM 'https://AAAA.blob.core.windows.net/data_*.csv'
WITH (
    CREDENTIAL = (IDENTITY = 'MANAGED IDENTITY'),
    FIELDQUOTE = N'"',
    FIELDTERMINATOR = N',',
    FIRSTROW = 2
);
Where did I go wrong? Please advise.
I am trying to connect to DB2/IDAA using ADFV2. While executing the simple query "select * from table", I get the error below:
Operation on target Copy data from IDAA failed: An error has occurred on the Source side. 'Type = Microsoft.HostIntegration.DrdaClient.DrdaException, Message = Exception or type' Microsoft.HostIntegration.Drda.Common.DrdaException 'was thrown. SQLSTATE = HY000 SQLCODE = -343, Source = Microsoft.HostIntegration.Drda.Requester, '
I checked a lot and tried various options, but it is still an issue.
I tried the query "select * from table with ur" to make the call read-only, but I still get the result above.
If I use a query like "select * from table; commit;", the activity succeeds but no records are fetched.
Does anyone have a solution?
I have my linked service set up like this; the additional connection properties value is: SET CURRENT QUERY ACCELERATION = ALL
We are using psycopg2 with a cursor to fetch and process jsonb data, but whenever a new thread or process comes along, it should not fetch and process the same records that the first thread or process is already working on.
For that we have tried to use FOR UPDATE, but we want to know whether we are using the correct syntax or not.
conn = self.dbPool.getconn()
cur = conn.cursor()
sql = """SELECT jsondoc FROM %s WHERE jsondoc #> %s"""
if 'sql' in queryFilter:
    sql += queryFilter['sql']
When we print this query, it is shown as below:
Query: "SELECT jsondoc FROM %s WHERE jsondoc #> %s AND (jsondoc ->> 'claimDate')::float <= 1536613219.0 AND ( jsondoc ->> 'claimstatus' = 'done' OR jsondoc ->> 'claimstatus' = 'failed' ) limit 2 FOR UPDATE"
cur.execute(sql, (AsIs(self.tablename), Json(queryFilter)))
dbResult = cur.fetchall()
Please help us clarify the syntax, and if it is correct, explain how this query locks the records fetched by the first thread.
Thanks,
Sanjay.
If this example query is executed
select *
from my_table
order by id
limit 2
for update; -- wrong
then the two resulting rows are locked until the end of the transaction (i.e. until the next connection.rollback() or connection.commit(), or until the connection is closed). If another transaction tries to run the same query during this time, it will block until the two rows are unlocked. So this is not the behaviour you expect. You should add the SKIP LOCKED clause:
select *
from my_table
order by id
limit 2
for update skip locked; -- correct
With this clause, the second transaction will skip the locked rows and return the next two without waiting.
Read about it in the documentation.
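To make the claim-and-skip pattern concrete, here is a minimal sketch in Java/JDBC (the locking happens entirely server-side, so a psycopg2 client behaves the same way; the jobs table, its columns, and the connection details are hypothetical):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SkipLockedWorker {
    public static void main(String[] args) throws Exception {
        // requires the PostgreSQL JDBC driver on the classpath
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "pass")) {
            conn.setAutoCommit(false); // row locks only live inside a transaction
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT id, jsondoc FROM jobs "
                                 + "ORDER BY id LIMIT 2 FOR UPDATE SKIP LOCKED")) {
                while (rs.next()) {
                    // process the claimed rows; a concurrent worker running the
                    // same query skips these ids and claims the next two instead
                    System.out.println("claimed id=" + rs.getLong("id"));
                }
            }
            conn.commit(); // releases the row locks
        }
    }
}
Each worker that runs this claims a disjoint pair of rows, which is exactly the "don't process the same records" behaviour asked about above.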
How to fetch data from InfluxDB using a time string in Node.js
Query:
"SELECT * FROM Measurement WHERE time > '2018-01-08 23:00:01.232000000' AND time < '2018-01-09'/ "
I am facing the following error when executing the above query:
Error: Error from InfluxDB: invalid operation: time and *influxql.RegexLiteral are not compatible at ResultError.Error
I was experimenting with Astyanax write operations, using CQL for that.
If I simply use the query directly, I am able to get the result:
keyspace.prepareQuery(COLUMN_FAMILY)
    .withCql("insert into scan_request (customer_id, uuid, scan_type, status, content_size, request_time, request_content_hash) values ("
        + dao.getCustomerId() + ",'" + dao.getUuId() + "'," + dao.getScanType() + "," + dao.getStatus() + ","
        + dao.getContentSize() + ",'2012-12-12 12:12:12', '" + dao.getRequestContentHash() + "');")
    .execute();
However, if I use a prepared statement to do the same:
this.getKeyspace()
.prepareQuery(COLUMN_FAMILY)
.withCql(INSERT_STATEMENT)
.asPreparedStatement()
.withIntegerValue(dao.getCustomerId())
.withStringValue(dao.getUuId())
.withIntegerValue(dao.getScanType())
.withIntegerValue(dao.getStatus())
.withIntegerValue(dao.getContentSize())
.withStringValue("'2012-12-12 12:12:12'")
.withStringValue(dao.getRequestContentHash())
.execute();
I get the following error:
Exception in thread "main" java.lang.RuntimeException: failed to write data to C*
at com.tools.dbaccess.cassandra.astyanax.AstyanaxClient.write(AstyanaxClient.java:155)
at com.tools.dbaccess.cassandra.astyanax.AstyanaxClient.main(AstyanaxClient.java:164)
Caused by: com.netflix.astyanax.connectionpool.exceptions.BadRequestException: BadRequestException: [host=localhost(127.0.0.1):9160, latency=11(11), attempts=1]InvalidRequestException(why:Expected 8 or 0 byte long for date (21))
at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:159)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:65)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:28)
at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$ThriftConnection.execute(ThriftSyncConnectionFactoryImpl.java:151)
at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:69)
at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:256)
at com.netflix.astyanax.thrift.AbstractThriftCqlQuery$3.execute(AbstractThriftCqlQuery.java:80)
at com.tools.dbaccess.cassandra.astyanax.AstyanaxClient.write(AstyanaxClient.java:144)
... 1 more
Caused by: InvalidRequestException(why:Expected 8 or 0 byte long for date (21))
at org.apache.cassandra.thrift.Cassandra$execute_prepared_cql3_query_result.read(Cassandra.java:41868)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_prepared_cql3_query(Cassandra.java:1689)
at org.apache.cassandra.thrift.Cassandra$Client.execute_prepared_cql3_query(Cassandra.java:1674)
at com.netflix.astyanax.thrift.ThriftCql3Query.execute_prepared_cql_query(ThriftCql3Query.java:29)
at com.netflix.astyanax.thrift.AbstractThriftCqlQuery$3$1.internalExecute(AbstractThriftCqlQuery.java:92)
at com.netflix.astyanax.thrift.AbstractThriftCqlQuery$3$1.internalExecute(AbstractThriftCqlQuery.java:82)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:60)
I think it goes wrong when I try to store the timestamp into the table, but I couldn't find anything adequate in the prepared statement API for storing a timestamp. The datatype of the field in the database is "timestamp".
I should have used
.withByteBufferValue(new Date(), DateSerializer.get()).
Or if you have a custom object to serialize, extend the AbstractSerializer class.
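Putting that into the chain from the question, a hedged sketch of the fix (assuming the same INSERT_STATEMENT and dao as above; DateSerializer lives in com.netflix.astyanax.serializers):
import java.util.Date;

import com.netflix.astyanax.serializers.DateSerializer;

this.getKeyspace()
    .prepareQuery(COLUMN_FAMILY)
    .withCql(INSERT_STATEMENT)
    .asPreparedStatement()
    .withIntegerValue(dao.getCustomerId())
    .withStringValue(dao.getUuId())
    .withIntegerValue(dao.getScanType())
    .withIntegerValue(dao.getStatus())
    .withIntegerValue(dao.getContentSize())
    // bind the timestamp as the 8-byte long Cassandra expects, instead of a
    // quoted string, which is what caused "Expected 8 or 0 byte long for date"
    .withByteBufferValue(new Date(), DateSerializer.get())
    .withStringValue(dao.getRequestContentHash())
    .execute();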
Since what you get is a NullPointerException, have you checked whether some of your values are null?