An unknown N1QL error occurred. This is usually related to an out-of-memory condition when fetching data - n1ql

An unknown N1QL error occurred. This is usually related to an out-of-memory condition. Check the error's responseBody property or inspect the cluster logs for further details.
Please help me work out how to solve this issue.

Related

How can I find which column is bad when a Spark cast fails?

I have a Spark job that is failing, saying one column has the wrong type. I'm working with tables that have thousands of columns, and the error message doesn't tell me which column is the issue. The stack trace is unhelpful as well: the only frames in it contain generated code, all the way down to something called a 'ShuffleWriteProcessor'.
So I can't figure out how to, say, catch the exception in a debugger, and the log message doesn't give me enough details to proceed. Any ideas on how to debug this situation?
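One way to narrow this down is to apply each cast in isolation and force evaluation, so the first column that fails is identified by name. A minimal PySpark sketch, assuming the input can be re-read, the target types are known, and the cast failure reproduces when evaluated eagerly; the input path and the expected_types mapping below are hypothetical:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.getOrCreate()
    df = spark.read.parquet("/path/to/input")  # hypothetical input path

    # Hypothetical mapping of column name -> target type
    expected_types = {"amount": "double", "created_at": "timestamp"}

    for name, dtype in expected_types.items():
        try:
            # Force evaluation of just this one cast; a lazy .cast() alone never fails.
            df.select(col(name).cast(dtype)).foreach(lambda _: None)
        except Exception as exc:
            print(f"cast failed on column {name!r}: {exc}")

Running one column at a time is slower than the original job, but it turns "some column is wrong" into a named column with its own error message.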

Why am I getting "Server Error: Not implemented: LWT" in Cassandra CQL?

I am using Cassandra CQL, and I want to use a lightweight transaction (LWT) for my insert and update queries, but it gives the error:
Server Error: Not implemented: LWT
Can anyone help, please?
I happened to see this post when searching for lightweight transactions. Scylla currently doesn't support them; see the related GitHub issue: https://github.com/scylladb/scylla/issues/1359
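For reference, these are the kinds of conditional statements that hit the LWT path. A minimal sketch using the DataStax Python driver (contact point, keyspace, and table are hypothetical); against a Scylla version without LWT support, both statements fail with the error above:

    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("my_keyspace")

    # Conditional insert: only applied if the row does not already exist.
    session.execute(
        "INSERT INTO users (id, name) VALUES (%s, %s) IF NOT EXISTS",
        (1, "alice"),
    )

    # Conditional update: only applied if the current value matches.
    session.execute(
        "UPDATE users SET name = %s WHERE id = %s IF name = %s",
        ("bob", 1, "alice"),
    )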

Multiple N1QL queries in Couchbase with Node.js: out-of-memory condition error

I want to migrate data from MySQL to Couchbase.
I have imported the company documents with _id=UUID.
Now I want to import other data related to each company. I need the company's _id in the new import, but when I run a N1QL query in a loop to find the related data, it shows the error below:
Error: An unknown N1QL error occurred. This is usually related to an
out-of-memory condition
What I am doing:
First I fetch the other data, then use a for loop to run a N1QL query for each record to get its related data. That is when the error occurs. I am using Node.js.
When I put the limit 0,200 it works, but with more than 300 it gives this error.
Share the exact N1QL query, sample document, and your code.
Can you check what errors/warnings you have in query.log? Also, provide your hardware setup and Couchbase cluster setup details. How many documents are in the bucket, and what is the average size of the docs?
-Prasad
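Beyond sharing those details, one pattern worth checking: a Node.js for loop fires every query at once, so they all queue in the SDK simultaneously, which matches the "works at 200, fails past 300" symptom. Fetching the related documents for a whole batch of company ids in a single IN query keeps the number of in-flight queries small. A hedged sketch of that idea using the Couchbase Python SDK (connection string, bucket, and field names are hypothetical):

    from couchbase.auth import PasswordAuthenticator
    from couchbase.cluster import Cluster
    from couchbase.options import ClusterOptions, QueryOptions

    cluster = Cluster(
        "couchbase://localhost",  # assumed connection string
        ClusterOptions(PasswordAuthenticator("user", "password")),
    )

    def fetch_related(company_ids, batch_size=200):
        # One query per batch of ids instead of one query per id.
        for i in range(0, len(company_ids), batch_size):
            batch = company_ids[i:i + batch_size]
            result = cluster.query(
                "SELECT d.* FROM `mybucket` d WHERE d.companyId IN $ids",
                QueryOptions(named_parameters={"ids": batch}),
            )
            yield from result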

DataStax digest mismatch for key issue

Our current setup contains DSE 5.0.2 with a 3-node cluster. Currently we are facing heavy load and node failure issues. debug.log details are given below:
DEBUG [ReadRepairStage:8] 2016-09-27 14:11:58,781 ReadCallback.java:234 - Digest mismatch:
org.apache.cassandra.service.DigestMismatchException: Mismatch for key DecoratedKey(5503649670304043860, 343233) (45cf191fb10d902dc052aa76f7f0b54d vs ffa7b4097e7fa05de794371092c51c68)
at org.apache.cassandra.service.DigestResolver.resolve(DigestResolver.java:85) ~[cassandra-all-3.0.7.1159.jar:3.0.7.1159]
at org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:225) ~[cassandra-all-3.0.7.1159.jar:3.0.7.1159]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_77]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
I'm answering this from the perspective of what the error you posted means. However, I don't think this alone is the cause of your problems; without seeing all the logs from the nodes in your cluster it is hard to tell.
The digest mismatch you posted actually comes from a read repair. This docs link explains it at a high level (note that, contrary to what the doc says, read repair can be blocking at other consistency levels too):
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsRepairNodesReadRepair.html
If you are seeing too many read repairs and you have multiple DCs, you might want to consider setting read_repair_chance lower and increasing dclocal_read_repair_chance; by default, IIRC, they are 0.1 and 0 respectively, so not always the most optimal.
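For illustration, a sketch of adjusting those two table options with the DataStax Python driver (contact point, keyspace, and table are hypothetical; these options exist in the Cassandra 3.x line that DSE 5.0 is built on):

    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect()  # assumed contact point

    # Prefer DC-local read repair over cross-DC read repair.
    session.execute(
        "ALTER TABLE my_keyspace.my_table "
        "WITH read_repair_chance = 0.0 "
        "AND dclocal_read_repair_chance = 0.1"
    )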
I have seen these cause read timeouts, as a digest mismatch can cause a blocking read repair. Your best bet, if you think this is causing problems, is to either run the query in cqlsh with tracing or use probabilistic tracing to log queries whose traces you can view in retrospect.
Doc links:
https://docs.datastax.com/en/cql/3.3/cql/cql_reference/tracing_r.html
https://docs.datastax.com/en/cassandra/3.0/cassandra/tools/toolsSetTraceProbability.html
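A driver-side alternative to cqlsh tracing, if that is easier to wire into your application: request a trace on the query itself. A minimal sketch with the DataStax Python driver (contact point, keyspace, and query are hypothetical):

    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("my_keyspace")

    result = session.execute("SELECT * FROM my_table WHERE id = 42", trace=True)
    trace = result.get_query_trace()
    for event in trace.events:
        # Digest-mismatch read repairs show up as extra replica activity here.
        print(event.source, event.source_elapsed, event.description)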

Titan data corruption

I get an exception when calling com.tinkerpop.blueprints.Edge.getLabel on some vertex edges:
java.lang.IllegalStateException: Could not find type for id: 630
at com.google.common.base.Preconditions.checkState(Preconditions.java:200)
at com.thinkaurelius.titan.graphdb.types.vertices.TitanTypeVertex.getName(TitanTypeVertex.java:30)
at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx$VertexConstructor.get(StandardTitanTx.java:321)
I can solve this issue by removing all vertices and starting over, but we obviously don't want to do that in production.
This error seems to occur when there is data corruption, but I have no idea why it happens: I don't know whether it's the data or the type schemas that are corrupted.
Does anyone know the possible causes for this error/corruption, and how it can be repaired gracefully?
The backend I'm using is Cassandra.
