Why does Cassandra throw an exception in a SELECT query? - cassandra

I am using the Cassandra DB, and sometimes when I run a SELECT I get this exception:
Traceback (most recent call last):
File "bin/cqlsh", line 1001, in perform_statement_untraced
self.cursor.execute(statement, decoder=decoder)
File "bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/cursor.py", line 81, in execute
return self.process_execution_results(response, decoder=decoder)
File "bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/thrifteries.py", line 131, in process_execution_results
raise Exception('unknown result type %s' % response.type)
Exception: unknown result type None
Can anyone explain why this exception occurs? I also get an Internal application error. What does this error message actually mean?
EDIT: I get this error only the first time; from the next attempt onwards it runs correctly. I don't understand why that is.
//cql query via cqlsh
select * from event_logging limit 5;


ValueError: time inversion found

I am using datetimerange to check whether a date falls between two dates. Unfortunately, I somehow got a strange error message without the program quitting or showing anything; only when I Ctrl+C it do I get this error:
ValueError: time inversion found: 2021-09-02 14:48:34.796000+00:00 > 2021-08-25 12:27:20.603000+00:00
These are the lines causing it:
try:
    inRange = time_in_range(start_date, end_date, timestamp)
except:
    inRange = time_in_range(end_date, start_date, timestamp)
I got the dates from Elasticsearch logs and didn't get this error before, so I don't know what caused it. I don't even understand the error message.
Do you know what the problem is? There is literally no information about it online, so I think I either ran into a bug or it is something very obvious.
Thanks
I guess the issue is that the range is inverted (start after end). I can easily trigger this message; see the code below:
#!/usr/bin/python3.9
from datetimerange import DateTimeRange
d1 = "2015-03-22T10:00:00+0900"
d2 = "2015-03-22T10:10:00+0900"
print(DateTimeRange(d1, d2).validate_time_inversion())
print(DateTimeRange(d2, d1).validate_time_inversion())
Output:
None
Traceback (most recent call last):
File "/home/username/py/time_inversion_ex.py", line 6, in <module>
print(DateTimeRange(d2,d1).validate_time_inversion())
File "/usr/local/lib/python3.9/dist-packages/datetimerange/__init__.py", line 272, in validate_time_inversion
raise ValueError(
ValueError: time inversion found: 2015-03-22 10:10:00+09:00 > 2015-03-22 10:00:00+09:00
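If the endpoints can arrive in either order, as with the asker's Elasticsearch timestamps, a cleaner fix than the bare try/except is to sort them before building the range. A minimal sketch, with hypothetical values standing in for the asker's data:
from datetime import datetime, timezone
from datetimerange import DateTimeRange

start_date = datetime(2021, 9, 2, 14, 48, 34, tzinfo=timezone.utc)
end_date = datetime(2021, 8, 25, 12, 27, 20, tzinfo=timezone.utc)
timestamp = datetime(2021, 8, 30, tzinfo=timezone.utc)

# Sorting guarantees start <= end, so the range can never be inverted,
# no matter which order the logs supplied the endpoints in.
lo, hi = sorted([start_date, end_date])
inRange = timestamp in DateTimeRange(lo, hi)
print(inRange)  # True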

ete3 error: could not be translated into taxids! - Bioinformatics

I am using the ete3 (http://etetoolkit.org/) package in Python within a bioinformatics pipeline I wrote myself.
While running the script, I get the following error. I have used this script a lot on other datasets, which had no issues and produced no errors. I am using Python 3.5 and Miniconda. Any fixes/insights to resolve this error would be appreciated.
[Error]
Traceback (most recent call last):
File "/Users/d/miniconda2/envs/py35/bin/ete3", line 11, in <module>
load_entry_point('ete3==3.1.1', 'console_scripts', 'ete3')()
File "/Users/d/miniconda2/envs/py35/lib/python3.5/site-packages/ete3/tools/ete.py", line 95, in main
_main(sys.argv)
File "/Users/d/miniconda2/envs/py35/lib/python3.5/site-packages/ete3/tools/ete.py", line 268, in _main
args.func(args)
File "/Users/d/miniconda2/envs/py35/lib/python3.5/site-packages/ete3/tools/ete_ncbiquery.py", line 168, in run
collapse_subspecies=args.collapse_subspecies)
File "/Users/d/miniconda2/envs/py35/lib/python3.5/site-packages/ete3/ncbi_taxonomy/ncbiquery.py", line 434, in get_topology
lineage = id2lineage[sp]
KeyError: 3
Continuing from the comment section for better formatting.
Assuming that sp contains 3, as the error message suggests (do check this yourself), you can inspect the ete3 code (current version) and, following the definitions, trace the call to:
def get_lineage_translator(self, taxids):
    """Given a valid taxid number, return its corresponding lineage track as a
    hierarchically sorted list of parent taxids.
So I went to https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi and checked whether 3 is a valid taxid, and it appears that it is not.
# relevant section from ncbi taxonomy browser
No result found in the Taxonomy database for taxonomy id
3
It appears to me that your only option is to trace how the 3 gets computed, because the root cause is simply that 3 is not a valid taxid number, as required by the function.
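One way to surface such problems early is to validate the taxids before building the topology. A minimal sketch using ete3's NCBITaxa, with a hypothetical taxid list:
from ete3 import NCBITaxa

ncbi = NCBITaxa()  # loads (or first downloads) the local NCBI taxonomy database

taxids = [9606, 3]  # 9606 = Homo sapiens; 3 is not a valid taxid
translator = ncbi.get_taxid_translator(taxids)  # ids it cannot resolve are omitted
missing = [t for t in taxids if t not in translator]
if missing:
    print("Invalid taxids:", missing)  # -> Invalid taxids: [3]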

Meaning of OverflowError message

In Python, an OverflowError is raised when the number we are trying to compute is so large that it cannot be represented in a built-in float object, which is a C double (typically 64 bits). I would like to understand the full meaning of the message printed by the OverflowError in the following example:
>>> 10.1 ** 400
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: (34, 'Result too large')
What does 34 stand for in this message?
That 34 is not a Python-specific code; it is the C errno value set by the library routine that performed the computation. Errno 34 is ERANGE, whose standard message is 'Result too large' (on some platforms 'Numerical result out of range'). Python detects the errno after the float operation and passes both the number and the message into the OverflowError's arguments. For an overview of the built-in exception types, see: https://pymotw.com/2/exceptions/
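A minimal sketch showing how to map the numeric code back to its errno name (the exact message string varies by platform):
import errno

try:
    10.1 ** 400
except OverflowError as exc:
    code, message = exc.args
    print(code, errno.errorcode[code])  # -> 34 ERANGE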

Can't write to local Hive using JDBC

I am running a small Amazon EMR cluster and wish to write to its Hive database from a remote connection via JDBC. I am running into an error that also appears if I execute everything locally on that EMR cluster, which is why I think the fault is not the remote connection but something directly on EMR.
The error appears when running this minimal example:
connectionProperties = {
    "user": "aengelhardt",
    "password": "doot",
    "driver": "org.apache.hive.jdbc.HiveDriver"
}
from pyspark.sql import DataFrame, Row
test_df = sqlContext.createDataFrame([
    Row(name=1)
])
test_df.write.jdbc(url="jdbc:hive2://127.0.0.1:10000", table="test_df", properties=connectionProperties, mode="overwrite")
I then get a lot of Java error messages, but I think the important lines are these:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/spark/python/pyspark/sql/readwriter.py", line 940, in jdbc
self.mode(mode)._jwrite.jdbc(url, table, jprop)
File "/usr/lib/spark/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1160, in __call__
File "/usr/lib/spark/python/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/usr/lib/spark/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py", line 320, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o351.jdbc.
: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: ParseException line 1:23 cannot recognize input near '"name"' 'BIGINT' ')' in column name or primary key or foreign key
The last line hints that something went wrong while creating the table, since Spark tries to specify the 'name' column as a 'BIGINT' there.
I found this question, which has a similar problem, where the issue was that the SQL query was wrongly specified. But here I don't specify a query, so I don't know where that went wrong or how to fix it.
As of now, I have no idea how to dig deeper to find the cause. Does anyone have a solution or an idea of how to search further for it?
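For what it's worth, the ParseException is consistent with Spark's generic JDBC dialect wrapping column names in double quotes ("name"), which HiveQL's parser rejects (Hive quotes identifiers with backticks). Since Spark runs on the same EMR cluster as Hive here, one common workaround is to skip JDBC entirely and write through Spark's Hive integration. A minimal sketch, assuming the session has Hive support enabled (the default on EMR):
from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

test_df = spark.createDataFrame([Row(name=1)])

# saveAsTable goes through the Hive metastore directly, avoiding the
# JDBC path that generates the double-quoted DDL Hive cannot parse.
test_df.write.mode("overwrite").saveAsTable("test_df")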

Error I do not understand [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
I generated this error in Python 3.5:
Traceback (most recent call last):
File "C:\Users\Owner\AppData\Local\Programs\Python\Python35\lib\shelve.py", line 111, in __getitem__
value = self.cache[key]
KeyError: 'P4_vegetables'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Owner\Documents\Python\Allotment\allotment.py", line 217, in
main_program()
File "C:\Users\Owner\Documents\Python\Allotment\allotment.py", line 195, in main_program
main_program()
File "C:\Users\Owner\Documents\Python\Allotment\allotment.py", line 49, in main_program
print("Plot 4 - ", s["P4_vegetables"])
File "C:\Users\Owner\AppData\Local\Programs\Python\Python35\lib\shelve.py", line 113, in __getitem__
f = BytesIO(self.dict[key.encode(self.keyencoding)])
File "C:\Users\Owner\AppData\Local\Programs\Python\Python35\lib\dbm\dumb.py", line 141, in __getitem__
pos, siz = self._index[key] # may raise KeyError
KeyError: b'P4_vegetables'
It has been a while, but in case somebody comes across this: The following error
Traceback (most recent call last):
File "filepath", line 111, in __getitem__
value = self.cache[key]
KeyError: 'item1'
can occur if one attempts to retrieve an item outside of the with block. The shelf is closed as soon as execution leaves the with block in which it was opened, so any operation performed on it after that point is invalid. For example,
import shelve
with shelve.open('ShelfTest') as item:
    item['item1'] = 'item 1'
    item['item2'] = 'item 2'
    item['item3'] = 'item 3'
    item['item4'] = 'item 4'
    print(item['item1'])  # no error, since the shelf file is still open
# If we try to print after the file is closed,
# an error will be thrown. This is quite common.
print(item['item1'])  # error: the shelf has been closed, so you can't retrieve the item
Hope this helps anyone who comes across a similar issue to the original poster's.
It means that the cache dictionary does not contain the key 'P4_vegetables'. Make sure the key was added to the shelf before you try to read it.
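A small defensive-access sketch, assuming a hypothetical shelf filename (the original post does not show how s is opened):
import shelve

with shelve.open('allotment_data') as s:  # hypothetical filename
    # .get avoids the KeyError when the key was never written
    vegetables = s.get('P4_vegetables', 'nothing recorded')
    print("Plot 4 - ", vegetables)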
