Cassandra ODBC parameter binding

I've installed DataStax Community Edition, and added DataStax ODBC connector. Now I try to access the database via pyodbc:
import pyodbc

connection = pyodbc.connect('Driver=DataStax Cassandra ODBC Driver;Host=127.0.0.1',
                            autocommit=True)
cursor = connection.cursor()
cursor.execute('CREATE TABLE Test (id INT PRIMARY KEY)')
cursor.execute('INSERT INTO Test (id) VALUES (1)')
for row in cursor.execute('SELECT * FROM Test'):
    print row
It works fine and returns
>>> (1, )
However when I try
cursor.execute('INSERT INTO Test (id) VALUES (:id)', {'id': 2})
I get
>>> Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "test.py", line 11, in <module>
    cursor.execute('INSERT INTO Test (id) VALUES (:id)', {'id': 2})
pyodbc.ProgrammingError: ('The SQL contains 0 parameter markers, but 1 parameters were supplied', 'HY000')
Neither of these alternatives works either:
cursor.execute('INSERT INTO Test (id) VALUES (:1)', (2))
>>> Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
pyodbc.Error: ('HY000', "[HY000] [DataStax][CassandraODBC] (15) Error while preparing a query in Cassandra: [33562624] : line 1:31 no viable alternative at input '1' (...Test (id) VALUES (:[1]...) (15) (SQLPrepare)")
and
cursor.execute('INSERT INTO Test (id) VALUES (?)', (2))
>>> Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "test.py", line 11, in <module>
    cursor.execute('INSERT INTO Test (id) VALUES (?)', (2))
pyodbc.ProgrammingError: ('The SQL contains 0 parameter markers, but 1 parameters were supplied', 'HY000')
My Cassandra version is 2.2.3, ODBC driver is from https://downloads.datastax.com/odbc-cql/1.0.1.1002/

According to the pyodbc documentation, your query should be
cursor.execute('INSERT INTO Test (id) VALUES (?)', 2)
pyodbc uses the qmark paramstyle, so the placeholder is ? rather than a named marker such as :id. More details are in the pyodbc insert examples.
As per the comment, there is a thread which says this is an open bug in pyodbc: BUG
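The placeholder style can be checked in isolation with sqlite3, which implements the same DB-API qmark paramstyle as pyodbc; this stand-in is only illustrative and avoids needing a running Cassandra/ODBC setup:

```python
import sqlite3

# pyodbc, like sqlite3, uses the qmark paramstyle: placeholders are '?',
# not named markers such as ':id'. Demonstrated with sqlite3 so the
# snippet runs without a Cassandra instance.
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE Test (id INTEGER PRIMARY KEY)')
cur.execute('INSERT INTO Test (id) VALUES (?)', (2,))  # parameters as a tuple
cur.execute('SELECT id FROM Test')
print(cur.fetchall())  # [(2,)]
conn.close()
```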

Related

Can't query datetime column in SQLAlchemy with postgreSQL

I want to delete rows based on a datetime filter.
I created a table with DateTime column without timezone using similar script.
class VolumeInfo(Base):
    ...
    date: datetime.datetime = Column(DateTime, nullable=False)
Then I try to delete rows using such filter
days_interval = 10
to_date = datetime.datetime.combine(
    datetime.datetime.utcnow().date(),
    datetime.time(0, 0, 0, 0),
).replace(tzinfo=None)
from_date = to_date - datetime.timedelta(days=days_interval)
query = delete(VolumeInfo).where(VolumeInfo.date < from_date)
Unexpectedly, this sometimes succeeds and sometimes raises the following error:
Traceback (most recent call last):
...
  File "script.py", line 381, in delete_volumes
    db.execute(query)
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1660, in execute
    ) = compile_state_cls.orm_pre_session_exec(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 1843, in orm_pre_session_exec
    update_options = cls._do_pre_synchronize_evaluate(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 2007, in _do_pre_synchronize_evaluate
    matched_objects = [
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 2012, in <listcomp>
    and eval_condition(state.obj())
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/evaluator.py", line 211, in evaluate
    return operator(eval_left(obj), eval_right(obj))
TypeError: can't compare offset-naive and offset-aware datetimes
I'm using Python 3.10 in Docker (image python:3.10-slim) with a PostgreSQL database and the psycopg2 driver.
I have already tried every option I could find, but the error still appears every once in a while.
How can I solve this, and where did I make a mistake?
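The traceback points at SQLAlchemy's "evaluate" synchronization strategy: it compares the naive from_date against datetime values held by VolumeInfo objects already in the session, and some of those appear to be timezone-aware, which Python refuses to compare. A minimal sketch of the failure and of normalizing both sides (plain datetime handling; the names here are illustrative, and treating the naive value as UTC is an assumption). Depending on your SQLAlchemy version, executing the delete with synchronize_session="fetch" or synchronize_session=False may also sidestep the in-Python comparison entirely:

```python
import datetime

naive = datetime.datetime(2024, 1, 1)
aware = datetime.datetime(2024, 1, 1, tzinfo=datetime.timezone.utc)

# Comparing naive and aware datetimes raises TypeError -- this is exactly
# what the 'evaluate' strategy trips over for some in-session objects.
try:
    naive < aware
except TypeError as exc:
    print('comparison failed:', exc)

# Making both sides timezone-aware (assuming the naive value is UTC)
# makes the comparison legal again.
print(naive.replace(tzinfo=datetime.timezone.utc) <= aware)  # True
```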

Python Error No such table using sqlite3

Trying Python for the first time today, and I got stuck following an example almost immediately. Using Python 3.6 on Windows. Can someone help?
RESTART: C:/Users/tom_/AppData/Local/Programs/Python/Python36-32/Projects/Database/dbexample.py
Traceback (most recent call last):
  File "C:/Users/tom_/AppData/Local/Programs/Python/Python36-32/Projects/Database/dbexample.py", line 13, in <module>
    enter_data()
  File "C:/Users/tom_/AppData/Local/Programs/Python/Python36-32/Projects/Database/dbexample.py", line 11, in enter_data
    c.execute("INSERT INTO Example VALUES('Python', 2.7, 'Beginner')")
sqlite3.OperationalError: no such table: Example
Code:
import sqlite3

conn = sqlite3.connect('tutorial.db')
c = conn.cursor()

def create_table():
    c.execute("CREATE TABLE Example(Language VARCHAR, Version REAL, Skill TEXT)")

def enter_data():
    c.execute("INSERT INTO Example VALUES('Python', 2.7, 'Beginner')")

enter_data()
conn.close()
You need to call create_table() once before you can use enter_data() on a new database.
Once the table has been created, calling create_table() again will raise sqlite3.OperationalError: table Example already exists.
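A minimal corrected version (sketched with ':memory:' and IF NOT EXISTS so it can be re-run safely; the commit is also worth adding, since the original never persists the insert):

```python
import sqlite3

conn = sqlite3.connect(':memory:')  # in-memory DB keeps the demo self-contained
c = conn.cursor()

def create_table():
    # IF NOT EXISTS makes repeated calls safe instead of raising OperationalError
    c.execute("CREATE TABLE IF NOT EXISTS Example"
              "(Language VARCHAR, Version REAL, Skill TEXT)")

def enter_data():
    c.execute("INSERT INTO Example VALUES('Python', 2.7, 'Beginner')")

create_table()  # must run before the first insert
enter_data()
conn.commit()   # persist the row before closing
c.execute("SELECT * FROM Example")
print(c.fetchall())  # [('Python', 2.7, 'Beginner')]
conn.close()
```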

Pyspark error while querying cassandra to convert into dataframes

I am getting the following error while executing the command:
user = sc.cassandraTable("DB NAME", "TABLE NAME").toDF()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/src/spark/spark-1.4.1/python/pyspark/sql/context.py", line 60, in toDF
    return sqlContext.createDataFrame(self, schema, sampleRatio)
  File "/usr/local/src/spark/spark-1.4.1/python/pyspark/sql/context.py", line 333, in createDataFrame
    schema = self._inferSchema(rdd, samplingRatio)
  File "/usr/local/src/spark/spark-1.4.1/python/pyspark/sql/context.py", line 220, in _inferSchema
    raise ValueError("Some of types cannot be determined by the "
ValueError: Some of types cannot be determined by the first 100 rows, please try again with sampling
Load it into a DataFrame directly; this also avoids any Python-level code for interpreting types:
sqlContext.read.format("org.apache.spark.sql.cassandra").options(keyspace="ks",table="tb").load()

Executing an insert query on each row in results: psycopg2.ProgrammingError: no results to fetch

What dumb thing am I missing here:
>>> cur.execute("select id from tracks")
>>> for row in cur:
... story = random.choice(fortunes) + random.choice(fortunes)
... cur.execute("update tracks set story=%s where id=%s", (story, row[0]))
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
psycopg2.ProgrammingError: no results to fetch
But there seem to be results:
>>> cur.execute("select id from tracks")
>>> for row in cur:
... print(row)
...
(8,)
(45,)
(12,)
(64,)
(1,)
(6,)
Looks like psycopg2 doesn't allow interleaved queries on a single cursor (although PostgreSQL can do it, on the back end). If the initial query isn't huge, the simplest solution is to coalesce the results into a list first: just change for row in cur: to for row in cur.fetchall(): and you should be right.
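The fetchall() pattern looks like this; sketched with sqlite3 (same DB-API shape, with '?' placeholders instead of psycopg2's '%s') so it runs without a PostgreSQL server, and with a stand-in story string instead of the random fortunes:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE tracks (id INTEGER PRIMARY KEY, story TEXT)')
cur.executemany('INSERT INTO tracks (id) VALUES (?)', [(8,), (45,), (12,)])

cur.execute('SELECT id FROM tracks')
# Materialise the result set first, so re-using the same cursor for the
# updates cannot invalidate an iteration still in progress.
for row in cur.fetchall():
    story = 'story for id %d' % row[0]  # stand-in for the random fortunes
    cur.execute('UPDATE tracks SET story = ? WHERE id = ?', (story, row[0]))

cur.execute('SELECT id, story FROM tracks ORDER BY id')
print(cur.fetchall())
conn.close()
```

With psycopg2 specifically, opening a second cursor on the same connection for the updates achieves the same separation.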

How to create a session in Cassandra?

Total Cassandra newbie here, using Python client.
from cassandra.cluster import Cluster
cluster = Cluster(['127.0.0.1'])
session = cluster.connect()
I get error:
Exception in thread event_loop (most likely raised during interpreter shutdown):
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
  File "/usr/lib/python2.7/threading.py", line 504, in run
  File "/usr/local/lib/python2.7/dist-packages/cassandra_driver-1.0.2-py2.7-linux-x86_64.egg/cassandra/io/asyncorereactor.py", line 52, in _run_loop
: __exit__
I want to create my first table and can't get past session.
query = """create table timeseries (
    event_type text,
    insertion_time timestamp,
    event blob,
    PRIMARY KEY (event_type, insertion_time)
)
WITH CLUSTERING ORDER BY (insertion_time DESC);"""
session.execute(query)
