Can anyone explain how to insert a datetime into an SQLite database using Python? Which datatype should the datetime column be? I have tried many different ways.
My question comes from Problem Set 7 in CS50; we use Flask as the web framework.
history is the name of the table, and transaction is the name of the field:
db.execute("INSERT INTO history (transaction) VALUES(:d)",
d=datetime.datetime.today())
This is the error message I receive when I run the application:
builtins.RuntimeError
RuntimeError: near "transaction": syntax error [SQL: "INSERT INTO history (transaction) VALUES('2019-05-19 17:14:25')"] (Background on this error at: http://sqlalche.me/e/e3q8)
transaction is a reserved word in SQLite.
sqlite> INSERT INTO history (transaction) VALUES('2019-05-19 17:14:25');
Error: near "transaction": syntax error
To disambiguate reserved words, they have to be specially quoted; in this case, as an identifier with double quotes.
sqlite> INSERT INTO history ("transaction") VALUES('2019-05-19 17:14:25');
Rather than constantly having to remember to quote transaction, I'd recommend renaming the column; created_at is very common for a row timestamp. I'd also recommend using an ORM rather than writing SQL by hand; it will handle all the quoting for you, and has a great many other benefits.
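For example, keeping the db handle and named-parameter style from the question (a sketch; the created_at variant assumes you take the renaming advice above):

import datetime

# option 1: keep the column, but quote the reserved word as an identifier
db.execute('INSERT INTO history ("transaction") VALUES (:d)',
           d=datetime.datetime.today())

# option 2: rename the column so no quoting is ever needed
db.execute("INSERT INTO history (created_at) VALUES (:d)",
           d=datetime.datetime.today())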
Error:
cassandra.protocol.SyntaxException: \
<Error from server: code=2000 [Syntax error in CQL query] \
message="line 1:36 no viable alternative at input '(' \
(CREATE TABLE master_table(dict_keys[(]...)">
Code:
cluster = Cluster(cloud=cloud_config, auth_provider=auth_provider)
session = cluster.connect('firstkey')
ColName={"qty_dot_url": "int",
"qty_hyphen_url": "int",
"qty_underline_url": "int",
"qty_slash_url": "int"}
columns = ColName.keys()
values = ColName.values()
session.execute('CREATE TABLE master_table({ColName} {dataType}),PRIMARY KEY(qty_dot_url)'.format(ColName=columns, dataType=values))
How do I resolve the above error?
So I replaced the session.execute with a print, and it produced this:
CREATE TABLE master_table(dict_keys(['qty_dot_url', 'qty_hyphen_url', 'qty_underline_url', 'qty_slash_url']) dict_values(['int', 'int', 'int', 'int'])),PRIMARY KEY(qty_dot_url)
That is not valid CQL. It needs to look like this:
CREATE TABLE master_table(qty_dot_url int, qty_hyphen_url int,
qty_underline_url int, qty_slash_url int, PRIMARY KEY(qty_dot_url))
I was able to create that by making these adjustments to your code:
createTableCQL = "CREATE TABLE master_table("
# append one "column type" pair per entry in the dict
for key, value in ColName.items():
    createTableCQL += key + " " + value + ", "
# the PRIMARY KEY clause closes the paren list
createTableCQL += "PRIMARY KEY(qty_dot_url))"
You could then follow that with a session.execute(createTableCQL).
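If you prefer, the same string can be built in one pass with str.join; this is just an equivalent formulation of the loop above:

# join the "column type" pairs, then append the PRIMARY KEY clause
columns = ", ".join(key + " " + value for key, value in ColName.items())
createTableCQL = "CREATE TABLE master_table(" + columns + ", PRIMARY KEY(qty_dot_url))"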
Notes:
The PRIMARY KEY definition must be inside the paren list.
Creating schema from inside application code is often problematic, and can create a schema disagreement in the cluster. It's almost always better to create tables outside of code.
The syntax exception is a result of your Python code generating invalid CQL, as Aaron pointed out in his answer.
To add to his answer, you need to take additional steps whenever you are programmatically making schema changes. In particular, you need to make sure that you check for schema agreement (i.e. that the schema change has been propagated to all nodes) before moving on to the next part of your code.
You will need to modify your code to save the result from the schema change, for example:
resultset = session.execute(SimpleStatement("CREATE TABLE ..."))
then call this in your code:
resultset.response_future.is_schema_agreed
You'll need to loop through this check until True is returned. Depending on how long you want to wait (the default max_schema_agreement_wait is 10 seconds), you'll need to implement some logic to handle the case where schema agreement is not achieved (because a node is down, for example); this requires manual intervention from an operator to investigate the cluster.
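A minimal sketch of that check, reusing the createTableCQL string built in the previous answer (the failure handling here is an illustrative assumption; adapt it to your application):

from cassandra.query import SimpleStatement

# keep the result of the DDL statement so its future can be inspected
resultset = session.execute(SimpleStatement(createTableCQL))

# the driver has already waited up to max_schema_agreement_wait; if the
# flag is still False, not every node has acknowledged the change
if not resultset.response_future.is_schema_agreed:
    raise RuntimeError("Schema agreement not reached; investigate the cluster")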
As Aaron already said, performing schema changes programmatically is very problematic, and we discourage doing it unless you fully understand the pitfalls and know how to handle failures. Cheers!
I'm using Cosmos DB in Azure. I recently changed the schema to add more information. However, I'm confused, because Cosmos uses SQL to query the database. Further, it seems I can't query based on the new schema, because I receive the following error message:
Message: {"errors":[{"severity":"Error","location":{"start":22,"end":29},"code":"SC2001","message":"Identifier 'comment' could not be resolved."}]}
So I'm wondering: is it possible to disable schema validation so that I can query on the new comment attribute without getting this error message?
Further, I'm confused as to how Cosmos DB can be considered a NoSQL database if it behaves much the same as MySQL.
Edit: The query I am using is
SELECT * FROM c
WHERE comment = ""
The error is not related to the schema; it is saying that your query is incorrectly written.
Following the official documentation (for example, https://learn.microsoft.com/azure/cosmos-db/sql/sql-query-getting-started#query-the-json-items)
The query should be:
SELECT * FROM c
WHERE c.comment = ""
Keep in mind that this query is not checking for documents that do not have the comment property; it is filtering for documents that do have a comment property whose value is an empty string.
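If the intent is instead to also match documents that lack the property entirely (an assumption about what you may want), Cosmos DB's IS_DEFINED function can express that:

SELECT * FROM c
WHERE NOT IS_DEFINED(c.comment) OR c.comment = ""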
The following jOOQ query produces a SQL warning in my logs: Fields Ambiguous match found for id.
db.select(FORWARDED_MESSAGE.SES_MESSAGE_ID).
from(FORWARDED_MESSAGE).
where(
FORWARDED_MESSAGE.FORWARDED.lt( DSL.currentTimestamp().subtract(
FORWARDED_MESSAGE.mailMapping().mailKeyword().mailDomain().account().
MESSAGE_RETENTION_DAYS ))).
fetch(FORWARDED_MESSAGE.SES_MESSAGE_ID);
The SQL generated appears to be correct, but I don't want the warning polluting my logs (and I want to know if jOOQ is warning me about something important that I need to be aware of).
Some context about the schema tables:
forwarded_message doesn't have a primary key
mail_domain uses a natural PK named "domain"
mail_mapping, mail_keyword and account all have a PK named id
I tried the following, but it fails saying Key ambiguous between tables:
db.select(FORWARDED_MESSAGE.SES_MESSAGE_ID).
from(FORWARDED_MESSAGE).
join(MAIL_MAPPING).onKey().
join(MAIL_KEYWORD).onKey().
join(MAIL_DOMAIN).onKey().
join(ACCOUNT).onKey().
where(
FORWARDED_MESSAGE.FORWARDED.lt(
DSL.currentTimestamp().subtract(ACCOUNT.MESSAGE_RETENTION_DAYS) )).
fetch(FORWARDED_MESSAGE.SES_MESSAGE_ID);
The jOOQ version is 3.13.4, the DB is Postgres, and I'm using pgjdbc 42.2.14.
The Question:
How do I resolve the Fields Ambiguous match found warning?
Note: this is not a dupe of How to resolve ambiguous match when chaining generated Jooq classes because that was about a sub-classing ambiguity - this question is about simple chaining of joins (across tables that do have duplicate PK columns).
I was able to make the warning go away by rewriting the query in the join().onKey() style, but specifying the join foreign keys explicitly.
It's a bit verbose, but it seems to work:
db.select(FORWARDED_MESSAGE.SES_MESSAGE_ID).
from(FORWARDED_MESSAGE).
join(MAIL_MAPPING).
onKey(Keys.FORWARDED_MESSAGE__FORWARDED_MESSAGE_MAIL_MAPPING_ID_FKEY).
join(MAIL_KEYWORD).onKey(Keys.MAIL_MAPPING__MAIL_MAPPING_MAIL_KEYWORD_ID_FKEY).
join(MAIL_DOMAIN).onKey(Keys.MAIL_KEYWORD__MAIL_KEYWORD_DOMAIN_FKEY).
join(ACCOUNT).onKey(Keys.MAIL_DOMAIN__MAIL_DOMAIN_MAIL_DOMAIN_ACCOUNT_ID_FKEY).
where(
FORWARDED_MESSAGE.FORWARDED.lt(
DSL.currentTimestamp().subtract(ACCOUNT.MESSAGE_RETENTION_DAYS) )).
fetch(FORWARDED_MESSAGE.SES_MESSAGE_ID);
I'm not sure why this is so different from the FORWARDED_MESSAGE.mailMapping()...account() style, but it works and the generated SQL is cleaner.
The database is in the Azure cloud and is not currently being used in production. There are 80,000 rows, and a uprn is a VARCHAR(100). I'm already validating each UPRN with Joi as well.
I'm using Knex with a SQL Server database, with the following whereIn query:
knex(LOCATIONS.table).whereIn(LOCATIONS.uprn, req.body.uprns)
but this takes 8-12 s to complete and sometimes times out. If I run the output of .toQuery() on the same query, SSMS returns the result within 1-2 s.
If I build a raw query instead, the output of .toQuery() or .toString() works in SSMS and returns results, but if I try to execute the raw query directly, it returns 0 results.
I'm looking to either fix what's making whereIn so slow or get the raw query working.
EDIT 1:
After much debugging and experimenting, it seems that the bug is due to how Knex deals with arrays, so I wrote a for-of loop that adds a ? placeholder for each array element and then passed the array in for all of the params.
This led me to realize that the performance issue is down to the way SQL Server parameterises queries.
I ended up building a raw query string with all of the parameters and validating the input with a Joi string/regex config:
Joi.string()
  .min(1)
  .max(35)
  .regex(/^[a-z\d\-_\s]+$/i)
allowing only alphanumeric characters, dashes, underscores and spaces, which should prevent SQL injection.
I'm going to look deeper into the security implications of this, and I might create a separate login that can only SELECT data from that table and nothing more to run these queries with.
In the end I just needed to handle it raw and validate separately.
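A minimal sketch of that approach, to be run inside an async handler (the table and column names are assumptions based on the LOCATIONS constants in the question, and error handling is omitted):

// build one ? placeholder per UPRN, then bind the whole array
const placeholders = req.body.uprns.map(() => '?').join(', ');
const rows = await knex.raw(
  `SELECT * FROM locations WHERE uprn IN (${placeholders})`,
  req.body.uprns
);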
I want to safely pass a schema name, which must be double-quote escaped, to the database engine. In this case, when constructing a GRANT statement, I want to safely pass a variable containing test to the database:
GRANT SELECT ON ALL TABLES IN SCHEMA "test" TO readuser
I'm unsure how to do this from SQLAlchemy. If it helps, I am using psycopg2 to connect to PostgreSQL.
I've never tried issuing database maintenance queries like GRANT through SQLAlchemy. I suppose the ORM won't issue this kind of query, so I guess that you want to issue it textually using Session.execute. One caveat applies, though: bind parameters can only stand in for values, never for identifiers. If you write the statement as GRANT SELECT ON ALL TABLES IN SCHEMA :param TO readuser and pass {"param": "test"}, psycopg2 will render the schema name as the string literal 'test', and PostgreSQL will reject the statement with a syntax error. The schema name has to be quoted as an identifier and interpolated into the statement text (psycopg2's sql.Identifier is another way to do this at the driver level).
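A minimal sketch using SQLAlchemy's identifier preparer to apply the double-quote escaping (readuser and the session are taken from the question; treat this as one possible approach, not the only one):

from sqlalchemy import text

schema = "test"

# quote the schema name as an identifier (double quotes, with any
# embedded quotes escaped) instead of binding it as a value
preparer = session.bind.dialect.identifier_preparer
stmt = f"GRANT SELECT ON ALL TABLES IN SCHEMA {preparer.quote_identifier(schema)} TO readuser"
session.execute(text(stmt))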