Currently I have the following code, which drops the table "Company" if it exists in the database and then creates it with the given fields.
cur.executescript('''
DROP TABLE IF EXISTS Company;
CREATE TABLE Company (
id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
name VARCHAR2
)
''')
I want to make this query generic: instead of hard-coding "Company" in my query, I need to take the table names from a list. Is it possible to pass a variable in the query instead of "Company" in this example?
Thank you!
It is not possible to pass a variable table name (or column name) to sqlite as a query parameter. (And since executescript takes exactly one argument, the SQL script itself, it's not possible to pass query parameters to executescript at all.)
You could build the query before the execute and pass that variable to executescript.
And of course if you take the table names from a list, it seems likely you will have to take the column names too!
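For example, here is a minimal sketch (the list of table names and the database file are made up for illustration) that whitelists identifier characters in each name before splicing it into the script, since the name cannot be passed as a parameter:
import re
import sqlite3

conn = sqlite3.connect('example.db')      # hypothetical database file
cur = conn.cursor()

table_names = ['Company', 'Department']   # hypothetical list of names

for name in table_names:
    # The name is spliced into the SQL string, so validate it first;
    # only allow letters, digits and underscores to avoid SQL injection.
    if not re.fullmatch(r'[A-Za-z_][A-Za-z0-9_]*', name):
        raise ValueError('invalid table name: %r' % name)
    cur.executescript('''
        DROP TABLE IF EXISTS %s;
        CREATE TABLE %s (
            id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
            name VARCHAR2
        );
    ''' % (name, name))

conn.commit()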
As far as I know, there are only five data types that can be assigned to columns in a sqlite3 table. They are:
null means no data.
integer means a whole number.
real means a float.
text means any string.
blob means binary data, in which you can store files, documents, or images.
But currently I have a list called self.inventory in my code that gets items added to it occasionally when users do something specific. That is not the issue. My problem is: what data type should I assign to the list that I want to store in the table? Or is there any other method I can use to store the values of the list in the db? Currently, here is my connection, cursor and table creation:
connection = sqlite3.connect('db_of_game.db')
cursor = connection.cursor()
cursor.execute(
'CREATE TABLE user_data(user_name text primary key, money integer, inventory <What data type to use here?>, deposited integer, allowed_deposit integer)'
)
connection.commit()
connection.close()
Assuming each item can only belong to a single user, you'd use a one-to-many pattern. Many items, one user. Items have their own table and they refer to their user.
create table items (
id integer primary key,
name text not null,
user_name text not null references user_data(user_name)
)
(Note: Using a username as a primary key is to be avoided. Usernames change. Primary keys cannot change. They also require more storage and comparison time. Instead, use a simple integer. In SQLite integer primary key works.)
Then to get all a user's items...
select items.name
from items
where user_name = ?
If each item can belong to many users, that is a many-to-many relationship and you need a join table to link users to items.
create table items (
id integer primary key,
name text not null
)
create table inventory (
item_id integer not null references items(id),
user_name text not null references user_data(user_name)
)
And to get a user's inventory you check inventory to get the item IDs and join with items to get the item name.
select items.name
from items
join inventory on items.id = inventory.item_id
where inventory.user_name = ?
This might seem convoluted, but this is how a relational database works: by setting up relationships between tables. It takes a bit to wrap your head around, but it's worth it. It makes searching very fast. If you used a comma-separated list and wanted to find the users with a certain item, you would need to look at every user and parse their list. Now you just query the items table. If items.name is indexed, it will not have to scan the whole table.
select *
from items
where items.name like ?
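Putting it together, a minimal sqlite3 sketch of the many-to-many version might look like this (table and column names follow the examples above; user_data is trimmed to the relevant columns, and the sample data is made up):
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()

cur.executescript('''
    create table user_data (
        user_name text primary key,
        money integer
    );
    create table items (
        id integer primary key,
        name text not null
    );
    create table inventory (
        item_id integer not null references items(id),
        user_name text not null references user_data(user_name)
    );
''')

# Sample data: one user owning two items.
cur.execute("insert into user_data values ('alice', 100)")
cur.execute("insert into items (name) values ('sword'), ('shield')")
cur.execute("insert into inventory select id, 'alice' from items")

# Fetch a user's inventory via the join table.
cur.execute('''
    select items.name
    from items
    join inventory on items.id = inventory.item_id
    where inventory.user_name = ?
''', ('alice',))
print(cur.fetchall())   # [('sword',), ('shield',)]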
For more...
W3Schools - SQL Joins
Visual representation of SQL joins
TutorialsPoint - Using Joins
TutorialsPoint - Indexes
I have this table where I have put a Hash Key on a column called org_id and a Global Secondary Index on a column called ts. I need to run a query against the table matching the condition below, but I am getting the error "Query key condition not supported". I can't use ts as a Sort Key because there might be repetition there.
Therefore I wanted to know: is it possible to query both the index and the table in a single condition, like I have done below?
KeyCondition = Key("org_id").eq("some_id") &
Key("ts").between(START_DATE,END_DATE)
ProjectionExpression = "ts,val"
response = GET_TABLE.query(
TableName=DYNAMO_TABLE_NAME,
IndexName="ts-index",
KeyConditionExpression=KeyCondition,
ProjectionExpression=ProjectionExpression,
Limit=50
)
It isn't possible to access base table attributes from a GSI query. You have to project the attributes you need into the GSI.
You can project other base table attributes into the index if you want. When you query the index, DynamoDB can retrieve these projected attributes efficiently. However, global secondary index queries cannot fetch attributes from the base table.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html
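For instance, a boto3 sketch of creating such an index could look like this. This assumes the index is keyed the way your KeyCondition implies (org_id as hash key, ts as range key) and projects the non-key attribute val into the index via INCLUDE, so the query never needs the base table:
import boto3

client = boto3.client('dynamodb')
DYNAMO_TABLE_NAME = 'my-table'   # assumed table name

# Sketch only: create a GSI keyed to match the KeyCondition in the
# question and project the non-key attribute "val" into the index.
# (Add ProvisionedThroughput to the Create block if your table uses
# provisioned capacity.)
client.update_table(
    TableName=DYNAMO_TABLE_NAME,
    AttributeDefinitions=[
        {'AttributeName': 'org_id', 'AttributeType': 'S'},
        {'AttributeName': 'ts', 'AttributeType': 'S'},
    ],
    GlobalSecondaryIndexUpdates=[{
        'Create': {
            'IndexName': 'ts-index',
            'KeySchema': [
                {'AttributeName': 'org_id', 'KeyType': 'HASH'},
                {'AttributeName': 'ts', 'KeyType': 'RANGE'},
            ],
            'Projection': {
                'ProjectionType': 'INCLUDE',
                'NonKeyAttributes': ['val'],
            },
        },
    }],
)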
Note that the "primary key" of a GSI doesn't need to be unique.
In a DynamoDB table, each key value must be unique. However, the key values in a global secondary index do not need to be unique.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html
This link contains the answer to the question, but it is incomplete.
A Sybase DBMS has a notion of catalogs and schemas. So, how do I write a query to retrieve the list of indexes for a table inside a schema that is inside a catalog?
[EDIT]
Consider following scenario:
USE test
GO
CREATE TABLE dbo.test_table(<field_list>)
GO
CREATE TABLE foo.test_table(<field_list>)
GO
CREATE INDEX test_index ON test_table(<index_field_list>)
GO
As you can see, there are 2 test_table tables created: one in the schema called dbo and one in the schema called foo. So now my question would be: how do I write a query that properly checks for the existence of the index on the table test_table in the schema foo? The link I referenced does not differentiate between those 2 tables and will therefore fail in this case. I would much prefer to filter on schema and table names rather than using the schemaName.tableName format. I hope you get the idea; if not, please let me know and I will try to explain in further detail.
[/EDIT]
If you're signed into the test database as user foo, the create index command will be applied against the foo.test_table table (precedence is given to objects you own).
If you're signed into the test database as anyone other than foo, and assuming you have permissions to create an index, the create index command will be applied against the dbo.test_table table (precedence goes to objects owned by dbo if you don't own an object of the given name and you have not provided an explicit owner).
If you know you're going to have multiple tables with the same name but different owners, it's a bit 'cleaner' to get in the habit of providing explicit owner names (and you're less likely to issue a command against the 'wrong' table).
As for how to check for the existence of an index ... in a nutshell:
sysusers contains db user names and ids (name, uid)
sysobjects contains object names, object types, object ids and owner ids (name, type, id, uid)
sysindexes contains index names, object ids, index ids, and a denormalized list of columns that make up the index (name, id, indid, keys1/keys2)
syscolumns contains column names for tables/procs/views, object ids, column ids (name, id, colid)
Sample joins (using old style join clauses):
select ....
from sysusers u,
sysobjects o,
sysindexes i
where u.name = '<user_name>'
and o.name = '<table_name>'
and o.type = 'U' -- U=user table, P=procedure, V=view
and i.name = '<index_name>'
and o.uid = u.uid
and o.id = i.id
The join from sysindexes.keys1/keys2 to syscolumns.colid is a bit convoluted as you need to figure out how you wish to parse the keys1/keys2 columns to obtain individual syscolumns.colid values.
Again, I'd suggest you take a look at the code for the sp_helpindex stored proc as it references all of the appropriate system (aka catalog) tables and includes examples of the necessary join clauses:
exec sybsystemprocs..sp_helptext sp_helpindex,null,null,'showsql'
go
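As a concrete illustration tied to your edited scenario, an existence check for test_index on foo.test_table might look like the sketch below; the pyodbc module and the DSN are assumptions, but the catalog query itself follows the joins above:
import pyodbc

# Hypothetical DSN; adjust the connection string for your environment.
conn = pyodbc.connect('DSN=sybase_test')
cur = conn.cursor()

cur.execute("""
    select i.name
    from sysusers u,
         sysobjects o,
         sysindexes i
    where u.name = ?        -- schema (owner) name
      and o.name = ?        -- table name
      and o.type = 'U'      -- user table
      and i.name = ?        -- index name
      and o.uid = u.uid
      and o.id = i.id
""", ('foo', 'test_table', 'test_index'))

exists = cur.fetchone() is not None
print('index exists' if exists else 'index missing')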
I have a VBA script that generates a query string for a SAP HANA ODBC Connection in Excel. The query is determined by user inputs and can vary greatly in length. The query itself uses many versions of a similar query appended to one another using UNION ALL syntax.
The script sometimes throws a runtime error when trying to refresh. From my research, it has become clear that the reason for this is that the CommandText string exceeds a maximum allowed length of 32,767 (https://ask.sqlservercentral.com/questions/50819/too-long-sql-in-excel-vba.html).
I wondered whether there is a workaround for this, other than using a stored procedure. (I am not against that if there is a way to create a stored procedure at runtime and then execute it, but I cannot use a predefined stored procedure, as my query is different every time; hence the need for VBA to create it.)
Some more info about the dynamic query in VBA:
Column names, as well as parameters, are created dynamically and can be different every time
The query uses groups of lists of product numbers to generate an IN statement for each product group, then sums the sales for those products under the name of the group. These are then all UNION'd together to create one table with grouped records
Example of resulting query:
WITH SOME_CTE (SOME_FIELDS) AS
(SELECT SOME_STUFF
FROM SOME_TABLE
WHERE SOME_STUFF_IS_GOING_ON)
SELECT GEND "Gender", 'Attribute 1' "Attribute", SUM(UNITS) "Units", SUM(VAL) "Value", SUM(MARGIN) "Margin"
FROM SOME_CTE
WHERE PRODUCT IN ('12345', '23456', '34567', '45678')
GROUP BY GEND
UNION ALL
SELECT GEND, 'Attribute 2' ATTR_NAME, SUM(UNITS), SUM(VAL), SUM(MARGIN)
FROM SOME_CTE
WHERE PRODUCT IN ('01234', '02345', '03456', '03567')
GROUP BY GEND
ORDER BY "Gender", "Attribute"
...and so on.
As you can see, with 2 attribute groups containing 4 products each there is no problem, but when we get to about 30 with several hundred each, it could be too long.
Note: I have tried things like shortening field references in the repeated parts of the query string to 1 character etc. which helps but does not solve the problem.
Any help would be greatly appreciated.
One workaround is to send multiple queries. Since you are using UNION ALL, you could execute each single SELECT statement separately, i.e.:
create a table in (for example) the master database (don't create temporary tables! as they will be dropped after every query) - but before that, make sure you are creating a fresh table, so delete the old one if it exists (and also drop the table once you are done with it). Then change every single SELECT statement into an INSERT statement, which will insert records into your so-called temporary table.
This way you'll avoid lengthy queries; you'll just send single INSERT INTO ... SELECT statements.
At the end, to get all the results, you just need a simple SELECT query. After getting this data, you should drop that table, as it's no longer needed.
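Here is a rough Python sketch of that pattern (your code is VBA, but the statement-splitting logic is the same; the staging table name and column types are made up, and each INSERT inlines the CTE body as a derived table, since a CTE cannot span statements):
groups = {
    'Attribute 1': ['12345', '23456', '34567', '45678'],
    'Attribute 2': ['01234', '02345', '03456', '03567'],
}

# The CTE body from the original query, inlined so each INSERT is
# self-contained.
cte_body = 'SELECT SOME_STUFF FROM SOME_TABLE WHERE SOME_STUFF_IS_GOING_ON'

statements = [
    # Recreate the staging table (wrap the DROP in error handling if
    # your HANA version lacks IF EXISTS).
    'DROP TABLE IF EXISTS STAGING_RESULTS',
    'CREATE TABLE STAGING_RESULTS (GENDER NVARCHAR(20), '
    'ATTRIBUTE NVARCHAR(50), UNITS INTEGER, '
    'VALUE DECIMAL(15,2), MARGIN DECIMAL(15,2))',
]

for attr, products in groups.items():
    in_list = ', '.join("'%s'" % p for p in products)
    statements.append(
        "INSERT INTO STAGING_RESULTS "
        "SELECT GEND, '%s', SUM(UNITS), SUM(VAL), SUM(MARGIN) "
        "FROM (%s) SRC WHERE PRODUCT IN (%s) GROUP BY GEND"
        % (attr, cte_body, in_list))

# Each statement stays far below the 32,767-character CommandText limit;
# in VBA you would set CommandText to each one and refresh in turn.
for sql in statements:
    print(sql)

final_query = 'SELECT * FROM STAGING_RESULTS ORDER BY GENDER, ATTRIBUTE'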
Assuming you have a table with a field (column) that serves as the primary (partition) key (let's say its name is "id") and the rest of the columns are "regular" (no clustering) - let's call them "field1", "field2", "field3", "field4", etc. The logic that currently exists in the system might generate 2 separate update commands to the same row. For example:
UPDATE table SET field1='value1' WHERE id='key';
UPDATE table SET field2='value2' WHERE id='key';
These commands run one after the other at QUORUM consistency.
Occasionally, when you retrieve the row (a QUORUM read) from the DB, it's as if one of the updates did not happen. Is it possible that the inconsistency is caused by this write pattern, and can it be circumvented by making one update call like this:
UPDATE table SET field1='value1',field2='value2' WHERE id='key';
This is happening on Cassandra 2.1.17
Yes, this is totally possible.
If you need to preserve the order when making the two statements, you can do 2 things:
add USING TIMESTAMP to your queries and set the timestamp explicitly in client code - this will prevent the inconsistencies
use a batch (see the sketch below)
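For example, with the Python driver a logged batch applies both updates with a single timestamp (a sketch; the contact point and keyspace are assumptions, and the table/column names are taken from the question - note that "table" is a CQL keyword, so a real table would need a different name):
from cassandra.cluster import Cluster
from cassandra.query import BatchStatement, SimpleStatement

cluster = Cluster(['127.0.0.1'])          # assumed contact point
session = cluster.connect('my_keyspace')  # assumed keyspace

# Both updates share the batch's timestamp, so readers never see
# one update applied without the other.
batch = BatchStatement()
batch.add(SimpleStatement("UPDATE table SET field1=%s WHERE id=%s"),
          ('value1', 'key'))
batch.add(SimpleStatement("UPDATE table SET field2=%s WHERE id=%s"),
          ('value2', 'key'))
session.execute(batch)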
What I would have done is change the table definition:
CREATE TABLE TABLE_NAME(
id text,
field text,
value text,
PRIMARY KEY( id , field )
);
This way you don't have to worry about updates to fields for a particular key.
Your queries would be:
INSERT INTO TABLE_NAME (id , field , value ) VALUES ('key','fieldname1', 'value1' );
INSERT INTO TABLE_NAME (id , field , value ) VALUES ('key','fieldname2', 'value2' );
The drawback of this design is that if you have too much data for 'key', it will create a wide row.
For select queries:
SELECT * from TABLE_NAME where id ='key';
On the client side, build your object, for example:
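A small sketch of that client-side step, using the Python driver (contact point and keyspace assumed, names as above):
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])          # assumed contact point
session = cluster.connect('my_keyspace')  # assumed keyspace

rows = session.execute("SELECT field, value FROM TABLE_NAME WHERE id=%s",
                       ('key',))

# Collapse the one-row-per-field result set back into a single object.
obj = {row.field: row.value for row in rows}
print(obj)   # e.g. {'fieldname1': 'value1', 'fieldname2': 'value2'}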