Optimizing a SELECT from Oracle to INSERT into SQLite (in Python3) - python-3.x

I have an Oracle database from which I'm extracting a subset of the data into a local SQLite database. My code is basically the following:
print(datetime.datetime.now())
#Oracle portion of the script
sql = '''select [columns] from [table] where [condition]'''
oracle_cursor.execute(sql)
print(datetime.datetime.now())
#SQLite portion of the script
sqlite_conn.executemany('''INSERT into [table]
([columns])
values (?,?,?...etc.)''',
oracle_cursor.fetchall()
)
sqlite_conn.commit()
sqlite_conn.close()
It does what I need it to, but it takes longer than I would like. The Oracle portion of the script is actually surprisingly fast, at around 3 minutes. But the inserting takes much longer. I've played around with SQLite settings such as buffer sizes, etc. Nothing seems to break 50 rows/second. There is a spike in network activity for the first three minutes, but once the second datetime from above is printed, there's no network activity, which leads me to believe the bottleneck is something I've coded. Is my code inefficient at inserting? If so, is there a better way to get what I'm after?

fetchall() loads all the data into memory at once. This is not necessary, because executemany can work with any iterator; replace oracle_cursor.fetchall() with oracle_cursor itself.
Also ensure that you are using a single transaction. (If you have enabled autocommit mode, you should start a transaction explicitly.)
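Putting both suggestions together, a minimal sketch might look like the following (assuming cx_Oracle on the Oracle side and the standard sqlite3 module; the connect strings and the [columns]/[table] placeholders are stand-ins, not real names):

import sqlite3
import cx_Oracle

# Oracle portion: fetch in large batches instead of one big fetchall()
oracle_conn = cx_Oracle.connect('user/password@host/service')  # placeholder credentials
oracle_cursor = oracle_conn.cursor()
oracle_cursor.arraysize = 5000
oracle_cursor.execute('''select [columns] from [table] where [condition]''')

# SQLite portion: one transaction, and pass the cursor itself to executemany()
sqlite_conn = sqlite3.connect('local.db')
with sqlite_conn:  # commits on success, rolls back on error
    sqlite_conn.executemany('''INSERT into [table]
                               ([columns])
                               values (?,?,?)''',  # match the placeholders to [columns]
                            oracle_cursor)  # the cursor is an iterator of row tuples
sqlite_conn.close()
oracle_conn.close()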

Related

MariaDB got much faster, and I can't find the cause?

I have a concern about my MariaDB 10.4.12 database's query execution time, which has become much faster without any update to my database schema or data. While a speed-up is always welcome, I am concerned about its root cause, especially since I have not rolled out any changes in the last 24 hours. This specific query has sped up 60x overnight.
I have a NodeJS web application that filters a large dataset into "reporting" pages, which typically take 10-12 seconds to load. My main table has 3.5 million rows and the base query involves many joins, date comparisons, and text comparisons. There is room for fine-tuning the query, but it worked for what it was designed to do and I could live with 10 second load times. I noticed this morning, though, that my queries were executed in less than 1 second, without any recent changes on my part.
The most recent change to the application was pushed out five days ago, which affected the amount of data being pulled into this database. A separate application on the same server reaches out to a data set every 10 minutes and replicates these rows into the same database the "reporting" application communicates with. Up until this update, the query was collecting and inserting ~80,000 rows on average, taking about 8-10 seconds to fully replicate the data into this database. My change five days ago reduced the rows being inserted to ~20,000 on average.
Other clues:
phpMyAdmin still takes 10-12 seconds to run the query, while the MySQL command-line tool runs it in less than 1 second
The MariaDB temp directory was changed to a larger partition 7 days ago
The query was tested to be slow (10-12 seconds) 24 hours ago
The query is still slow on a pre-production server that runs the same application with an identical MySQL instance running (same schema and data)
My current running theory is that the ~80,000 inserts were not being executed in the time range being reported by NodeJS (8-10 seconds for the inserts), and they were instead waiting in the MariaDB temp directory until they could be fully written to the database. That would suggest that the database was constantly bogged down by these writes, and reducing the number to ~20k allowed the database to insert faster, allowing the select queries to run faster this morning.
Should I be concerned about this speed up? Could MariaDB have found a faster way to index my data? Am I going crazy?
Thank you.
Don't worry. This kind of thing can be caused by contention (multiple database clients using the database concurrently) and all sorts of other things.
(Cherish this moment. Performance usually goes the other direction.)
You can test for correctness to increase your confidence level. Check a few older and a few newer records to see if they still contain good data.
Or run a full-table-scan query, something like this:
SELECT COUNT(*), AVG(some_number_column), MIN(some_text_column) FROM mytable
That will take a while but it will hit every row in the table.
You probably don't need to do this, but it's a way to double-check (and to tell your boss, "I double-checked.").
10 seconds, then 1 second. That is "normal".
The first was run when none of the data was cached in RAM; the second was with all cached.
Run it a third time; it will be 1 second again.
Restart MariaDB and run it again; it will again take 10 seconds.
Walk away from the machine for a long time; don't touch the table. It might be back to 10 seconds. For this, look at the size of RAM and innodb_buffer_pool_size. Also look for big table scans that bump everything out of the cache.
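If you want to see the cold-vs-warm-cache effect for yourself, a rough sketch along these lines times the same full-scan query twice and prints the buffer pool size (using Python with the mysql-connector driver purely for illustration; the host, credentials, and table/column names are placeholders):

import time
import mysql.connector  # assumed driver; any MySQL/MariaDB client works

conn = mysql.connector.connect(host='localhost', user='user',
                               password='password', database='mydb')
cur = conn.cursor()
query = "SELECT COUNT(*), AVG(some_number_column), MIN(some_text_column) FROM mytable"

for attempt in (1, 2):
    start = time.monotonic()
    cur.execute(query)
    print(attempt, cur.fetchone(), round(time.monotonic() - start, 2), 'seconds')
    # the first run has to read from disk into the buffer pool; the second should be much faster

cur.execute("SHOW VARIABLES LIKE 'innodb_buffer_pool_size'")
print(cur.fetchone())  # how much RAM InnoDB can use for caching

cur.close()
conn.close()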

Cassandra - Truncate a table while inserts in progress

I want to understand how the truncate command works in Cassandra (version 3.9) to be able to know what would happen in the following scenario:
I have about 100GB of data in a production table that needs to be truncated.
I want to truncate this table while, at the same time, a few hundred insert requests per second keep coming in.
I am trying to understand, theoretically, how this would play out.
Would the truncate try to acquire some sort of lock on the table before it can proceed, and possibly block the insert requests or itself time out?
Or would the truncate go through in sequence as the request came in, with the following insert requests creating additional rows, so that I would end up with a small number of rows remaining after the truncate?
I am just trying to reclaim space, so I am not particularly concerned if a small amount of data remains from the insert requests run after the truncate command.
I am just trying to understand if you'd expect this to complete successfully or it would fail / time-out.
I will try to run a similar scenario on a smaller cluster, but I'm not sure if that will be a good substitute to understand the actual behavior. Any inputs will be helpful.
Truncate sends a message to all the nodes with a request to delete all the SSTables at the moment of execution; you will keep only the information from those upserts received after the truncate was issued.
In the DataStax documentation it is stated that this is done with JMX, but looking at the comments of this answer, it is done with CQL and the messaging service.
If you are trying to reclaim disk space, please note that a snapshot will be created by the truncate if auto_snapshot is set to true (true is the default value), so you will need to remove the snapshot after the command has run. Also note that truncate requires all the nodes to be up and healthy in order to complete.
I tried this for myself. On a 2-node Cassandra cluster I made inserts at about 160 requests per second in the background and ran a truncate query on the same table, which had about 200,000 records.
The table got truncated and the inserts continued without an error.
The new rows inserted after the truncate showed on the DB.
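For reference, the test described above can be reproduced with a few lines of the DataStax Python driver (a rough sketch only; the contact point, keyspace, and a table defined as (id int PRIMARY KEY, payload text) are assumptions, not taken from the question):

import threading
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])          # placeholder contact point
session = cluster.connect('my_keyspace')  # placeholder keyspace
insert = session.prepare("INSERT INTO my_table (id, payload) VALUES (?, ?)")

def insert_rows(n):
    # background writer, roughly simulating continuous insert traffic
    for i in range(n):
        session.execute(insert, (i, 'some payload'))

writer = threading.Thread(target=insert_rows, args=(200_000,))
writer.start()

# Issue the truncate while the inserts are still running; give it a generous timeout.
session.execute("TRUNCATE my_keyspace.my_table", timeout=120)

writer.join()
# Only rows written after the truncate was processed should remain.
print(session.execute("SELECT COUNT(*) FROM my_table").one())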

Cassandra data model: too many tables

I have a single structured row as input with a write rate of 10K per second. Each row has 20 columns. Some queries should be answered from these inputs. Because most of the queries need a different WHERE, GROUP BY or ORDER BY, the final data model ended up like this:
primary key for table of query1 : ((column1,column2),column3,column4)
primary key for table of query2 : ((column3,column4),column2,column1)
and so on
I am aware of the limit on the number of tables in a Cassandra data model (200 is a warning and 500 would fail).
Because for every input row I have to do an insert into every table, the final writes per second become big * big data!:
writes per seconds = 10K (input)
* number of tables (queries)
* replication factor
The main question: am I on the right path? Is it normal to have a table for every query even when the input rate is already so high?
Shouldn't I use something like Spark or Hadoop instead of relying on the bare data model? Or even HBase instead of Cassandra?
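To make the write fan-out described above concrete, here is a rough sketch of the per-row work with one denormalized table per query (illustrative only; the table and column names are placeholders, and the DataStax Python driver is used just as an example client):

from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])    # placeholder contact point
session = cluster.connect('my_ks')  # placeholder keyspace

# One prepared insert per query table; each table holds the same columns
# but with a different PRIMARY KEY, as described above.
inserts = [
    session.prepare("INSERT INTO table_query1 (column1, column2, column3, column4) VALUES (?, ?, ?, ?)"),
    session.prepare("INSERT INTO table_query2 (column1, column2, column3, column4) VALUES (?, ?, ?, ?)"),
    # ... one per query table
]

def write_row(row):
    # Every incoming row is written once per query table, so the cluster sees
    # 10K rows/s * number of tables * replication factor actual writes per second.
    for stmt in inserts:
        session.execute(stmt, (row['column1'], row['column2'],
                               row['column3'], row['column4']))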
It could be that Elassandra would resolve your problem.
The query system is quite different from CQL, but the duplication for indexing would automatically be managed by Elassandra on the backend. All the columns of one table will be indexed so the Elasticsearch part of Elassandra can be used with the REST API to query anything you'd like.
In one of my tests, I pushed a huge amount of data (8GB) into an Elassandra database, going non-stop, and I never timed out. The search engine also remained ready pretty much the whole time. More or less what you are talking about. The docs say that it takes 5 to 10 seconds for newly added data to become available in the Elassandra indexes. I guess it will depend somewhat on your installation, but I think that's more than enough speed for most applications.
The use of Elassandra may sound a bit hairy at first, but once in place, it's incredible how fast you can find results. It includes a powerful WHERE for sure. The GROUP BY is a bit more difficult to put in place. The ORDER BY is simple enough; however, when (re-)ordering you lose some speed... something to keep in mind. In my tests, though, even the ORDER BY equivalents were very fast.

MemSQL - why can't I do a cross-database insert into .. select

I'm trying to do a simple insert with a field list from a table in one database to a table in another.
insert into db_a.target_table (field1,field2,field3) select field1,field2,field3 from db_b.source_table;
The error message seems straightforward:
MemSQL does not support this type of query: Cross-database INSERT ... SELECT
Oddly enough, this example does work:
insert into db_a.target_table select * from db_b.source_table;
But this seems like such a common scenario. Has anyone run into a similar issue, and were you able to work around it?
Unfortunately, this isn't allowed because it is difficult to keep such queries transactional; multi-statement transactions are used internally to guarantee transactionality of the single insert-select (if one partition fails (dup key or something), we want to rollback everything!). Since we don't have cross-db multi-statement transactions (yet!), we don't have cross-db insert-select (yet!).
Stay tuned for nicer solutions.
However, if you REALLY want to do this, here is what you do. Be warned:
PROCEED AT YOUR OWN RISK. THIS IS NOT A SUPPORTED PROCEDURE.
But it should work.
1) On db_b, create a table with the same columns as source_table, but make the shard key SHARD().
2) On db_a, run SHOW PARTITIONS.
3) For each of those partitions, create a connection to db_a_<ordinal> on the host and port listed in SHOW PARTITIONS. Run SHOW DATABASES on that connection and you'll see some databases called db_b_<another>. Pick one, it doesn't matter which. Run INSERT INTO db_b_<another>.source_table SELECT * from db_a_<ordinal>.source_table.
3.5) At this point, you haven't yet written to a table you care about, but now we will. Look at db_b.source_table. Is everything correct? Is all the data there? Run SHOW CREATE TABLE and double check the shard key is SHARD KEY () (it should be in comments). Everything look good? Ok, we can proceed.
4) After you're done doing this for EVERY partition, you can do INSERT INTO db_b.target_table (cols) SELECT cols from db_b.source_table, or whatever you want.
Good luck!
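If the partition-level procedure above feels too risky, a simpler (if slower) alternative is to do the copy client-side: select from one database and insert into the other over two connections. This is not part of the answer above, just a generic workaround; since MemSQL speaks the MySQL wire protocol, any MySQL client library can be used (here pymysql, with placeholder host and credentials):

import pymysql  # any MySQL-compatible client library works

src = pymysql.connect(host='memsql-host', user='user', password='password', database='db_b')
dst = pymysql.connect(host='memsql-host', user='user', password='password', database='db_a')

with src.cursor() as read_cur, dst.cursor() as write_cur:
    read_cur.execute("SELECT field1, field2, field3 FROM source_table")
    while True:
        rows = read_cur.fetchmany(5000)  # copy in batches to limit memory use
        if not rows:
            break
        write_cur.executemany(
            "INSERT INTO target_table (field1, field2, field3) VALUES (%s, %s, %s)",
            rows)
    dst.commit()

src.close()
dst.close()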

First preparedStatement using Cassandra always slow

I noticed that if I have a Java method in which I have a PreparedStatement using the JDBC driver that comes with Cassandra, it is always slow. But if I put the same query twice in the method, the second time it is 20x faster. Why is that? I would think the second, third, fourth time I call the Java method it would be faster than the first. I am using Cassandra 1.2.5. I have also cached 100MB of rows in the row cache and set the table to caching = "all". In cassandra-cli I verified the settings. And in cassandra-cli I verified that the second, third, and fourth time I get the rows from the same table I run the JDBC calls against, I get faster response times.
Any Ideas?
Thanks,
-Tony
From the all-knowing CQL3 documentation (always a great starting point, btw):
Prepared statement is an optimization that allows to parse a query only once but execute it multiple times with different concrete values.
The statement gets cached. That is the difference you are experiencing. Prepared statements also get pre-compiled, typically meaning an execution plan is prepared before the query is run against the db; knowing the query in advance makes the process faster.
On the first run your prepared statement is parsed and cached in case you run the same query again, which you do, and since it is cached the query executes much faster.
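In practice that means you want to prepare the statement once (for example when the class or module is initialized) and reuse it on every call, rather than re-preparing it inside the method. A rough sketch with the DataStax Python driver (the question uses JDBC, but the idea is the same; the contact point, keyspace, and table are placeholders):

from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])          # placeholder contact point
session = cluster.connect('my_keyspace')  # placeholder keyspace

# Prepare once: the server parses the query a single time and caches it.
select_by_id = session.prepare("SELECT * FROM my_table WHERE id = ?")

def get_row(row_id):
    # Every call reuses the already-prepared statement,
    # so only the bound value has to be sent and processed.
    return session.execute(select_by_id, (row_id,)).one()

print(get_row(1))  # first call
print(get_row(2))  # subsequent calls stay fast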
