Robot Framework: save SQL results to Excel

I just started to automate our test process. I made the following test case:
Example:
*** Settings ***
Library    Selenium2Library
Library    Remote    http://localhost:0000/    WITH NAME    JDBCDB2    # DB2
Library    ExcelLibrary

*** Test Cases ***
Example
    Connect To Database    connection parm    login    password
    Store Query Result To File    select * from table where x=y    :\Testresults.txt
Which keyword can I use (instead of Store Query Result To File) so that I can read the query from a file, instead of writing out the full select statement?
And how can I write the SQL results to an Excel file, with the results separated into columns/records?
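For the second part, one way to sketch the idea outside Robot Framework is with plain Python: read the statement from a file, run it, and write one worksheet row per record. This is only a sketch under assumptions: query.sql and results.xlsx are hypothetical file names, sqlite3 stands in for the real DB2 connection, and openpyxl is assumed to be installed.
import sqlite3
from openpyxl import Workbook

# Read the SQL statement from a file instead of hard-coding it.
with open("query.sql") as f:               # hypothetical file name
    sql = f.read()

con = sqlite3.connect("example.db")        # stand-in for the real DB2 connection
cur = con.cursor()
cur.execute(sql)

wb = Workbook()
ws = wb.active
ws.append([d[0] for d in cur.description]) # header row: column names
for row in cur.fetchall():                 # one worksheet row per record
    ws.append(list(row))
wb.save("results.xlsx")                    # hypothetical output file
con.close()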

Related

Get data from between temporary table create and drop

I have optimized the performance of one of my SQLite scripts by adding a temporary table, so now it looks like this:
create temp table temp.cache as select * from (...);
--- Complex query.
select * from (...);
drop table temp.cache;
The issue with this solution is that I can no longer use Pandas' pd.read_sql_query, because it doesn't return any result and throws an exception saying I'm allowed to execute only a single statement.
What would you say is the preferable solution? I can think of two:
Plan-A: there is some trick to extract the data anyway, or
Plan-B: I need to call Python's SQLite execute function before and after using Pandas to handle the temporary table.
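Plan-B is straightforward to sketch: run the single statements through the sqlite3 API and keep only the complex SELECT for pandas. The temporary table is visible to read_sql_query because it lives on the same connection (data.db and the source query below are hypothetical):
import sqlite3
import pandas as pd

con = sqlite3.connect("data.db")   # hypothetical database file

# CREATE/DROP go through sqlite3 directly; each call is a single statement.
con.execute("CREATE TEMP TABLE temp.cache AS SELECT * FROM t WHERE x = 1")  # hypothetical source query

# pandas only ever sees a single SELECT, so read_sql_query works again.
df = pd.read_sql_query("SELECT * FROM temp.cache", con)

con.execute("DROP TABLE temp.cache")
con.close()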

Execution failed on sql ' SELECT * FROM Reviews WHERE Score != 3 LIMIT 5000': no such table: Reviews

I need help with the below issue I am facing:
I am trying to connect to SQLite and read the data using read_sql_query from pandas, and I am stuck with this error:
Execution failed on sql ' SELECT * FROM Reviews WHERE Score != 3 LIMIT 5000': no such table: Reviews
Below is the code snippet for opening the SQLite connection:
import os
import sqlite3

con = sqlite3.connect('database.sqlite')
print(con)
os.getcwd()
os.listdir()
Output of the above code (note the Reviews.csv file in the directory):
<sqlite3.Connection object at 0x00000240C2C6C570>
['.ipynb_checkpoints',
'03 Amazon Fine Food Reviews Analysis_KNN.ipynb',
'Assignment_SAMPLE_SOLUTION.ipynb',
'database.sqlite',
'K NN Implementation with Sample Data for regression and classification.ipynb',
'Reviews.csv']
Now, as the file is in the directory, I use this:
import sqlite3
import pandas as pd
import numpy as np
con = sqlite3.connect('database.sqlite')
filtered_data = pd.read_sql_query(""" SELECT * FROM Reviews WHERE Score != 3 LIMIT 5000""", con)
The above snippet of code gives the error:
Execution failed on sql ' SELECT * FROM Reviews WHERE Score != 3 LIMIT 5000': no such table: Reviews
Can anyone please let me know where I am going wrong?
The error message you get is explicit: there is no table Reviews in your database 'database.sqlite'.
The .csv file you mention is just a CSV file, and by definition you can run SQL queries only on databases.
To find out what tables are available in the database, go to your SQLite command line by entering something like sqlite3 database.sqlite and use the command .tables. This will give you the list of tables in this database.
If you want to learn more about SQLite, you can use sqlitetutorial.net, for example.
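If the sqlite3 command line is not at hand, the same check, plus one way to create the missing table, can be done from pandas itself. A sketch, assuming Reviews.csv has the columns the query expects:
import sqlite3
import pandas as pd

con = sqlite3.connect('database.sqlite')

# List the tables that actually exist in the database file.
print(pd.read_sql_query(
    "SELECT name FROM sqlite_master WHERE type = 'table'", con))

# If Reviews is missing, one option is to load the CSV into a table first.
df = pd.read_csv('Reviews.csv')
df.to_sql('Reviews', con, if_exists='replace', index=False)

filtered_data = pd.read_sql_query(
    "SELECT * FROM Reviews WHERE Score != 3 LIMIT 5000", con)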
Give the actual location of the SQLite database file, and if it still doesn't work, pass the path as a raw string: r'location'.
As Bluexm said, there is no table available called Reviews.
I was using Colab; I followed the steps below and it worked for me. Maybe you can try these steps:
Generate a JSON API key from your Kaggle profile (Account -> Generate API Key)
Upload that JSON file to the "/root/.kaggle/" folder
Download the dataset using the API key
e.g.
!kaggle datasets download -d snap/amazon-fine-food-reviews
!unzip archive
con = sqlite3.connect('/content/database.sqlite')
filtered_data = pd.read_sql_query(""" SELECT * FROM Reviews WHERE Score != 3 LIMIT 5000""", con)

Hive query result to Excel

I am a newbie to Hadoop and Hive. My current requirement is to collect stats on the number of records loaded into 15 tables on each run day, instead of executing each select count(*) query and copying the output to Excel manually. Could anyone suggest the best method to automate this task?
Note: we do not have any GUI to run Hive queries; we submit Hive queries in a normal Unix terminal.
Export to a CSV or TSV file, then open the file in Excel. Normally Hive generates a TSV (tab-separated) file. This is how to transform it to comma-separated if you prefer CSV:
hive -e "SELECT 'table1' as source, count(*) cnt FROM db.table1
UNION ALL
SELECT 'table2' as source, count(*) cnt FROM db.table2" | tr "\t" "," > mydata.csv
Add more tables to the query.
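With 15 tables, the UNION ALL query can also be generated instead of typed out. A sketch in Python, assuming hypothetical table names and that the hive CLI is on the PATH:
import subprocess

tables = ["db.table%d" % i for i in range(1, 16)]   # hypothetical table names

# Build one UNION ALL query that counts every table in a single Hive run.
query = "\nUNION ALL\n".join(
    "SELECT '%s' AS source, count(*) AS cnt FROM %s" % (t, t) for t in tables)

# Run hive -e and turn the tab-separated output into CSV, as above.
out = subprocess.run(["hive", "-e", query],
                     capture_output=True, text=True).stdout
with open("mydata.csv", "w") as f:
    f.write(out.replace("\t", ","))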
You can mount the directory in which you are writing the output file in Windows using Samba/NFS. Schedule the command using crontab and voila, every day you have an updated file.
You can also connect directly using ODBC drivers:
https://mapr.com/blog/connecting-apache-hive-to-odbc/
https://learn.microsoft.com/en-us/azure/hdinsight/hadoop/apache-hadoop-connect-excel-hive-odbc-driver

How can I send multiple queries in jaydebeapi in Python (Netezza JDBC)

How can I send multiple queries in a single execute statement? For example, a simplified version of the query I am trying to execute using jaydebeapi is:
Create temp table tempTable as
select * from table1 where x=y;
select * from tempTable;
UPDATE: I am using Netezza JDBC.
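jaydebeapi's cursor runs one statement per execute call, so a common workaround is to split the script and execute each piece separately on the same connection, where the temp table stays visible. A sketch with hypothetical Netezza connection parameters (the naive split on ";" assumes no semicolons inside string literals):
import jaydebeapi

# Hypothetical Netezza JDBC connection parameters.
con = jaydebeapi.connect(
    "org.netezza.Driver",
    "jdbc:netezza://host:5480/MYDB",
    ["user", "password"],
    "nzjdbc.jar")
cur = con.cursor()

script = """
CREATE TEMP TABLE tempTable AS
SELECT * FROM table1 WHERE x = y;
SELECT * FROM tempTable;
"""

# Execute each statement on its own; the temp table survives because
# both statements run on the same session/connection.
for stmt in filter(None, (s.strip() for s in script.split(";"))):
    cur.execute(stmt)

rows = cur.fetchall()   # results of the last statement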

QlikView - append data to Excel

I have a qvw file with a SQL query:
Data:
LOAD source, color, date;
select source, color, date
from Mytable;
STORE Data into [..\QV_Data\Data.qvd] (qvd);
Then I export the data to Excel and save it.
I need something to do that automatically instead of me.
I need to run the query every day and automatically send the data to Excel, but keep the old data in Excel and append the new values.
Can QlikView do that?
For that you would need to create some crazy macro that runs after a reload task in an on-open trigger. If you schedule a Windows task that executes a bat file with the path to qlikview.exe, the file path as a parameter, and the -r flag for reload, you can probably accomplish this... there is a lot of code from similar projects to be found on Google.
I suggest adding this to the loadscript instead.
STORE Table into [..\QV_Data\Data.csv] (txt);
and then open that file in excel.
If you need to append data you could concatenate the new data onto the previous data, something like:
Data:
LOAD * FROM [..\QV_Data\Data.csv] (txt);
//add latest data
Concatenate(Data)
LOAD source, color, date from ...
STORE Data into [..\QV_Data\Data.csv] (txt);
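If the target really has to be an .xlsx workbook rather than a CSV, the append step can also be scripted outside QlikView. A Python sketch using openpyxl, with hypothetical file names, that appends the freshly stored CSV rows onto an existing workbook:
import csv
from openpyxl import load_workbook

wb = load_workbook("Data.xlsx")         # hypothetical existing workbook
ws = wb.active

with open("Data.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)                        # skip the header row
    for row in reader:
        ws.append(row)                  # old rows stay, new rows are appended

wb.save("Data.xlsx")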
I assume you have the desktop version, so you don't have access to the QlikView Management Console (if you do, that is obviously the best way).
So, without the Console, you should create a text file with this command: "C:\Program Files\QlikView\Qv.exe" /r "\\thePathToYourFile\test.qvw". Save this file with a .cmd file extension. After that you can schedule this command file with the Windows Task Scheduler.
