Export data from SQL query to Excel sheet

I have a table with more than 3,000,000 rows. I have tried to export the data from it manually and with the SQL Server Management Studio Export Data functionality to Excel, but I have run into several problems:
when creating a .txt file manually by copying and pasting the data (in several passes, because copying all rows at once from SQL Server Management Studio throws an out-of-memory error), I am not able to open it with any text editor and copy the rows;
exporting the data to Excel does not work, because Excel does not support that many rows.
Finally, with the Export Data functionality I created a .sql file, but it is 1.5 GB and I am not able to open it in SQL Server Management Studio again.
Is there a way to import it with the Import Data functionality, or some other, cleverer way to back up the information in my table and then import it again if I need it?
Thanks in advance.

I am not quite sure I understand your requirements (I don't know whether you need to export your data to Excel or you want to make some kind of backup).
To export data from single tables, you could use the Bulk Copy Program (bcp), which lets you export the data from a single table to a file and import it back again. You can also use a custom query to export the data.
Note that this does not generate an Excel file, but a different format. You can use this to move data from one database to another (both must be MS SQL Server).
Examples:
Create a format file:
bcp [TABLE_TO_EXPORT] format nul -n -f "[FORMAT_FILE]" -S [SERVER] -E -T -a 65535
Export all Data from a table:
bcp [TABLE_TO_EXPORT] out "[EXPORT_FILE]" -f "[FORMAT_FILE]" -S [SERVER] -E -T -a 65535
Import the previously exported data:
bcp [TABLE_TO_EXPORT] in "[EXPORT_FILE]" -f "[FORMAT_FILE]" -S [SERVER] -E -T -a 65535
I redirect the output from the export/import operations to a logfile (by appending "> mylogfile.log" at the end of the commands) - this helps if you are exporting a lot of data.
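For illustration, here is the full sequence with hypothetical names (database MyDb, table dbo.BigTable, server SERVER01 are all placeholders), with each step's output redirected to a logfile:
bcp MyDb.dbo.BigTable format nul -n -f "BigTable.fmt" -S SERVER01 -T > format.log
bcp MyDb.dbo.BigTable out "BigTable.dat" -f "BigTable.fmt" -S SERVER01 -T -a 65535 > export.log
bcp MyDb.dbo.BigTable in "BigTable.dat" -f "BigTable.fmt" -S SERVER01 -E -T -a 65535 > import.log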

Here is a way of doing it without bcp:
EXPORT THE SCHEMA AND DATA IN A FILE
Use the SSMS wizard:
Database >> Tasks >> Generate Scripts… >> choose the table >> choose to script both the schema and the data
Save the SQL file (can be huge)
Transfer the SQL file on the other server
SPLIT THE DATA IN SEVERAL FILES
Use a program like textfilesplitter to split the file into smaller files of 10,000 lines each (so no single file is too big)
Put all the files in the same folder, with nothing else
IMPORT THE DATA IN THE SECOND SERVER
Create a .bat file in the same folder, named execFiles.bat
You may need to check the table schema and disable identity insert for the first file; you can re-enable it after the import is finished (see the sketch after the batch file).
This will execute all the .sql files in the folder against the server and database; the -f 65001 option tells sqlcmd to read the scripts as UTF-8, which handles the accents:
for %%G in (*.sql) do sqlcmd /S ServerName /d DatabaseName -E -i"%%G" -f 65001
pause
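Regarding the identity column: since every sqlcmd invocation is a separate session, each split file that inserts explicit identity values may need the switch at its top and bottom. A hedged T-SQL sketch, with dbo.MyTable as a placeholder name:
SET IDENTITY_INSERT dbo.MyTable ON;
-- ... the INSERT statements from the split file ...
SET IDENTITY_INSERT dbo.MyTable OFF;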

Related

Convert Access database into delimited format on Unix/Linux

I have an Access database file and I need to convert it into delimited file format.
The Access DB file has multiple tables and I need to create separate delimited files for each table.
So far I am not able to parse Access DB files with any Unix commands. Is there some way that I can do this on Unix?
You can use UCanAccess to dump Access tables to CSV files using the console utility:
gord@xubuntu64-nbk1:~/Downloads/UCanAccess$ ./console.sh
/home/gord/Downloads/UCanAccess
Please, enter the full path to the access file (.mdb or .accdb): /home/gord/ClientData.accdb
Loaded Tables:
Clients
Loaded Queries:
Loaded Procedures:
Loaded Indexes:
Primary Key on Clients Columns: (ID)
Copyright (c) 2019 Marco Amadei
UCanAccess version 4.0.4
You are connected!!
Type quit to exit
Commands end with ;
Use:
export [--help] [--bom] [-d <delimiter>] [-t <table>] [--big_query_schema <pathToSchemaFile>] [--newlines] <pathToCsv>;
for exporting the result set from the last executed query or a specific table into a .csv file
UCanAccess>export -d , -t Clients clientdata.csv;
UCanAccess>Created CSV file: /home/gord/Downloads/UCanAccess/clientdata.csv
UCanAccess>quit
Cheers! Thank you for using the UCanAccess JDBC Driver.
gord@xubuntu64-nbk1:~/Downloads/UCanAccess$
gord@xubuntu64-nbk1:~/Downloads/UCanAccess$ cat clientdata.csv
ID,LastName,FirstName,DOB
1,Thompson,Gord,2017-04-01 07:06:27
2,Loblaw,Bob,1966-09-12 16:03:00
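The console reads its commands from standard input, so in principle the per-table exports can be scripted. An untested sketch (the Orders table is hypothetical, and this assumes console.sh accepts piped input):
./console.sh <<'EOF'
/home/gord/ClientData.accdb
export -d , -t Clients clients.csv;
export -d , -t Orders orders.csv;
quit
EOF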

BCP Unexpected String Data, Right Truncation

Trying to import data into Azure. Created a text file and have tried both comma and tab delimited text files.
Here is the table the text file is to be inserted into:
CREATE TABLE [dbo].[Test] (
[Id] [uniqueidentifier] NOT NULL,
[first_name] [varchar] (50),
[last_name] [varchar] (50),
[dob] [varchar] (10),
[gender] [char] (1),
[phone] [char] (10))
BCP dbo.Test in C:\Test.txt -S "DBServerName" -d "DBName" -U "UserName" -P "Password" -c -r/r
I have tried saving the text file in different formats and with different encodings, but believe that it's correct to have it as UTF-16 with Unix LF. Any thoughts? Also, if there are nulls in the data (excluding the Id field), does that need to be specified somehow in the BCP statement? Thanks!
I think you can reference this document: Import data into Azure SQL Database with BCP Utility.
This tutorial talks about loading data from flat files (CSV) into Azure SQL Database.
From the document, we can get that:
1. Your data file needs to use the ASCII or UTF-16 encoding.
2. BCP does not support UTF-8.
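Since the file in the question is saved as UTF-16, the -w switch (Unicode character format) rather than -c should match that encoding. A hedged sketch reusing the placeholder names from the question:
bcp dbo.Test in C:\Test.txt -S "DBServerName" -d "DBName" -U "UserName" -P "Password" -w
With -w, bcp defaults to tab as the field terminator and newline as the row terminator, so the explicit -r switch may not be needed at all.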
Besides, you can also reference Format Files for Importing or Exporting Data.
I have a text data file in which one column has nulls.
Then I imported this file into my Azure SQL database dbleon1, specified nothing else, and it succeeded.
My bcp code:
bcp tb1 in C:\Users\leony\Desktop\tb1.txt -S *****.database.windows.net -d dbleon1 -U ServerAdmin -P ***** -q -c -t
I checked and the data is imported into tb1 in database dbleon1.
Hope this helps.

Linux shell script in importing csv data file to DB2

I am new to Linux and would like to seek your help. The task is to import CSV data into DB2 from a shell script on a scheduled run. The file has a header, which is why I used skipcount 1. The delimiter is a comma, and since that is the default I did not include COLDEL.
Can you help me troubleshoot why, upon running the script, we get the error below? I am using IMPORT with INSERT_UPDATE because I learned that the LOAD method deletes the whole contents of the table before importing the data from the CSV file. The existing data in the table should not be deleted: records should only be updated if there are changes in the CSV file, and otherwise a new record should be created.
I am also looking at which METHOD should be used for picking the specific values from the CSV file; currently I am using METHOD P. I am not so sure about the numbering inside its parameter list: does it signify how many columns are to be accessed, and should it tally with the columns I am importing from the file?
Below is the script snippet:
db2 connect to MYDB user $USERID using $PASSWORD
LOGFILE=/load/log/MYDBLog.txt
if [ -f /load/data/CUST_GRP.csv ]; then
db2 "import from /load/data/CUST_GRP.csv of del skipcount 1 modified by usedefaults METHOD P(1,2,3,4,5)
messages $LOGFILE
insert_update into myuser.CUST(NUM_ID,NUM_GRP,NUM_TEAM,NUM_LG,NUM_STATUS)";
else echo "/load/data/CUST_GRP.csv file not found." >> $LOGFILE;
fi
if [ -f /load/data/CUST_GRP.csv ]; then
db2 "import from /load/data/CUST_GRP.csv of del skipcount 1 modified by dateformat=\"YYYY-MM-DD\" timeformat=\"HH:MM:SS\" usedefaults METHOD P(1,2,3,4,5,6,7)
messages $LOGFILE
insert_update into myuser.MY_CUST(NUM_CUST,DTE_START,TME_START,NUM_CUST_CLSFCN,DTE_END,TME_END,NUM_CUST_TYPE)";
else echo "/load/data/CUST_GRP.csv file not found." >> $LOGFILE;
fi
The error I am encountering is this:
SQL0104N An unexpected token "modified" was found following "<identifier>".
Expected tokens may include: "INSERT". SQLSTATE=42601
Thank you!
You can't place clauses in arbitrary order in the IMPORT command.
Move the skipcount 1 clause so that it comes after modified by and METHOD, just before messages.
As for LOAD: it does not always empty the table; it can either INSERT a new portion of data or REPLACE the table contents, emptying the table at the beginning.
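Applied to the first command from the question, the reordered statement would look like this (an untested sketch; file, table and column names are taken from the question as-is):
db2 "import from /load/data/CUST_GRP.csv of del
modified by usedefaults
METHOD P(1,2,3,4,5)
skipcount 1
messages $LOGFILE
insert_update into myuser.CUST(NUM_ID,NUM_GRP,NUM_TEAM,NUM_LG,NUM_STATUS)";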

Qlikview - append data to Excel

I have a qvw file with a SQL query:
Data:
LOAD source, color, date;
select source, color, date
from Mytable;
STORE Data into [..\QV_Data\Data.qvd] (qvd);
Then I export the data to Excel and save it.
I need something that does this automatically instead of me.
I need to run the query every day and automatically send the data to Excel, but keep the old data in Excel and append the new values.
Can QlikView do that?
For that you would need to create some crazy macro that runs after a reload task via an on-open trigger. If you schedule a Windows task that executes a bat file with the path to qlikview.exe, the file path as a parameter and the -r flag for reload, you can probably accomplish this... there is a lot of code from similar projects to be found on Google.
I suggest adding this to the loadscript instead.
STORE Data into [..\QV_Data\Data.csv] (txt);
and then open that file in excel.
If you need to append data you could concatenate the new data onto the previous data... something like:
Data:
LOAD * FROM [..\QV_Data\Data.csv] (txt);
//add latest data
Concatenate(Data)
LOAD source, color, date from ...
STORE Data into [..\QV_Data\Data.csv] (txt);
I assume you have the desktop version, so you don't have access to the QlikView Management Console (if you do, that is obviously the best way).
So, without the Console, you should create a text file with this command: "C:\Program Files\QlikView\Qv.exe" /r "\\thePathToYourFile\test.qvw". Save this file with a .cmd file extension. After that you can schedule this command file with the Windows Task Scheduler.

How do I read data from an Excel worksheet in DB2

I need to write a stored procedure or function which reads data from a worksheet of an Excel workbook. How do I do that in DB2? I am using AIX.
I tried Read Excel from DB2 but it won't work on my OS.
I also tried
Import from FileName.csv of DEL COMMITCOUNT 1000 insert into TableName
but in vain.
You have several options; the cleanest is probably to write a Java stored procedure utilising the Apache POI library, if you intend to read Excel workbooks (.xls or .xlsx) rather than plain CSV-formatted text files.
Not as clean, but just as effective: you can write a Perl / Python / PHP script that reads the file and returns a line at a time, and invoke that script from a stored procedure; see Making Operating System Calls from SQL.
It would be better to convert your Excel file to a flat file like CSV if possible, because DB2 does not natively understand Excel files. A CSV file can be processed natively using the IMPORT, LOAD or INGEST tools of DB2.
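For the Java route, here is a minimal sketch of the worksheet-reading logic such a stored procedure would wrap, using the Apache POI usermodel API (the file name is a placeholder, and the CREATE PROCEDURE ... LANGUAGE JAVA registration on the DB2 side is omitted):
import java.io.File;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class ExcelReader {
    // Prints the first worksheet as comma-separated lines.
    public static void main(String[] args) throws Exception {
        try (Workbook wb = WorkbookFactory.create(new File("FileName.xlsx"))) {
            Sheet sheet = wb.getSheetAt(0);           // first worksheet
            DataFormatter fmt = new DataFormatter();  // render cells as displayed text
            for (Row row : sheet) {
                StringBuilder line = new StringBuilder();
                for (Cell cell : row) {
                    if (line.length() > 0) line.append(',');
                    line.append(fmt.formatCellValue(cell));
                }
                System.out.println(line);
            }
        }
    }
}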
