BCP QUERYOUT: how to change the exported length of all decimal fields

I export SQL tables to txt files (using the code below, run in SSMS).
All the columns are exported fine except the decimal columns,
which are exported as 41 characters (I want 19 characters),
even though the column is defined as decimal(14,4).
How can I change the settings so the columns are exported at the size I want?
Notes:
bcp exports the decimal value 0.0 as .0000, which is what I need
my tables are very big, so I can't wrap every column in SUBSTRING: a lot of columns and a lot of rows
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE
-------------
DECLARE @stmtQuery VARCHAR(8000), @stmt VARCHAR(8000);
--fetch the data from the table
SET @stmtQuery = 'SELECT * FROM myDB.dbo.HugeTable' --a lot of decimal columns, a lot of rows
--copy data to txt file
SET @stmt = 'BCP "' + @stmtQuery +
'" QUERYOUT "path\to\file\TableData.txt" -t -C RAW -S ' + @@SERVERNAME + ' -d ' + DB_NAME() + ' -e path\to\file\log.txt -c -r"0x0A" -T';
EXEC master.sys.xp_cmdshell @stmt;
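One workaround, since hand-editing hundreds of columns is off the table, is to do the conversion in the query itself and generate the column list automatically from INFORMATION_SCHEMA. A minimal sketch (assumes SQL Server 2017+ for STRING_AGG; VARCHAR(19) is the width asked for above):
DECLARE @cols NVARCHAR(MAX);

-- Build a SELECT list that converts every decimal/numeric column to
-- VARCHAR(19), so bcp -c writes the converted text instead of the
-- widest possible decimal field; all other columns pass through as-is.
SELECT @cols = STRING_AGG(CONVERT(NVARCHAR(MAX),
           CASE WHEN DATA_TYPE IN ('decimal', 'numeric')
                THEN 'CONVERT(VARCHAR(19), ' + QUOTENAME(COLUMN_NAME) + ') AS ' + QUOTENAME(COLUMN_NAME)
                ELSE QUOTENAME(COLUMN_NAME)
           END), ', ') WITHIN GROUP (ORDER BY ORDINAL_POSITION)
FROM myDB.INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'HugeTable';

-- Then build the query with @cols instead of *:
-- SET @stmtQuery = 'SELECT ' + @cols + ' FROM myDB.dbo.HugeTable';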

Related

Excel file generation using SQL query, SPOOL command in Shell script

I have to generate an Excel file from tables in an Oracle database. The code works fine, except that the column names are not coming out complete: they are truncated to the width of their data.
I want the complete header/column names.
The column names come out like this:
ST STAT_TYPE_DESC ST S NXT_STATME DELAY_DAYS ANN_PREM_LIM_LOW ANN_PREM_LIM_HI CONTRIB_HIST_LEN EVENT_DO C P
But what I want is the complete column names; for example, ST should be STATEMENT_TYPE_ID.
#!/bin/ksh
FILE="A.csv"
sqlplus -s lifelite/lifelite@dv10 <<EOF
SPOOL $FILE
SET HEADING ON
SET HEADSEP OFF
SET FEEDBACK OFF
SET LINESIZE 250
SET PAGESIZE 5000 embedded ON
SET COLSEP ","
SELECT * FROM TLLI_01150STATTYPE;
SPOOL OFF
EOF
exit 0
Before your SELECT, add a new one with the column names:
SELECT 'STATEMENT_TYPE_ID' || ',' || 'STAT_TYPE_DESC' || ',' || ... FROM dual;
And SET HEADING OFF, so the truncated auto-generated header is suppressed.
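Putting that together with the original script, a minimal sketch (only the first two column names are spelled out; extend the dual SELECT with the rest):
#!/bin/ksh
# Sketch: the first SELECT emits the full header row from dual,
# HEADING OFF suppresses the truncated auto-generated headers.
FILE="A.csv"
sqlplus -s lifelite/lifelite@dv10 <<EOF
SET HEADING OFF
SET FEEDBACK OFF
SET LINESIZE 250
SET PAGESIZE 5000
SET COLSEP ","
SPOOL $FILE
-- extend this header SELECT with the remaining column names
SELECT 'STATEMENT_TYPE_ID' || ',' || 'STAT_TYPE_DESC' FROM dual;
SELECT * FROM TLLI_01150STATTYPE;
SPOOL OFF
EOF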

Shell script to fetch sql query data in csv file

I need to extract the data from the query below, along with a header, into a CSV file using a shell script.
Here is the query:
SELECT SourceIdentifier,SourceFileName,ProfitCentre2,PlantCode,
tax_retur ReturnPeriod,document_number DocumentNumber,TO_CHAR(invoice_generation_date,'YYYY-MM-DD')
Docume,Original,customer_name CustomerName,NVL(sns_pos,new_state_code)POS,PortCode,NEW_HSN_CODE HSNorSAC,(SGSATE+UTGSATE) Stat,(SGS+UT)StateUT,Userde FROM arbor.INV_REPO_FINA WHERE UPPER(document_type)='INV' AND UPPER(backout_flag)='VALID' AND new_gst_id_new IS NOT NULL AND new_charges<>0 AND taxable_adj=0
UNION
SELECT SourceIdentifier,SourceFileName,ProfitCentre2,PlantCode,
tax_retur ReturnPeriod,document_number DocumentNumber,TO_CHAR(invoice_generation_date,'YYYY-MM-DD')
Docume,Original,customer_name CustomerName,NVL(sns_pos,new_state_code)POS,PortCode, NEW_HSN_CODE HSNorSAC,(SGSATE+UTGSATE) Stat,(SGS+UTG)StateUT,Userde FROM arbor.INV_REPO_FINA WHERE UPPER(document_type)='INV' AND UPPER(backout_flag)='VALID' AND new_gst_id_new IS NOT NULL AND new_charges<>0 AND taxable_adj<>0
Could you please let me know whether the approach below to fetch the data using a shell script is correct, and whether the script itself is correct?
#!/bin/bash
file="output.csv"
sqlplus -s username/password@Oracle_SID << EOF
SPOOL $file
select 'SourceIdentifier','SourceFileName','ProfitCentre2','PlantCode',
'tax_retur ReturnPeriod','document_number DocumentNumber','TO_CHAR(invoice_generation_date,'YYYY-MM-DD') Docume','Original','customer_name CustomerName','NVL(sns_pos,new_state_code)POS','PortCode','NEW_HSN_CODE HSNorSAC','(SGSATE+UTGSATE) Stat','(SGS+UT)StateUT','Userde' from dual
Union all
select 'TO_CHAR(SourceIdentifier)','TO_CHAR(SourceFileName)','TO_CHAR(ProfitCentre2)','TO_CHAR(PlantCode)',
'TO_CHAR(tax_retur ReturnPeriod)','TO_CHAR(document_number DocumentNumber)','TO_CHAR(invoice_generation_date,'YYYY-MM-DD')
Docume','TO_CHAR(Original)','TO_CHAR(customer_name CustomerName)','TO_CHAR(NVL(sns_pos,new_state_code)POS)','TO_CHAR(PortCode)','TO_CHAR(NEW_HSN_CODE HSNorSAC)','TO_CHAR((SGSATE+UTGSATE) Stat)','TO_CHAR((SGS+UT)StateUT)','TO_CHAR(Userde)' from
(SELECT SourceIdentifier,SourceFileName,ProfitCentre2,PlantCode,
tax_retur ReturnPeriod,document_number DocumentNumber,TO_CHAR(invoice_generation_date,'YYYY-MM-DD')
Docume,Original,customer_name CustomerName,NVL(sns_pos,new_state_code)POS,PortCode,NEW_HSN_CODE HSNorSAC,(SGSATE+UTGSATE) Stat,(SGS+UT)StateUT,Userde FROM arbor.INV_REPO_FINA WHERE UPPER(document_type)='INV' AND UPPER(backout_flag)='VALID' AND new_gst_id_new IS NOT NULL AND new_charges<>0 AND taxable_adj=0
UNION
SELECT SourceIdentifier,SourceFileName,ProfitCentre2,PlantCode,
tax_retur ReturnPeriod,document_number DocumentNumber,TO_CHAR(invoice_generation_date,'YYYY-MM-DD')
Docume,Original,customer_name CustomerName,NVL(sns_pos,new_state_code)POS,PortCode, NEW_HSN_CODE HSNorSAC,(SGSATE+UTGSATE) Stat,(SGS+UTG)StateUT,Userde FROM arbor.INV_REPO_FINA WHERE UPPER(document_type)='INV' AND UPPER(backout_flag)='VALID' AND new_gst_id_new IS NOT NULL AND new_charges<>0 AND taxable_adj<>0)
SPOOL OFF
EXIT
EOF
In short: the ; is missing from the end of the select statement.
Some unsolicited advice:
I think SPOOL will put extra stuff into your file (at least some newlines); a redirect is better. Furthermore, the first line is not DB-related:
echo "SourceIdentifier;SourceFileName;ProfitCentre2..." > $file
I recommend generating the CSV format right in the SELECT query; doing it later is more of a headache (and you can escape there whatever you want):
query="select SourceIdentifier || ';' || SourceFileName || ';' || ProfitCentre2 ... ;"
So, querying the DB (I think the capital -S is the right one) plus the formatting of the records (and maybe you want to format your columns too):
sqlplus -S username/password@Oracle_SID >> $file << EOF
set linesize 32767 pagesize 0 heading off
$query
EOF
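Assembled into one script, a minimal sketch of this approach (credentials/SID are the placeholders from the question; the column list is shortened here):
#!/bin/bash
# Sketch: header row via echo, data rows via sqlplus with the CSV
# formatting done inside the SELECT itself.
file="output.csv"
echo "SourceIdentifier;SourceFileName;ProfitCentre2" > "$file"
query="SELECT SourceIdentifier || ';' || SourceFileName || ';' || ProfitCentre2 FROM arbor.INV_REPO_FINA;"
sqlplus -S username/password@Oracle_SID >> "$file" <<EOF
set linesize 32767 pagesize 0 heading off feedback off
$query
EOF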
This one works for me, but an empty line shows up before the output of the first and the second query. Remove the empty lines with an awk command:
#!/bin/bash
FILE="A.csv"
$ORACLE_HOME/bin/sqlplus -s username/password@Oracle_SID <<EOF
SET PAGESIZE 50000 COLSEP "," LINESIZE 20000 FEEDBACK OFF HEADING off
SPOOL $FILE
select 'TYPE_OF_CALL_V','SWITCH_CALL_TYPE_V','RECORD_TYPE_V','TARF_TYPE_V' from dual;
SELECT TYPE_OF_CALL_V,SWITCH_CALL_TYPE_V,RECORD_TYPE_V,TARF_TYPE_V FROM TABLE;
SPOOL OFF
EXIT
EOF
awk 'NF > 0' $FILE > out.txt
mv out.txt $FILE

load data infile parameterized

I need to load data from a flat file into MariaDB on a Linux environment.
I plan to put the MariaDB script in a shell file, then call the shell script from cron.
The MariaDB script is shown below:
set @path = (select path_file from param);
set @tbl = (select table_name from param);
set @x = concat(
'LOAD DATA LOCAL INFILE ',
@path,
' INTO TABLE ', @tbl,
' (@row) set id = trim(substr(@row,1,2)), name = trim(substr(@row,3,19)), address= trim(substr(@row,22,20))'
);
prepare y from @x;
execute y;
deallocate prepare y;
When I execute the script directly in HeidiSQL, this error is shown:
this command is not supported in the prepared statement protocol yet
Does anyone have a better way to load data from a flat file into MariaDB on a Linux environment regularly (scheduled), without using any ETL tools?
Thanks.
One option you can try is (adjust as needed):
File: load_data.sh
path=$(mysql -u ${mysql_user} -p${mysql_password} -s -N <<GET_PATH
SELECT '/path/to/file/data.csv';
GET_PATH
)
tbl=$(mysql -u ${mysql_user} -p${mysql_password} -s -N <<GET_TABLE
SELECT 'table';
GET_TABLE
)
# mysql -u ${mysql_user} -p${mysql_password} -s -N <<LOAD_DATA
# LOAD DATA LOCAL INFILE '${path}'
# INTO TABLE \`${tbl}\` ...
# LOAD_DATA
# TEST
cat <<LOAD_DATA
LOAD DATA LOCAL INFILE '${path}'
INTO TABLE \`${tbl}\` ...
LOAD_DATA
Command line:
$ ls -l
-r-x------ load_data.sh
$ ./load_data.sh
LOAD DATA LOCAL INFILE '/path/to/file/data.csv'
INTO TABLE `table` ...
For clarity, put as much of the SQL as possible into a STORED PROCEDURE, then use bash to call that SP.
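Putting the pieces together, with bash running the LOAD DATA itself (which is what the commented-out block above sketches), a minimal sketch; the user, password, and database variables are placeholders:
#!/bin/bash
# Sketch: read path and table name from the param table, then run
# LOAD DATA from the client side, where no prepared statement is needed.
path=$(mysql -u "${mysql_user}" -p"${mysql_password}" -D "${mysql_db}" -s -N \
       -e "SELECT path_file FROM param;")
tbl=$(mysql -u "${mysql_user}" -p"${mysql_password}" -D "${mysql_db}" -s -N \
      -e "SELECT table_name FROM param;")

mysql --local-infile=1 -u "${mysql_user}" -p"${mysql_password}" -D "${mysql_db}" <<LOAD_DATA
LOAD DATA LOCAL INFILE '${path}'
INTO TABLE \`${tbl}\`
(@row)
SET id      = TRIM(SUBSTR(@row, 1, 2)),
    name    = TRIM(SUBSTR(@row, 3, 19)),
    address = TRIM(SUBSTR(@row, 22, 20));
LOAD_DATA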

In Kentico, how can I export files from the database?

In Kentico, how can I export files with their correct filenames from the table 'BlobTable'?
Using this example:
Declare @sql varchar(500)
set @sql = 'BCP "SELECT * FROM [esc_mcms].dbo.[BlobTable] where [BlobId]=4" QUERYOUT C:\Temp\blob\04.pdf -T -f C:\Temp\blob\testblob_all.fmt -S ' + @@SERVERNAME
EXEC master.dbo.xp_CmdShell @sql
I wrote a script to generate commands that could be executed to extract all of the files stored as blobs:
SELECT
'set @sql = ''BCP "SELECT * FROM [esc_mcms].dbo.[BlobTable] where [BlobId]="'+convert(varchar,[BlobId])+' QUERYOUT C:\Temp\blob\out\'+NodeResource.name+'.'+BlobFileExt+' -T -f C:\Temp\blob\testblob_all.fmt -S '' + @@SERVERNAME EXEC master.dbo.xp_CmdShell @sql' SQL
FROM [BlobTable], NodeResource, Node
where BlobId=ResourceBlobId
and Node.Id=NodeResource.Id
and NodeResource.name != '_Content'
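For reference, the testblob_all.fmt referenced above typically describes a single unprefixed, unterminated binary field, so the blob bytes are written to the output file verbatim. A minimal sketch, assuming the SELECT is narrowed to just the blob column (the column name BlobData here is a guess):
9.0
1
1       SQLBINARY       0       0       ""      1       BlobData        ""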

BCP queryout -w generating wrong CSV File

I'm trying to generate a CSV file with BCP. My problem is that I have some NVARCHAR columns, so I must use the -w parameter of the bcp utility. The generated CSV file then opens in a single column in Excel. If I create a new text file, copy the content of the generated CSV into it, and change its extension to CSV, it works: the content is spread across the different columns. Has anyone seen this before?
SET @File = 'MyQuery.csv'
set @SQL = 'bcp "SELECT FirstName, LastName, DwellingName FROM Table" queryout "' + @File + '" -w -t "," -T -S ' + convert(varchar, @@SERVERNAME)
exec master..xp_cmdshell @SQL
I've found a solution to my problem:
I used -C ACP so the CSV file is generated ANSI-encoded, and it works!!!
Now my command looks like:
SET @File = 'MyQuery.csv'
set @SQL = 'bcp "SELECT FirstName, LastName, DwellingName FROM Table" queryout "' + @File + '" -c -C ACP -t "," -T -S ' + convert(varchar, @@SERVERNAME)
exec master..xp_cmdshell @SQL
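For reference, the command line that xp_cmdshell ends up executing comes out roughly like this (server name is a placeholder; the bcp documentation writes the code-page option with a space, as -C ACP):
bcp "SELECT FirstName, LastName, DwellingName FROM Table" queryout "MyQuery.csv" -c -C ACP -t "," -T -S MYSERVER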
