I am new to Linux and would like to ask for your help. The task is to import CSV data into DB2 from a shell script that runs on a schedule. The file has a header, which is why I used skipcount 1. The delimiter is a comma, and since that is the default, I did not include COLDEL.
Can you help me troubleshoot why running the script produces the error below? I am using IMPORT with INSERT_UPDATE because I learned that the LOAD method deletes the whole contents of the table before importing the data from the CSV file. The existing data in the table should not be deleted: records should only be updated if there are changes in the CSV file, and otherwise a new record should be created.
I am also trying to work out which METHOD should be used to pick the specific values from the CSV file; currently I am using METHOD P. I am not sure about the numbering inside its parameter list: does it signify which columns are to be accessed, and should it tally with the columns I am importing into?
Below is the script snippet:
db2 connect to MYDB user $USERID using $PASSWORD
LOGFILE=/load/log/MYDBLog.txt
if [ -f /load/data/CUST_GRP.csv ]; then
db2 "import from /load/data/CUST_GRP.csv of del skipcount 1 modified by usedefaults METHOD P(1,2,3,4,5)
messages $LOGFILE
insert_update into myuser.CUST(NUM_ID,NUM_GRP,NUM_TEAM,NUM_LG,NUM_STATUS)";
else echo "/load/data/CUST_GRP.csv file not found." >> $LOGFILE;
fi
if [ -f /load/data/CUST_GRP.csv ]; then
db2 "import from /load/data/CUST_GRP.csv of del skipcount 1 modified by dateformat=\"YYYY-MM-DD\" timeformat=\"HH:MM:SS\" usedefaults METHOD P(1,2,3,4,5,6,7)
messages $LOGFILE
insert_update into myuser.MY_CUST(NUM_CUST,DTE_START,TME_START,NUM_CUST_CLSFCN,DTE_END,TME_END,NUM_CUST_TYPE)";
else echo "/load/data/CUST_GRP.csv file not found." >> $LOGFILE;
fi
The error I am encountering is this:
SQL0104N An unexpected token "modified" was found following "<identifier>".
Expected tokens may include: "INSERT". SQLSTATE=42601
Thank you!
You can't place clauses in arbitrary order in the IMPORT command.
Place the skipcount 1 clause before messages, i.e. after the modified by and METHOD clauses.
The LOAD command can either INSERT a new portion of data or REPLACE the table contents, emptying the table at the beginning; it does not always delete the existing rows.
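To your METHOD P question: the numbers inside METHOD P are the positions of the input fields to read (the first field is 1), and the list must tally one-for-one with the column list in INSERT_UPDATE INTO. A corrected version of your first statement, as a sketch using the names and paths from your snippet, would look roughly like this:
db2 "import from /load/data/CUST_GRP.csv of del
     modified by usedefaults
     method p (1, 2, 3, 4, 5)
     skipcount 1
     messages $LOGFILE
     insert_update into myuser.CUST(NUM_ID,NUM_GRP,NUM_TEAM,NUM_LG,NUM_STATUS)"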
I have an Excel sheet with employee data. My task is to store the data from the Excel file in a MongoDB database, in an employee collection (one row from the Excel sheet per MongoDB document). I'm doing all this in a React application. I thought of using mongoimport. Since I need the data in CSV or JSON format, I converted my Excel file to CSV using the SheetJS npm package and created a blob file of type csv. Then, using the command below, I was able to import that CSV file into my MongoDB database.
mongoimport --db demo --collection employees --type csv --headerline --file /path/to/myfile.csv
But I did this from the mongo shell by giving a path on my local disk. Now I'm trying to implement this within my React app. Initially I proceeded with this idea: as soon as I upload an Excel file, I convert it to a CSV file and call a POST API with that CSV file in the body. Upon receiving that request, I call the mongoimport command in my Node.js backend/server so that the data from the CSV file is stored in my MongoDB collection. Now I can't find any way to use the mongoimport command programmatically. How can I call the mongoimport command in my Node.js server code? I couldn't find any documentation regarding it.
If that is not the right way of doing this, please suggest an entirely different way of achieving it.
In layman's terms, I want to import data from an Excel file into a MongoDB database using a React/Node.js app.
First of all, mongoimport also allows you to import TSV files (the same command as for CSV, but with --type tsv), which is often friendlier to use with Excel.
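For example (the same options as your command, just pointed at a .tsv export):
mongoimport --db demo --collection employees --type tsv --headerline --file /path/to/myfile.tsv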
Regarding mongoimport itself, I regret to report that it cannot be invoked by any means other than the command line.
What you can do from Node.js is execute commands the same way a terminal would. For this you can use the child_process module and its exec function.
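A minimal sketch of that approach, reusing the mongoimport command from your question (mongoimport has to be installed and on the PATH of the machine running Node.js):
const { exec } = require('child_process');

// Run the same mongoimport command you used manually; adjust the file path as needed.
exec('mongoimport --db demo --collection employees --type csv --headerline --file /path/to/myfile.csv',
  (error, stdout, stderr) => {
    if (error) {
      console.error('mongoimport failed:', stderr || error.message);
      return;
    }
    console.log('mongoimport finished:', stdout);
  });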
I hope that helps even a little.
Regards!
I'm new to FileMaker Server and have been trying to roll it out for my own company, but I'm running into a problem I can't seem to solve or find the right answer to online.
I am saving an Excel file called import.xlsx in /Library/FileMaker Server/Data/Documents/ and using a variable file path to point to the file, but the script does not work when run as a Perform Script on Server (PSoS) step. However, if I specify the file locally while using FileMaker Pro and run only the script "Import Test", the import works fine.
Is it a problem with the variable filepath I have set? Or have I missed something out? I have included the two scripts that I wrote to try and make the import happen via PSoS. Any help or advice would be greatly appreciated!! - Yong
Script to Trigger PSoS "Import PSoS Button"
Commit Records/Request
Perform Script on Server[ "Import Test" ]
Script to Import Records from Excel "Import Test"
Go to Layout [ "Test" (Test) ]
Set Variable [ $filepath ; Value: "filemac:" & Get(DocumentsPath) & "import.xlsx" ]
Import Records [ With dialog: Off ; Source: "$filepath"; Worksheet: "" ; Add; Mac Roman ]
UPDATE:
I have tried using a .csv file in place of the .xlsx file and the script works perfectly. I'm still not sure why a .xlsx file doesn't work, as it is stated that FileMaker Server 15 supports .xlsx in the Import Records script step (http://help.filemaker.com/app/answers/detail/a_id/12067/~/import%2Fexport-script-steps-with-filemaker-server-scheduled-scripts)
You should not need to prepend the path with "filemac:" since the function gets the correct path. Can you try it without and see if that does it?
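i.e. something like this (just a sketch of that one step changed):
Set Variable [ $filepath ; Value: Get(DocumentsPath) & "import.xlsx" ]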
Just to update: I figured out what was wrong after all this time. FileMaker Server does support server-side imports with .xlsx; what was going wrong was the file permissions inside the /Library/FileMaker Server/Data/Documents/ folder. For server-side imports to work, you have to make sure that the specific .csv or .xlsx file is readable by either fmsadmin or the fmserver user. Add the read right if it isn't there.
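For example, on macOS you could grant read access from Terminal with something like this (assuming fmsadmin is the relevant group on your install; check the current rights with ls -l first):
sudo chgrp fmsadmin "/Library/FileMaker Server/Data/Documents/import.xlsx"
sudo chmod g+r "/Library/FileMaker Server/Data/Documents/import.xlsx"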
Can't believe it took me this long to figure that out. Not sure why this isn't documented anywhere. Hope it helps whoever else is facing this problem, cheers!
YS
I have a table with more than 3,000,000 rows. I have tried to export the data from it manually and with the SQL Server Management Studio Export Data functionality to Excel, but I have run into several problems:
when I create a .txt file manually by copying and pasting the data (in several passes, because copying all rows from SQL Server Management Studio at once throws an out-of-memory error), I am not able to open it with any text editor to copy the rows;
exporting the data to Excel does not work, because Excel does not support that many rows.
Finally, with the Export Data functionality I created a .sql file, but it is 1.5 GB and I am not able to open it in SQL Server Management Studio again.
Is there a way to import it with the Import Data functionality, or some cleverer way to back up the data in my table and then import it again if I need it?
Thanks in advance.
I am not quite sure I understand your requirements (I don't know whether you need to export your data to Excel or you want to make some kind of backup).
To export data from single tables, you could use the bulk copy program (bcp), which allows you to export data from single tables to files and import it back again. You can also use a custom query to export the data.
Note that this does not generate an Excel file, but another format. You could use this to move data from one database to another (it must be MS SQL Server in both cases).
Examples:
Create a format file:
bcp [TABLE_TO_EXPORT] format "[EXPORT_FILE]" -n -f "[FORMAT_FILE]" -S [SERVER] -E -T -a 65535
Export all Data from a table:
bcp [TABLE_TO_EXPORT] out "[EXPORT_FILE]" -f "[FORMAT_FILE]" -S [SERVER] -E -T -a 65535
Import the previously exported data:
bcp [TABLE_TO_EXPORT] in "[EXPORT_FILE]" -f "[FORMAT_FILE]" -S [SERVER] -E -T -a 65535
I redirect the output from the export/import operations to a logfile (by appending "> mylogfile.log" at the end of the commands); this helps if you are exporting a lot of data.
Here is a way of doing it without bcp:
EXPORT THE SCHEMA AND DATA IN A FILE
Use the SSMS wizard
Database >> Tasks >> Generate Scripts… >> choose the table >> choose the DB model and schema
Save the SQL file (can be huge)
Transfer the SQL file on the other server
SPLIT THE DATA IN SEVERAL FILES
Use a program like textfilesplitter to split the file into smaller files of 10,000 lines each (so no single file is too big)
Put all the files in the same folder, with nothing else
IMPORT THE DATA IN THE SECOND SERVER
Create a .bat file in the same folder, name execFiles.bat
You may need to check the table schema and disable identity inserts for the first file; you can re-enable them after the import is finished (see the note after the loop below).
This will execute all the files in the folder against the server and database; the -f 65001 option tells sqlcmd to read the files as UTF-8 (Unicode), which handles the accents:
for %%G in (*.sql) do sqlcmd /S ServerName /d DatabaseName -E -i"%%G" -f 65001
pause
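Regarding the identity note above: if the generated INSERT statements list the identity column explicitly, you may need to wrap them in something like this (the table name is just a placeholder; the Generate Scripts wizard sometimes emits these lines for you already):
SET IDENTITY_INSERT dbo.MyTable ON;
-- run the generated INSERT statements that include the identity column here
SET IDENTITY_INSERT dbo.MyTable OFF;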
I'm working with a Windows batch command to create a list of file paths and file names (without the extension) for processing and archival. I need to make a CSV file that will contain the path to the file and the file name.
I was able to use the 'DIR /A-D-S /D /S' command to output the list with the file paths, saved as filelistA.txt. Then I use a VBScript (makelistB.vbs) to strip the path and extension and save that as filelistB.txt. I need to merge the two files row for row, putting a comma separator in between, and that's where I need some sort of VBScript.
filelistA.txt looks like:
C:\Data\Clients\COLD\AC3060P.txt
C:\Data\Clients\COLD\AC3090P.txt
C:\Data\Clients\COLD\AC3100P.txt
C:\Data\Clients\COLD\AC3150P.txt
C:\Data\Clients\COLD\AC3200P.txt
C:\Data\Clients\COLD\AC3600P.txt
C:\Data\Clients\COLD\AC3652P.txt
C:\Data\Clients\COLD\AC5715P.txt
C:\Data\Clients\COLD\AC5720P.txt
C:\Data\Clients\COLD\AC5725P.txt
filelistB.txt looks like:
AC3060P
AC3090P
AC3100P
AC3150P
AC3200P
AC3600P
AC3652P
AC5715P
AC5720P
AC5725P
I want to make FileListCSV.txt that looks like this:
C:\Data\Clients\FWBT\COLD\AC3060P.txt,AC3060P
C:\Data\Clients\FWBT\COLD\AC3090P.txt,AC3090P
C:\Data\Clients\FWBT\COLD\AC3100P.txt,AC3100P
C:\Data\Clients\FWBT\COLD\AC3150P.txt,AC3150P
C:\Data\Clients\FWBT\COLD\AC3200P.txt,AC3200P
C:\Data\Clients\FWBT\COLD\AC3600P.txt,AC3600P
C:\Data\Clients\FWBT\COLD\AC3652P.txt,AC3652P
C:\Data\Clients\FWBT\COLD\AC5715P.txt,AC5715P
C:\Data\Clients\FWBT\COLD\AC5720P.txt,AC5720P
C:\Data\Clients\FWBT\COLD\AC5725P.txt,AC5725P
I'm also open to using sed for Windows if that can do all of this in one shot. However, I would imagine this should be something that can be whipped up in VBScript in a few minutes.
This Windows batch file will do what you want without the need for the intermediate files.
@ECHO OFF
FOR %%i IN (*.txt) DO ECHO %%~fi,%%~ni
You can get the output of this batch into a text file by redirecting the output like this:
MyBatch.cmd>>Output.txt
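If you also need to walk subfolders (as your DIR /S did), a FOR /R variant should work; for example, with the root folder as an assumption:
@ECHO OFF
FOR /R "C:\Data\Clients" %%i IN (*.txt) DO ECHO %%~fi,%%~ni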
This could be a job for the SQLite shell.
C:\Temp> sqlite3.exe
create table paths(path text);
create table filenames(fname text);
.import fileListA.txt paths
.import fileListB.txt filenames
.separator ,
.output FileListCSV.txt
select * from paths p join filenames f on f.rowid = p.rowid;
.q
The SQLite shell is a single executable that will either create a persistent SQLite database in the form of a file, or, when started without any argument on the command line (as here), create an in-memory database.
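If you want to run this unattended (for example from the same batch job), the dot commands can be saved in a text file and piped into the shell, roughly like this (the file name is just an example):
sqlite3.exe < makecsv.txt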
This can be done pretty quickly in EasyMorph using the Append transformation in "Append columns" mode.
I'm trying to import data into tables from a file using BTEQ import.
I'm facing weird errors while doing this.
For example:
If I use a text file as the input data file with ',' as the field separator, I get the error below:
*** Failure 2673 The source parcel length does not match data that was defined.
or
If I use an Excel file as the input data file, I get the error below:
* Growing Buffer to 53200
* Error: Import data size does not agree with byte length.
The cause may be:
1) IMPORT DATA vs. IMPORT REPORT
2) incorrect incoming data
3) import file has reached end-of-file.
*** Warning: Out of data.
Please help me out by giving the syntax for BTEQ import using a .txt file as the input data file, and also the syntax if we use an Excel file as the input data file.
Also, is there any specific format the input data file must have so the data is read correctly?
If so, please give me the info about that.
Thanks in advance:)
EDIT
Sorry for not posting the script at first.
I'm new to Teradata and have yet to explore other tools.
I was asked to write the script using BTEQ import.
.LOGON TDPD/XXXXXXX,XXXXXX
.import VARTEXT ',' FILE = D:\cc\PDATA.TXT
.QUIET ON
.REPEAT *
USING COL1 (VARCHAR(2)) ,COL2 (VARCHAR(1)) ,COL3 (VARCHAR(56))
INSERT INTO ( COL1 ,COL2 ,COL3) VALUES ( :COL1 ,:COL2 ,:COL3);
.QUIT
I executed the above script and it succeeds when using a .txt file (separating the fields with commas) and declaring the data types as VARCHAR.
Sample input txt file:
1,b,helloworld1
2,b,helloworld2
3,D,helloworld1
12,b,helloworld1
I also tried to do the same using tab (\t) as the field separator, but it gives the same old error.
Q) Does this work only for comma-separated txt files?
Also, could you tell me where I can find the BTEQ manual?
Thanks a lot
Can you post your BTEQ script? May I also ask why you are using BTEQ instead of FastLoad or MultiLoad?
The text file error is possibly due to the data types declared in the USING clause. I believe they need to be declared as VARCHAR when reading delimited input (e.g. declare INTEGER fields as VARCHAR(10)).
As for Excel, I can't find anything in the BTEQ manual that says that BTEQ can handle .xls files.
For your tab delimited files, are you doing this (that's a tab character below)?
.import vartext ' '
Or this?
.import vartext '\t'
The former works, the latter doesn't.
The BTEQ manual that I have is on my work machine. One of the first Google results for "BTEQ manual" should yield one online.