Import CSV file into an sqlite3 table using a query - visual-c++

I need to import a CSV file into an sqlite3 table. I am using Visual Studio (MFC application).
I know how to import a CSV from the command prompt:
sqlite> .mode csv
sqlite> .import your.csv Table_Name
This works fine, but I need to perform a similar operation with a query in my program.
How can I do this from a query in my program?

The .import command is implemented by the sqlite3 command-line shell.
If you want to do the same in your program, you have to copy that code from the shell's source code (shell.c).
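If copying shell.c is not practical, an alternative (not the shell's own code, just a sketch) is to parse the CSV yourself and insert the rows with prepared statements through the sqlite3 C API, which you can call from an MFC application. The example below assumes an already-open database handle, a table Table_Name(col1, col2), and a simple CSV with two unquoted fields per line; all of those names are placeholders.

#include <fstream>
#include <sstream>
#include <string>
#include "sqlite3.h"

// Minimal sketch: load a two-column CSV into an existing table.
// Assumes unquoted, comma-separated fields and a table Table_Name(col1, col2).
bool ImportCsv(sqlite3 *db, const char *csvPath)
{
    std::ifstream file(csvPath);
    if (!file.is_open())
        return false;

    sqlite3_stmt *stmt = nullptr;
    if (sqlite3_prepare_v2(db,
            "INSERT INTO Table_Name (col1, col2) VALUES (?1, ?2);",
            -1, &stmt, nullptr) != SQLITE_OK)
        return false;

    // Wrap the whole import in one transaction; much faster than committing row by row.
    sqlite3_exec(db, "BEGIN TRANSACTION;", nullptr, nullptr, nullptr);

    std::string line;
    while (std::getline(file, line))
    {
        std::istringstream row(line);
        std::string field1, field2;
        std::getline(row, field1, ',');
        std::getline(row, field2);

        sqlite3_bind_text(stmt, 1, field1.c_str(), -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(stmt, 2, field2.c_str(), -1, SQLITE_TRANSIENT);
        sqlite3_step(stmt);
        sqlite3_reset(stmt);
    }

    sqlite3_exec(db, "COMMIT;", nullptr, nullptr, nullptr);
    sqlite3_finalize(stmt);
    return true;
}

Fields containing quoted commas or embedded newlines would need a real CSV parser; this sketch deliberately ignores them.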

Related

Connect an sqlite database to an Excel spreadsheet from Node

I need to run these commands from Node.js (I'm using the sqlite3 package):
sqlite3 example-db.db
.headers on
.excel
select * from table;
Is it possible?

How to import Excel/CSV data into MongoDB collection/documents in a programmatic way (MERN)

I have an Excel sheet with employee data. My task is to store the data from the Excel file in a MongoDB database > employee collection (a row from the Excel sheet becomes a MongoDB document). I'm doing all this in a React application. I thought of using mongoimport. Since I need it in CSV or JSON format, I converted my Excel file to CSV using the SheetJS npm package and created a blob file of type csv. And then, using the command below, I was able to import that CSV file into my MongoDB database.
mongoimport --db demo --collection employees --type csv --headerline --file /path/to/myfile.csv
But I did this from the mongo shell, giving a path on my local disk. Now I'm trying to implement this within my React app. Initially I proceeded with this idea: as soon as I upload an Excel file, I convert it to a CSV file and call a POST API with that CSV file in the body. Upon receiving that request, I want to call the "mongoimport" command in my Node.js backend/server so that the data from the CSV file is stored in my MongoDB collection. But I can't find any solution for using the mongoimport command programmatically. How can I call "mongoimport" from my Node.js server code? I couldn't find any documentation regarding it.
If that is not the right way of doing this task, please suggest an entirely different way of achieving it.
In layman's terms, I want to import data from an Excel file into a MongoDB database using a React/Node.js app.
How are you?
First of all, mongoimport also allows you to import TSV files (same command as for CSV, but with --type tsv), which are often friendlier to work with when exporting from Excel.
Regarding mongoimport itself, I regret to report that it cannot be used by any means other than the command line.
What you can do from Node.js is execute commands in the same way that a terminal executes them. For this you can use the child_process module and its exec function, as sketched below.
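A minimal sketch of that approach, assuming mongoimport is on the server's PATH and that the uploaded CSV has already been saved to a temporary path (the path, database and collection names below are placeholders):

// Node.js backend: run mongoimport as a child process.
const { exec } = require('child_process');

const csvPath = '/tmp/employees.csv'; // file written from the uploaded blob
const cmd = `mongoimport --db demo --collection employees --type csv --headerline --file ${csvPath}`;

exec(cmd, (error, stdout, stderr) => {
  if (error) {
    console.error('mongoimport failed:', stderr);
    return;
  }
  console.log('mongoimport finished:', stdout);
});

You would call something like this from the route handler that receives the POST request, after writing the uploaded CSV to disk.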
I hope that helps even a little.
Regards!

Python dbf module writes fine, but records do not appear in VFP until the table is packed

I use the dbf Python module by Ethan to write records to a Visual FoxPro DBF table. I did this to append:
import dbf
Db = dbf.Table('table.dbf')
Db.open(mode=dbf.READ_WRITE)
record = {'NUMERO': 1, 'TIPO': 'TA'}  # a simplified record; the real one is much longer
Db.append(record)
Db.close()
The record is added to the DBF file; I can see it in the VFP table using the VFP command window, but it does not appear in the VFP program. At first I assumed something was happening inside the VFP flow, but when I do a manual PACK with DBF Manager, the records appear correctly inside the VFP program. The record does not appear in DBF Manager before doing a PACK on the table either.
I tried to do:
import dbf
Db = dbf.Table('table.dbf')
Db.open(mode=dbf.READ_WRITE)
record = {'NUMERO': 1, 'TIPO': 'TA'}  # a simplified record; the real one is much longer
Db.append(record)
Db.pack()  # Pack before closing.
Db.close()
But it's not working. Does somebody know what's happening?
The VFP program is probably using an index, and the Python dbf module doesn't support index files. I suspect that a REINDEX command would also show the new record.
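If that is the cause, rebuilding the index from the VFP side should make the appended records visible; something like the following in the VFP command window (the table name is just an example, and REINDEX needs exclusive access):

USE table.dbf EXCLUSIVE
REINDEX
USE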

Cassandra CQL: insert data from existing file

I have a JSON file that I want to insert into a Cassandra table using CQL.
According to the DataStax documentation, you can insert JSON with the following command:
INSERT INTO data JSON '{My_Json}';
But I can't find a way to do that directly from an existing JSON file. Is this possible, or do I need to write some Java code to do the insert?
Note: I am using Cassandra 3.9.
The only file format supported for importing is CSV. It is possible to convert your JSON file to CSV format and import it with the cqlsh COPY command. If that is not an option for you, Java code is needed to parse your file and insert it into Cassandra.
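If you end up writing code, a minimal sketch with the DataStax Java driver 3.x could look like the following; the contact point, keyspace and file name are assumptions, and the file is assumed to contain a single JSON document whose fields match the data table:

// Read a JSON file and insert it with INSERT ... JSON (names are placeholders).
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import java.nio.file.Files;
import java.nio.file.Paths;

public class JsonInsert {
    public static void main(String[] args) throws Exception {
        // Load the whole file as one JSON document.
        String json = new String(Files.readAllBytes(Paths.get("data.json")), "UTF-8");

        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("my_keyspace")) {
            // Same statement as above, with the document bound as a value.
            session.execute("INSERT INTO data JSON ?", json);
        }
    }
}

If the file holds many documents (e.g. one per line), you would read it line by line and execute one INSERT per document instead.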

How to import data into Teradata tables from an Excel file using BTEQ import?

I'm trying to import data into tables from a file using BTEQ import.
I'm facing weird errors while doing this.
For example:
If I'm using a text file as the input data file with ',' as the field separator, I get the error below:
*** Failure 2673 The source parcel length does not match data that was defined.
or
If I'm using an Excel file as the input data file, I get the error below:
* Growing Buffer to 53200
* Error: Import data size does not agree with byte length.
The cause may be:
1) IMPORT DATA vs. IMPORT REPORT
2) incorrect incoming data
3) import file has reached end-of-file.
*** Warning: Out of data.
Please help me out by giving the syntax for BTEQ import using a txt file as the input data file, and also the syntax if we use an Excel file as the input data file.
Also, is there any specific format for the input data file so it is read correctly?
If so, please give me the info about that.
Thanks in advance :)
EDIT
Sorry for not posting the script at first.
I'm new to Teradata and have yet to explore other tools.
I was asked to write the script for BTEQ import:
.LOGON TDPD/XXXXXXX,XXXXXX
.import VARTEXT ',' FILE = D:\cc\PDATA.TXT
.QUIET ON
.REPEAT *
USING COL1 (VARCHAR(2)) ,COL2 (VARCHAR(1)) ,COL3 (VARCHAR(56))
INSERT INTO ( COL1 ,COL2 ,COL3) VALUES ( :COL1 ,:COL2 ,:COL3);
.QUIT
I executed the above script and it is successful using a txt file (separating the fields with commas) and declaring the data type as VARCHAR.
Sample input txt file:
1,b,helloworld1
2,b,helloworld2
3,D,helloworld1
12,b,helloworld1
I also tried to do the same using tab (\t) as the field separator, but it gives the same old error.
Q) Does this work only for comma-separated txt files?
Could you please tell me where I can find the BTEQ manual?
Thanks a lot.
Can you post your BTEQ script? May I also ask why you are using BTEQ instead of FastLoad or MultiLoad?
The text file error is possibly due to the data types declared in the USING clause. I believe they need to be declared as VARCHAR when reading delimited input (e.g. declare VARCHAR(10) for INTEGER fields).
As for Excel, I can't find anything in the BTEQ manual that says that BTEQ can handle .xls files.
For your tab-delimited files, are you doing this (that's a tab character below)?
.import vartext ' '
Or this?
.import vartext '\t'
The former works, the latter doesn't.
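In other words, a tab-delimited version of your script would look like the sketch below, where the delimiter between the quotes is a real tab typed into the script (it is invisible here), and the logon string, file path and table name are placeholders:

.LOGON TDPD/XXXXXXX,XXXXXX
.IMPORT VARTEXT '	' FILE = D:\cc\PDATA.TXT
.QUIET ON
.REPEAT *
USING COL1 (VARCHAR(2)), COL2 (VARCHAR(1)), COL3 (VARCHAR(56))
INSERT INTO mytable (COL1, COL2, COL3) VALUES (:COL1, :COL2, :COL3);
.QUIT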
The BTEQ manual that I have is on my work machine. One of the first Google results for "BTEQ manual" should yield one online.
