I have an Access database file and I need to convert it into a delimited file format.
The Access DB file has multiple tables, and I need to create a separate delimited file for each table.
So far I have not been able to parse Access DB files with any Unix commands. Is there some way I can do this on Unix?
You can use UCanAccess to dump Access tables to CSV files using the console utility:
gord@xubuntu64-nbk1:~/Downloads/UCanAccess$ ./console.sh
/home/gord/Downloads/UCanAccess
Please, enter the full path to the access file (.mdb or .accdb): /home/gord/ClientData.accdb
Loaded Tables:
Clients
Loaded Queries:
Loaded Procedures:
Loaded Indexes:
Primary Key on Clients Columns: (ID)
UCanAccess>
Copyright (c) 2019 Marco Amadei
UCanAccess version 4.0.4
You are connected!!
Type quit to exit
Commands end with ;
Use:
export [--help] [--bom] [-d <delimiter>] [-t <table>] [--big_query_schema <pathToSchemaFile>] [--newlines] <pathToCsv>;
for exporting the result set from the last executed query or a specific table into a .csv file
UCanAccess>export -d , -t Clients clientdata.csv;
UCanAccess>Created CSV file: /home/gord/Downloads/UCanAccess/clientdata.csv
UCanAccess>quit
Cheers! Thank you for using the UCanAccess JDBC Driver.
gord@xubuntu64-nbk1:~/Downloads/UCanAccess$
gord@xubuntu64-nbk1:~/Downloads/UCanAccess$ cat clientdata.csv
ID,LastName,FirstName,DOB
1,Thompson,Gord,2017-04-01 07:06:27
2,Loblaw,Bob,1966-09-12 16:03:00
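Since you need a separate file for each table, you can repeat the export command once per table in the same session. If console.sh accepts its input piped from standard input (an assumption; the transcript above is interactive), a here-doc like the following sketch would script it; the Orders table name is hypothetical:

# NOTE: assumes console.sh reads its prompts from stdin (untested);
# table names other than Clients are hypothetical
./console.sh <<'EOF'
/home/gord/ClientData.accdb
export -d , -t Clients clients.csv;
export -d , -t Orders orders.csv;
quit
EOF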
I am using a Cassandra database with several keyspaces. Now I want to use those keyspaces within another system.
What are valid options to achieve that?
You can use the cqlsh COPY command to export your data to CSV. Then you can import the CSV into your other database, provided it supports that, e.g.:
COPY keyspace.tablename (column1, column2, ..) TO '../export.csv' WITH HEADER = TRUE ;
https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cqlshCopy.html
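If you want to run the export non-interactively, cqlsh can execute a single statement with its -e option; the keyspace, table, and column names below are hypothetical:

# keyspace, table, and column names here are hypothetical
cqlsh -e "COPY mykeyspace.clients (id, last_name, first_name) TO '/tmp/clients.csv' WITH HEADER = TRUE;"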
I have taken a snapshot of a Cassandra table. The following files were generated:
manifest.json
schema.cql
mc-9-big-CompressionInfo.db   mc-9-big-Data.db   mc-9-big-Digest.crc32   mc-9-big-Filter.db   mc-9-big-Index.db   mc-9-big-Statistics.db   mc-9-big-Summary.db   mc-9-big-TOC.txt
mc-10-big-CompressionInfo.db  mc-10-big-Data.db  mc-10-big-Digest.crc32  mc-10-big-Filter.db  mc-10-big-Index.db  mc-10-big-Statistics.db  mc-10-big-Summary.db  mc-10-big-TOC.txt
mc-11-big-CompressionInfo.db  mc-11-big-Data.db  mc-11-big-Digest.crc32  mc-11-big-Filter.db  mc-11-big-Index.db  mc-11-big-Statistics.db  mc-11-big-Summary.db  mc-11-big-TOC.txt
Is there a way to use these files to extract the table's data into a CSV file?
Yes, you can do that with the sstable2json tool.
Run the tool against the *-Data.db files.
The output is JSON, so you will need to convert it to CSV afterwards.
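A minimal sketch, run from the snapshot directory (note that newer Cassandra releases ship sstabledump instead of sstable2json, so the tool name may differ for your version):

# dump each SSTable generation to JSON; converting the JSON to CSV is a separate step
sstable2json mc-9-big-Data.db  > mc-9.json
sstable2json mc-10-big-Data.db > mc-10.json
sstable2json mc-11-big-Data.db > mc-11.json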
When importing a record containing a large field (longer than 124214 characters) I am getting the error
"field larger than field limit (131072)"
I saw from other posts how to solve this in Python, but I don't know if it is possible in cqlsh.
Thanks
Take a look at this answer:
_csv.Error: field larger than field limit (131072)
You will need to add that fix near the top of the cqlsh script itself. After the existing imports
import csv
import getpass
add the line
csv.field_size_limit(sys.maxsize)
Rather than hacking the cqlsh file, there is a standard option provided by Cassandra to change field_size_limit. The Cassandra installation includes a cqlshrc.sample file in the conf directory of a tarball distribution; the field_size_limit option can be found and changed in this file. To make cqlsh read its options from this file, copy cqlshrc.sample from the conf directory to the hidden .cassandra folder in your user home directory and rename it to cqlshrc.
Cassandra documentation contains more details about it: http://docs.datastax.com/en/cql/3.1/cql/cql_reference/cqlsh.html?scroll=refCqlsh__cqlshUsingCqlshrc
Download and extract the Cassandra distribution from
https://cassandra.apache.org/download/
You will find the cqlshrc.sample file in the conf directory after extraction
Copy cqlshrc.sample to ~/.cassandra and rename it to cqlshrc
Open the cqlshrc file and change ; field_size_limit = 131072 to field_size_limit = 1000000000 (don't forget to remove the leading ";", which comments the line out)
Open a new terminal and run your queries
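A shell sketch of the steps above, assuming the tarball was extracted to ~/apache-cassandra (adjust the path to your layout):

# copy the sample config into place (~/apache-cassandra is an assumed location)
mkdir -p ~/.cassandra
cp ~/apache-cassandra/conf/cqlshrc.sample ~/.cassandra/cqlshrc
# uncomment the field_size_limit line and raise the limit
sed -i 's/^; *field_size_limit *=.*/field_size_limit = 1000000000/' ~/.cassandra/cqlshrc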
I have a table with more than 3,000,000 rows. I have tried to export its data to Excel manually and with the SQL Server Management Studio Export Data functionality, but I have run into several problems:
when I create a .txt file manually by copying and pasting the data (in several passes, because copying all rows at once from SQL Server Management Studio throws an out-of-memory error), I am not able to open it with any text editor to copy the rows;
the Export Data to Excel option does not work, because Excel does not support that many rows.
Finally, with the Export Data functionality I created a .sql file, but it is 1.5 GB and I am not able to open it in SQL Server Management Studio again.
Is there a way to import it with the Import Data functionality, or some cleverer way to back up the data in my table and import it again when I need it?
Thanks in advance.
I am not quite sure I understand your requirements (I don't know whether you need to export your data to Excel or you want to make some kind of backup).
To export data from single tables, you could use the Bulk Copy Program (bcp), which lets you export data from a table to a file and import it again later. You can also export the result of a custom query.
Note that this does not generate an Excel file but its own format. You can use it to move data from one database to another (both must be MS SQL Server).
Examples:
Create a format file:
bcp [TABLE_TO_EXPORT] format "[EXPORT_FILE]" -n -f "[FORMAT_FILE]" -S [SERVER] -E -T -a 65535
Export all Data from a table:
bcp [TABLE_TO_EXPORT] out "[EXPORT_FILE]" -f "[FORMAT_FILE]" -S [SERVER] -E -T -a 65535
Import the previously exported data:
bcp [TABLE_TO_EXPORT] in "[EXPORT_FILE]" -f "[FORMAT_FILE]" -S [SERVER] -E -T -a 65535
I redirect the output from the export/import operations to a logfile (by appending "> mylogfile.log" at the end of the commands) - this helps if you are exporting a lot of data.
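A concrete sketch of the three commands, using hypothetical names (database MyDb, table dbo.BigTable, server SQLSRV01) and a trusted connection:

rem hypothetical names: database MyDb, table dbo.BigTable, server SQLSRV01
rem 1) create a native-mode format file
bcp MyDb.dbo.BigTable format nul -n -f BigTable.fmt -S SQLSRV01 -T -a 65535
rem 2) export the table data (output redirected to a log file)
bcp MyDb.dbo.BigTable out BigTable.dat -f BigTable.fmt -S SQLSRV01 -T -a 65535 > export.log
rem 3) import it on the target server (-E keeps the original identity values)
bcp MyDb.dbo.BigTable in BigTable.dat -f BigTable.fmt -S SQLSRV01 -E -T -a 65535 > import.log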
Here is a way of doing it without bcp:
EXPORT THE SCHEMA AND DATA TO A FILE
Use the SSMS wizard:
Database >> Tasks >> Generate Scripts… >> choose the table >> choose to script both schema and data
Save the SQL file (it can be huge)
Transfer the SQL file to the other server
SPLIT THE DATA INTO SEVERAL FILES
Use a program like textfilesplitter to split the file into smaller files of 10,000 lines each (so no single file is too big); a command-line alternative is sketched below
Put all the files in the same folder, with nothing else in it
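If GNU coreutils is available on the machine (for example through Git Bash or WSL, which is an assumption about your setup), the standard split utility can do the same job, provided each INSERT statement sits on a single line; big_script.sql is a hypothetical file name:

# split into numbered 10,000-line chunks: part_00.sql, part_01.sql, ...
split -l 10000 -d --additional-suffix=.sql big_script.sql part_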
IMPORT THE DATA INTO THE SECOND SERVER
Create a .bat file in the same folder, named execFiles.bat
You may need to check the table schema and disable the identity column in the first file; you can re-enable it after the import is finished.
The batch below executes all the .sql files in the folder against the server and database; the -f 65001 switch tells sqlcmd which text encoding (UTF-8) to use so that accented characters are handled correctly:
for %%G in (*.sql) do sqlcmd /S ServerName /d DatabaseName -E -i"%%G" -f 65001
pause
I need to write a stored procedure/function which reads data from a worksheet of an Excel workbook. How do I do that in DB2? I am using the AIX OS.
I tried "Read Excel from DB2" but it won't work on my OS.
I also tried
IMPORT FROM FileName.csv OF DEL COMMITCOUNT 1000 INSERT INTO TableName
but in vain.
You have several options; the cleanest is probably to write a Java stored procedure using the Apache POI library, if you intend to read Excel workbooks (.xls or .xlsx) rather than plain CSV-formatted text files.
Not as clean but just as effective: you can write a Perl / Python / PHP script to read the file and return a line at a time, and invoke the script from a stored procedure; see: Making Operating System Calls from SQL
It would be better to convert your Excel file to a flat file such as CSV if possible, because DB2 does not natively understand Excel files. CSV files can be processed natively using DB2's IMPORT, LOAD, or INGEST tools.
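For example, once the worksheet has been saved as CSV, it can be loaded from the DB2 command line processor; the database, file path, schema, and table names below are hypothetical:

# hypothetical database, file path, and table names
db2 connect to MYDB
db2 "IMPORT FROM /tmp/worksheet.csv OF DEL COMMITCOUNT 1000 INSERT INTO myschema.mytable"
db2 connect reset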