I need to record temperatures to an SQLite database on a Linux system (using bash).
My problem is that I get the temperature readings in a separate file.
How can I get that reading into the SQLite command
sqlite3 mydb "INSERT INTO readings (TStamp, reading) VALUES (datetime(), 'xxx');"
The file contains just one line with a value such as "45.7", which should replace the xxx.
With fixed (hard-coded) data the SQL command works fine.
You can simply echo the command to sqlite3, like this:
temp=`cat file_with_temperature_value`
echo "INSERT INTO readings (TStamp, reading) VALUES (datetime(), '$temp');" | sqlite3 mydb
or do it like in your example:
temp=`cat file_with_temperature_value`
sqlite3 mydb "INSERT INTO readings (TStamp, reading) VALUES (datetime(), '$temp');"
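If you want to guard against a missing or empty reading file, something along these lines should also work (a minimal sketch, using the same file and database names as above):
temp=$(cat file_with_temperature_value)
# only run the INSERT if the file actually contained a value
if [ -n "$temp" ]; then
    sqlite3 mydb "INSERT INTO readings (TStamp, reading) VALUES (datetime(), '$temp');"
else
    echo "no temperature reading found" >&2
fi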
I have an Access database file and I need to convert it into delimited file format.
The Access DB file has multiple tables and I need to create separate delimited files for each table.
So far I am not able to parse Access DB files with any Unix commands. Is there some way that I can do this on Unix?
You can use UCanAccess to dump Access tables to CSV files using the console utility:
gord@xubuntu64-nbk1:~/Downloads/UCanAccess$ ./console.sh
/home/gord/Downloads/UCanAccess
Please, enter the full path to the access file (.mdb or .accdb): /home/gord/ClientData.accdb
Loaded Tables:
Clients
Loaded Queries:
Loaded Procedures:
Loaded Indexes:
Primary Key on Clients Columns: (ID)
UCanAccess>
Copyright (c) 2019 Marco Amadei
UCanAccess version 4.0.4
You are connected!!
Type quit to exit
Commands end with ;
Use:
export [--help] [--bom] [-d <delimiter>] [-t <table>] [--big_query_schema <pathToSchemaFile>] [--newlines] <pathToCsv>;
for exporting the result set from the last executed query or a specific table into a .csv file
UCanAccess>export -d , -t Clients clientdata.csv;
UCanAccess>Created CSV file: /home/gord/Downloads/UCanAccess/clientdata.csv
UCanAccess>quit
Cheers! Thank you for using the UCanAccess JDBC Driver.
gord@xubuntu64-nbk1:~/Downloads/UCanAccess$
gord@xubuntu64-nbk1:~/Downloads/UCanAccess$ cat clientdata.csv
ID,LastName,FirstName,DOB
1,Thompson,Gord,2017-04-01 07:06:27
2,Loblaw,Bob,1966-09-12 16:03:00
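Since you need a separate delimited file per table, just repeat the export command inside the console once per table; the extra table names below are only placeholders for whatever your own Loaded Tables list shows:
UCanAccess>export -d , -t Clients clients.csv;
UCanAccess>export -d , -t Orders orders.csv;
UCanAccess>export -d , -t Invoices invoices.csv;
UCanAccess>quit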
I am new to Linux and would like to seek your help. The task is to import CSV data into DB2. It is in a shell script and runs on a schedule. The file has a header, which is why I used skipcount 1. The delimiter is a comma, and since that is the default, I did not include COLDEL.
Can you help me troubleshoot why running the script produces the error below? I am using IMPORT with INSERT_UPDATE because I learned that the LOAD method deletes the whole contents of the table before importing the data from the CSV file. The existing data in the table should not be deleted; records should only be updated if there are changes in the CSV file, and otherwise a new record should be created.
I am also unsure which METHOD should be used to get the specific values from the CSV file; currently I am using METHOD P. I am not sure about the numbering inside its parameter list: does it signify how many columns are to be accessed, and should it tally with the columns I am importing from the file?
Below is the script snippet:
db2 connect to MYDB user $USERID using $PASSWORD
LOGFILE=/load/log/MYDBLog.txt
if [ -f /load/data/CUST_GRP.csv ]; then
db2 "import from /load/data/CUST_GRP.csv of del skipcount 1 modified by usedefaults METHOD P(1,2,3,4,5)
messages $LOGFILE
insert_update into myuser.CUST(NUM_ID,NUM_GRP,NUM_TEAM,NUM_LG,NUM_STATUS)";
else echo "/load/data/CUST_GRP.csv file not found." >> $LOGFILE;
fi
if [ -f /load/data/CUST_GRP.csv ]; then
db2 "import from /load/data/CUST_GRP.csv of del skipcount 1 modified by dateformat=\"YYYY-MM-DD\" timeformat=\"HH:MM:SS\" usedefaults METHOD P(1,2,3,4,5,6,7)
messages $LOGFILE
insert_update into myuser.MY_CUST(NUM_CUST,DTE_START,TME_START,NUM_CUST_CLSFCN,DTE_END,TME_END,NUM_CUST_TYPE)";
else echo "/load/data/CUST_GRP.csv file not found." >> $LOGFILE;
fi
The error I am encountering is this:
SQL0104N An unexpected token "modified" was found following "<identifier>".
Expected tokens may include: "INSERT". SQLSTATE=42601
Thank you!
You can’t place clauses in arbitrary order in the IMPORT command.
Move the skipcount 1 clause so that it comes after the modified by and METHOD clauses and before messages.
The LOAD command, by the way, can either INSERT a new portion of data or REPLACE the table contents, emptying the table at the beginning.
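As for METHOD P, it simply lists the positions (starting at 1) of the input fields to read, and the number of positions should match the number of columns in your INSERT_UPDATE column list. With the clauses reordered, your first command should look roughly like this (table and column names taken from your snippet):
db2 "import from /load/data/CUST_GRP.csv of del
     modified by usedefaults
     method P(1,2,3,4,5)
     skipcount 1
     messages $LOGFILE
     insert_update into myuser.CUST(NUM_ID,NUM_GRP,NUM_TEAM,NUM_LG,NUM_STATUS)"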
I need to load data into a Postgres DB from time to time via a Node app.
I'm using the node-postgres package, which seems to be working fine with INSERT statements.
Since my DB dump is huge, I need to switch to COPY statements in pg_dump (for better performance), but I'm getting all kinds of errors while trying to load the data with the pg package in Node. This works fine if I use command-line psql.
The psql dump file I have is huge and includes COPY statements like this:
COPY hibernate.address (id, create_timestamp,
update_timestamp, street_address_line_1, street_address_line_2, city, state_code, postal_code, postal_code_suffix, country_code, latitude, longitude, updated_by_base_user_id) FROM stdin;
379173 2017-02-20 02:34:17.715-08 2018-01-20 08:34:17.715-08 3 Brewster St \N Phoenix AZ 17349 \N US \N \N 719826
\.
Here's pseudo-code for the Node app running the SQL dump file:
const sqlFile = fs.readFileSync('data_dump.sql').toString();
const connectionString = `postgres://<user>:${PgPassword}@${pgIpAndPort}/<table>`;
const client = new pg.Client(connectionString);
client.connect();
client.query(sqlFile);
Here's a sample pg_dump command I use (which is for data-only - no schema):
pg_dump -U <user> --data-only --disable-triggers -n hibernate <table_name> > <dump_file.sql>
but it doesn't work when I try to load the data via the Node app.
I know --column-inserts would solve the problem, but that decreases performance drastically.
So I'm looking for possible solutions for loading the data with COPY tbl FROM stdin; statements in the Node app.
Any suggestions/comments are appreciated.
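If spawning a child process from the Node app is acceptable, one possible workaround (just a sketch, with host, port, user, and database as placeholders) is to let psql replay the dump, since psql handles COPY ... FROM stdin; blocks natively:
PGPASSWORD="$PgPassword" psql -h "$pgHost" -p "$pgPort" -U <user> -d <database> --single-transaction -f data_dump.sql
The Node app could then invoke that command via child_process instead of pushing the whole file through client.query().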
I need to import a CSV file into an sqlite3 table. I am using Visual Studio (an MFC application).
I know how to import csv using command prompt.
sqlite> .mode csv
sqlite> .import your.csv Table_Name
This works fine, but I need to perform a similar operation from within my program.
How can I do this using a query in my program?
The .import command is implemented by the sqlite3 command-line shell.
If you want to do the same in your program, you have to copy that code from the shell's source code (shell.c).
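Alternatively, if shelling out from the application is acceptable, you can feed the same dot-commands to the sqlite3 command-line tool on standard input instead of re-implementing .import (database and file names here are just placeholders):
printf '.mode csv\n.import your.csv Table_Name\n' | sqlite3 your_database.db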
I have recently been asked to take a .csv file that looks like this:
Into something like this:
Keep in mind that there will be hundreds, if not thousands, of rows, since a new row is created every time a user logs in or out, and there will be more than just two users. My first thought was to load the .csv file into MySQL and then run a query on it. However, I really don't want to install MySQL on the machine that will be used for this.
I could do it manually for each agent in Excel/OpenOffice, but with so little room for error and so many lines to process, I want to automate the task. What's the best way to go about that?
This one-liner relies only on awk and date for converting back and forth between timestamps:
awk 'BEGIN{FS=OFS=","}NR>1{au=$1 "," $2;t=$4; \
"date -u -d \""t"\" +%s"|getline ts; sum[au]+=ts;}END \
{for (a in sum){"date -u -d \"@"sum[a]"\" +%T"|getline h; print a,h}}' test.csv
having test.csv like this:
Agent,Username,Project,Duration
AAA,aaa,NBM,02:09:06
AAA,aaa,NBM,00:15:01
BBB,bbb,NBM,04:14:24
AAA,aaa,NBM,00:00:16
BBB,bbb,NBM,00:45:19
CCC,ccc,NDB,00:00:01
results in:
CCC,ccc,00:00:01
BBB,bbb,04:59:43
AAA,aaa,02:24:23
You can adapt this with small adjustments to extract the date from extra columns.
Let me give you an example in case you decide to use SQLite. You didn't specify a language but I will use Python because it can be read as pseudocode. This part creates your sqlite file:
import csv
import sqlite3
con = sqlite3.Connection('my_sqlite_file.sqlite')
con.text_factory = str
cur = con.cursor()
cur.execute('CREATE TABLE "mytable" ("field1" varchar, \
"field2" varchar, "field3" varchar);')
and you use the command:
cur.executemany('INSERT INTO mytable VALUES (?, ?, ?)', list_of_values)
to insert rows in your database once you have read them from the csv file. Notice that we only created three fields in the table, so we are only inserting 3 values from your list_of_values; that's why we are using (?, ?, ?). Remember to call con.commit() afterwards so that the inserted rows are actually saved.