PostgreSQL import file - Linux

In PostgreSQL and bash (Linux), is there a way to import a file directly from the filesystem? Something like:
[execute.sh]
psql .... -f insert.txt
[insert.txt]
insert into table(id,file) values(1,import('/path/to/file'))
There seems to be no import function, and no bytea_import either. lo_import stores the file but returns an integer OID, and I don't know how to get the file contents back into the row (the files are small, so lo_import seems inappropriate here).
Also, how do I run the statement in insert.txt against PostgreSQL?

I'm not sure what you're after, but if you have a script with SQL statements, for example the insert statements that you mention, you can run psql and then execute the script from within it. For example:
postgres@server:~$ psql dbname
psql (8.4.1)
Type "help" for help.
dbname=# \i /tmp/queries.sql
This will run the statements in /tmp/queries.sql.
Hope this was what you asked for.

If you need to pass more detailed connection parameters:
$ psql -h [host] -p [port] -d [databaseName] -U [user] -f [/absolute/path/to/file]

The manual has some examples:
testdb=> \set content '''' `cat my_file.txt` ''''
testdb=> INSERT INTO my_table VALUES (:content);
One problem with this approach, which the manual notes: if my_file.txt contains single quotes, they must be escaped so that they don't cause a syntax error when the INSERT is processed.
See http://www.postgresql.org/docs/8.4/interactive/app-psql.html
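If stepping outside psql is an option, reading the file in a client program and binding it as a query parameter sidesteps the quoting problem entirely. A minimal Python sketch, assuming the psycopg2 package and a bytea column; the table name and connection details here are placeholders:
import psycopg2

# Connection parameters are placeholders; adjust as needed.
conn = psycopg2.connect(dbname='dbname')

with open('/path/to/file', 'rb') as f:
    data = f.read()

cur = conn.cursor()
# The driver escapes the bytes itself; no shell or SQL quoting involved.
cur.execute('INSERT INTO my_table (id, file) VALUES (%s, %s)',
            (1, psycopg2.Binary(data)))
conn.commit()
conn.close()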

Related

Insert data from STDIN to specific position of string

Case: I want to insert some data from a file at a specific position in a string.
Example:
cat users.log | mysql -h localhost -u mysql -e 'SELECT id FROM users WHERE ${USERS_IDS}'
I want to replace the ${USERS_IDS} placeholder with the data from the file.
I'm sure this case is very common, but I haven't found a suitable solution.
To insert the contents of a file at a specific position in a string, use the facilities of your shell.
For example, Bash has the $(< filename) construction:
mysql -h localhost -u mysql -e 'SELECT id FROM users WHERE '"$(< users.log)"
If the data needs a little editing before it is inserted, there is the $(command) construction:
mysql -h localhost -u mysql -e 'SELECT id FROM users WHERE '"$(< users.log sed -e 's/.../.../')"
Whether the file substitution or the command substitution is to be enclosed in double quotes depends on the specific use case.
And, if the intended use is really to feed values into a MySQL command, beware of Bobby Tables.
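For what it's worth, a parameterized query keeps the file contents out of the SQL text altogether. A rough Python sketch, assuming the mysql-connector-python package and that users.log holds one id per line; all names here are placeholders:
import mysql.connector

conn = mysql.connector.connect(host='localhost', user='mysql', database='mydb')
cur = conn.cursor()

with open('users.log') as f:
    ids = [line.strip() for line in f if line.strip()]

# One %s placeholder per id; the driver escapes every value.
# (An empty id list would produce invalid SQL, so guard against it.)
placeholders = ', '.join(['%s'] * len(ids))
cur.execute('SELECT id FROM users WHERE id IN (%s)' % placeholders, ids)
for (user_id,) in cur:
    print(user_id)

conn.close()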

How to call and pipe multiple postgres commands from python

In order to copy a file-like object to a postgres database, I take the following steps:
~$ sudo psql -U postgres
password for root:
password for user postgres:
postgres=# \c migration_v0
You are now connected to database "migration_v0" as user "postgres".
migration_v0=# cat file.csv | \copy table1 from stdin csv
I want to take the exact same steps, but from within Python and want to pass a StringIO buffer instead of a literal file. My first attempt consisted of the following steps:
# test.py
import subprocess

fmt = r"copy table1 FROM stdin csv"
sql = fmt.format(string_io)
psql = ['psql', '-U', 'postgres', '-c', sql]
output = subprocess.check_output(psql)
print(output)
The command is executed (a prompt pops up to type the password for the user postgres) but I get the following error:
ERROR: relation "table1" does not exist
This happens because I am trying to execute \copy on the default database postgres instead of migration_v0. I therefore want to include both commands (\c migration_v0 and \copy ...) in the subprocess call, but I don't know how to do this, since psql's -c flag takes only a single command.
I looked up a workaround and came across this command-line example:
\c migration_v0 \\ \copy ... | psql -U postgres
but I have no idea how to port it to Python code.
Any suggestions on how I can pull this off?
Edit 1
I realized the -d flag also selects the database, so I don't need to run multiple commands anymore. My code now looks like this:
p = subprocess.Popen(
    ['psql', '-U', 'postgres',
     '-d', 'migration_v0',
     '-c', r'\copy table1 FROM stdin csv'],
    shell=False,
    stdin=string_io)
but I get the following error:
io.UnsupportedOperation: fileno
Apparently StringIO doesn't implement fileno. At this point I'm wondering whether what I want is even possible through a subprocess call.
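It is possible: psql only needs a readable stdin, and while StringIO cannot provide a real file descriptor, stdin=subprocess.PIPE creates one, and communicate() writes the buffer's contents into it. A minimal sketch along those lines (database, table, and command taken from the question; the sample rows are made up):
import subprocess
from io import StringIO

string_io = StringIO('1,foo\n2,bar\n')  # stand-in for the real buffer

p = subprocess.Popen(
    ['psql', '-U', 'postgres', '-d', 'migration_v0',
     '-c', r'\copy table1 FROM stdin csv'],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)

# communicate() feeds the bytes to psql's stdin and closes it,
# which is what signals end-of-data for \copy.
out, err = p.communicate(input=string_io.getvalue().encode())
print(out.decode(), err.decode())
Alternatively, if a Python driver is acceptable, psycopg2's copy_expert takes any file-like object directly, StringIO included, which avoids the subprocess altogether.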

how to use /dev/null for bash command

I am using the command below to get the result of my SQL query:
su - postgres -c 'psql -d dbname' with stdin "COPY ( my SQL query ) TO STDOUT WITH CSV HEADER"
This works fine on my server, but on a different machine it prints bash warnings along with the output of the SQL query.
For example -
/etc/profile: line 46: HISTSIZE: readonly variable
/etc/profile: line 50: HISTCONTROL: readonly variable
/etc/profile.d/20-tmout.sh: line 1: TMOUT: readonly variable
/etc/profile.d/history.sh: line 6: hcmnt_tty: readonly variable
name
abc
Please let me know a way to skip the warning messages above and get only the data.
If /dev/null is the right tool here, how do I modify the command above so that I get the data only?
If what you mean is "how do I discard only the error output?", the way to go is to redirect the standard error stream to oblivion (/dev/null), like so:
your-command 2>/dev/null
That way, whatever the command writes to standard output passes through, but any output to the standard error stream is discarded, so you won't see those warning messages.
By the way, the 2 here is the file descriptor of the standard error stream.
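The same redirection is available when driving the command from a Python script: subprocess.DEVNULL plays the role of 2>/dev/null. A sketch, keeping the question's placeholder query and database name:
import subprocess

# stderr=subprocess.DEVNULL discards everything written to standard error;
# only standard output is captured and returned.
out = subprocess.check_output(
    ['psql', '-d', 'dbname', '-c',
     'COPY ( my SQL query ) TO STDOUT WITH CSV HEADER'],
    stderr=subprocess.DEVNULL)
print(out.decode())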
Sorry, this is untested, but I hit this same error; your db session isn't read/write. You can echo the statements into psql to force a proper session, as follows. I'm unsure how stdin may be affected:
echo 'SET TRANSACTION READ WRITE; SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE ; COPY ( my SQL query ) TO STDOUT WITH CSV HEADER' | su - postgres -c 'psql -d dbname' with stdin
caution - bash hack: grep -v "readonly" will also drop any data rows that happen to contain that word
su - postgres -c 'psql -d dbname' with stdin "COPY ( my SQL query ) TO STDOUT WITH CSV HEADER" | grep -v "readonly"

Sequelize-auto for SQLite

I'm trying to autogenerate my data models on Sequelize for SQLite, using sequelize-auto on Windows. I have created my SQLite file with the schema only, no data inside.
I also installed everything as indicated here.
The command I'm using looks like this:
sequelize-auto -h localhost -u dontcare -d "E:\full\path\to\my\database.db" --dialect sqlite
Also tried with some other path styles like './database.db' etc.
And this is the output I'm getting:
Executing (default): SELECT name FROM `sqlite_master` WHERE type='table' and name!='sqlite_sequence';
Done!
After this, the script creates a folder called "models" with nothing inside.
Does somebody know what's happening here?
Many thanks!
I've found the problem:
-d should be the database name, not the path to the file.
To specify the file path, use the -c option pointing at a JSON options file; its storage attribute holds that path.
The command should look like this:
sequelize-auto -h localhost -u dontcare -d databasename --dialect sqlite -c options.json
And options.json looks like this:
{
  "storage": "./database_file_name.db"
}
I hope this will be useful to someone.
Bye!

restoring mysql db from the contents of split up mysqldump

Hi, my database's backups have grown past 2 GB, so I'm looking at options for splitting the dump file and then reassembling the pieces to restore the database.
I've got a series of files produced by the following backup shell script:
DATE_STRING=`date +%u%a`
BACKUP_DIR=/home/myhome/backups
/usr/local/mysql_versions/mysql-5.0.27/bin/mysqldump \
    --defaults-file=/usr/local/mysql_versions/mysql-5.0.27/my.cnf \
    --user=myuser \
    --password=mypw \
    --add-drop-table \
    --single-transaction \
    mydb |
split -b 100000000 - rank-$DATE_STRING.sql-;
This produces a sequence of files like:
mydb-3Wed.sql-aa
mydb-3Wed.sql-ab
mydb-3Wed.sql-ac
...
My question is: what is the corresponding sequence of commands to restore the database on Linux?
Previously I was using this command:
/usr/local/mysql_versions/mysql-5.0.27/bin/mysql \
    --defaults-file=/usr/local/mysql_versions/mysql-5.0.27/my.cnf \
    --user=myuser \
    --password=mypw \
    -D mydb < the_old_big_dbdump.sql
Any suggestions, even if they don't involve split/cat, would be greatly appreciated.
I don't see why you can't just do:
cat mydb-3Wed.sql-* | /usr/local/mysql_versions/mysql-5.0.27/bin/mysql --defaults-file=/usr/local/mysql_versions/mysql-5.0.27/my.cnf --user=myuser --password=mypw -D mydb
The * glob expands in sorted order, which matches the aa, ab, ac suffix sequence that split produces; still, check with ls mydb-3Wed.sql-* that the files really do come out in the right order.
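If you'd rather drive the restore from a script, here is a short Python sketch of the same pipeline (paths and names taken from the question): it opens one mysql process and streams each piece into its stdin, in sorted order, exactly as cat would.
import glob
import subprocess

MYSQL = '/usr/local/mysql_versions/mysql-5.0.27/bin/mysql'

p = subprocess.Popen(
    [MYSQL,
     '--defaults-file=/usr/local/mysql_versions/mysql-5.0.27/my.cnf',
     '--user=myuser', '--password=mypw', '-D', 'mydb'],
    stdin=subprocess.PIPE)

# Feed each chunk in lexicographic order (aa, ab, ac, ...).
for part in sorted(glob.glob('mydb-3Wed.sql-*')):
    with open(part, 'rb') as f:
        while True:
            chunk = f.read(1 << 20)  # 1 MiB at a time
            if not chunk:
                break
            p.stdin.write(chunk)

p.stdin.close()
p.wait()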
