Run .sql within .sh file under the search path - linux

Currently, the search path in .sh file is set as:
export PSQL_PREAMBLE='SET search_path TO public,mimiciii'
I am running:
{ echo "${PSQL_PREAMBLE}; DROP TABLE IF EXISTS ventilation_durations; CREATE TABLE ventilation_durations AS "; cat durations/ventilation_durations.sql; } | sed -r -e "${REGEX_DATETIME_DIFF}" | sed -r -e "${REGEX_SCHEMA}" | psql ${CONNSTR}
Question: After running the code above in the .sh file, it seems that the command did not pick up the search path set in PSQL_PREAMBLE: it cannot find the functions I created in the mimiciii schema.
(The full script is here: https://github.com/MIT-LCP/mimic-code/blob/main/mimic-iii/concepts/postgres_make_concepts.sh)

That should work just fine, but it seems unnecessarily complicated.
Just run
export PGOPTIONS='-c search_path=public,mimiciii'
at the beginning of your script, and search_path will automatically be set like that for all future psql invocations.
See the documentation of PGOPTIONS, the options connection parameter and the -c option of postgres.
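For example, the relevant part of the script could shrink to a sketch like this (CONNSTR and the sed filters are assumed to be defined earlier, as in the linked script):
export PGOPTIONS='-c search_path=public,mimiciii'
# every psql invocation below now resolves unqualified names via public and mimiciii
{ echo "DROP TABLE IF EXISTS ventilation_durations; CREATE TABLE ventilation_durations AS "; cat durations/ventilation_durations.sql; } | sed -r -e "${REGEX_DATETIME_DIFF}" | sed -r -e "${REGEX_SCHEMA}" | psql ${CONNSTR}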

Related

How to run shell script located on Linux server from Windows environment?

I am trying to run a shell script located on a Linux server from Windows. The shell script does two things:
Do a sed command to replace text in an .sql file in the same directory.
Run the .sql file with sqlplus.
The shell script:
#!/bin/sh
arg1=$1
arg2=$2
arg3=$(echo $arg1 | tr '[:lower:]' '[:upper:]')
arg4=$(echo $arg2 | tr '[:lower:]' '[:upper:]')
echo $arg1
echo $arg2
echo $arg3
echo $arg4
sed -i "s/$arg3/$arg4/g" sequence.$arg1.sql
sqlplus $arg2/$arg2@MYDB <<EOF
@sequence.$arg1.sql
exit;
EOF
(My database is located on the same Linux server.)
1) Script runs correctly when I log in to the server via MobaXterm
1. Connect to the server with userID.
2. Set my_env.
3. cd to the shell script's directory.
4. Run the script with ./myscript.sh and its arguments.
2) Same shell script runs successfully via .cmd manually
Create a Windows script test.cmd on my Windows PC.
In the .cmd file I have the line:
plink.exe -ssh userID@Server
After the console window pops up, I repeat steps 2 to 4 and the script runs successfully.
What I am failing to do is automate the whole process.
Here's the line in my .cmd file which I attempted:
plink.exe -ssh userID@Server /myfilepath/myscript.sh %arg1% %arg2%
I can see the arguments passed correctly using multiple echo in the shell script. However, the shell script fails to locate the .sql file.
Error log:
/mypath/myscript.sh[1]: !/bin/sh^M not found [No such file or directory]
myarg1value
myarg2value
:No such file or directory[myarg1value]
/mypath/myscript.sh[12]: sqlplus: not found [No such file or directory]
I also tried below, but unfortunately with same result:
plink.exe -ssh userID@Server -m command.txt
Where file command.txt contains:
. my_env
cd /filepath/
./myscript.sh %arg_with_actual_value%
I do not know why it is not working, especially when 2) works and the script is relatively simple.
Do I assume things incorrectly about plink (path, variable, etc.)?
Is Cygwin the only way out?
I tried not to rely on yet another tool as I have been using plink.
EDIT: While the line
sed -i "s/$arg3/$arg4/g" sequence.$arg1.sql
fails to run in the .sh, I can run it from the .cmd file itself via:
plink.exe -ssh userID@Server sed -i "s/%arg3%/%arg4%/g" /myfilepath/sequence.%arg1%.sql
Hence I suspect the problem comes from the .sh file not having the required components to run (i.e. environment variables, path, etc.)
This is not a full solution, but it partially fixed the issue, thanks to Martin Prikryl's and Mofi's input:
In command.txt, the following need to be set (see the sketch after this list):
ORACLE_SID
ORACLE_HOME
PATH
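For example, command.txt might look like this minimal sketch (all values are illustrative placeholders; alternatively, just source my_env as in the manual session):
export ORACLE_SID=MYDB
export ORACLE_HOME=/oracle/product/12.1.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
cd /myfilepath
./myscript.sh myarg1value myarg2value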
After these are set, sqlplus and sed work normally. However, passing values from the .cmd file through plink to the Linux shell script still has an issue: the variable receives the assigned value along with some unreadable characters. In this case,
sqlplus $arg2/$arg2@MYDB
the login fails because arg2 contains extra characters.
@sequence.$arg1.sql
This line also fails, as it tries to open two files, one called sequence.myvalue and another called "%s"; I suspect the assigned variable contains some sort of unreadable newline character.
EDIT: Fixed. We can use the same treatment as with sed: run sqlplus directly from plink instead of passing values into a .sh script on Linux:
plink.exe -ssh userID@Server sqlplus %arg2%/%arg2%@MYDB @/myfilepath/sequence.%arg1%.sql
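Note: the !/bin/sh^M not found error in the log above also points to Windows (CRLF) line endings in myscript.sh itself. If that error resurfaces, stripping the carriage returns on the Linux side is a quick fix; either of these works:
dos2unix /myfilepath/myscript.sh
sed -i 's/\r$//' /myfilepath/myscript.sh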

Including a date/time in a file name dumped from psql

I'm planning on running a .sh script that will run periodically through cron on linux. I'm running postgres 8.4 on centos.
My script will have something like this in it:
psql -U username -d db_name -c "COPY orders TO stdout DELIMITER ',' CSV HEADER;" > orders.csv
I know there are other ways to dump tables into csv files but this is the only one I could use without admin rights.
My problem is naming the files. I want to specifically name the file something along the lines of:
yyyymmdd-hhmm-orders.csv
I'm not the best scripting guru out there (as you can tell) so how can I get the dumps to dynamically do this?
Thanks
`date '+%Y%m%d-%H%M'`-orders.csv
I personally also add the seconds %S to the file name
man date
Will show the other formatting options
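Putting the psql command from the question together with that date format, the line in the script might look like this sketch:
psql -U username -d db_name -c "COPY orders TO stdout DELIMITER ',' CSV HEADER;" > "$(date '+%Y%m%d-%H%M')-orders.csv"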
The code below also works fine: assign the date format to a variable and use it in the file name.
Code:
i=$(date +%Y%m%d-%H%M%S)
psql -U username -d db_name -c "COPY orders TO stdout DELIMITER ',' CSV HEADER;" > "$i-orders.csv"

PostgreSQL CSV import from command line

I've been using the psql Postgres terminal to import CSV files into tables using the following
COPY tbname FROM
'/tmp/the_file.csv'
delimiter '|' csv;
which works fine except that I have to be logged into the psql terminal to run it.
I would like to know if there is a way to run a command like this from the Linux shell command line, similar to how Postgres allows a shell command like the one below:
/opt/postgresql/bin/pg_dump dbname > /tmp/dbname.sql
This allows the dumping of a database from the Linux shell without being logged into psql terminal.
The solution in the accepted answer only works on the server, and only when the user executing the query has permission to read the file, as explained in this SO answer.
Otherwise, a more flexible approach is to replace the SQL COPY command with psql's "meta-command" called \copy, which takes all the same options as the "real" COPY but is run inside the client (with no need for a ; at the end):
psql -c "\copy tbname FROM '/tmp/the_file.csv' delimiter '|' csv"
As per docs, the \copy command:
Performs a frontend (client) copy. This is an operation that runs an SQL COPY command, but instead of the server reading or writing the specified file, psql reads or writes the file and routes the data between the server and the local file system. This means that file accessibility and privileges are those of the local user, not the server, and no SQL superuser privileges are required.
In addition, if the_file.csv contains a header row in the first line, it can be recognized by adding header at the end of the above command:
psql -c "\copy tbname FROM '/tmp/the_file.csv' delimiter '|' csv header"
As stated in The PostgreSQL Documentation (II. PostgreSQL Client Applications - psql) you can pass a command to psql (PostgreSQL interactive terminal) with the switch -c. Your options are:
1. Client-side CSV: \copy meta-command
performs the SQL COPY command, but the file is read on the client and the content is routed to the server.
psql -c "\copy tbname FROM '/tmp/the_file.csv' delimiter '|' csv"
(client-side option originally mentioned in this answer)
2. Server-side CSV: SQL COPY command
reads the file on the server (current user needs to have the necessary permissions):
psql -c "COPY tbname FROM '/tmp/the_file.csv' delimiter '|' csv;"
The DB roles needed for reading the file on the server:
COPY naming a file or command is only allowed to database superusers or users who are granted one of the default roles pg_read_server_files, pg_write_server_files, or pg_execute_server_program.
Also, the PostgreSQL server process needs to have access to the file.
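If the executing user lacks one of those roles, a superuser can grant it. A sketch from the shell (these roles exist in PostgreSQL 11 and later; db_name and db_username are placeholders):
psql -U postgres -d db_name -c "GRANT pg_read_server_files TO db_username;"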
To complete the previous answer, I would suggest:
psql -d your_dbname --user=db_username -c "COPY tbname FROM '/tmp/the_file.csv' delimiter '|' csv;"
The most flexible way is to use a shell HERE document, which allows you to use shell variables inside your query, even inside (double or single) quotes:
#!/bin/sh
THE_USER=moi
THE_DB=stuff
THE_TABLE=personnel
PSQL=/opt/postgresql/bin/psql
THE_DIR=/tmp
THE_FILE=the_file.csv
${PSQL} -U ${THE_USER} ${THE_DB} <<OMG
COPY ${THE_TABLE} FROM '${THE_DIR}/${THE_FILE}' delimiter '|' csv;
OMG
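The here-document also works with the client-side \copy meta-command from the answers above; in an unquoted here-document the backslash passes through unchanged, so this variant (same placeholder variables) reads the file with the client user's permissions:
${PSQL} -U ${THE_USER} ${THE_DB} <<OMG
\copy ${THE_TABLE} FROM '${THE_DIR}/${THE_FILE}' delimiter '|' csv
OMG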

Use the same command in shell script, just with a prefix

I have a command I run to check if a certain db exists.
I want to do it locally and via ssh on a remote server.
The command is as so:
mysqlshow -uroot | grep -o $DB_NAME
My question is whether I can use the same command in two variables, the only difference being an ssh <remote-server> prefix on one of them.
Something along the lines of the !! history expansion in the CLI:
LOCAL_DB=mysqlshow -uroot | grep -o $DB_NAME
REMOTE_DB=ssh <remote-host> !!
something like this perhaps?
cmd="whoami"
eval $cmd
ssh remote@host $cmd
eval will run the command in the string $cmd locally
also, for checking tables, it's safer to ask for the table name explicitly via a query
SHOW TABLES LIKE 'yourtable';
and for databases:
SHOW DATABASES LIKE 'yourdb';
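Applied to the original check, a sketch (remote@host is a placeholder, and DB_NAME is assumed to be set earlier in the script):
cmd="mysqlshow -uroot | grep -o $DB_NAME"
LOCAL_DB=$(eval "$cmd")              # run the pipeline locally
REMOTE_DB=$(ssh remote@host "$cmd")  # same string, run on the remote host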
You can create a function in .bashrc, something like:
function showrdb() {
    ssh remote@host "$1"
}
export -f showrdb
and then source .bashrc and call the function like:
showrdb "command you want to run on remote host"
Or alternatively, you can create a shell script containing the same function (or only the ssh line) and call the script as
./scriptname "command to execute on remote host"
But I find the first approach more comfortable.

Proper format for a mysqldump dynamic filename in a cron?

I have a crontab set up that errors out every time it runs, although the command works fine in the shell. The problem is the format I'm using to insert the date into the filename of the database backup. Does anyone know the syntax I need so cron will accept the date in the filename?
mysqldump -hServer -uUser -pPassword Table | gzip > /home/directory/backups/table.$(date +"%Y-%m-%d").gz
Thanks in advance!
What about something like this for the "command" part of the crontab:
mysqldump --host=HOST --user=USER --password=PASSWORD DATABASE TABLE | gzip > /tmp/table.`date +"\%Y-\%m-\%d"`.gz
What has changed from the OP is the escaping of the date format:
date +"\%Y-\%m-\%d"
(And I used backticks, but that shouldn't make much of a difference.)
(Another solution would be to put your original command in a shell script and execute that one from the crontab instead of the raw command; it would probably be easier to read/write ^^)
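Put together, the full crontab entry could look like this (the schedule is illustrative; the backslashes are required because cron treats an unescaped % as an end-of-command marker):
0 3 * * * mysqldump --host=HOST --user=USER --password=PASSWORD DATABASE TABLE | gzip > /tmp/table.`date +"\%Y-\%m-\%d"`.gz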
The most typical reason for "works in shell but not in cron" is that the commands you try to execute are not in cron's PATH. The reason is that the shell invoked from cron doesn't load the same files as your login shell.
Fix: add the absolute path to each command you try to execute.
The second thing I notice in your command: the syntax for running your date command doesn't look very portable. Change it to backticks, or put your whole command into a shell script (which you can also use to set your PATH) and execute that script from cron.
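Cron also accepts plain variable assignments above the schedule lines, so a third option is to extend PATH in the crontab itself rather than in each command (values illustrative):
PATH=/usr/local/bin:/usr/bin:/bin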
EDIT:
During the writing of my original reply my keyboard layout didn't have backticks, so check what Pascal wrote.
An example of what you could do with a shell script:
Copy the following to /usr/local/bin/dumptable.sh
#!/bin/sh
/usr/bin/mysqldump --host=HOST --user=USER --password=PASSWORD DATABASE TABLE | /bin/gzip > /tmp/table.`/bin/date +"%Y-%m-%d"`.gz
and then put /usr/local/bin/dumptable.sh into cron. (Note that inside the script the % signs are not escaped; the backslash escaping is only needed directly in a crontab line.)
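With everything inside the script, the crontab entry itself stays simple (the schedule is illustrative):
0 3 * * * /usr/local/bin/dumptable.sh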
