Yugabyte,
is there a way I can create the YB database and its schema by running one script file with all the commands mentioned here (https://docs.yugabyte.com/latest/quick-start/explore/ysql/#docker)?
You can execute queries with ysqlsh from the command line using -c; for example, we use that to create the database:
./bin/ysqlsh -c 'create database ybdemo;'
You can also execute SQL scripts with ysqlsh using -f (https://docs.yugabyte.com/latest/admin/ysqlsh/#f-filename-file-filename), for example:
./bin/ysqlsh -d ybdemo -f share/schema.sql
This executes the schema.sql script against the ybdemo database. Repeat that for every .sql file.
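If you want everything in one script, here is a minimal sketch (the share/ directory and file names are illustrative) that creates the database and then applies every .sql file in turn:

#!/bin/bash
# Create the database first
./bin/ysqlsh -c 'CREATE DATABASE ybdemo;'
# Apply each .sql file against the new database
for f in share/*.sql; do
    ./bin/ysqlsh -d ybdemo -f "$f"
done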
Currently, the search path in the .sh file is set as:
export PSQL_PREAMBLE='SET search_path TO public,mimiciii'
I am running:
{ echo "${PSQL_PREAMBLE}; DROP TABLE IF EXISTS ventilation_durations; CREATE TABLE ventilation_durations AS "; cat durations/ventilation_durations.sql; } | sed -r -e "${REGEX_DATETIME_DIFF}" | sed -r -e "${REGEX_SCHEMA}" | psql ${CONNSTR}
Question: after running the code above in the .sh file, it seems that it does not pick up the search path set in PSQL_PREAMBLE; it simply cannot find the functions I created in the mimiciii schema.
(The full script is here: https://github.com/MIT-LCP/mimic-code/blob/main/mimic-iii/concepts/postgres_make_concepts.sh)
That should work just fine, but it seems unnecessarily complicated.
Just run
export PGOPTIONS=-csearch_path=public,mimiciii
at the beginning of your script, and search_path will automatically be set like that for all future psql invocations.
See the documentation of PGOPTIONS, the options connection parameter and the -c option of postgres.
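A minimal sketch of the top of the script with that change (the sed filters from the original pipeline are omitted here for brevity; CONNSTR is your existing connection string):

#!/bin/bash
# Every psql started from this script now inherits this search_path,
# so no PSQL_PREAMBLE needs to be prepended to each query
export PGOPTIONS='-c search_path=public,mimiciii'

{ echo "DROP TABLE IF EXISTS ventilation_durations; CREATE TABLE ventilation_durations AS "; cat durations/ventilation_durations.sql; } | psql ${CONNSTR}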
I want to write a CentOS script that backs up data and copies the files to another server every day.
I want to set up a script that would dump the DB and then, once that is done, copy the dump file to another server.
As I understand it, I need to set up a file listing those commands and then add it to crontab.
Where I'm stuck is how to write that file, as I'm not familiar with Linux server commands. Would it be something like the below? What could I fix?
#!/bin/sh
backupscript -r ~/path/to/db ~/path/to/backup
sshpass -f "/path/to/passwordfile" scp -r /some/local/path user@example.com:/some/remote/path
But how will scp know when to run after backupscript is over?
You can use this type of script to back up and scp; I hope this helps you create your script. Since commands in a shell script run sequentially, the scp will only start once the backup command before it has finished.
DB backup script
#!/bin/bash
# Dump the database to a file
mysqldump -uroot -proot#123 dbname > /opt/db_dumps/dbname.sql
SCP script
#!/bin/bash
scp /opt/db_dumps/dbname.sql root@10.200.172.46:/opt/db_dumps/
# Optionally load the dump into the remote database
ssh root@10.200.172.46 "mysql dbname < /opt/db_dumps/dbname.sql"
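To address the sequencing question directly, you can also put both steps in one script and add that to crontab. A sketch reusing the paths and credentials above, with your sshpass approach so it can run unattended (the script path in the crontab line is a placeholder):

#!/bin/bash
# Dump first; '&&' makes the copy run only if the dump succeeded
mysqldump -uroot -proot#123 dbname > /opt/db_dumps/dbname.sql \
  && sshpass -f "/path/to/passwordfile" scp /opt/db_dumps/dbname.sql root@10.200.172.46:/opt/db_dumps/

A crontab entry to run it daily at 02:00 would then look like:

0 2 * * * /path/to/backup_and_copy.sh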
I have a shell script that runs a command:
psql -h $DBHOST -U $DBUSERNAME -c "\copy sometable FROM '$PWD/sometable.csv' WITH DELIMITER ',' CSV HEADER"
which works fine.
Now, as I have some requirements to implement more advanced logic, I am migrating some of these commands to nodejs code.
Is it possible to run this \copy command with postgres-node?
If not, I see an alternative: run the command as-is, as a shell command from nodejs, with require('child_process').spawn.
You are looking for https://github.com/brianc/node-pg-copy-streams, I suppose. It's the same author's "extension" to node-pg.
I've been using the psql Postgres terminal to import CSV files into tables using the following:
COPY tbname FROM
'/tmp/the_file.csv'
delimiter '|' csv;
which works fine except that I have to be logged into the psql terminal to run it.
I would like to know if anyone knows of a way to run a command similar to this from the Linux shell command line, just as Postgres allows a shell command like the one below:
/opt/postgresql/bin/pg_dump dbname > /tmp/dbname.sql
This dumps a database from the Linux shell without being logged into the psql terminal.
The solution in the accepted answer only works on the server, and only when the user executing the query has permission to read the file, as explained in this SO answer.
Otherwise, a more flexible approach is to replace the SQL COPY command with psql's "meta-command" called \copy, which takes all the same options as the "real" COPY but is run inside the client (with no need for a ; at the end):
psql -c "\copy tbname FROM '/tmp/the_file.csv' delimiter '|' csv"
As per the docs, the \copy command:
Performs a frontend (client) copy. This is an operation that runs an SQL COPY command, but instead of the server reading or writing the specified file, psql reads or writes the file and routes the data between the server and the local file system. This means that file accessibility and privileges are those of the local user, not the server, and no SQL superuser privileges are required.
In addition, if the_file.csv contains a header in its first line, it can be recognized by adding header at the end of the above command:
psql -c "\copy tbname FROM '/tmp/the_file.csv' delimiter '|' csv header"
As stated in The PostgreSQL Documentation (II. PostgreSQL Client Applications - psql), you can pass a command to psql (the PostgreSQL interactive terminal) with the -c switch. Your options are:
1. Client-side CSV: \copy meta-command
performs the SQL COPY command, but the file is read on the client and the content is routed to the server:
psql -c "\copy tbname FROM '/tmp/the_file.csv' delimiter '|' csv"
(client-side option originally mentioned in this answer)
2. Server-side CSV: SQL COPY command
reads the file on the server (current user needs to have the necessary permissions):
psql -c "COPY tbname FROM '/tmp/the_file.csv' delimiter '|' csv;"
the DB roles needed for reading the file on the server:
COPY naming a file or command is only allowed to database superusers
or users who are granted one of the default roles
pg_read_server_files, pg_write_server_files, or
pg_execute_server_program
also the PostgreSQL server process needs to have access to the file.
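If the executing user lacks such a role, a superuser can grant one; for example (the role name myuser is illustrative):

psql -c "GRANT pg_read_server_files TO myuser;"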
To complete the previous answer, I would suggest:
psql -d your_dbname --user=db_username -c "COPY tbname FROM '/tmp/the_file.csv' delimiter '|' csv;"
The most flexible way is to use a shell HERE document, which allows you to use shell variables inside your query, even inside (double or single) quotes:
#!/bin/sh
THE_USER=moi
THE_DB=stuff
THE_TABLE=personnel
PSQL=/opt/postgresql/bin/psql
THE_DIR=/tmp
THE_FILE=the_file.csv
${PSQL} -U ${THE_USER} ${THE_DB} <<OMG
COPY ${THE_TABLE} FROM '${THE_DIR}/${THE_FILE}' delimiter '|' csv;
OMG
I have 2 DB2 SQL scripts that I need to run. I have tried putting both of them in a bash script and executing them.
Here is script.sh:
#!/bin/bash
db2 -tf firstscript.sql;
db2 -tf secondscript.sql;
When I run this, I get the following error:
DB21034E The command was processed as an SQL statement because it was
not a valid Command Line Processor command. During SQL processing it
returned: SQL1024N A database connection does not exist.
SQLSTATE=08003
But I have made sure that the database connection already exists.
I think that the commands inside the SQL scripts are not executed sequentially, because when I run each command individually, there is no error.
Also, when I run both commands inline, i.e. db2 -tf firstscript.sql; db2 -tf secondscript.sql, even then the code works.
I thought that it could have something to do with #!/bin/bash, so I removed it from the script.sh file and then executed it. Even then, it returned the same error.
What would be the possible problem and its solution?
When you have an established connection, it is held in your current environment. When you call a bash script, it creates a subshell, and that subshell will not have any connection.
In order to solve this problem, you need to reuse the current environment by sourcing the script (as @jm666 said):
. ./script.sh
Note the dot followed by a space before the script name. For example:
db2 connect to sample
. ./script.sh
FYI, the commands inside the script will be executed sequentially, as you defined them.
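Alternatively, you can establish the connection inside script.sh itself, so it works as a plain (non-sourced) script; a sketch, reusing the sample database from above:

#!/bin/bash
# The connection is made in this script's own shell,
# so the db2 commands that follow can use it
db2 connect to sample
db2 -tf firstscript.sql
db2 -tf secondscript.sql
db2 connect reset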