My script only opens a Postgres session but does not process any commands after that.
#/bin/bash
filename='mac_addresses.txt'
filelines=`cat $filename`
echo Start
for line in $filelines ; do
psql pcoip_mc_db postgres
update endpoint set endpoint_group_id = 15 where mac_address='$filelines';
\q
done
The expected result is for this script to go line by line through the mac_addresses.txt file and, after connecting to Postgres, run this command for every mac address in the file:
update endpoint set endpoint_group_id = 15 where mac_address='<mac address>';
The problem is that the update and the \q are not handled as input to the psql command, but as shell commands. You have to tell bash that this is supposed to be the standard input for psql, for example with a “here document”:
#!/bin/bash
filename='mac_addresses.txt'
filelines=`cat $filename`
echo Start
for line in $filelines ; do
psql pcoip_mc_db postgres <<EOF
update endpoint set endpoint_group_id = 15 where mac_address='$line';
EOF
done
Warning: this code is still unsafe and vulnerable to SQL injection. If any of the entries in the file contain spaces or single quotes, you will get errors and worse.
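If you want to harden it, one option is to let psql do the quoting itself, using its -v variables and :'var' interpolation, which escapes the value as a proper SQL literal. A minimal sketch (still one connection per line, but injection-safe):
while IFS= read -r mac; do
psql pcoip_mc_db postgres -v mac="$mac" <<'EOF'
update endpoint set endpoint_group_id = 15 where mac_address = :'mac';
EOF
done < mac_addresses.txt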
These 3 lines are not doing what you expect:
psql pcoip_mc_db postgres
update endpoint set endpoint_group_id = 15 where mac_address='$filelines';
\q
Each line should be a bash command. So you need to wrap the SQL query in a string and then pass it to the psql command. Like this:
psql pcoip_mc_db postgres -c "update endpoint set endpoint_group_id = 15 where mac_address='$line';"
(-c tells psql to execute the string as an SQL command)
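Putting it together, the whole loop might look like this (a sketch using while read instead of cat, so each line is taken verbatim; the SQL injection caveat above still applies):
#!/bin/bash
filename='mac_addresses.txt'
echo Start
while IFS= read -r line; do
psql pcoip_mc_db postgres -c "update endpoint set endpoint_group_id = 15 where mac_address='$line';"
done < "$filename"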
Also, this script will be a bit inefficient, as it connects to and disconnects from the database for every line of the file. A more idiomatic script would transform each line of the file into an appropriate SQL statement and then pipe all the generated SQL into a single psql connection. This can replace your script with a single line:
<"$filename" awk '{print "update endpoint set endpoint_group_id = 15 where mac_address=\'"'"'"$0"\'"'"';"}' | psql pcoip_mc_db postgres
(As a further improvement, you could even generate a single SQL query using an IN clause, such as: update endpoint set endpoint_group_id = 15 where mac_address IN ('mac1', 'mac2', ...);)
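A sketch of that variant, building the quoted list with awk (the same injection caveat applies to the file contents):
macs=$(awk '{printf "%s\047%s\047", (NR==1 ? "" : ", "), $0}' "$filename")
psql pcoip_mc_db postgres -c "update endpoint set endpoint_group_id = 15 where mac_address IN ($macs);"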
I have a shell script that runs a command:
psql -h $DBHOST -U $DBUSERNAME -c "\copy sometable FROM '$PWD/sometable.csv' WITH DELIMITER ',' CSV HEADER"
which works fine.
Now, since I need to implement some more advanced logic, I am migrating some of these commands to Node.js code.
Is it possible to run this \copy command with postgres-node?
If not, an alternative would be to run the command as-is as a shell command from Node.js with require('child_process').spawn.
You are looking for https://github.com/brianc/node-pg-copy-streams, I suppose. It's the same author's "extension" to node-pg.
I am new to Linux and am trying to create an alias that starts the MongoDB service.
The original command is sudo service mongod start. I want to generalise this usage for any service,
i.e. something like alias startservice="echo <password> | sudo -S $1 start".
So calling startservice mongod should run the original command. I came to know that I can use functions for this, but I don't have a clue how to do it either way.
I also want the function I create to be accessible across terminals, and I am not sure how to create functions that act in this manner. Please help me with this.
I have gone through these two links:
parameter substitution in bash aliases
Alias with Argument in Bash - Mac
Your help would be appreciated.
Example of a function in bash:
startService(){
echo "your-password" | sudo -S service "$1" start
}
startService mongod # test
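To make the function available in every new terminal, you can append it to your ~/.bashrc (a common convention; adjust the path for your shell) and reload it:
cat >> ~/.bashrc <<'EOF'
# start a service by name, e.g. startService mongod
# caution: this stores your password in plain text
startService(){
echo "your-password" | sudo -S service "$1" start
}
EOF
source ~/.bashrc
startService mongod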
I want to redirect the output of a mongodb command into a file, but it's not working. I searched a lot on the net, but none of the commands worked for me.
> mongo --quiet 99.99.99.99/db --eval 'printjson(db.productAttribute.distinct('productId'))' > /home/myname/output_query.json
Wed May 13 12:28:58.022 SyntaxError: Unexpected identifier
> mongo --quiet db --eval 'printjson(db.productAttribute.distinct('productId'))' > /home/myname/output_query.json
Wed May 13 12:29:09.896 SyntaxError: Unexpected identifier
The command is simple, and I don't want to put it into a separate .js file. Also, I am trying to execute this command from the mongodb shell itself.
First, you should not run this in the mongo shell. It will only work from the Linux shell: mongo is an executable that you invoke from the Linux shell, but you are trying to call it from inside the mongo shell itself.
Second, use double quotes around productId in your command. The inner single quotes around productId close the outer single-quoted --eval argument, so productId reaches mongo unquoted, which produces the SyntaxError.
So, open a terminal window and run the following. It will work.
$ mongo --quiet 99.99.99.99/db --eval 'printjson(db.productAttribute.distinct("productId"))' > /home/myname/output_query.json
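Equivalently, you can invert the nesting, double quotes outside and single quotes inside, as long as the two quoting levels differ:
$ mongo --quiet 99.99.99.99/db --eval "printjson(db.productAttribute.distinct('productId'))" > /home/myname/output_query.json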
I've been using the psql Postgres terminal to import CSV files into tables using the following
COPY tbname FROM
'/tmp/the_file.csv'
delimiter '|' csv;
which works fine except that I have to be logged into the psql terminal to run it.
I would like to know if there is a way to run a command like this from the Linux shell command line, similar to how Postgres allows a shell command like the one below:
/opt/postgresql/bin/pg_dump dbname > /tmp/dbname.sql
That command dumps a database from the Linux shell without being logged into the psql terminal.
The solution in the accepted answer only works on the server, and only when the user executing the query has permission to read the file, as explained in this SO answer.
Otherwise, a more flexible approach is to replace SQL's COPY command with psql's "meta-command" called \copy, which takes all the same options as the "real" COPY but is run inside the client (with no need for a ; at the end):
psql -c "\copy tbname FROM '/tmp/the_file.csv' delimiter '|' csv"
As per the docs, the \copy command:
Performs a frontend (client) copy. This is an operation that runs an SQL COPY command, but instead of the server reading or writing the specified file, psql reads or writes the file and routes the data between the server and the local file system. This means that file accessibility and privileges are those of the local user, not the server, and no SQL superuser privileges are required.
In addition, if the_file.csv contains a header in the first line, it can be recognized by adding header at the end of the above command:
psql -c "\copy tbname FROM '/tmp/the_file.csv' delimiter '|' csv header"
As stated in The PostgreSQL Documentation (II. PostgreSQL Client Applications - psql), you can pass a command to psql (the PostgreSQL interactive terminal) with the -c switch. Your options are:
1. Client-side CSV: \copy meta-command
performs the SQL COPY command, but the file is read on the client and the content is routed to the server.
psql -c "\copy tbname FROM '/tmp/the_file.csv' delimiter '|' csv"
(client-side option originally mentioned in this answer)
2. Server-side CSV: SQL COPY command
reads the file on the server (current user needs to have the necessary permissions):
psql -c "COPY tbname FROM '/tmp/the_file.csv' delimiter '|' csv;"
the DB roles needed for reading the file on the server:
COPY naming a file or command is only allowed to database superusers
or users who are granted one of the default roles
pg_read_server_files, pg_write_server_files, or
pg_execute_server_program
Also, the PostgreSQL server process needs to have access to the file.
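For example, on PostgreSQL 11 or later a superuser could grant the read role to an ordinary user (myuser is a placeholder):
psql -U postgres -c 'GRANT pg_read_server_files TO myuser;'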
To complete the previous answer, I would suggest:
psql -d your_dbname --user=db_username -c "COPY tbname FROM '/tmp/the_file.csv' delimiter '|' csv;"
The most flexible way is to use a shell HERE document, which allows you to use shell variables inside your query, even inside (double or single) quotes:
#!/bin/sh
THE_USER=moi
THE_DB=stuff
THE_TABLE=personnel
PSQL=/opt/postgresql/bin/psql
THE_DIR=/tmp
THE_FILE=the_file.csv
${PSQL} -U ${THE_USER} ${THE_DB} <<OMG
COPY ${THE_TABLE} FROM '${THE_DIR}/${THE_FILE}' delimiter '|' csv;
OMG
I have a command I run to check if a certain db exists.
I want to do it locally and via ssh on a remote server.
The command is as so:
mysqlshow -uroot | grep -o $DB_NAME
My question is whether I can use the same command for two variables,
the only difference being an ssh <remote-server> prefix before one of them.
Something along the lines of the !! history expansion in the CLI:
LOCAL_DB=mysqlshow -uroot | grep -o $DB_NAME
REMOTE_DB=ssh <remote-host> !!
Something like this, perhaps?
cmd="whoami"
eval $cmd
ssh remote#host $cmd
eval will run the command in the string $cmd locally
Also, for checking tables, it's safer to ask for the table name explicitly via a query:
SHOW TABLES LIKE 'yourtable';
and for databases:
SHOW DATABASES LIKE 'yourdb';
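Putting the two together, a sketch that checks for the database both locally and remotely (assuming the mysql client is installed on both machines and user@host is your remote login):
DB_NAME=mydb
cmd="mysql -uroot -e \"SHOW DATABASES LIKE '$DB_NAME';\""
eval "$cmd"            # run locally
ssh user@host "$cmd"   # run the same command on the remote server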
You can create a function in .bashrc, something like:
function showrdb() {
ssh remote@host "$1"
}
export -f showrdb
and then source .bashrc and call the function like:
showrdb "command you want to run on remote host"
Or, alternatively, you can create a shell script that contains the same function (or only the ssh line) and call the script as
./scriptname "command to execute on the remote host"
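For instance, a minimal sketch (showrdb.sh and remote@host are placeholders):
#!/bin/bash
# showrdb.sh - run the command given as the first argument on the remote host
ssh remote@host "$1"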
But I find the first approach more comfortable.