Insert data from STDIN at a specific position in a string - linux

Case: I want to insert some data from a file at a specific position in a string.
Example:
cat users.log | mysql -h localhost -u mysql -e 'SELECT id FROM users WHERE ${USERS_IDS}'
I want to replace the ${USERS_IDS} placeholder with the data from the file.
I'm sure this is a common case, but I could not find a suitable solution.

To insert the contents of a file at a specific position in a string, you use the facilities of your shell.
For example, Bash has the $(< filename) construction:
mysql -h localhost -u mysql -e 'SELECT id FROM users WHERE '"$(< users.log)"
If the data to be inserted needs to be edited a little, you have the $(command) construction:
mysql -h localhost -u mysql -e 'SELECT id FROM users WHERE '"$(< users.log sed -e 's/.../.../')"
Whether the file substitution or the command substitution should be enclosed in double quotes depends on the specific use case.
And, if the intended use is really to feed values into a MySQL command, beware of Bobby Tables (SQL injection).
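For instance, if users.log holds one numeric id per line (an assumption about its format), a minimal sketch that builds an IN (...) clause would be:
ids=$(paste -sd, users.log)    # join the lines with commas, e.g. "1,2,3"
# sanitize the file contents first - see the Bobby Tables warning above
mysql -h localhost -u mysql -e 'SELECT id FROM users WHERE id IN ('"$ids"')'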

Related

How to pass a variable from bash to sqlplus

I have the below code in bash:
#!/bin/bash
USERS=tom1,tom2,tom3
and I want to drop all users from my variable "USERS" via sqlplus
#!/bin/bash
USERS=tom1,tom2,tom3
sqlplus -s system/*** <<EOF
*some code*
DROP USER tom1 CASCADE;
DROP USER tom2 CASCADE;
DROP USER tom3 CASCADE;
EOF
Please help.
Splitting the USERS variable inside the database with SQL/PLSQL is complex: it requires an understanding of REGEXP functions, and it would also require dynamic SQL to execute the drop queries, so I would not recommend it.
I would suggest splitting it within the shell script: create a SQL script and execute it in sqlplus.
USERS="tom1,tom2,tom3"
echo "${USERS}" | tr ',' '\n' | while read word; do
echo "drop user $word cascade;"
done >drop_users.sql
sqlplus -s system/pwd <<EOF
-- some code
@drop_users.sql
EOF
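With USERS=tom1,tom2,tom3, the generated drop_users.sql will contain:
drop user tom1 cascade;
drop user tom2 cascade;
drop user tom3 cascade;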
Your code is OK except for the DDL commands for the Oracle DB, which have to be derived from the USERS variable of the script file; call it userDrop.sh. To tie the USERS variable to the DB, an extra file (drpUsr.sql) should be produced and invoked inside the EOF tags with an @ sign prefix.
Let's edit it with vi:
$ vi userDrop.sh
#!/bin/bash
USERS=tom1,tom2,tom3
IFS=',' read -ra usr <<< "$USERS"  # here IFS (internal field separator) is ','
for i in "${usr[@]}"; do
echo "drop user $i cascade;"
done > drpUsr.sql
# to drop a user, a privileged user (sys or system) is needed
sqlplus / as sysdba <<EOF
@drpUsr.sql
exit;
EOF
Call as
$ . userDrop.sh
and do not forget to come back to the command prompt; the exit command at the end of the heredoc leaves the DB session.

How to use /dev/null with a bash command

I am using the below command to get the result of my SQL query.
su - postgres -c 'psql -d dbname' with stdin "COPY ( my SQL query ) TO STDOUT WITH CSV HEADER"
This works fine on my server, but on a different machine it prints bash warnings along with the output of the SQL query.
For example -
/etc/profile: line 46: HISTSIZE: readonly variable
/etc/profile: line 50: HISTCONTROL: readonly variable
/etc/profile.d/20-tmout.sh: line 1: TMOUT: readonly variable
/etc/profile.d/history.sh: line 6: hcmnt_tty: readonly variable
name
abc
Please let me know a way to skip the above warning messages and get only the data.
If I were to use /dev/null in this case, how would I modify the above command to get the data only?
If what you mean is "how do I discard only the error output?", the way to go is to redirect the standard error stream to oblivion (/dev/null), like so:
your-command 2>/dev/null
That way, if the command outputs data to standard out, it passes through, but any output to the standard error stream is discarded, so you won't see these error messages.
By the way, 2 here is the file descriptor for standard error.
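Applied to the command from the question (keeping the invocation exactly as posted), that becomes:
su - postgres -c 'psql -d dbname' with stdin "COPY ( my SQL query ) TO STDOUT WITH CSV HEADER" 2>/dev/null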
Sorry, this is untested, but I hit this same error; your DB session isn't read/write. You can echo the statements to psql to force a proper session as follows. I'm unsure how stdin may be affected:
echo 'SET TRANSACTION READ WRITE; SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE ; COPY ( my SQL query ) TO STDOUT WITH CSV HEADER' | su - postgres -c 'psql -d dbname' with stdin
Caution - a bash hack:
su - postgres -c 'psql -d dbname' with stdin "COPY ( my SQL query ) TO STDOUT WITH CSV HEADER" | grep -v "readonly"

PostgreSQL import file

In PostgreSQL and bash (Linux), is there a way to directly import a file from the filesystem, like:
[execute.sh]
pgsql .... -f insert.txt
[insert.txt]
insert into table(id,file) values(1,import('/path/to/file'))
There seems to be no import function, and no bytea_import either; lo_import would store an int (the OID) and I don't know how to get the file back (these files are small, so lo_import does not seem appropriate).
And how do I send the statement in insert.txt to PostgreSQL?
I'm not sure what you're after, but if you have a script with SQL statements, for example the insert statements that you mention, you can run psql and then run the script from within psql. For example:
postgres#server:~$ psql dbname
psql (8.4.1)
Type "help" for help.
dbname=# \i /tmp/queries.sql
This will run the statements in /tmp/queries.sql.
Hope this was what you asked for.
In case you need more detailed parameters:
$ psql -h [host] -p [port] -d [databaseName] -U [user] -f [/absolute/path/to/file]
The manual has some examples:
testdb=> \set content '''' `cat my_file.txt` ''''
testdb=> INSERT INTO my_table VALUES (:content);
See http://www.postgresql.org/docs/8.4/interactive/app-psql.html
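To run the same thing non-interactively from bash, the manual's commands can be fed through a quoted heredoc so that psql itself performs the backtick expansion (a sketch reusing the file and table names from the manual's example):
psql -d testdb <<'SQL'
\set content '''' `cat my_file.txt` ''''
INSERT INTO my_table VALUES (:content);
SQL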

How can I import data from a SQL Server CE database file to Excel?

How can I import data from a table in SQL Server CE to Excel?
I tried this from Excel, but I could not find an option for it.
Please help me with this.
You can try the sqlcecmd command from the command line.
sqlcecmd -d "Data Source=C:\NW.sdf" -q "SELECT * FROM my_table" -h 0 -s "," -o my_data.csv
Of course you need to specify the correct location of your Data Source in the -d command
You need to specify the columns you wish to SELECT and the table in the FROM section of the -q command.
The "-h 0" flag will remove column names and any dashed lines.
The "-s ','" specifies the delimiter you wish you use between fields.
And finally the -o command specifies your output file.
You may also need to specify a username (-U) and password (-P) if these values have been set.
Before you can run the sqlcecmd you will need to make sure that the executable files for SQL Server CE are in your path.
Since you haven't specified a programming language, I'm assuming you are doing this manually.
After you have the CSV file, Excel should be able to open it with no problem.
EDIT: If you do not have the SQLCECMD application then download and set it up first.

restoring mysql db from the contents of split up mysqldump

Hi, my database has started to go over 2 GB in backed-up size, so I'm looking at options for splitting the file and then reassembling it to restore the database.
I've got a series of files from running the following backup shell script:
DATE_STRING=`date +%u%a`
BACKUP_DIR=/home/myhome/backups
/usr/local/mysql_versions/mysql-5.0.27/bin/mysqldump \
    --defaults-file=/usr/local/mysql_versions/mysql-5.0.27/my.cnf \
    --user=myuser \
    --password=mypw \
    --add-drop-table \
    --single-transaction \
    mydb |
split -b 100000000 - mydb-$DATE_STRING.sql-;
This produces a sequence of files like:
mydb-3Wed.sql-aa
mydb-3Wed.sql-ab
mydb-3Wed.sql-ac
...
My question is: what is the corresponding sequence of commands I need to use on Linux to do the restore?
Previously I was using this command:
/usr/local/mysql_versions/mysql-5.0.27/bin/mysql \
    --defaults-file=/usr/local/mysql_versions/mysql-5.0.27/my.cnf \
    --user=myuser \
    --password=mypw \
    -D mydb < the_old_big_dbdump.sql
Any suggestions even if they don't involve split / cat would be greatly appreciated
I don't see why you can't just do:
cat mydb-3Wed.sql-* | /usr/local/mysql_versions/mysql-5.0.27/bin/mysql --defaults-file=/usr/local/mysql_versions/mysql-5.0.27/my.cnf --user=myuser --password=mypw -D mydb
The * globbing should expand the files in sorted order; check with ls mydb-3Wed.sql-* that they actually do, though.
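If you still have an unsplit dump around, a quick sanity check (a minimal sketch) is to verify that the concatenation reproduces it byte for byte:
cat mydb-3Wed.sql-* | md5sum
md5sum the_old_big_dbdump.sql    # the two checksums should match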
