I am using the current Cygwin release and have installed the MySQL 5.5.21 package. When using mysqldump I have the problem that all the rows of a table end up in a single INSERT statement on one line. I have already tried the following commands, but they do not seem to have any effect on the output.
mysqldump --opt --extended-insert --complete-insert ....
mysqldump -c ...
Does anyone have an idea how I can force mysqldump to create a separate INSERT statement for each data row?
--extended-insert is what causes multiple rows on each line. It's also part of --opt.
Try adding --skip-extended-insert.
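For example (the user and database names are placeholders):
mysqldump --opt --skip-extended-insert -u my_user -p my_database > dump.sql
Because --skip-extended-insert appears after --opt, it overrides the --extended-insert that --opt implies, so each row gets its own INSERT statement.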
I'm planning on running a .sh script periodically through cron on Linux. I'm running Postgres 8.4 on CentOS.
My script will have something like this in it:
psql -U username -d db_name -c "COPY orders TO stdout DELIMITER ',' CSV HEADER;" > orders.csv
I know there are other ways to dump tables into csv files but this is the only one I could use without admin rights.
My problem is naming the files. I want to specifically name the file something along the lines of:
yyyymmdd-hhmm-orders.csv
I'm not the best scripting guru out there (as you can tell), so how can I get the dumps to be named like this dynamically?
Thanks
`date '+%Y%m%d-%H%M'`-orders.csv
I personally also add the seconds (%S) to the file name.
man date
will show the other formatting options.
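Combined with the command from the question, a minimal sketch (seconds included):
psql -U username -d db_name -c "COPY orders TO stdout DELIMITER ',' CSV HEADER;" > "$(date '+%Y%m%d-%H%M%S')-orders.csv"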
Use the code below; it worked fine for me.
Assign the date format to a variable and then use it:
Code:
i=`date +%Y%m%d-%H%M%S`
psql -U username -d db_name -c "COPY orders TO stdout DELIMITER ',' CSV HEADER;" > "$i-orders.csv"
I'm using a big-data database. One of its tutorials recommends using the shell script below in order to run queries:
#!/bin/sh
# this will launch the real atquery program with the given .sql file
# note: please adjust INSTALLNAME, HOST and PORT to reflect your installation
/home/lms/INSTALLNAME/atquery HOST:PORT $*
Then, runnable .sql files start like the following:
#!/usr/local/bin/runatquery
select count(*) from mytable during all
I don't understand the $* part of /home/lms/INSTALLNAME/atquery HOST:PORT $*. What does $* do?
This was supposed to create a shell script in order to run a query, but another problem is that these are two files (I suppose, because there are two #! lines), so how will these two files help me run queries? I suppose that if we had a single script with the code below in it, it would do the job better and without the confusion:
#!/bin/sh
/home/lms/INSTALLNAME/atquery HOST:PORT -e 'select count(*) from mytable during all'
You have to create that script as recommended (you didn't include that part, which probably appears right before the script) as a file with the executable bit set, changing INSTALLNAME, HOST and PORT to match your installation.
The $* expands to all parameters received by the script.
The second file is an example of how you can create scripts that are run by runatquery.
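A quick way to see what $* does (demo.sh is a hypothetical script, not part of the tutorial):
#!/bin/sh
# demo.sh - show how $* expands to all arguments passed to the script
echo "atquery would receive: $*"
Running ./demo.sh myquery.sql prints atquery would receive: myquery.sql, which is exactly how the wrapper forwards the name of the .sql file (and any other arguments) on to atquery.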
The problem that I am having is that I want to run the following command (and I can't):
cqlsh < cql_directory/cql_create_stuff.cql
Because I have not logged in to cqlsh.
So I logged in:
cqlsh -u 'my_username' -p 'my_super_secret_password'
and then I tried running the command in the cqlsh shell, but it just responds with a syntax error.
Basically, how do I login into cqlsh and run an external CQL script in my file system?
Use the SOURCE command:
http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/source_r.html
You can use the -f option as well to execute commands from a file:
http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/cqlsh.html
Assuming that the path of the file with the CQL commands is /mydir/myfile.cql, there are two ways:
If you are not logged in to cqlsh:
cqlsh -u 'my_username' -p 'my_password' -f /mydir/myfile.cql
If you are logged in to cqlsh:
SOURCE '/mydir/myfile.cql'
Notice the single quotation marks. The shorthand notation for $HOME (for example, '~/mydir/myfile.cql') is also supported.
Both ways also work with relative paths (to the current directory).
Assuming your file is named "tables.cql" and is located at /files/tables.cql:
A - Locally
cqlsh -f /files/tables.cql
B - Connecting To A Docker Container Running Cassandra
Assuming the name of the Docker container running Cassandra is "cas" (keep in mind that you can also use the hash ID of the Docker container if no name is assigned to it):
docker exec -it cas cqlsh -f /files/tables.cql
As stated in other answers, the -u and -p options can be added in order to use a username/password combination.
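For example (the credentials are placeholders):
docker exec -it cas cqlsh -u my_username -p my_password -f /files/tables.cql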
This is for Windows systems.
Suppose your Cassandra directory is
C:\Program Files\DataStax-DDC\apache-cassandra\bin
Suppose the directory where your .cql file (CQL query file) is:
D:\ril\s\developement\new one\excel after parse\Women catalogue template.cql
Now follow the steps below to import the CQL file:
Open the command prompt (cmd)
Go to the directory where the CQL file is (cd "..\ril\sizeguide\developement\new one\excel after parse")
Run the command below
"c:\Program Files\DataStax-DDC\apache-cassandra\bin\cqlsh.bat" <"Women catalogue template.cql"
And its Done.
Important note:
Please make sure that column values do not contain a bare single quote (') character, as in ('If you don't find an exact match, go for the next large size'); otherwise the import will fail.
If you want a single quote to be inserted, write it twice as below, and Cassandra will treat the pair as a single quote:
('If you don''t find a exact match, go for the next large size')
All text columns should be enclosed in single quotes, like 'Sale category'. For an empty value, use two single quotes: ''.
Hi, I have been given the task of copying files from a given server to my local machine. I can do it manually on the command line, but I need to write a script to automate it, and I don't have any clue how to supply the password from a shell script the way I would have typed it manually. I went through other posts but did not find a precise answer.
Are there better ways than using the scp command?
Thanks in advance
The preferred and more secure way to do this is to set up SSH key pairs.
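A minimal sketch of the key-pair setup (user@remote-host is a placeholder):
ssh-keygen -t ed25519                # accept the defaults; creates ~/.ssh/id_ed25519 and id_ed25519.pub
ssh-copy-id user@remote-host         # appends your public key to the server's ~/.ssh/authorized_keys
scp user@remote-host:/path/to/file . # later copies no longer prompt for a password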
That being said, if there's a specific need to supply passwords as part of your shell script, you can use pscp, which is part of putty-tools:
If you are on Ubuntu, you can install it with:
sudo apt-get install putty-tools
(Or use equivalent package managers depending on your system)
Here's an example script of how to use pscp:
#!/bin/bash
password=hello_world
login=root
IP=127.0.0.1
src_dir=/var/log
src_file_name=abc.txt
dest_folder=/home/username/temp/
pscp -scp -pw "$password" "$login@$IP:$src_dir/$src_file_name" "$dest_folder"
This copies /var/log/abc.txt from the specified remote server to your local /home/username/temp/
I have a crontab entry set up that errors out every time it runs. The command works fine in the shell. The problem is the format I'm using when I try to automatically insert the date into the filename of the database backup. Does anyone know the syntax I need to use to get cron to let me insert the date into the filename?
mysqldump -hServer -uUser -pPassword Table | gzip >
/home/directory/backups/table.$(date +"%Y-%m-%d").gz
Thanks in advance!
What about something like this for the "command" part of the crontab:
mysqldump --host=HOST --user=USER --password=PASSWORD DATABASE TABLE | gzip > /tmp/table.`date +"\%Y-\%m-\%d"`.gz
What has changed from the OP's version is the escaping of the date format:
date +"\%Y-\%m-\%d"
In a crontab, an unescaped % is special: it ends the command, and everything after it is fed to the command's standard input, so each % must be written as \%.
(And I used backticks, but that shouldn't make much of a difference.)
(Another solution would be to put your original command in a shell script and execute that script from the crontab instead of the raw command; it would probably be easier to read and write.)
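For context, a full crontab line might look like this (the daily 3 a.m. schedule is just an example):
0 3 * * * mysqldump --host=HOST --user=USER --password=PASSWORD DATABASE TABLE | gzip > /tmp/table.`date +"\%Y-\%m-\%d"`.gz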
The most typical reason for "works in the shell but not in cron" is that the commands you try to execute are not in cron's PATH. The reason is that the shell invoked from cron doesn't load the same startup files as your login shell.
Fix: use the absolute path to each command you try to execute.
The second thing I notice in your command: the syntax for running date doesn't look very portable. Put it in backticks, or move your whole command into a shell script (which you can also use to set your PATH) and execute that script from cron.
EDIT:
While writing my original reply my keyboard layout didn't have backticks, so check what Pascal wrote.
An example of what you could do with a shell script:
Copy the following to /usr/local/bin/dumptable.sh:
#!/bin/sh
/usr/bin/mysqldump --host=HOST --user=USER --password=PASSWORD DATABASE TABLE | /bin/gzip > /tmp/table.`/bin/date +"%Y-%m-%d"`.gz
and then put /usr/local/bin/dumptable.sh into the crontab.
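A crontab entry for the script might then look like this (the schedule is an example); note that inside the script the % characters need no escaping, since crontab's special handling of % applies only to the crontab line itself:
0 3 * * * /usr/local/bin/dumptable.sh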