I'm a beginner, I hope you can help me.
I have a WordPress development server and I have to change a value in a MySQL database every night.
The command I use is the following, and it works via command line.
mysql -u root -pMYPASSWORD MYDB --execute="update DB_options set option_value = 'https://dev.example.com' where option_id = 1"
However, if I use this command in crontab it is not executed, and there are no error messages in the log.
40 4 * * * root mysql -u root -pMYPASSWORD MYDB --execute="update DB_options set option_value = 'https://dev.example.com' where option_id = 1"
What am I doing wrong? Thanks for any help, there is so much to learn!
In a user crontab (edited with crontab -e) there is no user field; that column exists only in /etc/crontab and /etc/cron.d, so the "root" in your line is treated as the command to run. In cron the command must be:
40 4 * * * mysql -u root -pMYPASSWORD MYDB --execute="update DB_options set option_value = 'https://dev.example.com' where option_id = 1"
But it would be wise to create a dedicated shell script containing the command, and to set the profile variables it needs there, such as PATH.
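A minimal sketch of such a wrapper (the script name and log path are placeholders; the credentials and query are taken from the question):
#!/bin/bash
# nightly-dev-url.sh -- nightly URL update for the dev database
# cron starts with a minimal environment, so set PATH explicitly
PATH=/usr/local/bin:/usr/bin:/bin
mysql -u root -pMYPASSWORD MYDB \
  --execute="update DB_options set option_value = 'https://dev.example.com' where option_id = 1" \
  >> /tmp/nightly-dev-url.log 2>&1
The crontab entry then only needs to call the script:
40 4 * * * /path/to/nightly-dev-url.sh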
I have the following script segment in a Linux script:
sqlplus /
<<QUERY_1
UPDATE BATCH_FILE SET BATCH_ID = 0 WHERE BATCH_ID = -1;
COMMIT;
exit
QUERY_1
I am expecting the update to occur and the script to exit sqlplus.
What actually happens is the query is not executed, and the script exits leaving sqlplus logged into my database with a SQL> prompt. I can execute the statements from the prompt, but of course, that is not what I want to do.
My current version of Oracle is 12.2.0.1
The here-document is intended as standard input for sqlplus, but as written the shell sees two separate commands: sqlplus / starts with no input redirection and reads from your terminal (hence the SQL> prompt), and the here-document on the next line is attached to an empty command, so the statements never reach sqlplus. Adding a backslash makes the shell ignore the line end, combining the two physical lines into one logical line:
sqlplus / \
<<QUERY_1
UPDATE BATCH_FILE SET BATCH_ID = 0 WHERE BATCH_ID = -1;
COMMIT;
exit
QUERY_1
Or just:
sqlplus / <<QUERY_1
UPDATE BATCH_FILE SET BATCH_ID = 0 WHERE BATCH_ID = -1;
COMMIT;
exit
QUERY_1
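If you also want the calling script to notice when the UPDATE fails, SQL*Plus can be told to exit with a failure status; a sketch along the same lines (WHENEVER SQLERROR and the -s silent flag are standard SQL*Plus, the statements are the ones from the question):
sqlplus -s / <<QUERY_1
WHENEVER SQLERROR EXIT SQL.SQLCODE
UPDATE BATCH_FILE SET BATCH_ID = 0 WHERE BATCH_ID = -1;
COMMIT;
exit
QUERY_1
sqlplus then exits non-zero if a statement raised an error, which the script can check via $?.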
This question already has answers here:
subprocess.call() arguments ignored when using shell=True w/ list [duplicate]
(2 answers)
Closed 4 years ago.
I have a config_file.yml file like this:
sample:
  sql: "select * from dbname.tableName where sampleDate>='2018-07-20';"
  config: {'hosts': [!!python/tuple ['192.162.0.10', 3001]]}
sample2:
  sql: "select * from dbname.tableName where sampleDate<='2016-05-25';"
  config: {'hosts': [!!python/tuple ['190.160.0.10', 3002]]}
My python code is:
import subprocess
import yaml

data_config = yaml.load(config_file)
for dataset, config in data_config.items():
    args = [config]
    cmd = ['./execute_something.sh']
    cmd.extend(args)
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True).communicate()
execute_something.sh:
#!/bin/bash
echo $1
data_config=$1
echo $data_config
So basically I want to pass the entire string {'sql': "select * from dbname.tableName where sampleDate>='2018-07-20';", config: {'hosts': [!!python/tuple ['190.160.0.10', 3002]]}} as a single argument to the shell script.
Problem:
1) select * ends up listing all files in the current directory, instead of being passed entirely as a string
2) even if I pass a simple string say args="hi" it still won't work!
I don't understand what I am missing here. Kindly help.
Thanks!
DO NOT USE shell=True.
data_config = yaml.load(config_file)
for dataset, config in data_config.items():
    cmd = ['./execute_something.sh', str(config)]
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE).communicate()
When you run shell=True, you're prepending sh -c to your literal argument list. What that does in this case is the following (escaping is added to make the single quotes literal):
sh -c './execute_something.sh' '{'"'"'sql'"'"': "select * from dbname.tableName where sampleDate>='"'"'2018-07-20'"'"';", config: {'"'"'hosts'"'"': [!!python/tuple ['"'"'190.160.0.10'"'"', 3002]]}}'
That doesn't work. Try it manually in a shell, if you like. Why? Because the argument starting with the { isn't passed to ./execute_something.sh, but is instead passed to the shell executing sh -c.
What would work, if you really insisted on keeping shell=True? Compare to the following:
sh -c './execute_something.sh "$#"' _ '{'"'"'sql'"'"': "select * from dbname.tableName where sampleDate>='"'"'2018-07-20'"'"';", config: {'"'"'hosts'"'"': [!!python/tuple ['"'"'190.160.0.10'"'"', 3002]]}}'
Here, the argument just after the -c is a shell script that looks at its arguments, and passes those arguments on to execute_something.sh.
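To see the mechanics of those extra arguments in isolation, here is a toy example (not the question's script); the word after the -c script becomes $0 and the next one becomes $1:
sh -c 'echo "got $# argument(s): $1"' _ '{"sql": "select 1"}'
# prints: got 1 argument(s): {"sql": "select 1"}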
I am using the below script:
script_home=/home/insetl/ppp_scripts/prod/VSAVS
date_var=`date +'%Y%m%d' -d "0 days ago"`
date_var1=`date +'%Y%m%d' -d "1 days ago"`
lftp<<END_SCRIPT
open sftp://213.239.205.74
user sftpvuclip LW?T8e62
set xfer:clobber on
get Vuclip_C2B_TRX-${date_var}0030.csv
bye
END_SCRIPT
sudo grep -v '^ACC' $script_home/Vuclip_C2B_TRX-${date_var}0030.csv > /home/insetl/ppp_scripts/prod/VSAVS/Vuclip_C2B_TRX-${date_var}.csv
scp $script_home/Vuclip_C2B_TRX-${date_var}.csv $TARGET_HOST:/tmp/
psql -h $TARGET_HOST -d $TARGET_DB -p 5432 << ENDSQL
delete from vsavs_offline_revenue_dump where update_date like '${date_var2}%';
copy vsavs_offline_revenue_dump from '/tmp/Vuclip_C2B_TRX-${date_var}.csv' with delimiter as ',';
update fact_channel_daily_aggr a
set revenue_from_file = (select sum(substr(amount,3)::float) from vsavs_offline_revenue_dump where to_char(to_timestamp(update_date, 'dd-mm-yyyy' ),'YYYYMMDD') = ${date_var1})
where product_sk = 110030 and
a.date_id=${date_var1};
ENDSQL
The script runs completely fine when run manually. However, the LFTP/SFTP part of the script doesn't run when it is executed through crontab. Please advise a solution.
Output:
/home/insetl/ppp_scripts/prod/VSAVS/sftp_pull_in.sh: line 9: 08/02/2017: No such file or directory
20170209
/home/insetl/ppp_scripts/prod/VSAVS
grep: /home/insetl/ppp_scripts/prod/VSAVS/Vuclip_C2B_TRX-201702090030.csv: No such file or directory
DELETE 0
COPY 0
UPDATE 20
I am able to connect to a Microsoft SQL Server 2008 instance from a Mint Linux VM using FreeTDS on the command line to execute SQL statements on it. Now I want to automate this in a bash script. I am able to successfully log in from my bash script:
TDSVER=8.0 tsql -H servername -p 1433 -D dbadmin -U domain\\Administrator -P password
I then have my SQL query:
USE dbname GO delete from schema.tableA where ID > 5 GO delete from schema.tableB where ID > 5 GO delete from schema.tableC where ID > 5 GO exit
This works when done manually via the FreeTDS command line, but not when I put it in a bash file. I followed this post: freeTSD & bash.
Here is my bash script sample:
echo "USE dbname GO delete from schema.tableA where userid > 5 go delete from schema.tableB where userid > 5 go delete from schema.tableC where ID > 5 GO exit" > tempfile | TDSVER=8.0 tsql -H servername -p 1433 -D dbname -U domain\\Administrator -P password < tempfile
The output of the bash script is:
locale is "en_US.UTF-8"
locale charset is "UTF-8"
Default database being set to sbdb
1> 2> 3> 4> 5> 6> 7> 8>
and then the rest of my script is executed.
Can someone give me a step-by-step answer to my problem?
I'm not sure how your sample can work at all.
Here is my bash script sample:
echo "USE dbname .... exit" > tempfile | TDSVER=8.0 tsql -H servername -p 1433 -D dbname -U domain\\Administrator -P password < tempfile
# ------------------------------------^^^^ ---- pipe char?
Try using a ';' char.
echo "USE dbname .... exit" > tempfile ; TDSVER=8.0 tsql -H servername -p 1433 -D dbname -U domain\\Administrator -P password < tempfile
# ------------------------------------^^^^ ---- semi-colon
Better yet, use shell's "here documents".
TDSVER=8.0 tsql -H servername -p 1433 -D dbname -U domain\\Administrator -P password <<EOS
USE dbname
GO
delete from schema.tableA where userid > 5
go
delete from schema.tableB where userid > 5
go
delete from schema.tableC where ID > 5
GO
exit
EOS
IHTH.
Current command line input:
echo "delete from table where userid > 5
go
delete from table where userid > 5
go
delete from table where ID > 5
GO
exit" < /tmp/tempfile; TDSDUMP=/tmp/freetds.log TDSVER=8.0 tsql -H servername -p 1433 -D dbname -U Administrator -P password <<EOS
Old thread, but this seemed to work:
printf "use mydbname\ngo\nselect * from mytable\ngo\nexit\n"|tsql -I freetds.conf -S profileName -U user -P 'password'
1> 2> 1> 2> ID stringtest integertest
1 test 50
2 teststring2 60
3 test3 70
(3 rows affected)
try
echo "USE dbname\n GO\n delete from schema.tableA where ID > 5\n GO\n delete from schema.tableB userid > 5\n go\n delete from schema.tableC where ID > 5\n GO\n exit\n"
(the rest of your command line can stay as it is; it may already work)
and try
echo "USE dbname;\n delete from schema.tableA where ID > 5;\n delete from schema.tableB userid > 5;\n delete from schema.tableC where ID > 5;\n exit\n"
and try
echo "USE dbname; delete from schema.tableA where ID > 5; delete from schema.tableB userid > 5; delete from schema.tableC where ID > 5; exit"
If you are using ODBC, I recommend the second attempt.
If you are sending the commands to SQL Server with "go" as the statement separator, the first one is probably better.
Maybe the third one... who knows... only trial and error can tell...
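One caveat with these echo variants: whether the \n escapes are interpreted depends on the shell (bash's builtin echo needs -e). printf, as in the earlier answer, always interprets them, so a sketch of the first variant rewritten with printf, reusing the question's connection options, would be:
printf 'USE dbname\nGO\ndelete from schema.tableA where ID > 5\nGO\ndelete from schema.tableB where userid > 5\nGO\ndelete from schema.tableC where ID > 5\nGO\nexit\n' \
  | TDSVER=8.0 tsql -H servername -p 1433 -D dbname -U domain\\Administrator -P password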
I am trying to have my makefile set up a cron job for my application. Unfortunately it does not appear to be working, as the $CRONENTRY variable seems to be empty. What am I doing wrong here?
addcron:
	CRONENTRY="*/2 * * * * /usr/bin/node cronapp.js >> logfile.log"
	crontab -l | { cat; echo ${CRONENTRY}; } | crontab -
Each command in a rule executes in its own subshell; variables do not survive from one command to the next. So if you want to use a variable this way, you have to string the commands together.
addcron:
	CRONENTRY="whatever" ; \
	do_something_with "$$CRONENTRY"
Note the $$: make passes a single $ through to the shell, so the shell variable (not an undefined make variable) is expanded.
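Applied to the original entry, that could look like this sketch:
addcron:
	CRONENTRY="*/2 * * * * /usr/bin/node cronapp.js >> logfile.log" ; \
	( crontab -l ; echo "$$CRONENTRY" ) | crontab -
If no crontab exists yet, crontab -l only prints a warning to stderr and the pipeline still installs the new entry.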
What about
addcron:
	{ crontab -l; echo "*/2 * * * * /usr/bin/node cronapp.js >> logfile.log"; } | crontab -
There you have one less pipe element... (note that the ';' before the closing '}' is required).