I am trying to build a Bash script to auto-generate Apache vhosts, secure them and enable them. So far so good, but the problem is that the script does not execute the full, correct line.
This is the line in the script:
a2ensite /etc/apache2/sites-available/$Domain.conf
But it only executes
a2ensite /etc/apache2/sites-available/$Domain
As a result, Apache does not find the config file.
$Domain gets set like this: Domain=$VAR_C, and it works the way I want it to in other commands.
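For reference, a small debugging sketch (not part of the original script) that shows exactly what the path expands to; printf's %q format makes hidden whitespace or a stray carriage return in $Domain visible:
printf '%q\n' "$Domain"
printf '%q\n' "/etc/apache2/sites-available/${Domain}.conf"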
Thanks for your help
Related
I have this crontab entry @reboot "/home/pi/Desktop/TV Scraper 2.0/run.sh" set up, and for whatever reason it doesn't seem to run the bash file on reboot.
Typing "/home/pi/Desktop/TV Scraper 2.0/run.sh" on the terminal actually runs the script, so I know it's correct.
This is what's inside run.sh just in case:
#!/bin/bash
cd "/home/pi/Desktop/TV Scraper 2.0"
node ./app.js
I've also tried @reboot root sh "/home/pi/Desktop/TV Scraper 2.0/run.sh", but it doesn't work either.
How can I move forward with this? My knowledge of Linux is very limited. All I need is to have some Node and Python3 scripts run on every reboot. On Windows that's such an easy task. I've tried cron, rc.local and autostart; nothing works.
My guess is that node is not available via cronjob, since its containing directory is not in your PATH environment variable. When you execute the script manually, it's probably available via PATH.
An easy fix for this is to use the full path, which you can get by executing which node. The result should be something like /usr/bin/node. Then you can use that, instead of just node.
For debugging purposes you can also redirect stdout and stderr to a file, so the last line in your script would look like this:
/usr/bin/node ./app.js &>/tmp/cron-debug.log
If that doesn't fix it, I would rename the directory "TV Scraper 2.0" and replace the whitespace characters with something like underscores. Directory and file names are less likely to cause problems if you avoid whitespace.
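Putting those suggestions together, run.sh might look roughly like this (the /usr/bin/node path is whatever which node prints on your machine, and the renamed directory is only an example):
#!/bin/bash
# Absolute node path, directory renamed without spaces, output captured for debugging
cd "/home/pi/Desktop/TV_Scraper_2.0" || exit 1
/usr/bin/node ./app.js &>/tmp/cron-debug.log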
I have a Perl script which is called from a Linux shell script. When I run it manually it works fine, but it is not executed via cron.
Linux_scrip.sh contains the call to Perl_script.pl, and the command is:
perl_path/perl Perl_script.pl
I got perl_path using the which perl command.
Can anyone suggest why it is not executed via the cron entry?
Most likely suspects:
Current work directory isn't as expected.
Permission issues.[1]
Environment variables aren't setup as expected.
Requirement of a terminal can't be met.
See the crontab tag wiki for more pitfalls and some debugging tips.
The first thing you should do is to read the error message.
[1] This isn't likely to be an issue for your own cron job, but I've included it since it's quite a common problem for scripts run from other services.
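For example, one way to actually see that error message is to redirect the job's output to a file in the crontab entry itself (the schedule and log path are placeholders):
# Placeholder schedule; append both stdout and stderr to a log you can read
0 * * * * /path/to/Linux_scrip.sh >>/tmp/cron-perl.log 2>&1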
The most probable cause is the current working directory.
Before executing the perl command, add a command to change to the script's directory.
Something like:
cd /path/to/script/directory
perl_path/perl Perl_script.pl
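Alternatively, a sketch that derives the directory from the wrapper script's own location, so no hard-coded path is needed (this assumes Perl_script.pl sits next to the wrapper, and perl_path stands for the output of which perl):
#!/bin/bash
# Change to the directory this wrapper lives in, then run the Perl script
cd "$(dirname "$0")" || exit 1
perl_path/perl Perl_script.pl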
I'm trying to get exiftool to work on my dedicated server. The issue is that PHP's exec seems to run differently than when a command is run as a user. Oddly enough, PHP shows up as the same user I log in with, but it does not behave the same with system commands.
Oddly enough everything works great on my localhost, but not on my server.
So as mentioned, running exiftool commands logged in via ssh is fine.
But when running from a PHP test script (note I've installed exiftool in each tested directory, and it runs through ssh), nothing is accessible, even though it runs as user orangeman...
And it fails
Here is an update - having been on this all day:
On the shell:
-bash-4.1$ which exiftool -a
~/perl5/bin/exiftool
/usr/bin/exiftool
~/perl5/bin/exiftool
In PHP: shell_exec('which exiftool -a');
/usr/bin/exiftool
And here is what that file links to:
lrwxrwxrwx 1 root root 33 May 15 02:10 exiftool -> /home/orangeman/perl5/bin/exiftool
I've also tried creating symlinks of various sorts and tampering with the main $PATH variable via putenv() in PHP ... I'm truly in the dark here. It works on localhost, not on the dedicated server.
I've updated this with a bounty - it's a serious issue in development.
I'm on a dedicated server, and the problem is as outlined above.
UPDATE
Per @gcb's suggestion, I was able to print out the error that occurs when PHP's exec() function runs the system command with no effect.
PHP
<?php
exec('exiftool 2>&1', $output, $r);
var_dump($output, $r);
?>
Output:
array(2) {
[0]=>
string(230) "Can't locate Image/ExifTool.pm in @INC (@INC contains: /bin/lib /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /bin/exiftool line 33."
[1]=>
string(59) "BEGIN failed--compilation aborted at /bin/exiftool line 33."
}
UPDATE
@gcb's solution worked. Thank you very much.
So you do not have a PHP problem now, but a Perl one: your include path is bad.
Answer here.
You either have to install the ExifTool libraries in the
standard location (i.e. somewhere in the @INC directories
listed in your post), or add the location to the include path,
something like this:
Code:
#!/usr/bin/perl
BEGIN { unshift @INC, "PATH_TO_DIRECTORY_CONTAINING_LIBRARIES" }
use Image::ExifTool;
You should be able to append "Image/ExifTool.pm" to the path you add
and find the ExifTool module there.
- Phil
I still think using my suggestion #3 from the previous answer will fix it. If not, and you really want to know the reason, create a new perl script that just outputs the contents of @INC and run it via the shell and via PHP. You will see the difference; then you need to find which login script is not being honored in PHP and open a bug against PHP for shell_exec not respecting it...
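A one-liner is enough for that comparison; run it once from an SSH shell and once through PHP's shell_exec and compare the two lists (the exact command is just a suggestion):
perl -e 'print join("\n", @INC), "\n"'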
Though the easier solution for your problem (as it does not look like you are too interested in explanations) is to just set the PERL5LIB var before calling the script.
So, just do:
find / -name ExifTool.pm
This will tell you where the lib is installed. Let's say this returns /example/perl/Image/ExifTool.pm.
Then prepend PERL5LIB=/example/perl/ to the command in your exec() call:
exec("PERL5LIB=/example/perl/ /var/www/myscript/execscript.sh {$param}"); #and
0) You should look at your error log, usually under /var/log/apache/error.
It will have messages such as "access denied" or something else.
1) You are clearly not getting enough output from that command to see any error, so try running it as exiftool 2>&1. This will redirect stderr to stdout, so errors will appear in the output. Not sure if that is relevant or whether PHP already does that. You may also want to use passthru instead of exec.
2) Safe-mode exec dir.
Your file may be outside of your safe mode exec dir. Read up on this:
http://www.php.net/manual/en/ini.sect.safe-mode.php#ini.safe-mode-exec-dir
3) If all else fails, run the command as a login shell, which should have the same scripts loaded as when you log in via ssh. Just replace exec with shell_exec.
...I'm pretty sure looking at the error log will solve your mystery.
One possibility is that on the command line, your $PATH has been set/modified by both your $HOME/.bashrc and your $HOME/.bash_profile, because your command-line session is a login shell. When PHP is invoked by the web server, it runs as "orangeman" BUT only as a plain shell, not a login shell, so its $PATH may not be the same, which is what you're seeing here.
Have you tried putting export PATH="what:you:want:your:PHP:path:to:be" in your $HOME/.bashrc?
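For instance, based on the which -a output above, the line in $HOME/.bashrc might look like this (only a sketch; the exact directories are your call):
# Make the private perl5 bin directory visible to non-login shells too
export PATH="$HOME/perl5/bin:$PATH"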
I believe this is happening because you have a private installation of Perl for your user.
Basically, @INC is an array which perl uses to locate its libraries, and it does not contain the path to your installation's library directory.
There are a couple of ways to change @INC, which you can find at the link below:
http://perlmaven.com/how-to-change-inc-to-find-perl-modules-in-non-standard-locations
I hope this helps.
I'm trying to create a script to download a file daily with the older version overwritten.
I'm pretty sure I need a cron job, and a shell script with a wget line in it, but that is as far as I know. Also, I need to do all of this through ssh, unless there's another way I'm not aware of.
If I do it through SSH, what commands do I need to use through the various steps in the process? What will the cron and the shell files look like? If there's a better way, please enlighten!
Thanks!
Zeem
From your description, I'm picturing the following:
connect to the server via SSH
find the location of wget
which wget
(on my machine it's /usr/bin/wget)
add the following to your /etc/crontab (or cronjobs file) using a text editor, such as pico or vi:
@daily /usr/bin/wget -O /local/path/to/file.txt http://remote-host.name/path/to/file.txt
(If you add this to the /etc/crontab, you'll probably need the additional user parameter, but you can see crontab help for that.)
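With that user field, an /etc/crontab line would look roughly like this (the user name is a placeholder):
# /etc/crontab and /etc/cron.d entries take a user field between the schedule and the command
@daily root /usr/bin/wget -O /local/path/to/file.txt http://remote-host.name/path/to/file.txt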
Hope that helps.
Implement password-less ssh authentication between the hosts.
http://www.linuxproblem.org/art_9.html
So host A can create/implement a script or cron job on host B using ssh.
To create a cron job using a script, have your script create (for example) a text file at /etc/cron.d/CronJobName. It is important that the content of the file corresponds to the cron format: http://en.wikipedia.org/wiki/Cron#Examples
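A minimal sketch of such a script (job name, schedule and command are placeholders):
#!/bin/sh
# Write a cron entry; cron picks up files in /etc/cron.d, which use the crontab format with a user field
cat > /etc/cron.d/CronJobName <<'EOF'
@daily root /usr/bin/wget -O /local/path/to/file.txt http://remote-host.name/path/to/file.txt
EOF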
(I hope I understood your question correctly.)
Thanks for your answers. Thankfully it was much simpler: I was able to add a cron job via cPanel, and the wget line went straight in there.
I have tried exporting my paths and variables and crontab still will not run my script. I'm sure I am doing something wrong.
I have a shell script which runs a jar file. This is not working correctly.
From reading around, I gather this is commonly due to incorrect paths, because cron runs via its own shell instance and therefore does not have the same environment set up as my profile does.
Here is what my script looks like today after several modifications:
#!/bin/bash --
. /root/.bash_profile
/usr/bin/java -jar Pharmagistics_auto.jar -o
...
Those are the most important pieces of the script; the rest is straightforward shell.
Can someone tell me what I am doing wrong?
Try specifying the full path to the jar file:
/usr/bin/java -jar /path/to/Pharmagistics_auto.jar -o
I would just tell you what you have already ruled out: check your path and environment.
Since you have already done this, start debugging. For example, write checkpoints into a logfile to see how far your script gets (if it even starts at all), check the cron log file for errors, check your mail (cron sends mail on errors) and so on ...
Not very specific, sorry.
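Still, as a concrete starting point, hypothetical checkpoint lines near the top of the script would at least show whether cron starts it at all (the log path is just an example):
#!/bin/bash
echo "started at $(date)" >> /tmp/pharma-cron.log   # checkpoint: cron started the script
. /root/.bash_profile
echo "profile sourced" >> /tmp/pharma-cron.log      # checkpoint: profile loaded
/usr/bin/java -jar /path/to/Pharmagistics_auto.jar -o >> /tmp/pharma-cron.log 2>&1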
"exporting my paths and variables" won't work since crontab runs in a different shell by a different user.
Also, not sure if this is a typo in how you entered the question, but I see:
usr/bin/java
...and I can't help but notice you're not specifying the fully qualified path. It's looking for a directory named "usr" in the current working directory. Oft times for crontab, the cwd is undefined, hence your reference goes nowhere.
Try specifying the full path from root, like so:
/usr/bin/java
Or, if you want to see an example of relative pathing in action, you could also try:
cd /
usr/bin/java
A few thoughts.
Remove the -- after the #!/bin/bash
Make sure to direct script output seen by cron to mail or somewhere else where you can view it (e.g. MAILTO=desiredUser)
Confirm that your script is running and not blocked by a different long-running script (e.g. on the second line, add touch /tmp/MY_SCRIPT_RAN && exit)
Debug the script using set -x and set -v once you know it's actually running (see the sketch after this list)
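For example (the MAILTO value, schedule and paths are placeholders). In the crontab:
MAILTO=desiredUser
0 * * * * /path/to/script.sh
And, temporarily, at the top of the script itself:
#!/bin/bash
set -xv                          # trace commands and input lines while debugging
touch /tmp/MY_SCRIPT_RAN && exit # proves cron actually started the script, then stops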
Do you define necessary paths and env vars in your personal .profile (or other script)? Have you tried sourcing that particular file (or is that what you're doing already with /root/.bash_profile?)
Another way of asking this is: are you certain that whatever necessary paths and env vars you expect are actually available?
If nothing else, have you tried echo'ing individual values or just using the "env" command in your script and then reviewing the stdout?
Provide full paths to your jar file. Also, what user are you running the crontab as? If you set it up for a normal user, do you think that user has permission to source root's profile?