Linux I/O operator '>'

I have a cron job that runs jstack > error.log every second to get a snapshot of errors.
My question is: when I use the > operator in Linux, does it close the file afterwards, or does it keep the file open?

You're going to overwrite the file every second.
You might want jstack >> error.log.

What's the problem? Look for open files on the system and check whether the file is still open: lsof | grep <your filename>. That will give you the answer.
It should be closed, but you can run that check just to be sure.
NOTE: if you schedule it every second, it won't actually run every second. The cron daemon only checks the crontab once a minute by default, so that is too much to ask of cron.
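As for the redirection itself: the shell opens (and truncates) error.log, hands it to jstack as its stdout, and the file is closed when jstack exits. A quick way to confirm (a sketch; <pid> stands for the Java process you are dumping):
jstack <pid> > error.log   # the shell opens error.log, truncates it, and it is closed when jstack exits
lsof error.log             # prints nothing once the command has finished, i.e. nothing holds the file open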

Related

How to check whether the immediately previous run failed or not?

I have a Perl script that runs and does some checks.
In some cases the script fails and stops processing; in others it completes.
What I would like to do is check whether the script ran within the last minute and, if that run was successful, exit.
I thought about saving a marker file or checking $? as an indication, but I wondered whether there is a standard, clean approach for this.
I would like a solution that works on both Linux and macOS.
You could check whether your script has ended after a minute by trying something like this:
sleep 60
ps -ae | grep yourScript.name
This has to be executed at the same time as your script(s). If it returns nothing, your script is no longer running, i.e. it has ended.
For the final result, you could make your Perl script write to a specific log file, and check the end of that log file when ps -ae | grep yourScript.name returns nothing.
Hope it helps!
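Another simple option is a small wrapper that records the exit status and a timestamp of the last run; this is only a sketch, the file names (check.pl, /tmp/lastrun.status) are placeholders, and it relies only on date +%s, which works on both Linux and macOS:
#!/usr/bin/env bash
# Run the script, then record its exit code and the current epoch time.
perl check.pl
echo "$? $(date +%s)" > /tmp/lastrun.status

# Elsewhere, decide whether the previous run finished within the last
# minute and exited cleanly.
read -r code ts < /tmp/lastrun.status
if [ "$code" -eq 0 ] && [ $(( $(date +%s) - ts )) -le 60 ]; then
    echo "previous run succeeded within the past minute"
fi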

Python - read from a remote logfile that is updated frequently

I have a logfile that is written to constantly on a remote networking device (an F5 BIG-IP). I have a Linux hopping station from which I can fetch that log file and parse it. I did find a solution that would implement a "tail -f", but I cannot use nice or similar to keep my script running after I log out. What I can do is run a cron job and copy over the file every 5 minutes, say. I can process the file I downloaded, but the next copy will contain a lot of data in common with the previous one, so how do I process only what is new? Any help or suggestions are welcome!
Two possible (non-Python) solutions for your problem. If you want to keep a script running on your machine after logout, check nohup in combination with &, like:
nohup my_program > /dev/null &
On a Linux machine you can extract the difference between the two files with
grep -Fxv -f old.txt new.txt > dif.txt
This might be slow if the files are large. The dif.txt file will contain only the new lines and can be inspected by your program. There might also be a solution involving diff.
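A cheaper alternative is to remember how many bytes you have already processed and only read what was appended since. This is only a sketch; the log and offset paths are example names:
#!/usr/bin/env bash
LOG=/tmp/remote.log                      # the copy fetched by cron (example path)
STATE=/tmp/remote.log.offset             # where we remember how far we got

offset=$(cat "$STATE" 2>/dev/null || echo 0)
size=$(wc -c < "$LOG" | tr -d ' ')       # byte count; tr strips the padding BSD/macOS wc adds

# If the remote file was rotated or truncated, start from the beginning again.
[ "$size" -lt "$offset" ] && offset=0

# Print only what was appended since the last run, then remember the new size.
tail -c +"$((offset + 1))" "$LOG"
echo "$size" > "$STATE"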

Why does my crontab not work?

I am planning to run some bash scripts every minute, and I wrote:
* * * * * bash ~/Dropbox/temp_scripts/run_all_scripts
in crontab.
It was supposed to run every minute, but it did not. Does anyone have any idea why this happens?
Transferring a comment into an answer.
Add I/O redirection to the command line in the crontab entry:
>/tmp/run_all_scripts.out 2>/tmp/run_all_scripts.err
Review the contents of the files after a minute or two has passed. Consider recording the environment to see if that's part of the problem. And consider using bash -x instead of just bash.
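Putting those suggestions together, the crontab entry might look like this (a sketch using the paths from the question; the second entry just captures the environment cron provides, as suggested above):
* * * * * bash -x ~/Dropbox/temp_scripts/run_all_scripts >/tmp/run_all_scripts.out 2>/tmp/run_all_scripts.err
* * * * * env >/tmp/run_all_scripts.env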
If you still don't get anything (the files in /tmp are not created), then you've got issues with cron; the daemon isn't running, or your user does not have permission to use it (but crontab isn't telling you that), or you've not submitted your crontab to the program (what does crontab -l say?), or … whatever is really wrong.
Note, too, that the output from cron jobs is normally (well, at least sometimes: on Mac OS X for a system I currently use, and on Solaris for another that I've used previously) emailed to the person whose job it is. You should review that email on the system.
Thank you! I have already fixed it! The reason it did not work is that I used "ls -a *.sh" in the script, and crontab did not find any *.sh files in the folder it was executing from. After changing it to "ls -a $HOME/Dropbox/temp_scripts/*.sh", everything works! This debugging technique is quite helpful!
It is, in many ways, the most basic of debugging techniques — make sure you see what is actually happening. If you're not sure why a shell script isn't working, make sure you can see that it is executing and what it is producing in the way of output, and (very often) make sure you can see what it is executing with bash -x or equivalent. (AFAIK, all shells support -x to trace the execution.)

How to restart a background PHP process? (how to get the PID)

I'm a PHP developer, and know very little about shell scripting... So I appreciate any help here.
I have four php scripts that I need running in the background on my server. I can launch them just fine - they work just fine - and I can kill them by looking up their PID.
The problem is I need my script to, from time to time, kill the processes and restart them, as they maintain long standing HTTP requests that sometimes are ended by the other side.
But I don't know how to write a command that'll find these processes and kill them without looking up the PID manually.
We'll start with one launch command:
/usr/local/php5/bin/php -f /home/path/to/php_script.php > /dev/null &
Is there a way to "assign" a PID so it's always the same, or to give the process a name? And how would I go about writing that new command?
Thank you!
Nope, you can't "assign" the process PID; instead, you should do as "real" daemons do: make your script save its own PID in some file, and then read it from that file when you need to kill it.
Alternative would be to use something like supervisor, that handles all that for you in a quite nice way.
Update - supervisor configuration
Since I mentioned supervisor, I'm also posting here a short supervisor configuration file that should do the job.
[program:yourscriptname]
command=/usr/local/php5/bin/php -f /home/path/to/php_script.php
Have a look here for more configuration options.
Then you can use it like this:
# supervisorctl status
to show the process(es) status.
# supervisorctl start yourscriptname
to start your script
# supervisorctl stop yourscriptname
to stop your script
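Since the question is about restarting, supervisorctl also provides a restart command that stops and starts the program in one step:
# supervisorctl restart yourscriptname
to restart your script.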
Update - real world supervisor configuration example
First of all, make sure you have this in your /etc/supervisor/supervisord.conf.
[include]
files = /etc/supervisor/conf.d/*.conf
If not, just add those two lines and run:
mkdir /etc/supervisor/conf.d/
Then, create a configuration file for each process you want to launch:
/etc/supervisor/conf.d/script1.conf
[program:script1]
command=/usr/local/php5/bin/php -f /home/path/to/php_script.php
stdout_logfile=/var/log/script1.log
stderr_logfile=/var/log/script1-error.log
/etc/supervisor/conf.d/script2.conf
[program:script2]
command=/usr/local/php5/bin/php -f /home/path/to/php_script2.php
stdout_logfile=/var/log/script2.log
stderr_logfile=/var/log/script2-error.log
...etc, etc.. for all your scripts.
(Note that you don't need the trailing &, as supervisor handles the daemonization for you; in fact, you shouldn't run self-daemonizing programs under supervisor.)
Then you can start 'em all with:
supervisorctl start all
or just one with something like:
supervisorctl start script1
Starting supervisor from php
Of course, you can start/stop the supervisor-controlled processes using the two commands above, even from inside a script.
Remember, however, that you'll need root privileges, and it's quite risky to allow e.g. a web page to execute commands as root on the server.
If that's the case, I recommend you have a look at the instructions on how to run supervisor as a normal user (I never did that, but you should be able to run it as the www-data user too).
The canonical way to solve this is to have the process write its PID into a file in a known location, and then any utility scripts can look up the file, read the PID, and manipulate that process. Add a command line argument to the script that gives the name of the PID file to write to.
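If you'd rather not modify the scripts themselves, a small shell wrapper can record the PID instead; this is only a sketch, and the PID-file path is an example:
/usr/local/php5/bin/php -f /home/path/to/php_script.php > /dev/null &
echo $! > /tmp/php_script.pid        # $! is the PID of the process just backgrounded

# Later, to stop it before relaunching:
kill "$(cat /tmp/php_script.pid)"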
A workaround would be to use ps aux; this shows all of the processes along with the command that started them. This presumes, of course, that the four scripts are different files, or can be uniquely identified by the command that started them. Pipe that through grep and you're all set: ps aux | grep runningscript.php
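If the script file names really do identify the processes uniquely, pkill can do the lookup and the kill in one step (a sketch; -f matches against the full command line rather than just the process name, so make sure nothing else has that name on its command line):
pkill -f php_script.php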
OK! So this has been a headache and a half for me, who knows NOTHING about shell/bash scripting...
#redShadow's response would have been perfect, except my hosting provider will not give me access to the /etc/supervisor/ directory. As he said, you must be root, and even using sudo as an admin wouldn't let me make any changes there...
Here's what I came up with:
kill -9 `ps -ef | grep php | grep -v grep | awk '{print $2}'`
Because the only types of commands I was executing showed up in the top command as php, this command loops through the running processes, finds the php commands and their corresponding PIDs, and KILLS them! woot!!
What I do is have my scripts check for a file that I name "run.txt". If it does not exist, they exit. Then, just by renaming that (empty) file, I can stop all my scripts.
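In shell terms the idea looks roughly like this (a sketch; the path to run.txt is an example, and the PHP scripts would do the equivalent check in their own main loop):
while [ -e /home/path/to/run.txt ]; do
    # ... do one unit of work here ...
    sleep 1
done
# Renaming or removing run.txt makes each script exit after its current pass.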

/usr/bin/perl: bad interpreter: Text file busy

This is a new one for me: What does this error indicate?
/usr/bin/perl: bad interpreter: Text file busy
There were a couple of disk-intensive processes running at the time, but I've never seen that message before—in fact, this is the first time that I can remember getting an error when trying to run a Perl script. After a few seconds of waiting, I was able to run it, and haven't seen the issue since, but it would be nice to have an explanation for this.
Running Ubuntu 9.04, file system is ext3.
I'd guess you encountered this issue.
The Linux kernel will generate a bad interpreter: Text file busy error if your Perl script (or any other kind of script) is open for writing when you try to execute it.
You don't say what the disk-intensive processes were doing. Is it possible one of them had the script open for read+write access (even if it wasn't actually writing anything)?
This happens because the script file is open for writing, possibly by a rogue process which has not terminated.
Solution: Check what process is still accessing the file, and terminate it.
Eg:
# /root/wordpress_plugin_updater/updater.pl --wp-path=/var/www/virtual/joel.co.in/drjoel.in/htdocs
-bash: /root/wordpress_plugin_updater/updater.pl: /root/perl/bin/perl: bad interpreter: Text file busy
Run lsof (list open files command) on the script name:
# lsof | grep updater.pl
sftp-serv 4416 root 3r REG 144,103 11043 33046751 /root/wordpress_plugin_updater/updater.pl
Kill the process by its PID:
kill -9 4416
Now try running the script again. It works now.
# /root/wordpress_plugin_updater/updater.pl --wp-path=/www/htdocs
Wordpress Plugin Updater script v3.0.1.0.
Processing 24 plugins from
This always has to do with the interpreter (/usr/bin/perl, in this case) being inaccessible. In fact, the same thing happens with a shell script, awk, or whatever else is on the #! line at the top of the script.
The cause can be many things: permissions, a locked file, the filesystem being offline, and so on.
It obviously depends on what was happening at the exact moment you ran it when the problem occurred. But I hope the answer is what you were looking for.
If the script was edited on Windows, or any other OS with different "native" line endings, it could be as simple as a CR (^M) "hiding" at the end of the first line. Vim can be configured so that it hides this non-native line ending. In my case I simply re-typed the offending first line in vi and the error went away.
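A quick way to confirm and strip such a stray carriage return (a sketch; the file name is an example, and tr is used to avoid the GNU/BSD differences in sed -i):
head -1 script.pl | cat -v                                  # a trailing ^M means a carriage return is present
tr -d '\r' < script.pl > script.fixed && mv script.fixed script.pl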
If you are using GNU parallel and you see this error, it may be because you are streaming a file in from the same place that you are writing the file out to...
I had this same issue and grepping to see what was using the file didn't work. It turns out I just needed to restart the droplet and, voilà, the script now works.
