/usr/bin/perl: bad interpreter: Text file busy - linux

This is a new one for me: What does this error indicate?
/usr/bin/perl: bad interpreter: Text file busy
There were a couple of disk-intensive processes running at the time, but I've never seen that message before—in fact, this is the first time that I can remember getting an error when trying to run a Perl script. After a few seconds of waiting, I was able to run it, and haven't seen the issue since, but it would be nice to have an explanation for this.
Running Ubuntu 9.04, file system is ext3.

I'd guess you encountered this issue.
The Linux kernel will generate a bad interpreter: Text file busy error if your Perl script (or any other kind of script) is open for writing when you try to execute it.
You don't say what the disk-intensive processes were doing. Is it possible one of them had the script open for read+write access (even if it wasn't actually writing anything)?
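A quick way to reproduce it (the script name here is just a placeholder, and kernels can differ in how strictly they enforce this) is to hold a write descriptor on the script in one terminal while executing it from another:
exec 3>>./myscript.pl     # terminal 1: keep the script open for writing
./myscript.pl             # terminal 2: execve() now fails with ETXTBSY ("Text file busy")
exec 3>&-                 # terminal 1: close the descriptor and the script runs again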

This happens because the script file is open for writing, possibly by a rogue process which has not terminated.
Solution: Check what process is still accessing the file, and terminate it.
E.g.:
# /root/wordpress_plugin_updater/updater.pl --wp-path=/var/www/virtual/joel.co.in/drjoel.in/htdocs
-bash: /root/wordpress_plugin_updater/updater.pl: /root/perl/bin/perl: bad interpreter: Text file busy
Run lsof (list open files command) on the script name:
# lsof | grep updater.pl
sftp-serv 4416 root 3r REG 144,103 11043 33046751 /root/wordpress_plugin_updater/updater.pl
Kill the process by its PID:
kill -9 4416
Now try running the script again. It works now.
# /root/wordpress_plugin_updater/updater.pl --wp-path=/www/htdocs
Wordpress Plugin Updater script v3.0.1.0.
Processing 24 plugins from
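If the offending process is safe to kill, the lookup and the kill can be combined in one line (assuming your lsof supports -t, which prints just the PIDs):
kill -9 $(lsof -t /root/wordpress_plugin_updater/updater.pl)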

This always has to do with the interpreter on the #! line at the top of the script (/usr/bin/perl here) being inaccessible. The same thing happens whether the #! line points at a shell, awk, or anything else.
The cause can be many things: permissions, a locked file, the filesystem being offline, and so on.
It obviously depends on what was happening at the exact moment you ran it when the problem occurred, but I hope the answer is what you were looking for.
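A couple of quick checks help narrow down which of those causes you are hitting (the script path is just a placeholder):
ls -l /usr/bin/perl          # does the interpreter exist, and is it executable?
lsof /path/to/script.pl      # is some process still holding the script open?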

If the script was edited on Windows, or any other OS with different "native" line endings, it could be as simple as a CR (^M) "hiding" at the end of the first line. Vim can be set up to hide this non-native line ending. In my case I simply re-typed the offending first line in vi and the error went away.
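To confirm and fix this without retyping, the stray carriage return can be made visible and stripped (the file name is a placeholder; the sed command assumes GNU sed and rewrites the file in place):
head -1 script.pl | cat -A     # a ^M just before the $ marks a DOS line ending
sed -i 's/\r$//' script.pl     # strip the CRs; dos2unix script.pl does the same
Inside Vim, :set ff=unix followed by :w also rewrites the file with Unix line endings.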

If you are using GNU parallel and you see this error, then it may be because you are streaming a file in from the same place that you are writing the file out to...

I had this same issue and grepping to see what was using the file didn't work. It turns out I just needed to restart the droplet, and voilà, the script now works.

Related

Cding into directory hangs terminal

I am encountering a really weird issue when trying to "cd" into a specific directory (e.g. directory_A) along a path. Whenever I try to "cd", my Linux terminal immediately hangs for at least an hour. Upon entering successfully, the terminal is completely frozen and I cannot run any commands within the shell.
Additionally, while interrupting the "cd" command mid-execution with Ctrl-C does kill the "cd" call, it then becomes impossible to run any additional command within the shell (e.g. "ls"/"cd"/etc. into directory_B causes the terminal to hang again). This happens despite the fact that cd-ing into directory_B (without first trying to cd into directory_A) causes no issues whatsoever. It appears that trying to enter directory_A at all somehow causes immediate failure of the shell.
What's more, "ls"-ing directory_A from its parent dir causes no issues. I can see all the files (and even open them, e.g. with "vim directory_A/foo.txt"), but "cd"-ing causes massive problems.
I'm not sure if I just have the wrong keyword searches, but I haven't been able to find similar issues - though I acknowledge I am far from an expert with these things.
Has anyone seen such an issue before? Or does anyone know where to search for potential answers?
I'd be happy to provide any other information as well - thanks very much for any help/advice you may have!
A) Type alias | grep cd to see if cd is aliased, or type cd to check whether it has been redefined as a function.
B) Start a new shell without startup files: bash --noprofile --norc
C) Use a different shell: sh, or whatever else is available.
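For example (the exact output will vary; a healthy cd reports as a builtin):
alias | grep cd              # any alias involving cd?
type cd                      # "cd is a shell builtin" is the expected answer
bash --noprofile --norc      # clean shell with no startup files, then retry the cd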

When PHP exec function creates a process, where is it queued so that it can be removed programmatically or from command line?

I have a PHP script that runs the following code:
exec("ls $image_subdir | parallel -j8 tesseract $image_subdir/{} /Processed/OCR/{.} -l eng pdf",$output, $result_code);
The code runs; however, even after I terminate the PHP script and close the browser, it continues to create the PDF files (thousands of them). It has been 24 hrs and it is still running. When I run a ps command, it only shows the 8 current processes that were created.
How can I find where all the pending ones are queued and kill them? I believe I can simply restart Apache/PHP, but I would like to know where these pending processes are and how they can be shut down or controlled. It seemed originally that the code waited a minute while it executed the above command, then proceeded to the next line of code in the PHP script. So it appears that it queued the jobs somewhere and then moved on.
Is it perhaps something peculiar to the parallel command? Any information is very much appreciated. Thank you.
The jobs appear to have been produced by a perl process:
perl /usr/bin/parallel -j8 tesseract {...basically the code from the exec() function call in the php script}
perl was invoked either by the GNU parallel command or by PHP's exec function. In any event, htop would not allow killing the process and did not produce any error or status, so it may have been a permission problem preventing htop from killing it. It was ultimately done with sudo on the command line, which killed the process and stopped any further process creation from the original PHP exec() call.
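From the command line, the same cleanup can be done directly (assuming pgrep/pkill are available; -f matches against the full command line, and <pid> is whatever pgrep reports):
pgrep -af parallel           # find the controlling perl /usr/bin/parallel process
sudo kill <pid>              # stop it so no new tesseract jobs get queued
sudo pkill -f tesseract      # then kill any tesseract workers still running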

/bin/bash^M: bad interpreter: No such file or directory

I am facing the
/bin/bash^M: bad interpreter: No such file or directory
issue, and I have already got the solution for it from this Stack Overflow answer
-bash: ./my_script: /bin/bash^M: bad interpreter: No such file or directory
which works fine.
My question is that every time I restart my Ubuntu machine I have to redo everything.
That is, I execute
dos2unix -k -o filename
every time I start my system.
Is there any way this can be done just once?
Please note: I had to create a new question because I was not able to ask or comment on the existing question due to low reputation.
The first line of your bash script should be the Shebang (#!/bin/bash).
I see the error says: /bin/bash
But it should be changed to: #!/bin/bash
Then run:
$ dos2unix my_script
This will change all the line terminators from \r\n (Windows) to \n (Linux). It modifies the original my_script file, so the fix will persist even after a reboot.
This is a very common problem when running a bash script from a file saved on a Microsoft OS machine (a virtual machine, maybe?) such as Windows or DOS.
So you know the fix for your problem.
Now you should prevent the problem from recurring every time you log in. Identify how the file is generated/copied/damaged by another resource, like a .bash_profile script, a crontab script, or any other management daemon.
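A quick way to tell whether the file has actually been re-damaged after a reboot, rather than the conversion not having persisted (file is part of a standard Ubuntu install):
file my_script               # reports "... with CRLF line terminators" if the \r\n endings are back
grep -c $'\r' my_script      # counts lines that still contain a carriage return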

delays just after bootup on CentOS7.5

I'm using CentOS 7.5.1804.
Right after booting up, the operating system is slow to respond.
For example, when I try to type "python" in a terminal,
I first type "pyt" and press Tab.
I then have to wait a few seconds for the shell to complete it to "python".
This phenomenon occurs just after booting up.
After a few days, the phenomenon goes away.
Does anyone know a clue to solve this problem?
The completion you get when you press pyt+Tab comes from bash's completion machinery (the bash-completion package handles the completions that happen after you have typed the full command). So the cause has to be investigated starting with bash. My educated guess is that some process or I/O is keeping the system busy.
You can start with some generic system information tools as soon as the system starts:
uptime to see the system load
vmstat -n 1 to check the status of the CPU
ps aux to check running processes
iotop to check for I/O
systemctl list-jobs to show running jobs in systemd
and, based on their results, perform deeper analysis.
Another possibility is disk access slowing the system down at startup. Where is the machine running?
I don't know about fixing it; there are all kinds of things that could cause delays. But I can offer a few tips to investigate.
The first step to investigate is to run set -x to get a trace of the commands that the shell executes to generate the completions. Watch where it pauses.
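For instance, in the affected terminal (an interactive sketch, not a script):
set -x        # trace every command the shell runs from now on
              # now type pyt and press Tab, and watch where the trace pauses
set +x        # turn tracing back off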
Do you have the issue with other auto-completions, too? If it's only python, you can time the execution of your command:
time python
You can also watch whether something goes wrong at launch by redirecting strace's output to a file:
strace -o delays.log python
Take one trace at boot and one later; then you can check the difference between them:
strace -o delays2.log python
diff -u delays.log delays2.log | grep ^+
Hope it can help.

Linux IO operator '>'

I have a cron job, jstack > error.log, that runs every second to get a snapshot of the error.
My problem is: if I use the > operator in Linux, does it also close the file, or keep the file open?
You're going to overwrite the file every second.
You might want jstack >> error.log.
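A quick illustration of the difference (date just stands in for jstack here):
date > error.log      # each run truncates the file first, so only the last snapshot survives
date >> error.log     # each run appends, so successive snapshots accumulate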
What's the problem? Look for open files on the system and check whether the file is still open: lsof | grep <your filename>. That will give you the answer.
It should already be closed, but just to be sure you can do that.
NOTE: if you schedule it every second, it won't actually run every second. The cron daemon checks crontabs at one-minute granularity by default, so per-second runs are too much to ask of cron.
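If per-second snapshots are really needed, a common workaround is a once-a-minute cron entry that loops internally (a sketch only; <pid> and the log path are placeholders, and the loop can drift past a minute if jstack is slow):
* * * * * for i in $(seq 1 60); do jstack <pid> >> /var/log/jstack/error.log; sleep 1; done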
