How to avoid a shell script being executed by multiple threads - linux

I use Python multithreading to execute a shell script, but for some reason this shell script must not be run by more than one thread at a time. I want to make the script itself refuse to run concurrently, as a foolproof safeguard. Is there any way to do this?
thanks!

Use a mutex. Say, have the script check whether a lock file exists (e.g. in /tmp). If the file exists, exit. If not, create it and do the useful work. At the end, delete the file.
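A minimal sketch of that idea, written in Perl to match the code elsewhere in this digest (the lock-file path is a made-up example). Opening with O_EXCL makes "check and create" a single atomic step, which a plain "if the file exists" test in shell is not:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Fcntl qw(O_CREAT O_EXCL O_WRONLY);

    my $lockfile = '/tmp/myscript.lock';    # hypothetical path

    # O_EXCL turns "check whether it exists, then create it" into one
    # atomic operation, so two concurrent callers cannot both succeed.
    sysopen(my $fh, $lockfile, O_WRONLY | O_CREAT | O_EXCL)
        or exit 0;    # another instance holds the lock; bail out

    # ... do the real work here ...

    close $fh;
    unlink $lockfile;    # release the mutex

One caveat: if the script dies before the unlink, the stale lock file blocks all later runs. Taking flock on a long-lived lock file avoids that, since the kernel drops the lock automatically when the process exits.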

Related

Best way to program flow through a job loop

I see that Origen supports passing jobs to the program command in this video. What would be the preferred method to run the program command in a job loop (i.e. job == 'ws', then job == 'ft', etc.)?
thx
The job is a runtime concept, not a compile/generate-time concept, so it doesn't really make sense to run the program command (i.e. generate the program) against different settings of job.
Origen doesn't currently provide any mechanism to pass define-type arguments through to the program generator from the command line, though you could implement that in your app easily enough by overriding the program command - i.e. capture and store them somewhere in your app and then continue with the regular command.
The 'Origen way' of doing things like this is to set up different target files with different variables set within them, then execute the program command once per target, for example via a small driver loop like the one below.
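A sketch of such a driver loop in Perl. The target names and program file are hypothetical, and the -t option for overriding the target on a single invocation is an assumption to verify against your Origen version:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical target names and program file; adjust to your app.
    for my $target (qw(ws_target ft_target)) {
        system('origen', 'p', 'program/prb1.list', '-t', $target) == 0
            or die "origen p failed for target $target\n";
    }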

Is there a way to make a bash script process messages that have been sent to it using the write command

Is there a way to make a bash script process messages that have been sent to it using the "write" command? So for example, if a user wants to activate a feature in my script, could I make it so that they can send the script a command using the write command?
One possible method I thought of was to configure logging for a screen session and then have the bash script parse the log from there, but I'm not sure whether there is a simpler or more efficient way to tackle this.
EDIT: As an alternative solution, I was thinking I could use a named pipe. I'm worried that it would break, though, if the /tmp partition fills up completely (and I'm not sure whether that would affect write as well?). I'm going to be running this script on a shared box, and every once in a while someone fills up the /tmp partition completely and then just leaves it like that until people start complaining.
Hmm, you are really trying to bend a poor unix command into doing something it was not specified for. From the man page (emphasis mine):
The write utility allows you to communicate with other users, by copying
lines from your terminal to theirs
That means that write is intended to copy lines directly onto terminals. As soon as you say "I will dump the terminal output with screen and then parse the dump file", you lose the simplicity of write (and you also need disk space, with the problem of trimming old lines from a sequential file).
Worse, as your script lives on its own, it could (should?) be a daemon attached to no terminal.
So if I have correctly understood your question, your requirements are:
a script that does some tasks and should be able to respond to asynchronous requests. Common mechanisms are named pipes (a listener is sketched at the end of this answer), network sockets, or unix-domain sockets; less common are files dropped in a dedicated folder, with an optional signal to trigger immediate processing; appending lines to a shared sequential file, while possible, is uncommon because of the access-synchronization problem
a simple and friendly way for users to pass requests. OK, write is nice for that part, but much too hard to interface with, IMHO
If you want to stick to standard tools rather than spend time on that part, I would recommend the mail system. It is trivial to alias a mail address to a program that will be called with the mail message as input. But I am not sure it is worth it, because the user could directly call the program with the request as input or as a command-line parameter.
So the client part could simply be a program that:
creates a temporary file in a dedicated folder (mkstemp is your friend in C or C++, mktemp in shell - but beware of race conditions)
writes the request to that file
optionally sends a signal to a PID - provided the script wrote its own PID to a dedicated file on startup
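And here is a minimal listener for the named-pipe option from your edit, sketched in Perl to match the other code in this digest (the path and the command word are made up):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use POSIX qw(mkfifo);

    my $fifo = '/var/tmp/myscript.cmd';    # hypothetical path, outside /tmp

    unless (-p $fifo) {
        mkfifo($fifo, 0622) or die "mkfifo $fifo: $!";
    }

    while (1) {
        # Opening for read blocks until a writer shows up, e.g.:
        #   echo activate > /var/tmp/myscript.cmd
        open my $fh, '<', $fifo or die "open $fifo: $!";
        while (my $line = <$fh>) {
            chomp $line;
            if ($line eq 'activate') {
                # ... turn the feature on ...
            }
        }
        close $fh;    # last writer left (EOF); loop round and reopen
    }

Note that data written to a FIFO passes through a kernel buffer and never touches the disk, so a full /tmp partition does not lose your messages; only creating the FIFO node itself needs space on the filesystem, which is one more reason to keep it outside /tmp.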

file region locking using bash shell script

I am trying to write a script that can lock a region of a file from a bash shell script.
I have used flock, but it locks the whole file and does not offer parameters to lock just a region of a file, the way fcntl does in C.
It would be helpful if someone could offer some suggestions in this area.
Since flock(1) (a C program, see http://util-linux.sourcearchive.com/documentation/2.17/flock_8c-source.html) is a wrapper around flock(2), you would need a similar command that wraps fcntl(2) instead. If such a command doesn't exist yet, one would have to write it.
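One ready-made escape hatch, if a Perl helper is acceptable: the CPAN module File::FcntlLock wraps fcntl(2) record locks and, per its docs, exports the F_* and SEEK_* constants it needs. A sketch; treat the file name and byte range as placeholders and double-check the module's documentation for your platform:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::FcntlLock;       # CPAN module wrapping fcntl(2) locks

    open my $fh, '+<', '/tmp/data.bin' or die "open: $!";

    my $fs = File::FcntlLock->new;
    $fs->l_type(F_WRLCK);      # exclusive (write) lock
    $fs->l_whence(SEEK_SET);   # offsets count from the start of the file
    $fs->l_start(100);         # first byte of the region
    $fs->l_len(50);            # lock bytes 100..149 only

    $fs->lock($fh, F_SETLKW)   # F_SETLKW waits until the region is free
        or die "Locking failed: " . $fs->error . "\n";

    # ... update the locked region ...

    $fs->l_type(F_UNLCK);      # release just this region
    $fs->lock($fh, F_SETLK);

A bash script could then call such a helper around its critical section.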

Executing external programs in Perl

I am executing a few external programs from a Perl script and want to handle the prompts from those programs automatically. I know what the prompts are, they are not error conditions, and I want the script to answer them rather than the user.
What's best practice for this?
Thanks
My first stop would be the Expect module. I'm not sure if I'd need a second stop after that.
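A sketch of the Expect approach, where the program path, the prompt text, and the canned answer are all hypothetical:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Expect;    # CPAN module

    # Hypothetical command and prompt; substitute your own.
    my $exp = Expect->spawn('/usr/local/bin/sometool', '--install')
        or die "Cannot spawn: $!";

    $exp->expect(30,    # give up after 30 seconds of silence
        [ qr/Continue\? \[y\/n\]/i => sub {
              my $self = shift;
              $self->send("y\n");    # answer the prompt for the user
              exp_continue;          # keep watching for further prompts
          } ],
    );
    $exp->soft_close();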

Use full processing power with perl

I have a Perl script which runs correctly, but it only uses one core of my two-core CPU. How can I make it utilise both cores?
I know that I can create threads using threads->new(), but how do I fit that into something like:
my $twig= new XML::Twig::XPath(TwigRoots => {TrdCaptRpt => \&top_level});
$twig->parsefile($file);
where the subroutine is being called by something else.
The standard approach with Perl is to not try to use multiple cores with one invocation of the script, but instead to run jobs in parallel on separate cores.
Yes, you can use threading with Perl, but Perl's threading is (very) heavyweight. To avoid potential race conditions, when you spawn a thread Perl simply copies everything that it does not explicitly share. Using threading is therefore likely to be much slower than not using it.
You would need to modify the code of XML::Twig, and there is no canned answer for what would need to be done. If you find yourself having to run this script over multiple files, a better and very simple option is to write your script so it can work on more than one file at the same time. You could do that with threads, or with a wrapper script that executes two copies of your script at the same time (perhaps with xargs?), or by forking one child per file, as sketched below.
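A sketch of the fork-per-file variant. The module usage mirrors the snippet in the question; the two-worker cap and the top_level stub are assumptions:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use XML::Twig::XPath;

    sub top_level { }    # stub; your real handler goes here

    my @files = @ARGV;
    my $max_workers = 2;    # one worker per core
    my $running = 0;

    for my $file (@files) {
        if ($running >= $max_workers) {
            wait();          # block until one child finishes
            $running--;
        }
        my $pid = fork();
        die "fork: $!" unless defined $pid;
        if ($pid == 0) {
            # Child: each process gets its own parser and its own core.
            my $twig = XML::Twig::XPath->new(
                TwigRoots => { TrdCaptRpt => \&top_level },
            );
            $twig->parsefile($file);
            exit 0;
        }
        $running++;
    }
    1 while wait() != -1;    # reap the remaining children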
