'perlbrew list-modules' returns 'Perl' and nothing else - multithreading

I have Ubuntu 16.04 and I've installed Perl 5.8.7 and 5.18.2, both with threads enabled, and 5.18.1 without threads.
The purpose was to use a version of Perl with threads instead of forks, since I have multiple scripts already written with threads, and forks is not a proper multi-threading model (it just forks processes).
The first problem I get is when installing modules via the cpanm -fi [name_of_module] command. In fact the command doesn't seem to install anything at all: 'perlbrew list-modules' returns nothing but 'Perl'. The same thing happens with whatever module I try to install for use in my scripts.
I think this problem is linked to the fact that I'm able to use 'threads' only when I run the scripts without sudo (e.g. perl [name_of_the_script]), while when I run them with 'sudo perl [name_of_the_script]' it says 'the current version of Perl doesn't support threads'!
That's quite strange.
The perlbrew environment variables are set up correctly, and when I type 'which perl', the system points to the new-version directory as expected.
I don't know how to proceed.

I see that you posted several questions in one paragraph. I'll try to answer the ones that I can.
cpanm -fi XXX does not "return anything at all"
I'm not sure I get this part. If XXX failed to be installed, there should probably be some error messages on the screen. The fact that perlbrew list-modules prints nothing but Perl implies that probably nothing is installed.
It could be that your cpanm executable is implicitly installing stuff for the system perl instead. You could verify this by checking the first line of the cpanm script (head =cpanm in zsh, or head "$(which cpanm)" in bash). If it is not #!/usr/bin/env perl, it is probably the wrong one. You want the one installed by: perlbrew install-cpanm
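A minimal shell sketch of that check (the module name is just a placeholder, and the expected paths assume a default perlbrew setup under ~/perl5/perlbrew):
# Which cpanm actually runs, and which interpreter does it hard-code?
which cpanm                    # should point somewhere under ~/perl5/perlbrew/bin
head -n 1 "$(which cpanm)"     # expected: #!/usr/bin/env perl

# If it points at the system perl instead, install the perlbrew-managed cpanm and retry:
perlbrew install-cpanm
cpanm -fi Some::Module         # Some::Module is a placeholder
perlbrew list-modules          # the freshly installed module should now show up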
sudo perl
I wonder if your PATH is actually correctly set -- running perl -V after perlbrew use 5.8.7 will show you enough version information to tell you whether perlbrew itself is working properly.
You probably also need sudo -E perl instead. sudo resets env vars unless you ask it not to (the -E option), and PERL5LIB is probably needed.
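A sketch of how that might look; it assumes the brewed perl is the one currently on your PATH, and names it explicitly so that sudo's own secure_path cannot silently fall back to the system perl (name_of_the_script is the placeholder from the question):
perlbrew use 5.18.2                          # or whichever threaded perl you want
perl -V | head -n 3                          # confirm the brewed, threaded perl is active
sudo -E "$(which perl)" name_of_the_script   # -E keeps PERL5LIB and friends intact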

Related

Checking whether a program exists

In the middle of my perl script I want to execute a bash command. The script takes a long time, so at the beginning of the script I want to see if the command exists. This answer says to just try and run it and this other answer suggests some bash commands to test if the program exists.
Is the latter option the best solution? Are there any better ways to do this check in perl?
My best guess is that you want to check for the existence of an executable file that you want to run using system or qx//.
But if you want your command line to behave the same way as the shell, then you can probably use File::Which.
What if we assume that we don't know the command's location?
This means that syck's answer won't work, and zdim's answer is incomplete.
Try this function in perl:
sub check_exists_command {
    # `command -v` prints the resolved name (or nothing), so the captured
    # output is true when the command exists and false when it does not
    my $check = `sh -c 'command -v $_[0]'`;
    return $check;
}
# two examples
check_exists_command 'pgrep' or die "$0 requires pgrep";
check_exists_command 'readlink' or die "$0 requires readlink";
I just tested it, because I just wrote it.
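For reference, this is the shell-level behavior the backticked sh -c 'command -v ...' call relies on (a quick illustration; the printed path varies by system, and pgrep is just an example):
$ command -v pgrep           # found: prints the resolved path and exits 0
/usr/bin/pgrep
$ command -v no_such_tool    # not found: prints nothing and exits non-zero
$ echo $?
1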
With Perl, you can test files for existence, readability, executability, etc.; take a look here.
Therefore just use
executeBashStuff() if -x $filename;
or stat it:
stat($filename);
executeBashStuff() if -x _;
To me a better check is to run the program at the beginning of the script (with -V say).
I'd use the same invocation as you will use to run the job later (via the shell or not, via execvp). While at it, make sure to check whether it threw errors. This is also discussed in your link, but I would in fact capture the output (not send it away) and check that. This is the surest way to see whether the thing actually runs from your program and whether it is what you expect it to be.
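Sketched at the shell level (in the Perl script the same idea is a qx// call whose output and $? you inspect; pgrep -V is only an example of a cheap "does it run" invocation):
# Run the tool once with a cheap flag, capture everything it prints,
# and check both the exit status and the output itself.
out=$(pgrep -V 2>&1)
status=$?
if [ "$status" -ne 0 ] || [ -z "$out" ]; then
    echo "pgrep did not run as expected (status $status)" >&2
    exit 1
fi
echo "found: $out"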
Checking for the executable with -x (if you know the path) is useful, too, but it only tells you that a file with a given name is there and that it is executable.
The system's which seems to be beset with criticism for its possible (mis)behavior: it may or may not be a shell builtin (which complicates how exactly to use it), otherwise it is an external utility, and its exact behavior is system dependent. The module File::Which pointed out in Borodin's answer would be better -- if it is indeed better than which. (Which it may well be; I just don't know.)
Note: I am not sure what "bash command" means: a bash shell built-in, or the fact that you use bash at the terminal? Perl's qx and system use the sh shell, not bash (if they invoke the shell at all, which depends on how you use them). While sh is mostly a link, and often to bash, it may not be, there are differences, and you cannot rely on your shell configuration.
You can also actually run a shell, qx(/path/bash -c 'cmd args'), if you must. Mind the quotes. You may need to play with it to find the exact syntax on your system. See this page and links.

Bash: Unexpected parallel behavior when reading arguments from file using xargs

Previous
This is a follow-up to this question.
Specs
My system is a dedicated server running Ubuntu Desktop, Release 12.04 (precise) 64-bit, kernel 3.14.32-xxxx-std-ipv6-64. Neither the release nor the kernel can be upgraded, but I can install any package.
Problem
The problem described in the question above seems to be solved; however, that solution doesn't work for me. I've installed the latest lftp and parallel packages and they seem to work fine on their own.
Running lftp works fine.
Running ./job.sh ftp.microsoft.com works fine, but I needed to chmod +x the script.
Running sed 's/|.*$//' end_unique.txt | xargs parallel -j20 ./job.sh ::: does not work and produces bash errors in the form of /bin/bash: <server>: command not found.
To simplify things, I cleaned the input file end_unique.txt, now it has the following format for each line:
<server>
Each line ends in a CRLF, because it is imported from a Windows server.
Edit 1:
This is the job.sh script:
#!/bin/sh
server="$1"
lftp -e "find .; exit" "$server" >"$server-files.txt"
Edit 2:
I took the file and ran it against fromdos. Now it should be in standard Unix format, one server per line. Keep in mind that the servers in the file can vary in format:
ftp.server.com
www.server.com
server.com
123.456.789.190
etc. All of those servers are ftp servers, accessible by ftp://<serverfromfile>/.
With :::, parallel expects the list of arguments it needs to complete the commands it's going to run to appear on the command line, as in
parallel -j20 ./job.sh ::: server1 server2 server3
Without ::: it reads the arguments from stdin, which serves us better in this case. You can simply say
parallel -j20 ./job.sh < end_unique.txt
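If the input file still has Windows line endings, it is worth stripping the trailing CR first so the server names do not end in an invisible \r (a sketch; GNU sed understands \r here, and fromdos or dos2unix achieve the same thing):
sed 's/\r$//' end_unique.txt | parallel -j20 ./job.sh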
Addendum: Things that can go wrong
Make certain of two things:
That you are using GNU parallel and not another version (such as the one from moreutils), because (as far as I'm aware) only the GNU version supports reading an argument list from stdin, and
That GNU parallel is not configured to disable the GNU extensions. It turned out, after a lengthy discussion in the comments, that they are disabled by default on Ubuntu 12.04, so it is not inconceivable that this sort of thing might be found elsewhere (particularly downstream from Ubuntu). Such a configuration can hide in any of the following (a quick check is sketched after this list):
The environment variable $PARALLEL,
/etc/parallel/config, or
~/.parallel/config
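One way to see which parallel you have and whether anything is turning the GNU extensions off (a sketch; it assumes a GNU parallel recent enough to accept the --gnu flag, which forces GNU behavior for a single run):
parallel --version | head -n 1                        # GNU parallel identifies itself as "GNU parallel <release>"
echo "PARALLEL=$PARALLEL"                             # per-user option variable, if any
grep -Hs . /etc/parallel/config ~/.parallel/config    # config files, printed with filenames if present

parallel --gnu -j20 ./job.sh < end_unique.txt         # force GNU behavior regardless of configuration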
If the GNU version of parallel is not available to you, and if your argument list is not too long for the shell and none of the arguments in it contain whitespace, the same thing with the moreutils parallel is
parallel -j20 job.sh -- $(cat end_unique.txt)
This did not work for the OP because the file contained more servers than the shell was willing to put into a command line, but it might work for others with similar problems.

Dry-run a potentially dangerous script?

A predecessor of mine installed a crappy piece of software on an old machine (running Linux) which I've inherited. Said crappy piece of software installed flotsam all over the place, and also is sufficiently bloated that I want it off ASAP -- it no longer has any functional purpose since we've moved on to better software.
Vendor provided an uninstall script. Not trusting the crappy piece of software, I opened the uninstall script in an editor (a 200+ line Bash monster), and it starts off something like this:
SWROOT=`cat /etc/vendor/path.conf`
...
rm -rf $SWROOT/bin
...
It turns out that /etc/vendor/path.conf is missing. Don't know why, don't know how, but it is. If I had run this lovely little script, it would have deleted the /bin folder, which would have had rather amusing implications. Of course this script required root to run!
I've dealt with this issue by just manually running all the uninstall commands (guh) where sensible. This kind of sucked because I had to interpolate all the commands manually. In general, is there some sort of way I can "dry run" a script to have it dump out all the commands it would execute, without actually executing them?
bash does not offer dry-run functionality (and neither do ksh, zsh, or any other shell I know).
It seems to me that offering such a feature in a shell would be next to impossible: state changes would have to be simulated and any command invoked - whether built in or external - would have to be aware of these simulations.
The closest thing that bash, ksh, and zsh offer is the ability to syntax-check a script without executing it, via option -n:
bash -n someScript # syntax-check a script, without executing it.
If there are no syntax errors, there will be no output, and the exit code will be 0.
If there are syntax errors, analysis stops at the first error; an error message including the line number is written to stderr, and the exit code will be:
2 in bash
3 in ksh
1 in zsh
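For instance (a made-up two-line script that is missing its closing fi; the exact wording of the message differs between shells):
$ printf 'if true; then\n  echo hi\n' > broken.sh
$ bash -n broken.sh
broken.sh: line 3: syntax error: unexpected end of file
$ echo $?
2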
Separately, bash, ksh, and zsh offer debugging options:
-v to print each raw source code line[1] to stderr before it is executed.
-x to print each expanded simple command to stderr before it is executed (env. var. PS4 allows tweaking the output format).
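A small illustration of -x with a customized PS4 (the '+ line ' prefix is just an example format; $LINENO is expanded by the traced shell):
PS4='+ line $LINENO: ' bash -x someScript    # print each expanded command with its line number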
Combining -n with -v and/or -x offers little benefit:
With -n specified, -x has no effect at all, because nothing is being executed.
With -n specified, -v will effectively simply print the source code.
If there is a syntax error, there may be benefit in the source code getting printed up to the point where the error occurs; keep in mind, though, that the error message produced by -n always includes the offending line number.
[1] Typically, it is individual lines that are printed, but the true unit is however many lines a given command - which may be a compound command such as while or a command list (such as a pipeline) - spans.
You could try running the script under Kornshell. When you execute a script with ksh -D, it reads the commands and checks them for syntax, but doesn't execute them. Combine that with set -xv, and you'll print out the commands that will be executed.
You can also use set -n for the same effect. Kornshell and BASH are fairly compatible with each other. If it's a pure Bourne shell script, both Kornshell and BASH will execute it pretty much the same.
You can also run ksh -u, which causes the script to fail when it uses an unset shell variable. However, that wouldn't have caught the cat of a nonexistent file. In that case, the shell variable was still set. It was set to null.
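A minimal bash sketch of that point (bash's set -u behaves like ksh -u here; /etc/vendor/path.conf is the missing file from the question):
#!/bin/bash
set -u                                            # complain about *unset* variables
SWROOT=$(cat /etc/vendor/path.conf 2>/dev/null)   # file missing: SWROOT is set, but empty
echo "would remove: $SWROOT/bin"                  # prints "would remove: /bin" -- -u never triggers
# a parameter-expansion guard would catch the empty value:
# rm -rf "${SWROOT:?SWROOT is empty}/bin"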
Of course, you could run the script under a restricted shell too, but that's probably not going to uninstall the package.
That's the best you can probably do.

Finding if 'which' command is available on a System through BASH

While writing BASH scripts, I generally use the which command of a Linux machine (where Linux machine refers to a desktop-based Linux OS like Ubuntu, Fedora, or OpenSUSE) for finding the path or availability of other binaries. I understand that which searches for binaries (commands) present in the directories listed in the PATH variable.
Now, I am unable to understand how to proceed in case the which command itself is not present on that machine.
My intention is to create a shell script (BASH) which can be run on a machine and in case the environment is not adequate (like some command being used in script is missing), it should be able to exit gracefully.
Does anyone have any suggestions in this regard? I understand there can be ways like using locate or find etc. -- but again, what if even they are not available? Another option which I already know is to look for the existence of a which binary on standard paths like /usr/bin/, /bin/ or /usr/local/bin/. Is there any other possibility as well?
Thanks in advance.
type which
type is a bash built-in command, so it's always available in bash. See man bash for details on it.
Note, that this will also recognize aliases:
$ alias la='ls -l -a'
$ type la
la is aliased to 'ls -l -a'
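Since the original goal was to bail out gracefully when a required command is missing, here is a sketch using only the built-in (type -P looks up executables on PATH, ignoring aliases and functions; the listed prerequisites are just examples):
#!/bin/bash
for cmd in curl tar gzip; do                      # example prerequisites
    if ! type -P "$cmd" >/dev/null 2>&1; then
        echo "error: required command '$cmd' not found in PATH" >&2
        exit 1
    fi
done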
(More of a comment because Boldewyn answered perfectly, but it is another take on the question that may be of interest to some.)
If you are worried that someone may have messed with your bash installation and somehow removed which, then I suppose in theory, when you actually invoked the command you would get an exit code of 127.
Consider
$ sdgsdg
-bash: sdgsdg: command not found
$ echo $?
127
Exit codes in bash: http://tldp.org/LDP/abs/html/exitcodes.html
Of course, if someone removed which, then I wouldn't trust the exit codes, either.

Automatically invoking gksudo like UAC

This is about me being stressed by playing the game "type a command and remember to prepend sudo or your fingers will get slapped".
I am wondering if it is possible somehow to configure my Linux system or shell such that when I forget to type e.g. "sudo apt-get install emacs", instead of just telling me that I did something wrong, gksudo would get launched, allowing me to acknowledge my credentials and get on moving. Just like UAC does on Windows.
Googling hasn't helped me yet.
So is this possible? Did I miss something? Or am I asking for a square circle?
Edit 2010 July 25th: Thanks everyone for your interest. Unfortunately, Daenyth's and bmargulies' answers and explanations are what I anticipated/feared, since it was impossible for me to google up a solution prior to submitting this question. I hope that some nice person will someday provide an effective solution for this.
BR,
Christian
Linux doesn't allow for this. Unlike Windows, where any program can launch a dialog box, and UAC is in the kernel, Linux programs aren't necessarily GUI-capable, and sudo is not, in this sense, in the kernel. A program cannot make a call to elevate privilege (unless it was launched with privilege to begin with and intentionally setuid'd down). sudo is a separate executable with setuid privilege, which checks for permission. If it likes what it sees, it forks the shell to execute the command line. This can't be turned inside out.
As suggested in other posts, you may be able to come up with some 'shell game' to arrange to run sudo for you for some enumerated list of commands, but that's all you are going to get.
You can do what you want with a preexec hook function, similar to the command-not-found package.
There's no way to do this given the current linux software stack. Additionally, MS has a patent on this behavior -- present a user interface identifying an account having a right to permit a task in response to the task being prohibited based on a user's current account not having that right.
I don't think this really works in a general way (automatically deciding which application needs admin rights). However, you could make aliases like this for every application:
alias apt-get='gksudo apt-get'
If you now enter apt-get install firefox, GNOME asks for the admin password. You can store the commands in ~/.bashrc
You could use a shell script like the following:
#!/bin/bash
"$@"
if [ $? -ne 0 ]; then
    sudo "$@"    # or: gksudo "$@"
fi
This will run a command given in the arguments with a sudo prefix if the command came back with a non-zero return code (i.e. if it failed).
Use it as in "SCRIPT_NAME apt-get install emacs" for example. You may save it somewhere in your $PATH and set it as an alias like this (if you saved it as do_sudo):
alias apt-get='do_sudo apt-get'
Edit: That does not work for programs like synaptic, which do work for non-root users but give them fewer privileges. However, if the application fails when invoked without root privileges (like apt-get does), this works fine.
In the case where you want to always run a command as root but might already be root, you can solve this by wrapping a little bash script around it:
#!/bin/bash
if [ $EUID = 0 ]; then
    "$@"
else
    gksudo "$@"
fi
If you call this something like alwaysroot.bash and place it in the right spot on your PATH, then you can call your other program like this:
alwaysroot.bash otherprogram -arguments...
It even handles arguments with spaces in them correctly.
