How can I get the name of the user executing my Perl script? - linux

I have a script that needs to know what username it is run from.
When I run it from shell, I can easily use $ENV{"USER"}, which is provided by bash.
But apparently - when the same script is run from cron, also via bash - $ENV{"USER"} is not defined.
Of course, I can:
my $username = getpwuid( $< );
But it doesn't look nice - is there any better/nicer way? It doesn't have to be system-independent, as the script is for my personal use and will only be run on Linux.

Try getting your answer from several places, first one wins:
my $username = $ENV{LOGNAME} || $ENV{USER} || getpwuid($<);

crontab sets $LOGNAME so you can use $ENV{"LOGNAME"}. $LOGNAME is also set in my environment by default (haven't looked where it gets set though) so you might be able to use only $LOGNAME instead of $USER.
Although I agree with hacker, I don't know what's wrong with getpwuid.

Does this look prettier?
use English qw( -no_match_vars );
my $username = getpwuid $UID;

Sorry, why doesn't that "look nice"? That's the appropriate system call to use. If you want an external program to invoke (e.g. something you could use from a bash script too), there are the tools /usr/bin/id and /usr/bin/whoami.
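If you do want to go through one of those external tools from inside Perl, a minimal sketch (assuming whoami is on the PATH, as it is on typical Linux systems) could be:
#!/usr/bin/env perl
use strict;
use warnings;
# Ask the external whoami tool for the effective user name;
# id -un would work the same way.
chomp( my $username = `whoami` );
print "$username\n";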

Apparently much has changed in Perl in recent years, because some of the answers given here do not work for fetching a clean version of "current username" in modern Perl.
For example, getpwuid($<) in list context returns a whole list of fields, not just the username, so you have to use (getpwuid($<))[0] instead if you want a clean version of the username.
Also, I'm surprised no one mentioned getlogin(), though that doesn't always work. For best chance of actually getting the username, I suggest:
my $username = getlogin() || (getpwuid($<))[0] || $ENV{LOGNAME} || $ENV{USER};

Related

NodeJS spawn does not escape bad strings

I want to download a URL on a remote host using ssh. I was using exec(), and it was working:
const cmd = `mkdir -p /home/username/test; wget --no-check-certificate -q -U \"\" -c \"${url}\" -O /home/username/test/img.jpg`;
const out = execSync(`ssh -o ConnectTimeout=8 -o StrictHostKeyChecking=no -p 2356 username@${ip} '${cmd}'`);
But it's unsafe to use the url variable this way, since the value of this variable comes from user input, so I found some posts on Stack Overflow saying that I need to use spawn:
const url = 'https://example.com/image.jpg';
const out = spawnSync('ssh', [
'-o', 'ConnectTimeout=8',
'-o', 'StrictHostKeyChecking=no',
'-p', '2356',
`username@${ip}`,
`mkdir -p /home/username/test; wget --no-check-certificate -q -U "" -c "${url}" -O /home/username/test/img.jpg`,
]);
But what if const url = 'https://example.com/image.jpg"; echo 5; "';? The echo will be executed. Could someone tell me how to execute this code in a safe way?
There are two aspects. First, you are correct that execSync is unsafe. To quote from the documentation:
Never pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.
A solution is to use spawn (or spawnSync), as you pointed out, for example as in this related answer.
However, in your example, you are calling ssh and passing it a command string, which will be executed by the shell on the remote system. Because of that, you are still vulnerable to the attack, as you showed in the example. Note that it is no longer a Node.js-related exploit, but an exploit of ssh and the shell.
To mitigate the attack, I would recommend concentrating on the remote server. Instead of sending it a command over ssh, which it has to trust and execute in the shell, you could provide a clearly defined API on the server. With an HTTP interface, you can accept input and do proper validation (instead of simply trusting it). The advantage is that you do not need to deal with the subtleties of the shell.
If you are forced to work with ssh, you could validate the URL and forward it to the server only if it is safe. It is still not ideal from a security perspective. First, the remote server needs to trust you completely (often it is better to avoid that and instead validate as locally as possible). Second, the validation itself is not straightforward. You will need to decide if a string looks like a URL (e.g. by using new URL(url)), but the more difficult aspect is making sure that no exploits slip through. I don't have a concrete example, but I would be cautious about assuming that every string that passes the URL parser is safe to execute in a shell environment.
In summary, if possible, avoid passing shell commands over ssh in this situation (input data controlled by an attacker). Instead, prefer a normal API such as an HTTP interface (or another text or binary protocol). If that is not possible, try hard to sanitize the data before sending it out. Maybe you know in advance what a URL will look like (e.g. the list of allowed hostnames, allowed paths, etc.). But realize that there might be cases you overlook, and never underestimate the creativity of an attacker.

Allstar Node Programming

I'm almost completely new to Linux programming and Bash scripts. I built an amateur radio AllStar node.
I'm trying to create a script that looks at a certain variable and, based on that info, decides if it should connect or not. I can use the command asterisk -rx "rpt showvars 47168". This returns a list of variables and their current values. I can store the whole list in a variable that I define (in my test script I just called it MYVAR), but I can't seem to get the value of just one of the variables in the list.
I talked to someone who knows a lot about Linux programming, and she suggested that I try CONNECTED="${MYVAR[3]}" but when I do this, CONNECTED just seems to become a blank variable.
What really frustrates me is that I have written programs in other programming languages, and I've been told Bash scripts are easy to learn, yet I can't seem to get this.
So any help would be great.
How did you assign your variable?
It seems to me that you want to work with an array, then:
#!/bin/bash
myvar=( $( asterisk -rx "rpt showvars 47168" ) )   # word-split the command output into an array
echo ${myvar[3]}    # this is your fourth element
echo ${#myvar[@]}   # this is the total number of elements in the array
Be careful: array indices start at 0.

returning values in a bash function

I'm working with a growing bash script, and within this script I have a number of functions. One of these functions is supposed to return a variable's value, but I am running into some issues with the syntax. Below is an example of the code.
ShowTags() {
local tag=0
read tag
echo "$tag"
}
selected_tag=$(ShowTags)
echo "$selected_tag"
I pulled this code from a Linux Journal article, but the problem is it doesn't seem to work, or perhaps it does and I'm missing something. Essentially, whenever the function is called the script hangs and does not output anything; I need to press CTRL+C to drop back to the CLI.
The article in question is below.
http://www.linuxjournal.com/content/return-values-bash-functions
So my question is this the proper way to return a value? Is there a better or more dependable way of doing this? And if there is please give me an example so I can figure this out without using global variables.
EDIT:
The behavior of this is really getting to me now. I am using the following script.
ShowTags() {
echo "hi"
local tag=0
read tag
echo "$tag"
}
selected_tag=$(ShowTags)
echo "$selected_tag
Basically, what happens is that bash acts as if the read command takes place before the echo at the top of the function. As soon as I pass something to read, though, it runs the top echo and completes the rest of the script. I am not sure why this is happening. This is exactly what is happening in my main script.
Change echo "hi" to echo "hi" >/dev/tty.
The reason you're not seeing it immediately is that $(ShowTags) captures all the standard output of the function, and that gets assigned to selected_tag. So you don't see any of it until you echo that variable.
By redirecting the prompt to /dev/tty, it's always displayed immediately on the terminal, not sent to the function's stdout, so it doesn't get captured by the command substitution.
You are trying to define a function with Name { ... }. You have to use name() { ... }:
ShowTags() { # add ()
local tag=0
read tag
echo "$tag"
} # End with }
selected_tag=$(ShowTags)
echo "$selected_tag"
It now lets the user type in a string and have it written back:
$ bash myscript
hello world # <- my input
hello world # script's output
You can add a prompt with read -p "Enter tag: " tag to make it more obvious when to write your input.
As @thatotherguy pointed out, your function declaration syntax is off; but I suspect that's a transcription error, as if it were wrong in the script you'd get different problems. I think what's going on is that the read tag command in the function is trying to read a value from standard input (by default that's the terminal), and pausing until you type something in. I'm not sure what it's intended to do, but as written I'd expect it to pause indefinitely until something's typed in.
Solution: either type something in, or use something other than read. You could also add a prompt (read -p "Enter a tag: " tag) to make it more clear what's going on.
BTW, I have a couple of objections to the linux journal article you linked. These aren't relevant to your script, but things you should be aware of.
First, the function keyword is a nonstandard bashism, and I recommend against using it. myfunc() ... is sufficient to introduce a function definition.
Second, and more serious, the article recommends using eval in an unsafe way. Actually, it's really hard to use eval safely (see BashFAQ #48). You can improve it a great deal just by changing the quoting, and even more by not using eval at all:
eval $__resultvar="'$myresult'" # BAD, can evaluate parts of $myresult as executable code
eval $__resultvar='"$myresult"' # better, is only vulnerable to executing $__resultvar
declare $__resultvar="$myresult" # better still
See BashFAQ #6 for more options and discussion.

How to create a bash variable like $RANDOM

I'm interested in something: every time I echo $RANDOM, the value shown is different. I guess RANDOM is special (when I read it, it may call a function, set a variable flag, and return the random number). I want to create a variable like this; how can I do it? Every answer will be helpful.
The special behavior of $RANDOM is a built-in feature of bash. There is no mechanism for defining your own special variables.
You can write a function that prints a different value each time it's called, and then invoke it as $(func). For example:
now() {
date +%s
}
echo $(now)
Or you can set $PROMPT_COMMAND to a command that updates a specified variable. It runs just before printing each prompt.
i=0
PROMPT_COMMAND='((i++))'
This doesn't work in a script (since no prompt is printed), and it imposes an overhead whether you refer to the variable or not.
If you are writing Bash scripts, there is a $RANDOM variable already built into Bash.
This post explains the random variable $RANDOM:
http://tldp.org/LDP/abs/html/randomvar.html
It generates a number from 0 to 32767.
If you want to do different things depending on the value, then something like this (note that case patterns match strings, not numeric ranges, so an arithmetic test is used instead):
r=$RANDOM   # sample once so all comparisons see the same value
if (( r <= 10000 )); then
    Message="All is quiet."
elif (( r <= 20000 )); then
    Message="Start thinking about cleaning out some stuff. There's a partition that is $space % full."
else
    Message="Better hurry with that new disk... One partition is $space % full."
fi
I stumbled on this question a while ago and wasn't satisfied by the accepted answer: the OP wanted to create a variable just like $RANDOM (a variable with a dynamic value), so I wondered if we could do it without modifying bash itself.
Variables like $RANDOM are defined internally by bash using the dynamic_value field of the struct variable. If we don't want to patch bash to add our custom "dynamic value" variables, we still have a few other alternatives.
An obscure feature of bash is loadable builtins (shell builtins loaded at runtime), which provide a convenient way to dynamically load new symbols via the enable builtin:
$ enable --help|grep '\-f'
enable: enable [-a] [-dnps] [-f filename] [name ...]
-f Load builtin NAME from shared object FILENAME
-d Remove a builtin loaded with -f
We now have to write a loadable builtin providing the functions (written in C) that we want to use as the dynamic_value for our variables, then set the dynamic_value field of our variables to a pointer to the chosen functions.
The production-ready way of doing this is to use another loadable builtin crafted on purpose to do the heavy lifting, but one may abuse gdb, if the ptrace call is available, to do the same.
I've made a little demo using gdb, answering "How to create a bash variable like $RANDOM?":
$ ./bashful RANDOM dynamic head -c 8 /dev/urandom > /dev/null
$ echo $RANDOM
L-{Sgf

I want to run a script from another script, use the same version of perl, and reroute IO to a terminal-like textbox

I am somewhat familiar with various ways of calling a script from another one. I don't really need an overview of each, but I do have a few questions. Before that, though, I should tell you what my goal is.
I am working on a perl/tk program that: a) gathers information and puts it in a hash, and b) fires off other scripts that use the info hash, plus some command line args. Each of these other scripts is available on the command line (via another command-line script) and needs to stay that way, so I can't just put all that into a module and call it good. I do have the authority to alter the scripts, but, again, they must also be usable on the command line.
The current way of calling the other script is by using 'do', which means I can pass in the hash, and use the same version of perl (I think). But all the STDOUT (and STDERR too, I think) goes to the terminal.
Here's a simple example to demonstrate the output:
this_thing.pl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use Tk;
my $mw = MainWindow->new;
my $button = $mw->Button(
-text => 'start other thing',
-command => \&start,
)->pack;
my $text = $mw->Text()->pack;
MainLoop;
sub start {
my $script_path = 'this_other_thing.pl';
if (not my $read = do $script_path) {
warn "couldn't parse $script_path: $#" if $#;
warn "couldn't do $script_path: $!" unless defined $read;
warn "couldn't run $script_path" unless $read;
}
}
this_other_thing.pl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
print "Hello World!\n";
How can I redirect the STDOUT and STDIN (for interactive scripts that need input) to the text box using the 'do' method? Is that even possible?
If I can't use the 'do' method, what method can redirect the STDIN and STDOUT, as well as enable passing the hash in and using the same version of perl?
Edit: I posted this same question at PerlMonks, at the link in the first comment. So far, the best response seems to be to use modules and have the child script just be a wrapper for the module. Other possible solutions are IPC::Run(3) and IPC in general, Capture::Tiny and associated modules, and Tk::Filehandle. A solution was presented that redirects the output and error streams, but it seems not to affect the input stream; it's also a bit kludgy and not recommended.
Edit 2: I'm posting this here because I can't answer my own question yet.
Thanks for your suggestions and advice. I went with a suggestion on Perlmonks. The suggestion was to turn the child scripts into modules, and use wrapper scripts around them for normal use. I would then simply be able to use the modules, and all the code is in one spot. This also ensures that I am not using different perls, I can route the output from the module anywhere I want, and passing that hash in is now very easy.
To have both STDIN & STDOUT of a subprocess redirected, you should read the "Bidirectional Communication with Another Process" section of the perlipc man page: http://search.cpan.org/~rjbs/perl-5.18.1/pod/perlipc.pod#Bidirectional_Communication_with_Another_Process
Using the same version of perl works by finding out the name of your perl interpreter, and calling it explicitly. $^X is probably what you want. It may or may not work on different operating systems.
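A rough sketch of both points together, using IPC::Open2 (the core module the perlipc section discusses) and starting the child with the same interpreter via $^X; the script name is the one from the question and the input line is just a placeholder:
use strict;
use warnings;
use IPC::Open2;
# Start the child with the same perl binary that runs this script ($^X).
my $pid = open2( my $child_out, my $child_in, $^X, 'this_other_thing.pl' );
print {$child_in} "some input for the child\n";   # goes to the child's STDIN
close $child_in;
while ( my $line = <$child_out> ) {               # read the child's STDOUT
    print "child said: $line";
}
waitpid( $pid, 0 );
Keep in mind the deadlock caveats perlipc mentions when both sides buffer a lot of data.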
Passing a hash into a subprocess does not work easily. You can print the contents of the hash into a file and have the subprocess read and parse it. You might get away without using a file by using the STDIN channel between the two processes, or you could open a separate pipe() for this purpose. Either way, printing the data and parsing it back cannot be avoided when using subprocesses, because the two processes use two perl interpreters, each with its own memory space, neither able to see the other's variables.
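For illustration, a hedged sketch of that serialize-and-parse step using the core JSON::PP module and the child's STDIN as the channel (the hash contents here are made up):
use strict;
use warnings;
use JSON::PP;
my %info = ( user => 'someone', mode => 'test' );   # example data only
# Pipe the encoded hash to the child's STDIN; in the child, something like
#   use JSON::PP; my %info = %{ decode_json( do { local $/; <STDIN> } ) };
# would rebuild it.
open( my $child, '|-', $^X, 'this_other_thing.pl' )
    or die "can't start child: $!";
print {$child} JSON::PP->new->encode( \%info );
close $child or warn "child exited with status $?";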
You might avoid using a subprocess altogether by using fork() + eval() + require(). In that case, no separate perl interpreter will be involved; the forked interpreter inherits the whole memory of your program with all variables, open file descriptors, sockets, etc. in it, including the hash to be passed. However, I don't see where your second perl script would get its hash from when started from the CLI.
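Still, as a minimal sketch of that fork() idea (hash contents are placeholders; the child sees the parent's data because fork() copies the whole process):
use strict;
use warnings;
our %info = ( user => 'someone', mode => 'test' );   # package variable: visible to the required file as %main::info
my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ( $pid == 0 ) {
    # Child: same interpreter, same memory image; run the other script here.
    eval { require './this_other_thing.pl'; 1 }
        or warn "child script failed: $@";
    exit 0;
}
waitpid( $pid, 0 );   # parent waits for the child to finish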
