Returning values in a bash function - Linux

I'm working with a growing bash script, and within this script I have a number of functions. One of these functions is supposed to return a variable's value, but I am running into some issues with the syntax. Below is an example of the code.
ShowTags() {
    local tag=0
    read tag
    echo "$tag"
}
selected_tag=$(ShowTags)
echo "$selected_tag"
I pulled this code from a Linux Journal article, but the problem is it doesn't seem to work, or perhaps it does and I'm missing something. Essentially, whenever the function is called the script hangs and does not output anything; I need to press CTRL+C to drop back to the CLI.
The article in question is below.
http://www.linuxjournal.com/content/return-values-bash-functions
So my question is: is this the proper way to return a value? Is there a better or more dependable way of doing this? And if there is, please give me an example so I can figure this out without using global variables.
EDIT:
The behavior of this is really getting to me now. I am using the following script.
ShowTags() {
    echo "hi"
    local tag=0
    read tag
    echo "$tag"
}
selected_tag=$(ShowTags)
echo "$selected_tag"
Basically what happens is that bash acts as if the read command takes place before the echo at the top of the function. As soon as I pass something to read, it runs the top echo and completes the rest of the script. I am not sure why this is happening. This is exactly what is happening in my main script.

Change echo "hi" to echo "hi" >/dev/tty.
The reason you're not seeing it immediately is that $(ShowTags) captures all the standard output of the function, and that gets assigned to selected_tag. So you don't see any of it until you echo that variable.
By redirecting the prompt to /dev/tty, it's always displayed immediately on the terminal, not sent to the function's stdout, so it doesn't get captured by the command substitution.
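For example, a minimal sketch of the corrected function (the prompt text here is illustrative):
ShowTags() {
    echo "Enter a tag:" >/dev/tty   # written straight to the terminal, never captured
    local tag=0
    read tag
    echo "$tag"                     # the only output $(...) captures
}
selected_tag=$(ShowTags)
echo "$selected_tag"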

You are trying to define a function with Name { ... ]. You have to use name() { ... }:
ShowTags() { # add ()
    local tag=0
    read tag
    echo "$tag"
} # End with }
selected_tag=$(ShowTags)
echo "$selected_tag"
It now lets the user type in a string and have it written back:
$ bash myscript
hello world # <- my input
hello world # script's output
You can add a prompt with read -p "Enter tag: " tag to make it more obvious when to write your input.
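Putting that together, a sketch of the function with a prompt (read -p writes its prompt to stderr, so command substitution won't capture it):
ShowTags() {
    local tag=0
    read -p "Enter tag: " tag   # prompt goes to stderr, not captured by $(...)
    echo "$tag"
}
selected_tag=$(ShowTags)
echo "$selected_tag"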

As @thatotherguy pointed out, your function declaration syntax is off; but I suspect that's a transcription error, as if it were wrong in the script you'd get different problems. I think what's going on is that the read tag command in the function is trying to read a value from standard input (by default, that's the terminal), pausing until you type something in. I'm not sure what it's intended to do, but as written I'd expect it to pause indefinitely until something is typed in.
Solution: either type something in, or use something other than read. You could also add a prompt (read -p "Enter a tag: " tag) to make it more clear what's going on.
BTW, I have a couple of objections to the Linux Journal article you linked. These aren't relevant to your script, but they're things you should be aware of.
First, the function keyword is a nonstandard bashism, and I recommend against using it. myfunc() ... is sufficient to introduce a function definition.
Second, and more serious, the article recommends using eval in an unsafe way. Actually, it's really hard to use eval safely (see BashFAQ #48). You can improve it a great deal just by changing the quoting, and even more by not using eval at all:
eval $__resultvar="'$myresult'" # BAD, can evaluate parts of $myresult as executable code
eval $__resultvar='"$myresult"' # better, is only vulnerable to executing $__resultvar
declare $__resultvar="$myresult" # better still
See BashFAQ #6 for more options and discussion.
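For instance, here is a minimal sketch of the article's return-by-name pattern using printf -v instead of eval (one of the BashFAQ #6 options; the function and variable names are illustrative):
my_function() {
    local __resultvar=$1
    local myresult='some value'
    printf -v "$__resultvar" '%s' "$myresult"   # assigns to the named variable, no eval
    # caveat: the caller must not pick a name that collides with the function's locals
}
my_function result
echo "$result"   # prints: some value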

Related

How to put special characters in variable and use them in string?

I'm wondering: is there any way to use, for example, , or ^ or % and so on, from variables in Bash?
For instance, I have three variables:
var1='hello world'
var2=${var1:3}
var3='^'
I want to do this in bash! Please pay attention to my question; I know it's very simple in other ways, but how about this?
echo ${var1:0:3}${var2$var3} # instead of echo ${var1:0:3}${var2^}
and finally the output is:
helLo world
In theory, eval can execute arbitrary code, but it has many security issues, so it should be used as a last resort. Use it only when you trust the input 100%.
var1='hello world'
var2=${var1:3}
var3='^'
eval echo '${var1:0:3}${var2'$var3'}'
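Running this prints helLo world, the same result you would get from typing echo ${var1:0:3}${var2^} literally.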

How to create a bash variable like $RANDOM

I'm interested in something: every time I echo $RANDOM, the value shown is different. I guess RANDOM is special (when I read it, it may call a function, set a variable flag, and return the random number). I want to create a variable like this; how can I do it? Every answer will be helpful.
The special behavior of $RANDOM is a built-in feature of bash. There is no mechanism for defining your own special variables.
You can write a function that prints a different value each time it's called, and then invoke it as $(func). For example:
now() {
    date +%s
}
echo $(now)
Or you can set $PROMPT_COMMAND to a command that updates a specified variable. It runs just before printing each prompt.
i=0
PROMPT_COMMAND='((i++))'
This doesn't work in a script (since no prompt is printed), and it imposes an overhead whether you refer to the variable or not.
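A rough interactive session might look like this (the exact numbers depend on how many prompts have been printed so far):
$ i=0
$ PROMPT_COMMAND='((i++))'
$ echo $i
1
$ echo $i
2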
If you are writing a Bash script, there is a $RANDOM variable already built into Bash.
This post explains the random variable $RANDOM:
http://tldp.org/LDP/abs/html/randomvar.html
It generates a number from 0 - 32767.
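To map it onto a smaller range, arithmetic expansion is the usual idiom, e.g. a die roll (slightly biased, since 32768 is not evenly divisible by 6, but fine for casual use):
echo $(( RANDOM % 6 + 1 ))   # prints a number from 1 to 6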
If you want to do different things depending on the value, then something like this (note that case patterns such as [1-10000] are globs matching a single character, not numeric ranges, so compare the number arithmetically instead):
r=$RANDOM
if (( r <= 10000 )); then
    Message="All is quiet."
elif (( r <= 20000 )); then
    Message="Start thinking about cleaning out some stuff. There's a partition that is $space % full."
else
    Message="Better hurry with that new disk... One partition is $space % full."
fi
I stumbled on this question a while ago and wasn't satisfied by the accepted answer: the asker wanted to create a variable just like $RANDOM (a variable with a dynamic value), so I wondered whether we could do it without modifying bash itself.
Variables like $RANDOM are defined internally by bash using the dynamic_value field of the struct variable. If we don't want to patch bash to add our custom "dynamic value" variables, we still have a few other alternatives.
An obscure feature of bash is loadable builtins (shell builtins loaded at runtime), providing a convenient way to dynamically load new symbols via the enable builtin:
$ enable --help|grep '\-f'
enable: enable [-a] [-dnps] [-f filename] [name ...]
-f Load builtin NAME from shared object FILENAME
-d Remove a builtin loaded with -f
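In other words, loading and unloading one looks like this (the file and builtin names here are hypothetical):
enable -f ./mybuiltin.so mybuiltin   # load builtin "mybuiltin" from a shared object
enable -d mybuiltin                  # remove it again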
We now have to write a loadable builtin providing the functions (written in C) that we want to use as the dynamic_value for our variables, then set the dynamic_value field of our variables to a pointer to the chosen functions.
The production-ready way of doing this is to use another loadable builtin crafted on purpose to do the heavy lifting, but one may abuse gdb, if the ptrace call is available, to achieve the same.
I've made a little demo using gdb, answering "How to create a bash variable like $RANDOM?":
$ ./bashful RANDOM dynamic head -c 8 /dev/urandom > /dev/null
$ echo $RANDOM
L-{Sgf

I want to run a script from another script, use the same version of perl, and reroute IO to a terminal-like textbox

I am somewhat familiar with various ways of calling a script from another one. I don't really need an overview of each, but I do have a few questions. Before that, though, I should tell you what my goal is.
I am working on a perl/tk program that: a) gathers information and puts it in a hash, and b) fires off other scripts that use the info hash, and some command line args. Each of these other scripts is available on the command line (via another command-line script) and needs to stay that way, so I can't just put all that into a module and call it good. I do have the authority to alter the scripts, but, again, they must also be usable on the command line.
The current way of calling the other script is by using 'do', which means I can pass in the hash, and use the same version of perl (I think). But all the STDOUT (and STDERR too, I think) goes to the terminal.
Here's a simple example to demonstrate the output:
this_thing.pl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use Tk;
my $mw = MainWindow->new;
my $button = $mw->Button(
    -text    => 'start other thing',
    -command => \&start,
)->pack;
my $text = $mw->Text()->pack;
MainLoop;
sub start {
    my $script_path = 'this_other_thing.pl';
    if (not my $read = do $script_path) {
        warn "couldn't parse $script_path: $@" if $@;
        warn "couldn't do $script_path: $!" unless defined $read;
        warn "couldn't run $script_path" unless $read;
    }
}
this_other_thing.pl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
print "Hello World!\n";
How can I redirect the STDOUT and STDIN (for interactive scripts that need input) to the text box using the 'do' method? Is that even possible?
If I can't use the 'do' method, what method can redirect the STDIN and STDOUT, as well as enable passing the hash in and using the same version of perl?
Edit: I posted this same question at Perlmonks, at the link in the first comment. So far, the best response seems to be to use modules and have the child script just be a wrapper for the module. Other possible solutions are IPC::Run(3) and IPC in general, Capture::Tiny and associated modules, and Tk::Filehandle. A solution was presented that redirects the output and error streams, but it seems not to affect the input stream. It's also a bit kludgy and not recommended.
Edit 2: I'm posting this here because I can't answer my own question yet.
Thanks for your suggestions and advice. I went with a suggestion on Perlmonks. The suggestion was to turn the child scripts into modules, and use wrapper scripts around them for normal use. I would then simply be able to use the modules, and all the code is in one spot. This also ensures that I am not using different perls, I can route the output from the module anywhere I want, and passing that hash in is now very easy.
To have both STDIN & STDOUT of a subprocess redirected, you should read the "Bidirectional Communication with Another Process" section of the perlipc man page: http://search.cpan.org/~rjbs/perl-5.18.1/pod/perlipc.pod#Bidirectional_Communication_with_Another_Process
Using the same version of perl works by finding out the name of your perl interpreter, and calling it explicitly. $^X is probably what you want. It may or may not work on different operating systems.
Passing a hash into a subprocess does not work easily. You can print the contents of the hash into a file and have the subprocess read & parse it. You might get away without using a file by using the STDIN channel between the two processes, or you could open a separate pipe() for this purpose. Either way, printing the data and parsing it back cannot be avoided when using subprocesses, because the two processes use two perl interpreters, each with its own memory space, unable to see the other's variables.
You might avoid using a subprocess by using fork() + eval() + require(). In that case, no separate perl interpreter will be involved; the forked interpreter will inherit the whole memory of your program with all variables, open file descriptors, sockets, etc. in it, including the hash to be passed. However, I don't see where your second perl script could get its hash from when started from the CLI.

Organize code in unix bash scripting

I am used to object-oriented programming. Now, I have just started learning Unix bash scripting on Linux.
I have a Unix script with me. I want to break it down into "modules", or preferably into programs similar to "more", "ls", etc., and then use pipes to link all my programs together, e.g. "some input" | myProg1 | myProg2 | myProg3.
I want to organize my code and make it look neater, instead of having it all in one script. It will also make testing and development easier.
Is it possible to do this, especially as a newbie?
There are a few things you could take a look at, for example the use of aliases in bash and storing them either in .bashrc or in a separate file called by .bashrc;
that will make running commands easier.
Take a look here for expanding commands into aliases (simple aliases are easy).
You can also look into using functions in your code (there are lots of bash scripts in the above link's home folder; to make sense of functions, browse this site :) which has much better examples).
Take a look here for piping tail output into a script:
pipe tail output into another script
The thing with bash is its flexibility, so for example if something starts to get too messy for bash you could always write a perl/Java any lang and then call this from within your bash script, capture its output and do something else..
Unsure why you want all the pipes anyway, but here is something that may be of help:
./example.sh 20
function one starts with 20
In function 2 20 + 10 = 30
Function three returns 10 + 30 = 40
------------------------------------------------
------------------------------------------------
Local function variables global:
Result2: 30 - Result3: 40 - value2: 10 - value1: 20
The script:
example.sh
#!/bin/bash
input=$1;
source ./shared.sh
one
echo "------------------------------------------------"
echo "------------------------------------------------"
echo "Local function variables global:"
echo "Result2: $result2 - Result3: $result3 - value2: $value2 - value1: $value1"
shared.sh
function one() {
    value1=$input
    echo "function one starts with $value1"
    two
}
function two() {
    value2=10
    result2=$(expr $value1 + $value2)
    echo "In function 2 $value1 + $value2 = $result2"
    three
}
function three() {
    local value3=10
    result3=$(expr $value3 + $result2)
    echo "Function three returns $value3 + $result2 = $result3"
}
I think the pipes you mean can actually be functions, with each function calling the next; you then give the script the value, which it passes through the functions.
bash is pretty flexible about passing values around: as long as an earlier function has set a variable, the next function it calls can reuse it, or it can be used from the main program.
I also split out the functions into a file that can be sourced by another script to carry out the same functions.
Edit to add: Thanks for the upvote; I have also decided to include this link:
http://tldp.org/LDP/abs/html/sample-bashrc.html
There is an awesome .bashrc there to be reused. It has a lot of functions that also give some insight into how to simplify daily repetitive commands that require piping; an alias can be written to do all of them for you.
You can do one thing.
Just as a C program can be divided into a header file and a source file to reduce complexity, you can divide your bash script into two scripts - a header and a main script - but with some differences.
Header file - This will contain all the common variables and functions defined for use by your main script.
Your script - This will contain only function calls and other logic. You need to use source <header-file path> at the start of your script to make all the functions and variables declared in the header available to it.
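A minimal sketch of that layout (the file names are just examples):
header.sh
GREETING="hello"
greet() {
    echo "$GREETING, $1"
}
main.sh
#!/bin/bash
source ./header.sh   # make the header's variables and functions available
greet "world"        # prints: hello, world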
Shell scripts have standard input and output like any other program on Unix, so you can use them in pipes. Splitting your scripts is a good solution because you can later use them in pipes with other commands.
I organize my Bash projects in the following way:
- Each command is put in its own file
- Reusable functions are kept in a library file, which is just a classic script containing only functions
- All files are in the same directory, so commands can find the library with $(dirname "$0")/library
- Configuration is stored in another file as environment variables
To keep things clear, you should not use global variables to communicate between functions and the main program.
I prepare a template for scripts with the following parts prepared:
- Header with name and copyright
- Read configuration with source
- Load library with source
- Check parameters
- Function to display help, which is called if asked for or if the parameters are wrong
My best advice is: always write the help function, as the next person who will need it is... yourself!
To install your project you simply copy all files, and explain what to configure in the configuration file.
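A skeleton of such a template might look like this (all names are illustrative):
#!/bin/bash
# mytool - does something useful. Copyright notice goes here.
script_dir=$(dirname "$0")
source "$script_dir/config.sh"    # configuration, stored as environment variables
source "$script_dir/library.sh"   # reusable functions
usage() {
    echo "Usage: $(basename "$0") [-h] <argument>"
}
if [[ $# -ne 1 || $1 == -h ]]; then   # wrong parameters, or help requested
    usage
    exit 1
fi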

Bash, Concatenating 2 strings to reference a 3rd variable

I have a bash script in which I am having some issues concatenating 2 variables to reference a 3rd.
Here is a simplification of the script; the syntax is eluding me even after reading the docs.
server_list_all="server1 server2 server3";
var1="server";
var2="all";
echo $(($var1_list_$var2));
This is about as close as I get to the right answer; it acknowledges the string and tosses an error on tokenization:
syntax error in expression (error token is "server1 server2 server3....
Not really seeing anything in the docs for this, but it should be doable.
EDIT: Cleaned up a bit
The Bash Reference Manual explains how you can use a neat feature of parameter expansion to do some indirection. In your case, you're interested in finding the contents of a variable whose name is defined by two other variables:
server_list_all="server1 server2 server3"
var1=server
var2=all
combined=${var1}_list_${var2}
echo ${!combined}
The exclamation mark when referring to combined means "use the variable whose name is defined by the contents of combined".
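With the values above, combined holds the string server_list_all, so echo ${!combined} prints server1 server2 server3.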
The Advanced Bash-Scripting Guide has the answer for you (http://tldp.org/LDP/abs/html/ivr.html). You have two options; the first is classic shell:
#!/bin/bash
server_list_all="server1 server2 server3";
var1="server";
var2="all";
server_var="${var1}_list_${var2}"
eval servers=\$$server_var;
echo $servers
Alternatively you can use the bash shortcut ${!var}
#!/bin/bash
server_list_all="server1 server2 server3";
var1="server";
var2="all";
server_var="${var1}_list_${var2}"
echo ${!server_var}
Either approach works.
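If you have the choice, prefer the ${!server_var} form: as the eval discussion earlier on this page notes, eval executes whatever its arguments expand to, while indirect expansion only performs a variable lookup.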
