bash: how to execute function instead of command with same name - linux

I've started to learn bash scripting and wrote a little script like this for testing purposes:
#!/bin/bash
function time {
echo $(date)
}
time
However, the function doesn't get executed; instead, the time command runs.
So what do I have to do to execute the function instead?
I'm running bash 4.2.45

To run a function with the same name as the special keyword time, quote it e.g.:
function time {
echo "$(date)"
}
'time'
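For instance, this sketch shows the quoting trick end to end; any form of quoting (single quotes, double quotes, or a backslash) prevents keyword recognition, so the function is found instead:

```shell
#!/bin/bash
# 'time' is a reserved word, so keyword recognition must be bypassed
# when calling the function; quoting any part of the word does that.
function time {
  date
}
'time'     # single quotes: the function runs
\time      # a backslash works too
```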

You need to add () to the function definition. (I'm not sure you need the function keyword there.)
Following should work:
#!/bin/bash
get_time() {
echo "$(date)"
}
get_time
Edited: time seems to be a reserved keyword, so I changed the function name.

Related

Bash counting executed time

I want to write a script in bash that will save to a file how long it has been executing.
I want output to look like this:
1 minute
2 minute
...
Requirements
You need to install GNU time (the external program, not the shell's own time).
To validate that you have the right one: $ which time
This is the expected output:
$ which time
/usr/bin/time
Solution
Assuming you have a function called main that contains your main scripting code:
function main() {
echo "Sleeping .."
sleep 5
echo "This is the first arg: ${1}"
echo "This is the second arg: ${2}"
}
To time this function call, do as follows (arguments for reference):
main "HELLO" "WORLD" | $(which time) -o "OUTPUT_FILENAME_FOR_TIME" -f "%e" $(which bash)
Explanation
We pipe the function call into /usr/bin/time to time it. We call it via $(which time) because we do not want the shell's own time. After that we pass the -o argument to define our output file, and the -f argument to define the time format; in my example I used %e, elapsed seconds. In the end, we pass which shell we are using, in our case bash, so $(which bash).
Run man time to read about the other formats and the proper usage of the program.
I always use seconds because they are easy to convert to anything else.
EDIT #1
You can use the shell builtin command instead of the absolute path of time:
$ command time
instead of
$ $(which time)
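If GNU time isn't installed, a similar per-run log can be sketched with just the bash time keyword and the TIMEFORMAT variable (the time.log filename here is an assumption):

```shell
#!/bin/bash
main() {
  sleep 1
  echo "Sleeping .."
}
# TIMEFORMAT controls the keyword's report; %R is elapsed wall-clock seconds.
TIMEFORMAT='%R'
# The keyword reports on stderr, so redirect the group's stderr to the file.
{ time main > /dev/null; } 2> time.log
cat time.log
```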

Set a Bash function on the environment

I need to define a Bash function in the Bash environment from a C/C++ program. Before the shellshock bug, I could define a function in this way:
my_func='() { echo "This is my function";}'
Or equivalent from a C program:
setenv("my_func", "() { echo \"This is my function\";}", 1);
Or
putenv("my_func=() { echo \"This is my function\";}");
But using a Bash version with shellshock fixed, I can't manage on how to define my functions in the environment.
The strange thing is, if I run env, I can see my function defined in the environment, but if I call it, Bash says that it doesn't exist.
Thanks in advance
For informational purposes only. Since it is not documented how functions are exported to the environment, you should treat this as an abuse of a private API that is subject to change in future versions of bash.
Functions are no longer exported using simply the name of the function in the environment string. To see this, run
$ my_func () { echo "foo"; }
$ export -f my_func
$ env | grep -A1 'my_func'
BASH_FUNC_my_func%%=() { echo "foo"
}
Since the name used in the environment is no longer a valid bash identifier, you would need to use the env command to modify the environment of the new process.
env 'BASH_FUNC_my_func%%=() { echo "This is my function"; }' bash
From C, you just need to adjust the name:
setenv("BASH_FUNC_my_func%%", "() { echo \"This is my function\";}", 1);
If you are invoking bash with execv (so that you are only invoking it once), you could replace (using execl for explanatory purposes):
execl("/bin/bash", "bash", "file_to_run", "arg1", "arg2", (char *)NULL);
with
execl("/bin/bash", "bash", "-c", "f() {...}; g() {...}\n. $0",
"file_to_run", "arg1", "arg2", (char *)NULL);
and then you don't need to play games with the internal bash interface for defining functions. (If the bash script being run also needs the functions to be exported, for whatever reason, just add export -f <func> lines to the argument following -c.)
That has the advantage of working with both patched and unpatched bashes.
(I'm having to make a similar patch to various programs, so I share your pain.)
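For comparison, the supported way round, exporting a function from a bash parent with export -f, can be sketched like this (it works on both patched and unpatched shells):

```shell
#!/bin/bash
# Define a function and export it so child bash processes inherit it.
greet() {
  echo "hello from an exported function"
}
export -f greet
# The child shell finds the function in its environment and can call it.
bash -c 'greet'
```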

How can I run a function from a script in command line?

I have a script that has some functions.
Can I run one of the functions directly from the command line?
Something like this?
myScript.sh func()
Well, while the other answers are right, you can certainly do something else: if you have access to the bash script, you can modify it and simply place the special parameter "$@" at the end, which expands to the arguments of the command line you specify; since it stands alone, the shell will invoke them verbatim, so you can pass the function name as the first argument. Example:
$ cat test.sh
testA() {
echo "TEST A $1";
}
testB() {
echo "TEST B $2";
}
"$@"
$ bash test.sh
$ bash test.sh testA
TEST A
$ bash test.sh testA arg1 arg2
TEST A arg1
$ bash test.sh testB arg1 arg2
TEST B arg2
For polish, you can first verify that the command exists and is a function:
# Check if the function exists (bash specific)
if declare -f "$1" > /dev/null
then
# call arguments verbatim
"$@"
else
# Show a helpful error
echo "'$1' is not a known function name" >&2
exit 1
fi
If the script only defines the functions and does nothing else, you can first execute the script within the context of the current shell using the source or . command and then simply call the function. See help source for more information.
The following command first registers the function in the context, then calls it:
. ./myScript.sh && function_name
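A self-contained sketch of that pattern (lib.sh and greet are hypothetical names standing in for your script and function):

```shell
#!/bin/bash
# Create a script that only defines a function.
cat > lib.sh <<'EOF'
greet() {
  echo "hello $1"
}
EOF
# Source it into the current shell, then call the function.
. ./lib.sh && greet world
```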
Briefly, no.
You can import all of the functions in the script into your environment with source (help source for details), which will then allow you to call them. This also has the effect of executing the script, so take care.
There is no way to call a function from a shell script as if it were a shared library.
Using case
#!/bin/bash
fun1 () {
echo "run function1"
[[ $# -gt 0 ]] && echo "options: $@"
}
fun2 () {
echo "run function2"
[[ $# -gt 0 ]] && echo "options: $@"
}
case $1 in
fun1) "$@"; exit;;
fun2) "$@"; exit;;
esac
fun1
fun2
This script will run functions fun1 and fun2, but if you start it with the option
fun1 or fun2, it'll only run the given function with its args (if provided) and exit.
Usage
$ ./test
run function1
run function2
$ ./test fun2 a b c
run function2
options: a b c
I have a situation where I need a function from a bash script which must not be executed beforehand (e.g. by source), and the problem with "$@" is that myScript.sh is then run twice, it seems... So I've come up with the idea to extract the function with sed:
sed -n "/^func ()/,/^}/p" myScript.sh
And to execute it at the time I need it, I put it in a file and use source:
sed -n "/^func ()/,/^}/p" myScript.sh > func.sh; source func.sh; rm func.sh
Edit: WARNING - seems this doesn't work in all cases, but works well on many public scripts.
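A variant of the same idea that skips the temporary file by sourcing from process substitution (bash-only; func and myScript.sh are stand-in names):

```shell
#!/bin/bash
# Build a sample script with a side effect we do not want to run.
cat > myScript.sh <<'EOF'
echo "unwanted side effect"
func () {
echo "extracted"
}
EOF
# Extract only the function and source it; the side effect never runs.
source <(sed -n "/^func ()/,/^}/p" myScript.sh)
func
```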
If you have a bash script called "control" and inside it you have a function called "build":
function build() {
...
}
Then you can call it like this (from the directory where it is):
./control build
If it's inside another folder, that would make it:
another_folder/control build
If your file is called "control.sh", that would accordingly make the function callable like this:
./control.sh build
Solved post but I'd like to mention my preferred solution. Namely, define a generic one-liner script eval_func.sh:
#!/bin/bash
source "$1" && shift && "$@"
Then call any function within any script via:
./eval_func.sh <any script> <any function> <any args>...
An issue I ran into with the accepted solution is that when sourcing my function-containing script within another script, the arguments of the latter would be evaluated by the former, causing an error.
The other answers here are nice, and much appreciated, but often I don't want to source the script in the session (which reads and executes the file in your current shell) or modify it directly.
I find it more convenient to write a one or two line 'bootstrap' file and run that. Makes testing the main script easier, doesn't have side effects on your shell session, and as a bonus you can load things that simulate other environments for testing. Example...
# breakfast.sh
make_donuts() {
echo 'donuts!'
}
make_bagels() {
echo 'bagels!'
}
# bootstrap.sh
source 'breakfast.sh'
make_donuts
Now just run ./bootstrap.sh. The same idea works with your Python, Ruby, or whatever scripts.
Why useful? Let's say you complicated your life for some reason, and your script may find itself in different environments with different states present. For example, either your terminal session, or a cloud provider's cool new thing. You also want to test cloud things in terminal, using simple methods. No worries, your bootstrap can load elementary state for you.
# breakfast.sh
# Now it has to do slightly different things
# depending on where the script lives!
make_donuts() {
if [[ $AWS_ENV_VAR ]]
then
echo '/donuts'
elif [[ $AZURE_ENV_VAR ]]
then
echo '\donuts'
else
echo '/keto_diet'
fi
}
If you let your bootstrap thing take an argument, you can load different state for your function to chew, still with one line in the shell session:
# bootstrap.sh
source 'breakfast.sh'
case $1 in
AWS)
AWS_ENV_VAR="arn::mumbo:jumbo:12345"
;;
AZURE)
AZURE_ENV_VAR="cloud::woo:_impress"
;;
esac
make_donuts # You could use $2 here to name the function you wanna, but careful if evaluating directly.
In terminal session you're just entering:
./bootstrap.sh AWS
Result:
# /donuts
You can call a function from a command-line argument like below:
function irfan() {
echo "Irfan khan"
date
hostname
}
function config() {
ifconfig
echo "hey"
}
$1
Once you have defined the functions, put $1 at the end to accept as an argument the name of the function you want to call.
Let's say the above code is saved in fun.sh. Now you can call the functions with ./fun.sh irfan and ./fun.sh config on the command line.

Bash - error message 'Syntax error: "(" unexpected'

For some reason, this function isn't working properly. The terminal is outputting
newbootstrap.sh: 2: Syntax error: "(" unexpected
Here is my code (line 2 is function MoveToTarget() {)
#!/bin/bash
function MoveToTarget() {
# This takes two arguments: source and target
cp -r -f "$1" "$2"
rm -r -f "$1"
}
function WaitForProcessToEnd() {
# This takes one argument. The PID to wait for
# Unlike the AutoIt version, this sleeps for one second
while [ $(kill -0 "$1") ]; do
sleep 1
done
}
function RunApplication() {
# This takes one application, the path to the thing to execute
exec "$1"
}
# Our main code block
pid="$1"
SourcePath="$2"
DestPath="$3"
ToExecute="$4"
WaitForProcessToEnd $pid
MoveToTarget $SourcePath, $DestPath
RunApplication $ToExecute
exit
You're using the wrong syntax to declare functions. Use this instead:
MoveToTarget() {
# Function
}
Or this:
function MoveToTarget {
# function
}
But not both.
Also, I see that later on you use commas to separate arguments (MoveToTarget $SourcePath, $DestPath). That is also a problem. Bash uses spaces to separate arguments, not commas. Remove the comma and you should be golden.
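A quick sketch of the difference; the comma is not a separator in bash, it simply becomes part of the argument it touches:

```shell
#!/bin/bash
show() {
  echo "first=$1 second=$2"
}
show alpha beta    # spaces separate arguments
show alpha, beta   # the comma travels with the first argument
```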
I'm also new to defining functions in Bash scripts. I'm using a Bash of version 4.3.11(1):-release (x86_64-pc-linux-gnu) on Ubuntu 14.04 (Trusty Tahr).
I don't know why, but the definition that starts with the keyword function never works for me.
A definition like the following
function check_and_start {
echo Hello
}
produces the error message:
Syntax error: "}" unexpected
If I put the { on a new line like:
function my_function
{
echo Hello.
}
It prints a Hello. when I run the script, even if I don't call this function at all, which is also not what we want.
I don't know why this wouldn't work, because I also looked at many tutorials and they all put the open curly brace at the end of the first line. Maybe it's the version of Bash that we use?? Anyway, just put it here for your information.
I have to use the C-style function definition:
check_and_start() {
echo $1
}
check_and_start World!
check_and_start Hello,\ World!
and it works as expected.
If you encounter "Syntax error: "(" unexpected", then use "bash" instead of using "sh".
For example:
bash install.sh
I had the same issue. I was running scripts on Ubuntu, sometimes with sh and sometimes with bash. It seems running scripts with sh (which is dash on Ubuntu) causes the issue, while running them with bash works fine.

Assigning values printed by PHP CLI to shell variables

I want the PHP equivalent of the solution given in assigning value to shell variable using a function return value from Python
In my php file, I read some constant values like this:-
$neededConstants = array("BASE_PATH","db_host","db_name","db_user","db_pass");
foreach($neededConstants as $each)
{
print constant($each);
}
And in my shell script I have this code so far:-
function getConfigVals()
{
php $PWD'/developer.php'
# How to collect the constant values here?
#echo "done - "$PWD'/admin_back/developer/developer.php'
}
cd ..
PROJECT_ROOT=$PWD
cd developer
# func1 parameters: a b
getConfigVals
I am able to execute the file through shell correctly.
To read further on what I am trying to do please check Cleanest way to read config settings from PHP file and upload entire project code using shell script
Updates
Corrected configs=getConfigVals replaced with getConfigVals
Solution
As answered by Fritschy, it works with this modification:-
Shell function -
function getConfigVals()
{
php $PWD'/developer.php'
#return $collected
#echo "done - "$PWD'/admin_back/developer/developer.php'
}
shell code -
result=$(getConfigVals)
echo $result
You have to execute the function and assign what is printed to that variable:
configs=$(getConfigVals)
See the manpage of that shell on expansion for more ;)
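The same capture pattern works with any command that prints values; in this sketch a printf stands in for the php call, since php may not be available:

```shell
#!/bin/bash
getConfigVals() {
  # Stand-in for: php "$PWD/developer.php"
  printf '%s\n' "/var/www/html" "localhost"
}
# Command substitution captures everything the function prints.
configs=$(getConfigVals)
echo "$configs"
```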
