Extra quotes in a variable read from dialog in bash - linux

I need to configure user.email for git in a bash script, and my problem is that I don't know how to get this line to run: git config --global user.email "user@example.com"
Right now my code runs this command without double quotes - user@example.com -
and I tried to escape the quotes in every way I could find, but almost every time the code runs '"user@example.com"'.
A fragment of my bash code:
function get_email(){
    e_mail=/tmp/tmp.sh.$$
    dialog --clear \
        --title "EMAIL" \
        --inputbox "Enter your email" 8 40 2>"${e_mail}"
    email=$(<"${e_mail}")
    git config --global user.email "$email"
}
Can someone help me solve this problem?

dialog can, under some circumstances (the man page goes into detail), put literal double quotes in its output (these can be replaced with literal single quotes using --single-quoted).
To strip them back out, use parameter expansion, as shown below:
get_email() {
    local email
    email=$(dialog --stdout --clear \
        --title "EMAIL" \
        --inputbox "Enter your email" 8 40
    )
    git config --global user.email "${email//'"'/}"
}
Some notes:
"${email//'"'/}" expands $email, replacing all instances of " with an empty string. This is the solution to your immediate problem. The syntax involved is parameter expansion, documented at http://wiki.bash-hackers.org/syntax/pe
Don't use the function keyword. It makes your code incompatible with POSIX shells for absolutely no benefit.
Declare your locals with local (this isn't POSIX-defined, but even ash and dash support it) to prevent leaking into the surrounding scope.
Use --stdout on dialog to allow capture of output with command substitution.
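A quick way to see the expansion in isolation (the sample value below is hypothetical, chosen to mimic dialog's quoted output):
# Hypothetical input containing the literal double quotes dialog can emit:
email='"user@example.com"'
# ${email//'"'/} replaces every " with an empty string:
printf '%s\n' "${email//'"'/}"    # prints: user@example.com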

Related

Pass two variables sequentially

In a Unix shell, git clone <url> will prompt the user for a username and then a password.
I have defined $username and $password variables.
How can I pass the two variables to the command, in order?
I have tried
echo $password | echo $username | git clone <url>
which did not work.
There are several ways you can do this. What you probably should do, because it's more secure, is use a configuration where the script doesn't have to contain and pass the username and password. For example, you could set up ssh, or you could use a credential helper. (Details depend on your environment, so I'd recommend searching for existing questions and answers re: how to set those up.)
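As one illustration (a sketch only; which helpers are available depends on your git build and platform), enabling a built-in credential helper is a one-liner:
# Cache credentials in memory (default timeout is 15 minutes):
git config --global credential.helper cache
# Or store them on disk in plain text (convenient, but less secure):
git config --global credential.helper store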
If you still want to have the script pass the values, you basically have two choices: You can use a form of the git command that takes the values on the command line (see brokenfoot's answer), or you can pass the values on STDIN (which is the approach you're attempting, but doesn't work quite the way you're attempting it).
When you use |, you're sending the "standard output" of the command on the left to the "standard input" of the command on the right. So when you chain commands like you show, the first echo is sending output to the second echo - which ignores it. That's not what you want.
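You can see this discarding behavior with a trivial pair of commands:
# The first echo's output goes to the second echo's stdin,
# but echo never reads stdin, so "foo" is silently discarded:
echo foo | echo bar    # prints only: bar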
You would need a single command that outputs the username, an end-of-line character, the password, and another end-of-line character. That's not easy to do with echo (at least, not portably). You could do something like
git clone *url* <<EOF
$username
$password
EOF
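A here-document isn't the only option; printf emits each argument followed by a newline, portably (unlike echo, whose behavior varies between shells):
# printf prints each argument on its own line:
printf '%s\n' "$username" "$password" | git clone <url>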
Let me pretend the question is neither git-related nor security-related, and my answer to the literal question "How to pass two variables to a program" is:
( echo $username; echo $password ) | git clone 'url'
That is, just output two strings separated by a newline (echo adds the newline); or do it in one
call to echo:
echo "$username
$password" | git clone 'url'
You can pass the variables like so:
username="xyz"
password="123"
echo "git clone https://$username:$password#github.com/$username/repository.git"
Output:
git clone https://xyz:123#github.com/xyz/repository.git

Should I be using parameters or export environment variables?

I've always developed my shell scripts using parameters, both day-to-day and when developing automation scripts. However, recently I've tried a different approach: exporting environment variables to my scripts.
#!/bin/bash
: ${USER?"Requires USER"}
: ${FIRST_NAME?"Requires FIRST_NAME"}
: ${LAST_NAME?"Requires LAST_NAME"}
: ${EMAIL?"Requires EMAIL"}
set -x
setup_git_account(){
    su - "${USER}" -c "git config --global user.name '${FIRST_NAME} ${LAST_NAME}'"
    su - "${USER}" -c "git config --global user.email '${EMAIL}'"
}
setup_git_account
This results in smaller code, easy checks that all the required variables are initialized, and a better understanding of what the script is doing, since all the variables are declared outside.
export USER='john' && export FIRST_NAME='John' && export LAST_NAME='Doe' && export EMAIL='john.doe@email.com' && setup_git_account.sh
Which could be represented like this if implemented with receiving parameters:
setup_git_account.sh --user 'john' --firstname 'John' --lastname 'Doe' --email 'john.doe@email.com'
However, the last one would need many more lines of code to implement the getopts switch case, check the passed parameter values, etc.
Anyway, I know we're used to the second approach, but I think the first approach also has several benefits. I would like to hear more from you: are there any downsides to either of the presented approaches, and which one should I be using?
Thanks!
A bit off-topic: the invocation syntax with environment variables can be shorter in bash; there is no need for exports:
USER='john' FIRST_NAME='John' LAST_NAME='Doe' EMAIL='john.doe@email.com' setup_git_account.sh
None of your values is optional; I would just use positional parameters.
: ${1?"Requires USER"}
: ${2?"Requires FIRST_NAME"}
: ${3?"Requires LAST_NAME"}
: ${4?"Requires EMAIL"}
sudo -u "$1" git config --global user.name "$2 $3" user.email "$4"
Providing the way for the user to specify values in an arbitrary order is just an unnecessary complication.
You would simply call the script with
setup_git_account.sh 'john' 'John' 'Doe' 'john.doe@email.com'
Reconsider whether the first and last names need to be separate arguments. They are combined into a single argument to git config by the script anyway; just take the name as a single argument as well.
setup_git_account.sh 'john' 'John Doe' 'john.doe@email.com'
(with the appropriate changes to the script as necessary).
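A minimal sketch of that variant, mirroring the positional-parameter script above:
: ${1?"Requires USER"}
: ${2?"Requires FULL_NAME"}
: ${3?"Requires EMAIL"}
sudo -u "$1" git config --global user.name "$2"
sudo -u "$1" git config --global user.email "$3"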
I never use your approach, and I think there are no drawbacks to using parameters. Parameters are the common way, and if you use long options they are self-descriptive.
In my opinion, environment variables are a solution when you need data in different scripts.
You may also have problems running such a script on systems where you are not allowed to change the environment.
I've parameterized your variables using a guide I wrote a while back and even added --help.
This solution accepts environment variables as well as options (which will trump the variables):
while getopts e:f:hl:u:-: arg; do
    case "$arg" in
        e ) EMAIL="$OPTARG" ;;
        f ) FIRST_NAME="$OPTARG" ;;
        h ) do_help ;;
        l ) LAST_NAME="$OPTARG" ;;
        u ) USER_NAME="$OPTARG" ;;
        - ) LONG_OPTARG="${OPTARG#*=}"
            case $OPTARG in
                email=?*  ) EMAIL="$LONG_OPTARG" ;;
                first*=?* ) FIRST_NAME="$LONG_OPTARG" ;;
                help*     ) do_help ;;
                last*=?*  ) LAST_NAME="$LONG_OPTARG" ;;
                user=?*   ) USER_NAME="$LONG_OPTARG" ;;
                * ) echo "Illegal option/missing argument: --$OPTARG" >&2; exit 2 ;;
            esac ;;
        * ) exit 2 ;; # error messages for short options already given by getopts
    esac
done
shift $((OPTIND-1))
HELP=" - see ${0##*/} --help"
: ${USER_NAME?"Requires USER_NAME$HELP"}
: ${FIRST_NAME?"Requires FIRST_NAME$HELP"}
: ${LAST_NAME?"Requires LAST_NAME$HELP"}
: ${EMAIL?"Requires EMAIL$HELP"}
su - "$USER_NAME" -c "git config --global user.name '$FIRST_NAME $LAST_NAME'"
su - "$USER_NAME" -c "git config --global user.email '$EMAIL'"
Note that I changed $USER to $USER_NAME to avoid conflicts with your local environment ($USER is your user name on your local Linux system!)
You can also extract the user's full name from the system:
FULL_NAME="$(getent passwd |awk -v u="$USER_NAME" -F: '$1 == u { print $5 }')"
(I see no reason to separate FIRST_NAME and LAST_NAME; what do you do for Jean Claude Van Damme? They're only used together anyway. Also note that not all users will have full names in the passwd file.)
This uses do_help to show the --help output. Here's an example of how that could look (I'd put this at the very top of the script so somebody just reading it can get the synopsis; it's not in the above code block because I wanted to prevent the block from getting a scroll bar):
do_help() { cat <</help
Usage: ${0##*/} [OPTIONS]
  -u USER_NAME,  --user=USER_NAME
  -f FIRST_NAME, --firstname=FIRST_NAME
  -l LAST_NAME,  --lastname=LAST_NAME
  -e EMAIL,      --email=EMAIL
Each option may also be passed through the environment as e.g. \$EMAIL
Code taken from https://stackoverflow.com/a/41515444/519360
/help
}

Is it possible to write one script that runs in bash/shell and PowerShell?

I need to create ONE integrated script that sets some environment variables, downloads a file using wget and runs it.
The challenge is that it needs to be the SAME script that can run on both Windows PowerShell and also bash / shell.
This is the shell script:
#!/bin/bash
# download a script
wget http://www.example.org/my.script -O my.script
# set a couple of environment variables
export script_source=http://www.example.org
export some_value=floob
# now execute the downloaded script
bash ./my.script
This is the same thing in PowerShell:
wget http://www.example.org/my.script -O my.script.ps1
$env:script_source="http://www.example.org"
$env:some_value="floob"
PowerShell -File ./my.script.ps1
So I wonder if somehow these two scripts can be merged and run successfully on either platform?
I've been trying to find a way to put them in the same script and get bash and PowerShell.exe to ignore errors, but I have had no success doing so.
Any guesses?
It is possible. I don't know how compatible this is, but: PowerShell treats bare strings as text and they end up on screen, Bash treats them as commands and tries to run them, and both support the same function definition syntax. So, put a function name in quotes and only Bash will run it; put "exit" in quotes and only Bash will exit. Then write the PowerShell code after.
NB: this works because the syntax in both shells overlaps and your script is simple - run commands and deal with variables. If you try to use more advanced syntax (if/then, for, switch, case, etc.) from either language, the other one will probably complain.
Save this as dual.ps1 so PowerShell is happy with it, and chmod +x dual.ps1 so Bash will run it:
#!/bin/bash
function DoBashThings {
    wget http://www.example.org/my.script -O my.script
    # set a couple of environment variables
    export script_source=http://www.example.org
    export some_value=floob
    # now execute the downloaded script
    bash ./my.script
}
"DoBashThings" # This runs the bash script; in PS it's just a string
"exit"         # This quits the bash version; in PS it's just a string

# PowerShell code here
# --------------------
Invoke-WebRequest "http://www.example.org/my.script.ps1" -OutFile my.script.ps1
$env:script_source="http://www.example.org"
$env:some_value="floob"
PowerShell -File ./my.script.ps1
then
./dual.ps1
on either system.
Edit: You can include more complex code by commenting the code blocks with a distinct prefix, then having each language filter out its own code and eval it (the usual security caveats around eval apply), e.g. with this approach (incorporating a suggestion from Harry Johnston):
#!/bin/bash
#posh $num = 200
#posh if (150 -lt $num) {
#posh write-host "PowerShell here"
#posh }
#bash thing="xyz"
#bash if [ "$thing" = "xyz" ]
#bash then
#bash echo "Bash here"
#bash fi
function RunBashStuff {
    eval "$(grep '^#bash' $0 | sed -e 's/^#bash //')"
}
"RunBashStuff"
"exit"
((Get-Content $MyInvocation.MyCommand.Source) -match '^#posh' -replace '^#posh ') -join "`n" | Invoke-Expression
While the other answer is great (thank you TessellatingHeckler and Harry Johnston, and also thank you j-p-hutchins for fixing the error with true), we can actually do way better:
Work with more shells (e.g. it works in Ubuntu's dash)
Be less likely to break in future situations
Avoid wasting processing time re-reading/eval-ing the script
Waste fewer characters/lines on confusing syntax (we can get away with a mere 41 chars and a mere 3 lines)
Even keep syntax highlighting functional
Copy Paste Code
Save this as your_thing.ps1 for it to run as powershell on Windows and run as shell on all other operating systems.
#!/usr/bin/env sh
echo --% >/dev/null;: ' | out-null
<#'
#
# sh part
#
echo "hello from bash/dash/zsh"
echo "do whatver you want just dont use #> directly"
echo "e.g. do #""> or something similar"
# end bash part
exit #>
#
# powershell part
#
echo "hello from powershell"
echo "you literally don't have to escape anything here"
How? (it's actually simple)
We want to start a multi-line comment in PowerShell without causing an error in bash/shell.
PowerShell has multi-line comments (<#), but as-is they would cause problems in bash/shell languages. We need <# to be inside a string for bash, but NOT inside a string in PowerShell.
PowerShell has a stop-parsing arg, --%, which lets us write a single quote without starting a string: in PowerShell, echo --% ' blah ' will print out ' blah '. This is great because in shell/bash the single quotes do start a string, so echo --% ' blah ' will print out blah.
We need a command in order to use PowerShell's stop-parsing arg, and luckily for us both PowerShell and bash have an echo command.
So, in bash we can echo a string containing <#, while in PowerShell the same code finishes the echo command and then starts a multi-line comment.
Finally, we add >/dev/null in bash so that it doesn't print out --% every time, and we add | out-null so that PowerShell doesn't print out >/dev/null;: ' every time.
The syntax highlighting tells the story more visually
Powershell Highlighting
All the green stuff is ignored by powershell (comments)
The gray --% is special
The | out-null is special
The white parts are just string-arguments without quotes
(even the single quote is equivalent to "'")
The <# is the start of a multi-line comment
Bash Highlighting
For bash it's totally different.
Lime green + underline are the commands.
The --% isn't special, it's just an argument
But the ; is special
The purple is output-redirection
Then : is just the standard "do nothing" shell command
Then the ' starts a string argument that ends on the next line
Caveats?
Almost none. Powershell legitimately has no downside. The Bash caveats are easy to fix, and are exceedingly rare:
If you need #> in a bash string, you'll need to escape it somehow,
e.g. changing "#>" to "#"">", or ' blah #> ' to ' blah #''> '.
If you have a comment containing #> and for some reason you CANNOT change that comment (this is what I mean by exceedingly rare), you can actually just use #>; you just have to re-add those first two lines (e.g. echo --% etc.) right after your #> comment.
One even more exceedingly rare case is where you are using # to remove part of a string (I bet most don't even know this is a bash feature). Example code below:
https://man7.org/linux/man-pages/man1/bash.1.html#EXPANSION
var1=">blah"
echo ${var1#>}
# ^ removes the > from var1
To fix this one, there are alternative ways of removing characters from the beginning of a string; use one of them instead, as shown below.
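For instance (a small sketch), substring expansion or sed can drop the leading character without involving #:
var1=">blah"
# Substring expansion: drop the first character without using #
echo "${var1:1}"
# Or strip an explicit leading > with sed:
echo "$var1" | sed 's/^>//'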
Following up on Jeff Hykin's answer, I have found that the first line, while it is happy in bash, produces this output in PowerShell. Note that it is still fully functional, just noisy.
true : The term 'true' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling
of the name, or if a path was included, verify that the path is correct and try again.
At C:\Users\jp\scratch\envar.ps1:4 char:1
+ true --% ; : '
+ ~~~~
+ CategoryInfo : ObjectNotFound: (true:String) [], CommandNotFoundException
hello from powershell
I am experimenting with changing the first lines from:
true --% ; : '
<#'
to:
echo --% > /dev/null ; : ' | out-null
<#'
In very limited testing this seems to be working in bash and powershell. For reference, I am "sourcing" the scripts not "calling" them, e.g. . env.ps1 in bash and . ./env.ps1 in powershell.

Passing variable to `expect` in bash array

I am trying to use a FOR loop that iterates over IP addresses (in a bash array), logs in, runs a script, and then exits. The array is called ${INSTANCE_IPS[@]}. The following code doesn't work, though, as expect doesn't seem to be able to accept the variable $instance.
for instance in ${INSTANCE_IPS[@]}
do
    echo $instance
    /usr/bin/expect -c '
        spawn ssh root@$instance;
        expect "?assword: ";
        send "<password>\r";
        expect "# ";
        send ". /usr/local/bin/bootstrap.sh\r";
        expect "# ";
        send "exit\r" '
done
However, expect complains with:
can't read "instance": no such variable
while executing
"spawn ssh root#$instance"
There is another question on Stack Overflow that uses environment variables to achieve this, but it doesn't allow me to iterate through different IP addresses like I can with an array.
Any help is appreciated.
Cheers
The problem is with quoting. Single quotes surrounding the whole block don't let Bash expand variables ($instance).
You need to switch to double quotes. But then, double quotes inside double quotes are not allowed (unless you escape them), so we are better off using single quotes with expect strings.
Try instead:
for instance in ${INSTANCE_IPS[@]}
do
    echo $instance
    /usr/bin/expect -c "
        spawn ssh root@$instance;
        expect '?assword: ';
        send '<password>\r';
        expect '# ';
        send '. /usr/local/bin/bootstrap.sh\r';
        expect '# ';
        send 'exit\r' "
done
for instance in ${INSTANCE_IPS[@]} ; do
    echo $instance
    /usr/bin/expect -c '
        spawn ssh root@'$instance' "/usr/local/bin/bootstrap.sh"
        expect "password:"
        send "<password>\r"
        expect eof'
done
From the ssh man page:
If command is specified, it is executed on the remote host instead of a login shell.
Specifying a command means expect doesn't have to wait for # to execute your program, then wait for another # just to send the command exit. Instead, when you specify a command to ssh, it executes that command; it exits when done; and then ssh automatically closes the connection.
Alternatively, put the value in the environment and expect can find it there:
for instance in ${INSTANCE_IPS[@]} ; do
    echo $instance
    the_host=$instance /usr/bin/expect -c '
        spawn ssh root@$env(the_host) ...
Old thread, and one of many, but I've been working with expect for several days. For anyone who comes across this, I believe I've found a workable solution to the problem of passing bash variables inside an expect -c script:
#!/usr/bin/env bash
password="TopSecret"
read -d '' exp << EOF
set user "John Doe"
puts "\$user"
puts "$password"
EOF
expect -c "$exp"
Please note that escaping quotes is the typically cited issue (as @Roberto Reale stated above), which I've solved here using a heredoc EOF method before passing the bash-variable-evaluated string to expect -c. In contrast to the escaped quotes, all native expect variables will need to be escaped with \$ (I'm not here to solve all first-world problems; my afternoon schedule is slightly crammed), but this should greatly simplify the problem with little effort. Let me know if you find any issues with this proof of concept.
tl;dr: I've been creating an expect daemon script with user authentication and just figured this out, after spending a whole day creating separate bash/expect scripts, encrypting my prompted password (via bash) with a different /dev/random salt each iteration, saving the encrypted password to a temp file, and passing the salt to the expect script (highly discouraging anyone from easily discovering the password via ps, but not preventative, since the expect script could be replaced). Now I should be able to keep it in memory instead.

perl exec screen with parameters

If I run the following:
system("screen -dmS $screenname");
it works as it should, but when I try to run a screen from perl and execute a command (in this case tcpreplay) with some extra arguments, it doesn't run as it's supposed to.
system("screen -dmS $screenname -X stuff \"`printf \"tcpreplay --intf1=eth0 s.cap\\r\"`\" ");
What am I doing wrong here?
Simo A's answer is probably right with regard to the issue, but when working with screen I like to use the following instead of the -X flag: explicitly telling it the command language interpreter.
Why use -c, you ask?
If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.
system("screen -dmS $screenname sh -c 'PRETTY MUCH ANYTHING WORKS'");
I figured I'd share this, as I run a lot of Perl system commands and the above always works for screen commands.
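Applied to the original tcpreplay example, the command that system() hands to the shell would look something like this (an untested sketch; the session name is hypothetical):
screen -dmS mysession sh -c 'tcpreplay --intf1=eth0 s.cap'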
Try replacing single \" with \\\". That should do the trick.
Consider the same issue here:
system ("echo Quotation marks: \\\"here\\\" but \"not here\". ");
The output from the former line of code is: Quotation marks: "here" but not here.
Taking Simo A's answer as a starting point, I would use q( ) rather than " ".
system ( q(echo Quotation marks: \"here\" but "not here". ));
This means you don't need to escape the quote twice.
