I want to append the following line to the end of the ~/.profile file using the tee command:
export PATH="$HOME/.local/bin:$PATH"
To do this, my bash script looks like this:
#!/bin/bash
path_env="export PATH="$HOME/.local/bin:$PATH""
echo $path_env| sudo tee -a $HOME/.profile > /dev/null
But whenever I execute the script, it expands $PATH and $HOME and inserts their values into the ~/.profile file, which I do not want. I only want the exact line to be written by the script, rather than having $PATH and $HOME replaced with their current values.
Right, so don't let the shell expand it. Quote the string in single quotes so the line is written literally:
path_env='export PATH="$HOME/.local/bin:$PATH"'
echo "$path_env" | sudo tee -a "$HOME/.profile" > /dev/null
a.sh
#! /bin/sh
export x=/usr/local
We can do source ./a.sh at the command line, but I need to do the export through a shell script.
b.sh
#! /bin/sh
. ~/a.sh
No error... but echo $x at the command line shows nothing, so the variable did not get exported.
Any idea how to make it work?
a.sh
#! /bin/sh
export x=/usr/local
-----------
admin@client: ./a.sh
admin@client: echo $x
admin@client: <insert ....>
You can put export statements in a shell script and then use the 'source' command to execute it in the current process:
source a.sh
You can't do an export through a shell script, because a shell script runs in a child shell process, and only children of the child shell would inherit the export.
The reason for using source is to have the current shell execute the commands.
It's very common to place export commands in a file such as .bashrc, which bash will source on startup (or in similar files for other shells).
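A minimal sketch of that pattern (the variable and value are just examples):
echo 'export EDITOR=vim' >> ~/.bashrc   # picked up by every future shell
source ~/.bashrc                        # apply it to the current shell too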
Another idea is that you could create a shell script which generates an export command as its output:
shell$ cat > script.sh
#!/bin/sh
echo export foo=bar
^D
chmod u+x script.sh
And then have the current shell execute that output:
shell$ `./script.sh`
shell$ echo $foo
bar
shell$ /bin/sh
$ echo $foo
bar
(note above that the invocation of the script is surrounded by backticks, to cause the shell to execute the output of the script)
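A more readable equivalent of the backtick trick, assuming the script prints nothing but shell code, is eval:
eval "$(./script.sh)"   # run the script's output in the current shell
echo $foo               # bar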
Answering my own question here, using the answers above: if I have more than one related variable to export that use the same value as part of each export, I can do this:
#!/bin/bash
export TEST_EXPORT="$1"
export TEST_EXPORT_2="${1}_2"
export TEST_EXPORT_TWICE="${1}_${1}"
and save as e.g. ~/Desktop/TEST_EXPORTING
and finally run chmod +x ~/Desktop/TEST_EXPORTING
--
After that, running it with source ~/Desktop/TEST_EXPORTING bob
and then checking with export | grep bob should show what you expect.
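With the script above, that check in bash should print something like this (the exact format of export output varies by shell):
declare -x TEST_EXPORT="bob"
declare -x TEST_EXPORT_2="bob_2"
declare -x TEST_EXPORT_TWICE="bob_bob"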
Exporting a variable into the environment only makes that variable visible to child processes. There is no way for a child to modify the environment of its parent.
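A two-line demonstration of that rule:
bash -c 'export FOO=bar'   # FOO exists only inside that child shell
echo "${FOO:-unset}"       # prints "unset" back in the parent (assuming FOO was not already set)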
Another way you can do it (to steal/expound upon the idea above) is to put the script in ~/bin and make sure ~/bin is in your PATH. Then you can access your variable globally. This is just an example I use to compile my Go source code, which needs the GOPATH variable to point to the current directory (assuming you're in the directory you want to compile from):
From ~/bin/GOPATH:
#!/bin/bash
echo "declare -x GOPATH=$(pwd)"
Then you just do:
#> $(GOPATH)
So you can now use $(GOPATH) from within your other scripts too, such as custom build scripts that can declare the variable on the fly thanks to $(pwd).
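For instance, a hypothetical build wrapper could set it before compiling (a sketch; the word splitting involved breaks if the directory path contains spaces):
#!/bin/bash
# build.sh (hypothetical) -- run the GOPATH script from ~/bin and
# execute its "declare -x GOPATH=..." output in this shell
$(GOPATH)
go build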
Another approach: have the scripts share state through a temp file keyed to the parent shell, so values written by one script persist for other scripts run from that same shell.
script1.sh
#!/bin/bash
shell_ppid=$PPID
shell_epoch=$(grep se.exec_start "/proc/${shell_ppid}/sched" | sed 's/[[:space:]]//g' | cut -f2 -d: | cut -f1 -d.)
now_epoch=$(($(date +%s%N)/1000000))
shell_start=$(( (now_epoch - shell_epoch)/1000 ))
env_md5=$(md5sum <<<"${shell_ppid}-${shell_start}"| sed 's/[[:space:]]//g' | cut -f1 -d-)
tmp_dir="/tmp/ToD-env-${env_md5}"
mkdir -p "${tmp_dir}"
ENV_PROPS="${tmp_dir}/.env"
echo "FOO=BAR" > "${ENV_PROPS}"
script2.sh
#!/bin/bash
shell_ppid=$PPID
shell_epoch=$(grep se.exec_start "/proc/${shell_ppid}/sched" | sed 's/[[:space:]]//g' | cut -f2 -d: | cut -f1 -d.)
now_epoch=$(($(date +%s%N)/1000000))
shell_start=$(( (now_epoch - shell_epoch)/1000 ))
env_md5=$(md5sum <<<"${shell_ppid}-${shell_start}"| sed 's/[[:space:]]//g' | cut -f1 -d-)
tmp_dir="/tmp/ToD-env-${env_md5}"
mkdir -p "${tmp_dir}"
ENV_PROPS="${tmp_dir}/.env"
source "${ENV_PROPS}"
echo "$FOO"
./script1.sh
./script2.sh
BAR
It persists for the scripts run in the same parent shell, and it prevents collisions.
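Since both scripts repeat the same fingerprinting block, it could be factored into a shared helper that each script sources (a sketch; env_id.sh is a hypothetical name, and the trick relies on Linux's /proc/<pid>/sched):
# env_id.sh -- compute this parent shell's private temp dir and set ENV_PROPS
shell_ppid=$PPID
shell_epoch=$(grep se.exec_start "/proc/${shell_ppid}/sched" | sed 's/[[:space:]]//g' | cut -f2 -d: | cut -f1 -d.)
now_epoch=$(($(date +%s%N)/1000000))
shell_start=$(( (now_epoch - shell_epoch)/1000 ))
env_md5=$(md5sum <<<"${shell_ppid}-${shell_start}" | sed 's/[[:space:]]//g' | cut -f1 -d-)
tmp_dir="/tmp/ToD-env-${env_md5}"
mkdir -p "${tmp_dir}"
ENV_PROPS="${tmp_dir}/.env"
Each script would then start with . ./env_id.sh and keep only its own line of work.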
I have very little experience working with bash. With that being said, I need to create a bash script that takes the current directory path and saves it to a shell variable. I then need to be able to type echo $shellvariable and have it output the directory that I saved to that variable in the bash script. This is what I have so far:
#!/bin/bash
mypath=$(pwd)
cd $1
echo $mypath
exec bash
Now when I go to the command line and type echo $mypath, it outputs nothing.
You can just run source <file_with_your_vars>; this will load your variables into your script or command-line session.
> cat source_vars.sh
my_var="value_of_my_var"
> echo $my_var
> source source_vars.sh
> echo $my_var
value_of_my_var
You have to export the variable for it to exist in the newly-execed shell:
#!/bin/bash
export mypath=$(pwd)
cd "$1"
echo $mypath
exec bash
env -i gives you control over which variables a shell or program gets...
#!/bin/bash
mypath=$(pwd)
cd "$1"
echo $mypath
exec env -i mypath="${mypath}" bash
...i.e. with a minimal environment.
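A quick way to see the effect:
env -i FOO=bar bash -c 'env'   # prints little beyond FOO=bar (bash itself adds a few, such as PWD and SHLVL)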
As the title says: on Linux, how can I feed input to bash when I do sudo bash?
Let's say I have a bash script that reads a name.
The way I execute the script is through sudo using:
cat read-my-name-script.sh | sudo bash
Let's just say this is how I execute the script over the network.
Now I want to fill in the name automatically; is there a way to feed the input? I tried doing this: cat read-my-name-script.sh < name-input-file | sudo bash, where name-input-file is a file holding the input that the user will use to feed the script.
I am new to Linux, learning to automate input, and wanted to create an input file that the user can fill in and feed to my script.
This is convoluted, but might do what you want.
sudo bash -c "$(cat read-my-name-script.sh)" <name-input-file
The -c says the next quoted argument contains the commands to run (so the script is read as a string on the command line, instead of from a file), and the calling shell interpolates the contents of the file inside the double quotes before the sudo command gets evaluated. So if read-my-name-script.sh contains
#!/bin/bash
read -p "I want your name please"
then the command gets expanded into
sudo bash -c '#!/bin/bash
read -p "I want your name please"' <name-input-file
(where of course at this point the shell has actually removed the outer double quotes altogether; I put single quotes in their place to show how this would look as actually executable, syntactically valid code).
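Incidentally, read without a variable name stores the line in $REPLY, so the script above could follow up with something like echo "Hello, $REPLY" to use the input it just read.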
I think you need this:
while read -r arg; do sudo bash read-my-name-script.sh "$arg"; done <name-input-file
So each line of name-input-file will be passed as an argument to sudo bash read-my-name-script.sh.
If your argument list is located on an HTTP server, you can do this:
while read -r arg; do sudo bash read-my-name-script.sh "$arg"; done < <(wget -q -O- http://some/address/in/internet/name-input-file)
UPD
add [[ -f name-input-file ]] && readarray -t args <name-input-file
to read-my-name-script.sh
and use "${args[#]}" as arguments of command in the script.
For example echo "${args[#]}" or cmd "${args[0]}" "${args[1]}" ... "${args[100]}" in any order.
In this case you can use
wget -q -O- http://some/address/in/internet/read-my-name-script.sh | bash
to run your script with arguments from name-input-file without saving the script to the local machine.
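Put together, the top of read-my-name-script.sh would then look something like this (a sketch of the idea above):
#!/bin/bash
# read the argument list from name-input-file, if present
[[ -f name-input-file ]] && readarray -t args <name-input-file
echo "${args[@]}"   # or hand the entries to any command, in any order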
I know what they do. I was just wondering what kind of commands they are, and how you can make one using shell scripting.
For example, commands like:
ignoreError ls /Home/
ignoreError mkdir /Home/
ignoreError cat
ignoreError randomcommand
Hope you get the idea
The way to do it in a shell script is with the "$@" construct.
"$@" expands to a quoted list of all of the arguments you passed to your shell script. $1 would be the command you want your shell script to run, and $2, $3, etc. are the arguments to that command.
The only example I have is from cygwin. Cygwin does not have sudo, but I have this script that emulates it:
#!/usr/bin/bash
cygstart --action=runas "$@"
So when I run a command like
$ sudo ls -l
my sudo script does whatever it needs to do (cygstart --action=runas) and calls the ls command with the -l argument.
Try this script:
#!/bin/sh
"$#"
Call it, for example, run; make it executable with chmod u+x run, and try it:
$ run ls -l #or ./run ls -l
...
output of ls
...
The idea is that the script takes the parameters specified on the command line and uses them as a (sub)command... Modify the script this way:
#!/bin/sh
echo "Trying to run $*"
"$#"
and you will see.
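Applied to the ignoreError example from the question, a minimal sketch would run the given command and discard its failure status:
#!/bin/sh
# ignoreError: run whatever command was given, but always exit 0
"$@" || true
Then ignoreError mkdir /Home/ behaves like mkdir /Home/, except a failure no longer stops the calling script.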
The cron job does not pipe the output from another script to a file, but it works when I execute it manually (not the same user; chmod for both files is set to 777).
#! /bin/sh
. /disk2/etc/env_cron
SUBJ="Test"
TEXT=/disk2/home/user/mailtxt
ADDR="mail#domain.com"
echo -e `date` > $TEXT
echo -e "1\n\n\nq" | menu >> $TEXT
mutt -s "$SUBJ" -i $TEXT -- $ADDR < /dev/null
I want it to pipe echo -e "1\n\n\nq" to the script menu and in turn get the output into a file. The output from menu will just be text.
The problem (as suggested) was that the cron job did not have the script 'menu' in its PATH. Changing "menu" in the script to the absolute path fixed it:
echo -e "1\n\n\nq" | /folder/folder/menu >> $TEXT
EDIT: Do not forget to set the correct permissions on the text file if the cron job is run by another user.
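Alternatively, most cron implementations let you set PATH at the top of the crontab itself, so plain command names keep working (a sketch; the schedule and script name are examples):
PATH=/usr/local/bin:/usr/bin:/bin:/folder/folder
0 6 * * * /disk2/home/user/mailscript.sh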