Question regarding scope of variables/environment variables in Linux - linux

I would like to get a better understanding of the theoretical/technical reason for the following behaviour.
On a Linux shell I run the following:
MY_VAR="foo" && python3 -c "import os; print('MY_VAR' in os.environ)"
And the result is False
I understand that this is due to the fact that in order to access a variable from a subprocess of the current shell (in this case Python), we need to export it, so when running it like this:
export MY_VAR="foo" && python3 -c "import os; print('MY_VAR' in os.environ)"
the result is True.
I know this also happens with bash scripts called from the terminal: for the script to have access to the variable, it needs to be exported first.
However when running something like the following:
MY_VAR_2="foo_2" && echo "This line matches foo_2" | grep ${MY_VAR_2} | wc -l
The result is 1, so there is a match.
My question is: why, in this case, was MY_VAR_2 "available" to grep without needing export?
Isn't grep also a program and therefore a subprocess of the existing shell?

With:
MY_VAR_2="foo_2" && echo "This line matches foo_2" | grep ${MY_VAR_2} | wc -l
You are executing commands in the same shell: the shell expands ${MY_VAR_2} to foo_2 before grep is even started, so grep receives the value as a plain argument and never needs the variable in its environment.
If you change the line to:
MY_VAR_2="foo_2" && bash -c 'echo "This line matches foo_2"' | bash -c 'grep ${MY_VAR_2} | wc -l'
Because separate shells are opened (and the single quotes defer the expansion of ${MY_VAR_2} to the child shell), the variable will no longer be available unless you use export, and so:
export MY_VAR_2="foo_2" && bash -c 'echo "This line matches foo_2"' | bash -c 'grep ${MY_VAR_2} | wc -l'
In the case of Python, you are in effect starting a new process, so the same logic applies: only exported variables appear in its environment.
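As a quick demonstration, here is a minimal sketch you can paste into an interactive bash session (nothing here is specific to the question beyond the variable name):
# Not exported: the shell itself can expand it, but child processes cannot see it
MY_VAR="foo"
echo "$MY_VAR"                                            # foo (expanded by the shell)
python3 -c "import os; print(os.environ.get('MY_VAR'))"  # None

# Exported: copied into the environment of every child process started afterwards
export MY_VAR="foo"
python3 -c "import os; print(os.environ.get('MY_VAR'))"  # foo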

Related

bash command working from terminal but not from script [duplicate]

a.sh
#! /bin/sh
export x=/usr/local
We can do source ./a on the command line, but I need to do the export through a shell script.
b.sh
#! /bin/sh
. ~/a.sh
No error... but $x on the command line shows nothing. So it didn't get exported.
Any idea how to make it work?
a.sh
#! /bin/sh
export x=/usr/local
-----------
admin@client: ./a.sh
admin@client: echo $x
admin@client: <insert ....>
You can put export statements in a shell script and then use the 'source' command to execute it in the current process:
source a.sh
You can't do an export through a shell script, because a shell script runs in a child shell process, and only children of the child shell would inherit the export.
The reason for using source is to have the current shell execute the commands.
It's very common to place export commands in a file such as .bashrc, which bash will source on startup (or similar files for other shells).
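For example, a minimal sketch (the variable name and value are just placeholders):
# in ~/.bashrc (sourced by every new interactive bash)
export MY_PROJECT_HOME="$HOME/projects"

# pick up the change in the current shell without opening a new terminal
source ~/.bashrc
echo "$MY_PROJECT_HOME"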
Another idea is that you could create a shell script which generates an export command as its output:
shell$ cat > script.sh
#!/bin/sh
echo export foo=bar
^D
chmod u+x script.sh
And then have the current shell execute that output
shell$ `./script.sh`
shell$ echo $foo
bar
shell$ /bin/sh
$ echo $foo
bar
(note above that the invocation of the script is surrounded by backticks, to cause the shell to execute the output of the script)
Answering my own question here, using the answers above: if I have more than one related variable to export which use the same value as part of each export, I can do this:
#!/bin/bash
export TEST_EXPORT=$1
export TEST_EXPORT_2=$1_2
export TEST_EXPORT_TWICE=$1_$1
and save as e.g. ~/Desktop/TEST_EXPORTING
and finally chmod +x ~/Desktop/TEST_EXPORTING
--
After that, running it with source ~/Desktop/TEST_EXPORTING bob
and then checking with export | grep bob should show what you expect.
Exporting a variable into the environment only makes that variable visible to child processes. There is no way for a child to modify the environment of its parent.
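A quick way to see this for yourself (a minimal sketch; child.sh is a hypothetical file name):
unset x              # start clean
cat > child.sh <<'EOF'
#!/bin/sh
export x=/usr/local
EOF
chmod +x child.sh

./child.sh           # runs in a child process; its export dies with it
echo "${x:-unset}"   # prints: unset

. ./child.sh         # sourced: runs in the current shell instead
echo "${x:-unset}"   # prints: /usr/local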
Another way you can do it (to steal/expound upon the idea above) is to put the script in ~/bin and make sure ~/bin is in your PATH. Then you can access your variable globally. This is just an example I use to compile my Go source code, which needs the GOPATH variable to point to the current directory (assuming you're in the directory you need to compile your source code from):
From ~/bin/GOPATH:
#!/bin/bash
echo declare -x GOPATH=$(pwd)
Then you just do:
#> $(GOPATH)
So you can now use $(GOPATH) from within your other scripts too, such as custom build scripts which can automatically invoke this variable and declare it on the fly thanks to $(pwd).
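For instance, a hypothetical build script using it could look like this (go and the ~/bin/GOPATH script above are assumed to be on PATH):
#!/bin/bash
# build.sh - hypothetical build wrapper
$(GOPATH)        # executes the "declare -x GOPATH=..." line printed by ~/bin/GOPATH
go build ./...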
script1.sh
shell_ppid=$PPID
shell_epoch=$(grep se.exec_start "/proc/${shell_ppid}/sched" | sed 's/[[:space:]]//g' | cut -f2 -d: | cut -f1 -d.)
now_epoch=$(($(date +%s%N)/1000000))
shell_start=$(( (now_epoch - shell_epoch)/1000 ))
env_md5=$(md5sum <<<"${shell_ppid}-${shell_start}"| sed 's/[[:space:]]//g' | cut -f1 -d-)
tmp_dir="/tmp/ToD-env-${env_md5}"
mkdir -p "${tmp_dir}"
ENV_PROPS="${tmp_dir}/.env"
echo "FOO=BAR" > "${ENV_PROPS}"
script2.sh
shell_ppid=$PPID
shell_epoch=$(grep se.exec_start "/proc/${shell_ppid}/sched" | sed 's/[[:space:]]//g' | cut -f2 -d: | cut -f1 -d.)
now_epoch=$(($(date +%s%N)/1000000))
shell_start=$(( (now_epoch - shell_epoch)/1000 ))
env_md5=$(md5sum <<<"${shell_ppid}-${shell_start}"| sed 's/[[:space:]]//g' | cut -f1 -d-)
tmp_dir="/tmp/ToD-env-${env_md5}"
mkdir -p "${tmp_dir}"
ENV_PROPS="${tmp_dir}/.env"
source "${ENV_PROPS}"
echo $FOO
./script1.sh
./script2.sh
BAR
It persists for the scripts run in the same parent shell, and it prevents collisions.

How can I feed input within bash [Executed through the Network]

As the title says, within Linux, how can I feed input to bash when I do sudo bash?
Let's say I have a bash script that reads a name.
The way I execute the script is through sudo using:
cat read-my-name-script.sh | sudo bash
Let's just say this is how I execute the script through the network.
Now I want to fill in the name automatically. Is there a way to feed the input? I tried doing this: cat read-my-name-script.sh < name-input-file | sudo bash where name-input-file is a file containing the input that the user will be using to feed the script.
I am new to Linux; I am learning to automate input and wanted to create an input file that the user can fill in and feed to my script.
This is convoluted, but might do what you want.
sudo bash -c "$(cat read-my-name.sh)" <name-input-file
The -c says the next quoted argument contains the commands to run (so the script is read as a string on the command line instead of from a file), and the calling shell interpolates the contents of the file inside the double quotes before the sudo command gets evaluated. So if read-my-name.sh contains
#!/bin/bash
read -p "I want your name please"
then the command gets expanded into
sudo bash -c '#!/bin/bash
read -p "I want your name please"' <name-input-file
(where of course at this time the shell has actually removed the outer double quotes altogether; I put in single quotes in their place instead to show how this would look as actually executable, syntactically valid code).
I think you need this:
while read -r arg; do sudo bash read-my-name-script.sh "$arg";done <name-input-file
So each line of name-input-file will be passed as an argument to sudo bash read-my-name-script.sh.
If your argument list is located on an HTTP server, you can do this:
while read -r arg; do sudo bash read-my-name-script.sh "$arg";done < <(wget -q -O- http://some/address/in/internet/name-input-file)
Update:
add [[ -f name-input-file ]] && readarray -t args <name-input-file
to read-my-name-script.sh
and use "${args[@]}" as arguments of the command in the script.
For example echo "${args[@]}" or cmd "${args[0]}" "${args[1]}" ... "${args[100]}" in any order.
In this case you can use
wget -q -O- http://some/address/in/internet/read-my-name-script.sh | bash
to run your script with arguments from name-input-file without saving the script to the local machine.
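A minimal sketch of what the modified read-my-name-script.sh could look like under that approach (the file names are the ones used above; the greeting is just an example):
#!/bin/bash
# read all lines of name-input-file into the args array, if the file exists
[[ -f name-input-file ]] && readarray -t args <name-input-file

# use the pre-filled value instead of prompting interactively
echo "Hello, ${args[0]}"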

Execute a find command with expression from a shell script [duplicate]

This question already has answers here:
Why does shell ignore quoting characters in arguments passed to it through variables? [duplicate]
I'm trying to write a database call from within a bash script and I'm having problems with a sub-shell stripping my quotes away.
This is the bones of what I am doing.
#---------------------------------------------
#! /bin/bash
export COMMAND='psql ${DB_NAME} -F , -t --no-align -c "${SQL}" -o ${EXPORT_FILE} 2>&1'
PSQL_RETURN=`${COMMAND}`
#---------------------------------------------
If I use an 'echo' to print out the ${COMMAND} variable the output looks fine:
echo ${COMMAND}
screen output:-
#---------------
psql drupal7 -F , -t --no-align -c "SELECT DISTINCT hostname FROM accesslog;" -o /DRUPAL/INTERFACES/EXPORTS/ip_list.dat 2>&1
#---------------
Also if I cut and paste this screen output it executes just fine.
However, when I try to execute the command as a variable within a sub-shell call, it gives an error message.
The error is from the psql client to the effect that the quotes have been removed from around the ${SQL} string.
The error suggests psql is trying to interpret the terms in the sql string as parameters.
So it seems the string and quotes are composed correctly but the quotes around the ${SQL} variable/string are being interpreted by the sub-shell during the execution call from the main script.
I've tried to escape them using various methods: \", \\", \\\", "", \"" '"', \'"\', ... ...
As you can see from my 'try it all' approach I am no expert and it's driving me mad.
Any help would be greatly appreciated.
Charlie101
Instead of storing the command in a string variable, it is better to use a Bash array here:
cmd=(psql "${DB_NAME}" -F , -t --no-align -c "${SQL}" -o "${EXPORT_FILE}")
PSQL_RETURN=$( "${cmd[@]}" 2>&1 )
Rather than evaluating the contents of a string, why not use a function?
call_psql() {
# optional, if variables are already defined in global scope
DB_NAME="$1"
SQL="$2"
EXPORT_FILE="$3"
psql "$DB_NAME" -F , -t --no-align -c "$SQL" -o "$EXPORT_FILE" 2>&1
}
then you can just call your function like:
PSQL_RETURN=$(call_psql "$DB_NAME" "$SQL" "$EXPORT_FILE")
It's entirely up to you how elaborate you make the function. You might like to check for the correct number of arguments (using something like (( $# == 3 ))) before calling the psql command.
Alternatively, perhaps you'd prefer just to make it as short as possible:
call_psql() { psql "$1" -F , -t --no-align -c "$2" -o "$3" 2>&1; }
In order to capture the command that is being executed for debugging purposes, you can use set -x in your script. This will print the commands from the function, including the expanded variables, when the function (or any other command) is called. You can switch this behaviour off using set +x, or if you want it on for the whole duration of the script you can change the shebang to #!/bin/bash -x. This saves you explicitly echoing throughout your script to find out what commands are being run; you can just turn on set -x for a section.
A very simple example script using the shebang method:
#!/bin/bash -x
ec() {
echo "$1"
}
var=$(ec 2)
Running this script, either directly after making it executable or calling it with bash -x, gives:
++ ec 2
++ echo 2
+ var=2
Removing the -x from the shebang or the invocation results in the script running silently.

Find the current shell of the user using a shell script [duplicate]

How can I determine the current shell I am working on?
Would the output of the ps command alone be sufficient?
How can this be done in different flavors of Unix?
There are three approaches to finding the name of the current shell's executable:
Please note that all three approaches can be fooled if the executable of the shell is /bin/sh, but it's really a renamed bash, for example (which frequently happens).
Thus your second question of whether ps output will do is answered with "not always".
echo $0 - will print the program name... which in the case of the shell is the actual shell.
ps -ef | grep $$ | grep -v grep - this will look for the current process ID in the list of running processes. Since the current process is the shell, it will be included.
This is not 100% reliable, as you might have other processes whose ps listing includes the same number as shell's process ID, especially if that ID is a small number (for example, if the shell's PID is "5", you may find processes called "java5" or "perl5" in the same grep output!). This is the second problem with the "ps" approach, on top of not being able to rely on the shell name.
echo $SHELL - The path to the current shell is stored as the SHELL variable for any shell. The caveat for this one is that if you launch a shell explicitly as a subprocess (for example, it's not your login shell), you will get your login shell's value instead. If that's a possibility, use the ps or $0 approach.
If, however, the executable doesn't match your actual shell (e.g. /bin/sh is actually bash or ksh), you need heuristics. Here are some environment variables specific to various shells (a combined check is sketched after this list):
$version is set on tcsh
$BASH is set on bash
$shell (lowercase) is set to actual shell name in csh or tcsh
$ZSH_NAME is set on zsh
ksh has $PS3 and $PS4 set, whereas the normal Bourne shell (sh) only has $PS1 and $PS2 set. This generally seems like the hardest to distinguish - the only difference in the entire set of environment variables between sh and ksh we have installed on Solaris boxen is $ERRNO, $FCEDIT, $LINENO, $PPID, $PS3, $PS4, $RANDOM, $SECONDS, and $TMOUT.
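Put together, a rough check along those lines might look like this for Bourne-family shells (a sketch only; csh/tcsh use a different syntax, so $version and $shell would have to be tested from within csh itself):
# run this in the shell you want to identify (Bourne-family syntax only)
if   [ -n "${BASH:-}" ];     then echo "bash ($BASH)"
elif [ -n "${ZSH_NAME:-}" ]; then echo "zsh"
elif [ -n "${PS3:-}" ];      then echo "probably ksh"
else                              echo "plain sh (or a shell hiding behind it): $0"
fi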
ps -p $$
should work anywhere that the solutions involving ps -ef and grep do (on any Unix variant which supports POSIX options for ps) and will not suffer from the false positives introduced by grepping for a sequence of digits which may appear elsewhere.
Try
ps -p $$ -oargs=
or
ps -p $$ -ocomm=
If you just want to ensure the user is invoking a script with Bash:
if [ -z "$BASH" ]; then echo "Please run this script $0 with bash"; exit; fi
Or, to re-execute the script under Bash:
if [ -z "$BASH" ]; then exec bash $0 ; exit; fi
You can try:
ps | grep `echo $$` | awk '{ print $4 }'
Or:
echo $SHELL
$SHELL need not always show the current shell. It only reflects the default shell to be invoked.
To test the above, say bash is the default shell, try echo $SHELL, and then in the same terminal, get into some other shell (KornShell (ksh) for example) and try $SHELL. You will see the result as bash in both cases.
To get the name of the current shell, use cat /proc/$$/cmdline, and get the path to the shell executable with readlink /proc/$$/exe.
There are many ways to find out the shell and its corresponding version. Here are few which worked for me.
Straightforward
$> echo $0 (Gives you the program name. In my case the output was -bash.)
$> $SHELL (This takes you into the shell and in the prompt you get the shell name and version. In my case bash3.2$.)
$> echo $SHELL (This will give you executable path. In my case /bin/bash.)
$> $SHELL --version (This will give complete info about the shell software with license type)
Hackish approach
$> ******* (Type a set of random characters and in the output you will get the shell name. In my case -bash: chapter2-a-sample-isomorphic-app: command not found)
ps is the most reliable method. The SHELL environment variable is not guaranteed to be set and even if it is, it can be easily spoofed.
I have a simple trick to find the current shell. Just type a random string (which is not a command). It will fail and return a "not found" error, but at the start of the line it will say which shell it is:
ksh: aaaaa: not found [No such file or directory]
bash: aaaaa: command not found
I have tried many different approaches and the best one for me is:
ps -p $$
It also works under Cygwin and cannot produce false positives as PID grepping. With some cleaning, it outputs just an executable name (under Cygwin with path):
ps -p $$ | tail -1 | awk '{print $NF}'
You can create a function so you don't have to memorize it:
# Print currently active shell
shell () {
ps -p $$ | tail -1 | awk '{print $NF}'
}
...and then just execute shell.
It was tested under Debian and Cygwin.
The following will always give the actual shell used - it gets the name of the actual executable and not the shell name (i.e. ksh93 instead of ksh, etc.). For /bin/sh, it will show the actual shell used, i.e. dash.
ls -l /proc/$$/exe | sed 's%.*/%%'
I know that there are many who say the ls output should never be processed, but what is the probability you'll have a shell you are using that is named with special characters or placed in a directory named with special characters? If this is still the case, there are plenty of other examples of doing it differently.
As pointed out by Toby Speight, this would be a more proper and cleaner way of achieving the same:
basename $(readlink /proc/$$/exe)
My variant on printing the parent process:
ps -p $$ | awk '$1 == PP {print $4}' PP=$$
Don't run unnecessary applications when AWK can do it for you.
Provided that your /bin/sh supports the POSIX standard and your system has the lsof command installed - a possible alternative to lsof could in this case be pid2path - you can also use (or adapt) the following script that prints full paths:
#!/bin/sh
# cat /usr/local/bin/cursh
set -eu
pid="$$"
set -- sh bash zsh ksh ash dash csh tcsh pdksh mksh fish psh rc scsh bournesh wish Wish login
unset echo env sed ps lsof awk getconf
# getconf _POSIX_VERSION # reliable test for availability of POSIX system?
PATH="`PATH=/usr/bin:/bin:/usr/sbin:/sbin getconf PATH`"
[ $? -ne 0 ] && { echo "'getconf PATH' failed"; exit 1; }
export PATH
cmd="lsof"
env -i PATH="${PATH}" type "$cmd" 1>/dev/null 2>&1 || { echo "$cmd not found"; exit 1; }
awkstr="`echo "$@" | sed 's/\([^ ]\{1,\}\)/|\/\1/g; s/ /$/g' | sed 's/^|//; s/$/$/'`"
ppid="`env -i PATH="${PATH}" ps -p $pid -o ppid=`"
[ "${ppid}"X = ""X ] && { echo "no ppid found"; exit 1; }
lsofstr="`lsof -p $ppid`" ||
{ printf "%s\n" "lsof failed" "try: sudo lsof -p \`ps -p \$\$ -o ppid=\`"; exit 1; }
printf "%s\n" "${lsofstr}" |
LC_ALL=C awk -v var="${awkstr}" '$NF ~ var {print $NF}'
My solution:
ps -o command | grep -v -e "\<ps\>" -e grep -e tail | tail -1
This should be portable across different platforms and shells. It uses ps like other solutions, but it doesn't rely on sed or awk and filters out junk from piping and ps itself so that the shell should always be the last entry. This way we don't need to rely on non-portable PID variables or picking out the right lines and columns.
I've tested on Debian and macOS with Bash, Z shell (zsh), and fish (which doesn't work with most of these solutions without changing the expression specifically for fish, because it uses a different PID variable).
If you just want to check that you are running (a particular version of) Bash, the best way to do so is to use the $BASH_VERSINFO array variable. As a (read-only) array variable it cannot be set in the environment,
so you can be sure it is coming (if at all) from the current shell.
However, since Bash has a different behavior when invoked as sh, you also need to check that the $BASH environment variable ends with /bash.
In a script I wrote that uses function names with - (not underscore), and depends on associative arrays (added in Bash 4), I have the following sanity check (with helpful user error message):
case `eval 'echo $BASH#${BASH_VERSINFO[0]}' 2>/dev/null` in
*/bash#[456789])
# Claims bash version 4+, check for func-names and associative arrays
if ! eval "declare -A _ARRAY && func-name() { :; }" 2>/dev/null; then
echo >&2 "bash $BASH_VERSION is not supported (not really bash?)"
exit 1
fi
;;
*/bash#[123])
echo >&2 "bash $BASH_VERSION is not supported (version 4+ required)"
exit 1
;;
*)
echo >&2 "This script requires BASH (version 4+) - not regular sh"
echo >&2 "Re-run as \"bash $CMD\" for proper operation"
exit 1
;;
esac
You could omit the somewhat paranoid functional check for features in the first case, and just assume that future Bash versions would be compatible.
None of the answers worked with fish shell (it doesn't have the variables $$ or $0).
This works for me (tested on sh, bash, fish, ksh, csh, true, tcsh, and zsh; openSUSE 13.2):
ps | tail -n 4 | sed -E '2,$d;s/.* (.*)/\1/'
This command outputs a string like bash. Here I'm only using ps, tail, and sed (without GNU extensions; try adding --posix to check it). They are all standard POSIX commands. I'm sure tail can be removed, but my sed-fu is not strong enough to do this.
It seems to me, that this solution is not very portable as it doesn't work on OS X. :(
echo $$ # Gives the current shell's process ID (the parent of the commands you run)
ps -ef | grep $$ | awk '{print $8}' # Use the PID to see what the process is.
From How do you know what your current shell is?.
This is not a very clean solution, but it does what you want.
# MUST BE SOURCED..
getshell() {
local shell="`ps -p $$ | tail -1 | awk '{print $4}'`"
shells_array=(
# It is important that the shells are listed in descending order of their name length.
pdksh
bash dash mksh
zsh ksh
sh
)
local suited=false
for i in ${shells_array[*]}; do
if ! [ -z `printf $shell | grep $i` ] && ! $suited; then
shell=$i
suited=true
fi
done
echo $shell
}
getshell
Now you can use $(getshell) --version.
This works, though, only on KornShell-like shells (ksh).
Do the following to know whether your shell is using Dash/Bash.
ls -la /bin/sh:
if the result is /bin/sh -> /bin/bash ==> Then your shell is using Bash.
if the result is /bin/sh ->/bin/dash ==> Then your shell is using Dash.
If you want to change from Bash to Dash or vice-versa, use the below code:
ln -s /bin/bash /bin/sh (change shell to Bash)
Note: If the above command results in an error saying /bin/sh already exists, remove /bin/sh and try again.
I like Nahuel Fouilleul's solution particularly, but I had to run the following variant of it on Ubuntu 18.04 (Bionic Beaver) with the built-in Bash shell:
bash -c 'shellPID=$$; ps -ocomm= -q $shellPID'
Without the temporary variable shellPID, e.g. the following:
bash -c 'ps -ocomm= -q $$'
Would just output ps for me. Maybe you aren't all using non-interactive mode, and that makes a difference.
Get it with the $SHELL environment variable. A simple sed could remove the path:
echo $SHELL | sed -E 's/^.*\/([a-zA-Z]+$)/\1/g'
Output:
bash
It was tested on macOS, Ubuntu, and CentOS.
On Mac OS X (and FreeBSD):
ps -p $$ -axco command | sed -n '$p'
Grepping PID from the output of "ps" is not needed, because you can read the respective command line for any PID from the /proc directory structure:
echo $(cat /proc/$$/cmdline)
However, that might not be any better than just simply:
echo $0
About running an actually different shell than the name indicates, one idea is to request the version from the shell using the name you got previously:
<some_shell> --version
sh seems to fail with exit code 2 while others give something useful (but I am not able to verify all since I don't have them):
$ sh --version
sh: 0: Illegal option --
$ echo $?
2
One way is:
ps -p $$ -o exe=
which is IMO better than using -o args or -o comm as suggested in another answer (those may report, e.g., a symbolic link name, as when /bin/sh points to a specific shell such as Dash or Bash).
The above returns the path of the executable, but beware that due to /usr-merge, one might need to check for multiple paths (e.g., /bin/bash and /usr/bin/bash).
Also note that the above is not fully POSIX-compatible (POSIX ps doesn't have exe).
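A small sketch of that multi-path check (the path list is illustrative, not exhaustive):
exe=$(ps -p $$ -o exe=)
case "$exe" in
    /bin/bash|/usr/bin/bash) echo "running bash" ;;
    /bin/dash|/usr/bin/dash) echo "running dash" ;;
    *)                       echo "running: $exe" ;;
esac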
Kindly use the below command:
ps -p $$ | tail -1 | awk '{print $4}'
This one works well on Red Hat Linux (RHEL), macOS, BSD and some AIXes:
ps -T $$ | awk 'NR==2{print $NF}'
Alternatively, the following one should also work if pstree is available:
pstree | egrep $$ | awk 'NR==2{print $NF}'
You can use echo $SHELL|sed "s/\/bin\///g"
And I came up with this:
sed 's/.*SHELL=//; s/[[:upper:]].*//' /proc/$$/environ

"stdin: is not a tty" from cronjob

I'm getting the following mail every time I execute a specific cronjob. The called script runs fine when I'm calling it directly and even from cron. So the message I get is not an actual error, since the script does exactly what it is supposed to do.
Here is the cron.d entry:
* * * * * root /bin/bash -l -c "/opt/get.sh > /tmp/file"
and the get.sh script itself:
#!/bin/sh
#group and url
groups="foo"
url="https://somehost.test/get.php?groups=${groups}"
# encryption
pass='bar'
method='aes-256-xts'
pass=$(echo -n $pass | xxd -ps | sed 's/[[:xdigit:]]\{2\}/&/g')
encrypted=$(wget -qO- ${url})
decoded=$(echo -n $encrypted | awk -F '#' '{print $1}')
iv=$(echo $encrypted | awk -F '#' '{print $2}' |base64 --decode | xxd -ps | sed 's/[[:xdigit:]]\{2\}/&/g')
# base64 decode input and save to file
output=$(echo -n $decoded | base64 --decode | openssl enc -${method} -d -nosalt -nopad -K ${pass} -iv ${iv})
if [ ! -z "${output}" ]; then
echo "${output}"
else
echo "Error while getting information"
fi
When I'm not using the bash -l syntax the script hangs during the wget process. So my guess would be that it has something to do with wget and putting the output to stdout. But I have no idea how to fix it.
You actually have two questions here.
Why does it print stdin: is not a tty?
This warning message is printed by bash -l. The -l (--login) option asks bash to start a login shell, e.g. the one which is usually started when you enter your password. In this case bash expects its stdin to be a real terminal (e.g. the isatty(0) call should return 1), and that's not true when it is run by cron, hence this warning.
Another easy way to reproduce this warning, and the very common one, is to run this command via ssh:
$ ssh user@example.com 'bash -l -c "echo test"'
Password:
stdin: is not a tty
test
It happens because ssh does not allocate a terminal when called with a command as a parameter (one should use the -t option for ssh to force terminal allocation in this case).
Why did it not work without -l?
As correctly stated by @Cyrus in the comments, the list of files which bash loads on start depends on the type of the session. E.g. for login shells it will load /etc/profile, ~/.bash_profile, ~/.bash_login, and ~/.profile (see INVOCATION in the bash(1) manual), while for non-login shells it will only load ~/.bashrc. It seems you defined your http_proxy variable only in one of the files loaded for login shells, but not in ~/.bashrc. You moved it to ~/.wgetrc and that's correct, but you could also define it in ~/.bashrc and it would have worked.
In your .profile, change
mesg n
to
if `tty -s`; then
mesg n
fi
I ended up putting the proxy configuration in the wgetrc. There is now no need to execute the script on a login shell anymore.
This is not a real answer to the actual problem, but it solved mine.
If you run into this problem, check whether you are getting all the environment variables set as you expect. Thanks to Cyrus for pointing me in the right direction.
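For reference, the relevant ~/.wgetrc entries could look something like this (the proxy host and port are placeholders):
# ~/.wgetrc
use_proxy = on
http_proxy = http://proxy.example.com:8080/
https_proxy = http://proxy.example.com:8080/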
